Dataset schema: paper_id (string, 43 chars), summaries (sequence), abstractText (string, 98-40k chars), authors (list), references (list), sections (list), year (int64, 1980-2020), title (string, 4-183 chars)
SP:4d08cdb2de2044bcb574a425b42963b83fbebfbc
[ "This paper investigates kernel ridge-less regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds have been widely adopting in machine learning. However, related studies on kernel ridge-less regression are still sparse. The present study fills this gap, which, in my opinion, is also one of the main contributions of the present study. " ]
We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm minimizes a bound on CVloo stability, which in turn is controlled by the condition number of the empirical kernel matrix. The latter can be characterized in the asymptotic regime where both the dimension and cardinality of the data go to infinity. Under the assumption of random kernel matrices, the corresponding test error should be expected to follow a double descent curve.
[]
[ { "authors": [ "Jerzy K Baksalary", "Oskar Maria Baksalary", "Götz Trenkler" ], "title": "A revisitation of formulae for the moore–penrose inverse of modified matrices", "venue": "Linear Algebra and Its Applications,", "year": 2003 }, { "authors": [ "Peter L. Bartlett", "Philip M. Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "CoRR, abs/1906.11300,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Stéphane Boucheron", "Olivier Bousquet", "Gábor Lugosi" ], "title": "Theory of classification: A survey of some recent advances", "venue": "ESAIM: probability and statistics,", "year": 2005 }, { "authors": [ "O. Bousquet", "A. Elisseeff" ], "title": "Stability and generalization", "venue": "Journal Machine Learning Research,", "year": 2001 }, { "authors": [ "Peter Bühlmann", "Sara Van De Geer" ], "title": "Statistics for high-dimensional data: methods, theory and applications", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Noureddine El Karoui" ], "title": "The spectrum of kernel random matrices", "venue": "arXiv e-prints, art", "year": 2010 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J. Tibshirani" ], "title": "Surprises in HighDimensional Ridgeless Least Squares Interpolation", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "S. Kutin", "P. Niyogi" ], "title": "Almost-everywhere algorithmic stability and generalization error", "venue": "Technical report TR-2002-03,", "year": 2002 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin", "Xiyu Zhai" ], "title": "On the Risk of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin" ], "title": "Just interpolate: Kernel “ridgeless” regression can generalize", "venue": "Annals of Statistics,", "year": 2020 }, { "authors": [ "V.A. Marchenko", "L.A. Pastur" ], "title": "Distribution of eigenvalues for some sets of random matrices", "venue": "Mat. Sb. (N.S.),", "year": 1967 }, { "authors": [ "Song Mei", "Andrea Montanari" ], "title": "The generalization error of random features regression: Precise asymptotics and double descent curve", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Carl Meyer" ], "title": "Generalized inversion of modified matrices", "venue": "SIAM J. Applied Math,", "year": 1973 }, { "authors": [ "C.A. Micchelli" ], "title": "Interpolation of scattered data: distance matrices and conditionally positive definite functions", "venue": "Constructive Approximation,", "year": 1986 }, { "authors": [ "Sayan Mukherjee", "Partha Niyogi", "Tomaso Poggio", "Ryan Rifkin" ], "title": "Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization", "venue": "Advances in Computational Mathematics,", "year": 2006 }, { "authors": [ "T. Poggio", "R. Rifkin", "S. Mukherjee", "P. Niyogi" ], "title": "General conditions for predictivity in learning theory", "venue": "Nature,", "year": 2004 }, { "authors": [ "T. Poggio", "G. Kur", "A. 
Banburski" ], "title": "Double descent in the condition number", "venue": "Technical report, MIT Center for Brains Minds and Machines,", "year": 2019 }, { "authors": [ "Tomaso Poggio" ], "title": "Stable foundations for learning. Center for Brains, Minds and Machines", "venue": "(CBMM) Memo No", "year": 2020 }, { "authors": [ "Alexander Rakhlin", "Xiyu Zhai" ], "title": "Consistency of Interpolation with Laplace Kernels is a HighDimensional Phenomenon", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Lorenzo Rosasco", "Silvia Villa" ], "title": "Learning with incremental iterative regularization", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding Machine Learning: From Theory to Algorithms", "venue": null, "year": 2014 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Nathan Srebro", "Karthik Sridharan" ], "title": "Learnability, stability and uniform convergence", "venue": "J. Mach. Learn. Res.,", "year": 2010 }, { "authors": [ "Ingo Steinwart", "Andreas Christmann" ], "title": "Support vector machines", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "CV" ], "title": "Note that the absolute value is not needed for ERM since almost positivity holds Mukherjee et al. (2006), that is V (fSi", "venue": null, "year": 2006 }, { "authors": [ "Mukherjee" ], "title": "Indeed, a main result in Mukherjee et al. (2006) shows that CVloo stability is equivalent to consistency of ERM", "venue": null, "year": 2002 }, { "authors": [ "Mukherjee" ], "title": "For ERM and bounded loss functions, CVloo stability in probability with β", "venue": null, "year": 2006 }, { "authors": [ "Mukherjee" ], "title": "zi)− V (fS", "venue": null, "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Statistical learning theory studies the learning properties of machine learning algorithms, and more fundamentally, the conditions under which learning from finite data is possible. In this context, classical learning theory focuses on the size of the hypothesis space in terms of different complexity measures, such as combinatorial dimensions, covering numbers and Rademacher/Gaussian complexities (Shalev-Shwartz & Ben-David, 2014; Boucheron et al., 2005). Another more recent approach is based on defining suitable notions of stability with respect to perturbation of the data (Bousquet & Elisseeff, 2001; Kutin & Niyogi, 2002). In this view, the continuity of the process that maps data to estimators is crucial, rather than the complexity of the hypothesis space. Different notions of stability can be considered, depending on the data perturbation and metric considered (Kutin & Niyogi, 2002). Interestingly, the stability and complexity approaches to characterizing the learnability of problems are not at odds with each other, and can be shown to be equivalent as shown in Poggio et al. (2004) and Shalev-Shwartz et al. (2010).\nIn modern machine learning overparameterized models, with a larger number of parameters than the size of the training data, have become common. The ability of these models to generalize is well explained by classical statistical learning theory as long as some form of regularization is used in the training process (Bühlmann & Van De Geer, 2011; Steinwart & Christmann, 2008). However, it was recently shown - first for deep networks (Zhang et al., 2017), and more recently for kernel methods (Belkin et al., 2019) - that learning is possible in the absence of regularization, i.e., when perfectly fitting/interpolating the data. Much recent work in statistical learning theory has tried to find theoretical ground for this empirical finding. Since learning using models that interpolate is not exclusive to deep neural networks, we study generalization in the presence of interpolation in the case of kernel methods. We study both linear and kernel least squares problems in this paper.\nOur Contributions:\n• We characterize the generalization properties of interpolating solutions for linear and kernel least squares problems using a stability approach. While the (uniform) stability properties of regularized kernel methods are well known (Bousquet & Elisseeff, 2001), we study interpolating solutions of the unregularized (\"ridgeless\") regression problems.\n• We obtain an upper bound on the stability of interpolating solutions, and show that this upper bound is minimized by the minimum norm interpolating solution. This also means that among all interpolating solutions, the minimum norm solution has the best test error. In\nparticular, the same conclusion is also true for gradient descent, since it converges to the minimum norm solution in the setting we consider, see e.g. Rosasco & Villa (2015). • Our stability bounds show that the average stability of the minimum norm solution is\ncontrolled by the condition number of the empirical kernel matrix. It is well known that the numerical stability of the least squares solution is governed by the condition number of the associated kernel matrix (see the discussion of why overparametrization is “good” in Poggio et al. (2019)). 
Our results show that the condition number also controls stability (and hence, test error) in a statistical sense.

Organization: In section 2, we introduce basic ideas in statistical learning and empirical risk minimization, as well as the notation used in the rest of the paper. In section 3, we briefly recall some definitions of stability. In section 4, we study the stability of interpolating solutions to kernel least squares and show that the minimum norm solutions minimize an upper bound on the stability. In section 5 we discuss our results in the context of recent work on high dimensional regression. We conclude in section 6." }, { "heading": "2 STATISTICAL LEARNING AND EMPIRICAL RISK MINIMIZATION", "text": "We begin by recalling the basic ideas in statistical learning theory. In this setting, $X$ is the space of features, $Y$ is the space of targets or labels, and there is an unknown probability distribution $\mu$ on the product space $Z = X \times Y$. In the following, we consider $X = \mathbb{R}^d$ and $Y = \mathbb{R}$. The distribution $\mu$ is fixed but unknown, and we are given a training set $S$ consisting of $n$ samples (thus $|S| = n$) drawn i.i.d. from the probability distribution on $Z^n$, $S = (z_i)_{i=1}^n = (\mathbf{x}_i, y_i)_{i=1}^n$. Intuitively, the goal of supervised learning is to use the training set $S$ to “learn” a function $f_S$ that, evaluated at a new value $\mathbf{x}_{new}$, should predict the associated value of $y_{new}$, i.e. $y_{new} \approx f_S(\mathbf{x}_{new})$. The loss is a function $V: \mathcal{F} \times Z \to [0, \infty)$, where $\mathcal{F}$ is the space of measurable functions from $X$ to $Y$, that measures how well a function performs on a data point. We define a hypothesis space $\mathcal{H} \subseteq \mathcal{F}$ where algorithms search for solutions. With the above notation, the expected risk of $f$ is defined as $I[f] = \mathbb{E}_z V(f, z)$, which is the expected loss on a new sample drawn according to the data distribution $\mu$. In this setting, statistical learning can be seen as the problem of finding an approximate minimizer of the expected risk given a training set $S$. A classical approach to derive an approximate solution is empirical risk minimization (ERM), where we minimize the empirical risk $I_S[f] = \frac{1}{n} \sum_{i=1}^n V(f, z_i)$.

A natural error measure for our ERM solution $f_S$ is the expected excess risk $\mathbb{E}_S[I[f_S] - \min_{f \in \mathcal{H}} I[f]]$. Another common error measure is the expected generalization error/gap given by $\mathbb{E}_S[I[f_S] - I_S[f_S]]$. These two error measures are closely related, since the expected excess risk is easily bounded by the expected generalization error (see Lemma 5)." }, { "heading": "2.1 KERNEL LEAST SQUARES AND MINIMUM NORM SOLUTION", "text": "The focus in this paper is on the kernel least squares problem. We assume the loss function $V$ is the square loss, that is, $V(f, z) = (y - f(\mathbf{x}))^2$. The hypothesis space is assumed to be a reproducing kernel Hilbert space, defined by a positive definite kernel $K: X \times X \to \mathbb{R}$ or an associated feature map $\Phi: X \to \mathcal{H}$, such that $K(\mathbf{x}, \mathbf{x}') = \langle \Phi(\mathbf{x}), \Phi(\mathbf{x}') \rangle_{\mathcal{H}}$ for all $\mathbf{x}, \mathbf{x}' \in X$, where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ is the inner product in $\mathcal{H}$. In this setting, functions are linearly parameterized, that is, there exists $\mathbf{w} \in \mathcal{H}$ such that $f(\mathbf{x}) = \langle \mathbf{w}, \Phi(\mathbf{x}) \rangle_{\mathcal{H}}$ for all $\mathbf{x} \in X$. The ERM problem typically has multiple solutions, one of which is the minimum norm solution:
$$f_S^\dagger = \arg\min_{f \in \mathcal{M}} \|f\|_{\mathcal{H}}, \qquad \mathcal{M} = \arg\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (f(\mathbf{x}_i) - y_i)^2. \tag{1}$$
Here $\|\cdot\|_{\mathcal{H}}$ is the norm on $\mathcal{H}$ induced by the inner product. The minimum norm solution can be shown to be unique and to satisfy a representer theorem, that is, for all $\mathbf{x} \in X$:
$$f_S^\dagger(\mathbf{x}) = \sum_{i=1}^n K(\mathbf{x}, \mathbf{x}_i) c_S[i], \qquad \mathbf{c}_S = \mathbf{K}^\dagger \mathbf{y} \tag{2}$$
where $\mathbf{c}_S = (c_S[1], \dots, c_S[n])$, $\mathbf{y} = (y_1, \dots, y_n) \in \mathbb{R}^n$, $\mathbf{K}$ is the $n \times n$ matrix with entries $K_{ij} = K(\mathbf{x}_i, \mathbf{x}_j)$, $i, j = 1, \dots, n$, and $\mathbf{K}^\dagger$ is the Moore-Penrose pseudoinverse of $\mathbf{K}$. If we assume $n \le d$ and that we have $n$ linearly independent data features, that is, the rank of $\mathbf{X}$ is $n$, then it is possible to show that for many kernels one can replace $\mathbf{K}^\dagger$ by $\mathbf{K}^{-1}$ (see Remark 2). Note that invertibility is necessary and sufficient for interpolation. That is, if $\mathbf{K}$ is invertible, $f_S^\dagger(\mathbf{x}_i) = y_i$ for all $i = 1, \dots, n$, in which case the training error in (1) is zero.

Remark 1 (Pseudoinverse for underdetermined linear systems) A simple yet relevant example is that of linear functions $f(\mathbf{x}) = \mathbf{w}^\top \mathbf{x}$, which correspond to $\mathcal{H} = \mathbb{R}^d$ and $\Phi$ the identity map. If the rank of $\mathbf{X} \in \mathbb{R}^{d \times n}$ is $n$, then any interpolating solution $\mathbf{w}_S$ satisfies $\mathbf{w}_S^\top \mathbf{x}_i = y_i$ for all $i = 1, \dots, n$, and the minimum norm solution, also called the Moore-Penrose solution, is given by $(\mathbf{w}_S^\dagger)^\top = \mathbf{y}^\top \mathbf{X}^\dagger$, where, since $\mathbf{X}$ has full column rank, the pseudoinverse $\mathbf{X}^\dagger$ takes the form $\mathbf{X}^\dagger = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top$.

Remark 2 (Invertibility of translation invariant kernels) Translation invariant kernels are a family of kernel functions given by $K(\mathbf{x}_1, \mathbf{x}_2) = k(\mathbf{x}_1 - \mathbf{x}_2)$, where $k$ is an even function on $\mathbb{R}^d$. Translation invariant kernels are Mercer kernels (positive semidefinite) if the Fourier transform of $k(\cdot)$ is non-negative. For Radial Basis Function kernels ($K(\mathbf{x}_1, \mathbf{x}_2) = k(\|\mathbf{x}_1 - \mathbf{x}_2\|)$) we have the additional property, due to Theorem 2.3 of Micchelli (1986), that for distinct points $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n \in \mathbb{R}^d$ the kernel matrix $\mathbf{K}$ is non-singular and thus invertible.

The above discussion is directly related to regularization approaches.

Remark 3 (Stability and Tikhonov regularization) Tikhonov regularization is used to prevent potentially unstable behaviors. In the above setting, it corresponds to replacing Problem (1) by $\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (f(\mathbf{x}_i) - y_i)^2 + \lambda \|f\|_{\mathcal{H}}^2$, where the corresponding unique solution is given by $f_S^\lambda(\mathbf{x}) = \sum_{i=1}^n K(\mathbf{x}, \mathbf{x}_i) c[i]$, $\mathbf{c} = (\mathbf{K} + \lambda \mathbf{I}_n)^{-1} \mathbf{y}$. In contrast to ERM solutions, the above approach prevents interpolation. The properties of the corresponding estimator are well known. In this paper, we complement these results focusing on the case $\lambda \to 0$.

Finally, we end by recalling the connection between the minimum norm solution and gradient descent.

Remark 4 (Minimum norm and gradient descent) In our setting, it is well known that both batch and stochastic gradient iterations converge exactly to the minimum norm solution when multiple solutions exist, see e.g. Rosasco & Villa (2015). Thus, a study of the properties of the minimum norm solution explains the properties of the solution to which gradient descent converges. In particular, when ERM has multiple interpolating solutions, gradient descent converges to a solution that minimizes a bound on stability, as we show in this paper." }, { "heading": "3 ERROR BOUNDS VIA STABILITY", "text": "In this section, we recall basic results relating the learning and stability properties of Empirical Risk Minimization (ERM). Throughout the paper, we assume that ERM achieves a minimum, albeit the extension to almost minimizers is possible (Mukherjee et al., 2006) and important for exponential-type loss functions (Poggio, 2020). We do not assume the expected risk to achieve a minimum. Since we will be considering leave-one-out stability in this section, we look at solutions to ERM over the complete training set $S = \{z_1, z_2, \dots, z_n\}$ and the leave one out training set $S^i = \{z_1, z_2, \dots, z_{i-1}, z_{i+1}, \dots, z_n\}$. The excess risk of ERM can be easily related to its stability properties. 
Here, we follow the definition laid out in Mukherjee et al. (2006) and say that an algorithm is Cross-Validation leave-one-out (CVloo) stable in expectation if there exists $\beta_{CV} > 0$ such that for all $i = 1, \dots, n$,
$$\mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)] \le \beta_{CV}. \tag{3}$$
This definition is justified by the following result that bounds the excess risk of a learning algorithm by its average CVloo stability (Shalev-Shwartz et al., 2010; Mukherjee et al., 2006).

Lemma 5 (Excess Risk & CVloo Stability) For all $i = 1, \dots, n$,
$$\mathbb{E}_S[I[f_{S^i}] - \inf_{f \in \mathcal{H}} I[f]] \le \mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)]. \tag{4}$$

Remark 6 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists $\beta_u > 0$ such that for all $i = 1, \dots, n$, $\sup_{z \in Z} |V(f_{S^i}, z) - V(f_S, z)| \le \beta_u$. Clearly this is a strong notion implying most other definitions of stability. We note that there are a number of different notions of stability. We refer the interested reader to Kutin & Niyogi (2002) and Mukherjee et al. (2006).

We recall the proof of Lemma 5 in Appendix A.2 due to lack of space. In Appendix A, we also discuss other definitions of stability and their connections to concepts in statistical learning theory such as generalization and learnability.

4 CVloo STABILITY OF KERNEL LEAST SQUARES

In this section we analyze the expected CVloo stability of interpolating solutions to the kernel least squares problem, and obtain an upper bound on their stability. We show that this upper bound on the expected CVloo stability is smallest for the minimum norm interpolating solution (1) when compared to other interpolating solutions to the kernel least squares problem.

We have a dataset $S = \{(\mathbf{x}_i, y_i)\}_{i=1}^n$ and we want to find a mapping $f \in \mathcal{H}$ that minimizes the empirical least squares risk. Here $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS) defined by a positive definite kernel $K: X \times X \to \mathbb{R}$. All interpolating solutions are of the form $\hat{f}_S(\cdot) = \sum_{j=1}^n \hat{c}_S[j] K(\mathbf{x}_j, \cdot)$, where $\hat{\mathbf{c}}_S = \mathbf{K}^\dagger \mathbf{y} + (\mathbf{I} - \mathbf{K}^\dagger \mathbf{K})\mathbf{v}$. Similarly, all interpolating solutions on the leave one out dataset $S^i$ can be written as $\hat{f}_{S^i}(\cdot) = \sum_{j=1, j \neq i}^n \hat{c}_{S^i}[j] K(\mathbf{x}_j, \cdot)$, where $\hat{\mathbf{c}}_{S^i} = \mathbf{K}_{S^i}^\dagger \mathbf{y}_i + (\mathbf{I} - \mathbf{K}_{S^i}^\dagger \mathbf{K}_{S^i})\mathbf{v}_i$. Here $\mathbf{K}, \mathbf{K}_{S^i}$ are the empirical kernel matrices on the original and leave one out datasets respectively. We note that when $\mathbf{v} = 0$ and $\mathbf{v}_i = 0$, we obtain the minimum norm interpolating solutions on the datasets $S$ and $S^i$.

Theorem 7 (Main Theorem) Consider the kernel least squares problem with a bounded kernel and bounded outputs $y$, that is, there exist $\kappa, M > 0$ such that
$$K(\mathbf{x}, \mathbf{x}') \le \kappa^2, \qquad |y| \le M, \tag{5}$$
almost surely. Then for any interpolating solutions $\hat{f}_{S^i}, \hat{f}_S$,
$$\mathbb{E}_S[V(\hat{f}_{S^i}, z_i) - V(\hat{f}_S, z_i)] \le \beta_{CV}(\mathbf{K}^\dagger, \mathbf{y}, \mathbf{v}, \mathbf{v}_i). \tag{6}$$
This bound $\beta_{CV}$ is minimized when $\mathbf{v} = \mathbf{v}_i = 0$, which corresponds to the minimum norm interpolating solutions $f_S^\dagger, f_{S^i}^\dagger$. For the minimum norm solutions we have $\beta_{CV} = C_1 \beta_1 + C_2 \beta_2$, where
$$\beta_1 = \mathbb{E}_S\left[\|\mathbf{K}^{\frac{1}{2}}\|_{op}\|\mathbf{K}^\dagger\|_{op} \times \mathrm{cond}(\mathbf{K}) \times \|\mathbf{y}\|\right], \qquad \beta_2 = \mathbb{E}_S\left[\|\mathbf{K}^{\frac{1}{2}}\|_{op}^2\|\mathbf{K}^\dagger\|_{op}^2 \times (\mathrm{cond}(\mathbf{K}))^2 \times \|\mathbf{y}\|^2\right],$$
and $C_1, C_2$ are absolute constants that do not depend on either $d$ or $n$.

In the above theorem $\|\mathbf{K}\|_{op}$ refers to the operator norm of the kernel matrix $\mathbf{K}$, $\|\mathbf{y}\|$ refers to the standard $\ell_2$ norm for $\mathbf{y} \in \mathbb{R}^n$, and $\mathrm{cond}(\mathbf{K})$ is the condition number of the matrix $\mathbf{K}$. 
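The quantities entering $\beta_1$ and $\beta_2$ are easy to estimate numerically. Below is a minimal sketch (ours, not the authors' code; it assumes an RBF kernel and Gaussian data, and leaves out the absolute constants $C_1$ and $C_2$) that computes them for a single draw of $S$; averaging over draws approximates the expectations:

```python
import numpy as np

def theorem7_quantities(X, y, sigma=5.0, rcond=1e-10):
    """Estimate the matrix quantities in the Theorem 7 bound for one draw
    of S, using an RBF kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2 * sigma ** 2))
    s = np.linalg.svd(K, compute_uv=False)   # K is PSD, so these are its eigenvalues
    s_min = s[s > rcond * s[0]][-1]          # smallest eigenvalue above a numerical-rank cutoff
    K_half_op = np.sqrt(s[0])                # ||K^(1/2)||_op
    K_pinv_op = 1.0 / s_min                  # ||K†||_op
    cond_K = s[0] / s_min                    # cond(K)
    beta1 = K_half_op * K_pinv_op * cond_K * np.linalg.norm(y)
    beta2 = beta1 ** 2                       # for one draw, the beta2 quantity is the square of the beta1 quantity
    return beta1, beta2

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))           # n = 100 samples in d = 20 dimensions
y = rng.standard_normal(100)
print(theorem7_quantities(X, y))             # average over many draws to estimate E_S[.]
```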
We can combine the above result with Lemma 5 to obtain the following bound on excess risk for minimum norm interpolating solutions to the kernel least squares problem:\nCorollary 8 The excess risk of the minimum norm interpolating kernel least squares solution can be bounded as: ES [ I[f†Si ]− inff∈H I[f ] ] ≤ C1β1 + C2β2\nwhere β1, β2 are as defined previously.\nRemark 9 (Underdetermined Linear Regression) In the case of underdetermined linear regression, ie, linear regression where the dimensionality is larger than the number of samples in the training set, we can prove a version of Theorem 7 with β1 = ES [∥∥X†∥∥ op ‖y‖ ] and\nβ2 = ES [∥∥X†∥∥2 op ‖y‖2 ] . Due to space constraints, we present the proof of the results in the linear regression case in Appendix B." }, { "heading": "4.1 KEY LEMMAS", "text": "In order to prove Theorem 7 we make use of the following lemmas to bound the CVloo stability using the norms and the difference of the solutions.\nLemma 10 Under assumption (5), for all i = 1. . . . , n, it holds that ES [V (f̂Si , zi)− V (f̂S , zi)] ≤ ES [( 2M + κ (∥∥∥f̂S∥∥∥ H + ∥∥∥f̂Si∥∥∥H))× κ∥∥∥f̂S − f̂Si∥∥∥H]\nProof We begin, recalling that the square loss is locally Lipschitz, that is for all y, a, a′ ∈ R, with |(y − a)2 − (y − a′)2| ≤ (2|y|+ |a|+ |a′|))|a− a′|.\nIf we apply this result to f, f ′ in a RKHSH, |(y − f(x))2 − (y − f ′(x))2| ≤ κ(2M + κ (‖f‖H + ‖f ′‖H)) ‖f − f ′‖H .\nusing the basic properties of a RKHS that for all f ∈ H |f(x)| ≤ ‖f‖∞ = supx|f(x)| = supx|〈f,Kx〉H| ≤ κ ‖f‖H (7)\nIn particular, we can plug f̂Si and f̂S into the above inequality, and the almost positivity of ERM (Mukherjee et al., 2006) will allow us to drop the absolute value on the left hand side. Finally the desired result follows by taking the expectation over S.\nNow that we have bounded the CVloo stability using the norms and the difference of the solutions, we can find a bound on the difference between the solutions to the kernel least squares problem. This is our main stability estimate.\nLemma 11 Let f̂S , f̂Si be any interpolating kernel least squares solutions on the full and leave one out datasets (as defined at the top of this section), then ∥∥∥f̂S − f̂Si∥∥∥H ≤ BCV (K†,y,v,vi), and BCV is minimized when v = vi = 0, which corresponds to the minimum norm interpolating solutions f†S , f † Si . Also for some absolute constant C,∥∥∥f†S − f†Si∥∥∥H ≤ C × ∥∥∥K 12 ∥∥∥op ∥∥K†∥∥op × cond(K)× ‖y‖ (8) Since the minimum norm interpolating solutions minimize both ∥∥∥f̂S∥∥∥ H + ∥∥∥f̂Si∥∥∥H and ∥∥∥f̂S − f̂Si∥∥∥H (from lemmas 10, 11), we can put them together to prove theorem 7. In the following section we provide the proof of Lemma 11.\nRemark 12 (Zero training loss) In Lemma 10 we use the locally Lipschitz property of the squared loss function to bound the leave one out stability in terms of the difference between the norms of the solutions. Under interpolating conditions, if we set the term V (f̂S , zi) = 0, the leave one\nout stability reduces to ES [ V (f̂Si , zi)− V (f̂S , zi) ] = ES [ V (f̂Si , zi) ] = ES [(f̂Si(xi)− yi)2] =\nES [(f̂Si(xi)− f̂S(xi))2] = ES [〈f̂Si(·)− f̂S(·),Kxi(·)〉2] ≤ ES [ ||f̂S − f̂Si ||2H × κ2 ] . We can plug\nin the bound from Lemma 11 to obtain similar qualitative and quantitative (up to constant factors) results as in Theorem 7.\nSimulation: In order to illustrate that the minimum norm interpolating solution is the best performing interpolating solution we ran a simple experiment on a linear regression problem. 
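Below is a minimal sketch of an experiment of this kind (our reconstruction of the setup described next, with $d = 1000$, $n = 200$ and 50 held-out samples; the noiseless targets and the grid of $\|\mathbf{v}\|$ values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test = 1000, 200, 50
w_true = rng.standard_normal(d)
X = rng.standard_normal((d, n))              # training inputs, columns are samples
X_test = rng.standard_normal((d, n_test))
y, y_test = w_true @ X, w_true @ X_test      # noiseless linear targets (an assumption)

X_pinv = np.linalg.pinv(X)                   # X† ∈ R^{n×d}
P_null = np.eye(d) - X @ X_pinv              # projector onto the null space of X^T (symmetric)
for scale in [0.0, 1.0, 10.0, 100.0]:
    v = scale * rng.standard_normal(d)
    w_hat = X_pinv.T @ y + P_null @ v        # ŵ^T = y^T X† + v^T (I - X X†)
    train_mse = np.mean((w_hat @ X - y) ** 2)          # zero for every interpolant
    test_mse = np.mean((w_hat @ X_test - y_test) ** 2) # grows with ||v||
    print(f"||v|| = {np.linalg.norm(v):9.2f}  train MSE = {train_mse:.1e}  test MSE = {test_mse:.2f}")
```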
We synthetically generated data from a linear model y = w>X, where X ∈ Rd×n was i.i.d N (0, 1). The dimension of the data was d = 1000 and there were n = 200 samples in the training dataset. A held out dataset of 50 samples was used to compute the test mean squared error (MSE). Interpolating solutions were computed as ŵ> = y>X†+v>(I−XX†) and the norm of v was varied to obtain the plot. The results are shown in Figure 1, where we can see that the training loss is 0 for all interpolants, but test MSE increases as ||v|| increases, with (w†)> = y>X† having the best performance. The figure reports results averaged over 100 trials." }, { "heading": "4.2 PROOF OF LEMMA 11", "text": "We can write any interpolating solution to the kernel regression problem as f̂S(x) =∑n i=1 ĉS [i]K(xi,x) where ĉS = K\n†y + (I − K†K)v, and K ∈ Rn×n is the kernel matrix K on S and v is any vector in Rn. i.e. Kij = K(xi,xj), and y ∈ Rn is the vector y = [y1 . . . yn]>. Similarly, the coefficient vector for the corresponding interpolating solution to the problem over the leave one out dataset Si is ĉSi = (KSi)\n†yi + (I− (KSi)†KSi)vi. Where yi = [y1, . . . , 0, . . . yn]> and KSi is the kernel matrix K with the i\nth row and column set to zero, which is the kernel matrix for the leave one out training set.\nWe define a = [−K(x1,xi), . . . ,−K(xn,xi)]> ∈ Rn and b ∈ Rn as a one-hot column vector with all zeros apart from the ith component which is 1. Let a∗ = a +K(xi,xi)b. Then, we have:\nK∗ = K + ba > ∗\nKSi = K∗ + ab > (9)\nThat is, we can write KSi as a rank-2 update to K. This can be verified by simple algebra, and using the fact that K is a symmetric kernel. Now we are interested in bounding ||f̂S − f̂Si ||H. For a function h(·) = ∑m i=1 piK(xi, ·) ∈ H we have ||h||H = √ p>Kp = ||K 12p||. So we have:\n||f̂S − f̂Si ||H = ||K 1 2 (ĉS − ĉSi)||\n= ||K 12 (K†y + (I−K†K)v − (KSi)†yi − (I− (KSi)†KSi)vi)||\n= ||K 12 (K†y − (KSi)†y + yi(KSi)†b + (I−K†K)(v − vi) + (K†K− (KSi)†KSi)vi)||\n= ||K 12 [(K† − (KSi)†)y + (I−K†K)(v − vi)− (K†K− (KSi)†KSi)vi]|| (10)\nHere we make use of the fact that (KSi) †b = 0. If K has full rank (as in Remark 2), we see that b lies in the column space of K and a∗ lies in the column space of K>. Furthermore, β∗ = 1 +a>∗K †b = 1 +a>K†b+K(xi,xi)b >K†b = Kii(K †)ii 6= 0. Using equation 2.2 of Baksalary\net al. (2003) we obtain:\nK†∗ = K † − (Kii(K†)ii)−1K†ba>∗K†\n= K† − (Kii(K†)ii)−1K†ba>K† − ((K†)ii)−1K†bb>K†\n= K† + (Kii(K †)ii) −1K†bb> − ((K†)ii)−1K†bb>K† (11)\nHere we make use of the fact that a>K† = −b. Also, using the corresponding formula from List 2 of Baksalary et al. (2003), we have K†∗K∗ = K†K.\nNext, we see that since K∗ has the same rank as K, a lies in the column space of K∗, and b lies in the column space of K>∗ . Furthermore β = 1 + b\n>K∗a = 0. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al. (2003)) to obtain the expression for (KSi) †, with k = K†∗a and h = b>K † ∗.\n(KSi) † = K†∗ − kk†K†∗ −K†∗h†h + (k†K†∗h†)kh\n=⇒ (KSi)† −K†∗ = (k†K†∗h†)kh− kk†K†∗ −K†∗h†h =⇒ ||(KSi)† −K†∗||op ≤ 3||K†∗||op\n(12)\nAbove, we use the fact that the operator norm of a rank 1 matrix is given by ||uv>||op = ||u|| × ||v||. Also, using the corresponding formula from List 2 of Baksalary et al. 
(2003), we have:
$$(\mathbf{K}_{S^i})^\dagger \mathbf{K}_{S^i} = \mathbf{K}_*^\dagger \mathbf{K}_* - \mathbf{k}\mathbf{k}^\dagger \implies \mathbf{K}^\dagger \mathbf{K} - (\mathbf{K}_{S^i})^\dagger \mathbf{K}_{S^i} = \mathbf{k}\mathbf{k}^\dagger \tag{13}$$
Putting the two parts together we obtain the bound on $\|(\mathbf{K}_{S^i})^\dagger - \mathbf{K}^\dagger\|_{op}$:
$$\begin{aligned} \|\mathbf{K}^\dagger - (\mathbf{K}_{S^i})^\dagger\|_{op} &= \|\mathbf{K}^\dagger - \mathbf{K}_*^\dagger + \mathbf{K}_*^\dagger - (\mathbf{K}_{S^i})^\dagger\|_{op} \\ &\le 3\|\mathbf{K}_*^\dagger\|_{op} + \|\mathbf{K}^\dagger - \mathbf{K}_*^\dagger\|_{op} \\ &\le 3\|\mathbf{K}^\dagger\|_{op} + 4(K_{ii}(\mathbf{K}^\dagger)_{ii})^{-1}\|\mathbf{K}^\dagger\|_{op} + 4((\mathbf{K}^\dagger)_{ii})^{-1}\|\mathbf{K}^\dagger\|_{op}^2 \\ &\le \|\mathbf{K}^\dagger\|_{op}(3 + 8\|\mathbf{K}^\dagger\|_{op}\|\mathbf{K}\|_{op}) \end{aligned} \tag{14}$$
The last step follows from $(K_{ii})^{-1} \le \|\mathbf{K}^\dagger\|_{op}$ and $((\mathbf{K}^\dagger)_{ii})^{-1} \le \|\mathbf{K}\|_{op}$. Plugging these calculations into equation 10 we get:
$$\begin{aligned} \|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}} &= \|\mathbf{K}^{\frac{1}{2}}[(\mathbf{K}^\dagger - (\mathbf{K}_{S^i})^\dagger)\mathbf{y} + (\mathbf{I} - \mathbf{K}^\dagger\mathbf{K})(\mathbf{v} - \mathbf{v}_i) - (\mathbf{K}^\dagger\mathbf{K} - (\mathbf{K}_{S^i})^\dagger\mathbf{K}_{S^i})\mathbf{v}_i]\| \\ &\le \|\mathbf{K}^{\frac{1}{2}}\|_{op}\left(\|(\mathbf{K}^\dagger - (\mathbf{K}_{S^i})^\dagger)\mathbf{y}\| + \|(\mathbf{I} - \mathbf{K}^\dagger\mathbf{K})(\mathbf{v} - \mathbf{v}_i)\| + \|\mathbf{k}\mathbf{k}^\dagger\mathbf{v}_i\|\right) \\ &\le \|\mathbf{K}^{\frac{1}{2}}\|_{op}(B_0 + \|\mathbf{I} - \mathbf{K}^\dagger\mathbf{K}\|_{op}\|\mathbf{v} - \mathbf{v}_i\| + \|\mathbf{v}_i\|) \end{aligned} \tag{15}$$
We see that the right hand side is minimized when $\mathbf{v} = \mathbf{v}_i = 0$. We have also computed $B_0 = C \times \|\mathbf{K}^\dagger\|_{op} \times \mathrm{cond}(\mathbf{K}) \times \|\mathbf{y}\|$, which concludes the proof of Lemma 11." }, { "heading": "5 REMARK AND RELATED WORK", "text": "In the previous section we obtained bounds on the CVloo stability of interpolating solutions to the kernel least squares problem. Our kernel least squares results can be compared with stability bounds for regularized ERM (see Remark 3). Regularized ERM has a strong stability guarantee in terms of a uniform stability bound, which turns out to be inversely proportional to the regularization parameter $\lambda$ and the sample size $n$ (Bousquet & Elisseeff, 2001). However, this estimate becomes vacuous as $\lambda \to 0$. In this paper, we establish a bound on average stability, and show that this bound is minimized when the minimum norm ERM solution is chosen. We study average stability since one can expect worst case scenarios where the minimum norm is arbitrarily large (when $n \approx d$). One of our key findings is the relationship between minimizing the norm of the ERM solution and minimizing a bound on stability.

[Figure 2: Typical double descent of the condition number (y axis) of a radial basis function kernel $K(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right)$ built from a random data matrix distributed as $\mathcal{N}(0, 1)$: as in the linear case, the condition number is worse when $n = d$, better if $n > d$ (on the right of $n = d$) and also better if $n < d$ (on the left of $n = d$). The parameter $\sigma$ was chosen to be 5. From Poggio et al. (2019).]

This leads to a second observation, namely, that we can consider the limit of our risk bounds as both the sample size ($n$) and the dimensionality of the data ($d$) go to infinity, with the ratio $d/n \to \gamma > 1$ as $n, d \to \infty$. This is a classical setting in statistics which allows us to use results from random matrix theory (Marchenko & Pastur, 1967). In particular, for linear kernels the behavior of the smallest eigenvalue of the kernel matrix (which appears in our bounds) can be characterized in this asymptotic limit. In fact, under appropriate distributional assumptions, our bound for linear regression can be computed as $(\|X^\dagger\| \times \|y\|)^2 \approx \frac{\sqrt{n}}{\sqrt{d} - \sqrt{n}} \to \frac{1}{\sqrt{\gamma} - 1}$. Here the dimension of the data coincides with the number of parameters in the model. Interestingly, analogous results hold for more general kernels (inner product and RBF kernels) (El Karoui, 2010), where the asymptotics are taken with respect to the number and dimensionality of the data. These results predict a double descent curve for the condition number as found in practice, see Figure 2. 
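The double descent of the condition number is straightforward to reproduce numerically. A minimal sketch (ours; it assumes standard Gaussian data and borrows the bandwidth $\sigma = 5$ from the caption of Figure 2, with a smaller $d$ for speed; the sharpness of the peak near $n = d$ depends on $\sigma$ and $d$):

```python
import numpy as np

def rbf_condition_number(n, d, sigma=5.0, seed=0):
    """Condition number of an RBF kernel matrix built from n standard
    Gaussian points in R^d (the setting of Figure 2)."""
    X = np.random.default_rng(seed).standard_normal((n, d))
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2 * sigma ** 2))
    return np.linalg.cond(K)

d = 30
for n in [10, 20, 28, 30, 32, 40, 60]:
    # average over several draws; cond(K) tends to peak around n = d
    conds = [rbf_condition_number(n, d, seed=s) for s in range(20)]
    print(f"n = {n:3d}, d = {d}: cond(K) ≈ {np.mean(conds):.3g}")
```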
While it may seem that our bounds in Theorem 7 diverge if d is held constant and n→∞, this case is not covered by our theorem, since when n > d we no longer have interpolating solutions.\nRecently, there has been a surge of interest in studying linear and kernel least squares models, since classical results focus on situations where constraints or penalties that prevent interpolation are added to the empirical risk. For example, high dimensional linear regression is considered in Mei & Montanari (2019); Hastie et al. (2019); Bartlett et al. (2019), and “ridgeless” kernel least squares is studied in Liang et al. (2019); Rakhlin & Zhai (2018) and Liang et al. (2020). While these papers study upper and lower bounds on the risk of interpolating solutions to the linear and kernel least squares problem, ours are the first to derived using stability arguments. While it might be possible to obtain tighter excess risk bounds through careful analysis of the minimum norm interpolant, our simple approach helps us establish a link between stability in statistical and in numerical sense.\nFinally, we can compare our results with observations made in Poggio et al. (2019) on the condition number of random kernel matrices. The condition number of the empirical kernel matrix is known to control the numerical stability of the solution to a kernel least squares problem. Our results show that the statistical stability is also controlled by the condition number of the kernel matrix, providing a natural link between numerical and statistical stability." }, { "heading": "6 CONCLUSIONS", "text": "In summary, minimizing a bound on cross validation stability minimizes the expected error in both the classical and the modern regime of ERM. In the classical regime (d < n), CVloo stability implies generalization and consistency for n→∞. In the modern regime (d > n), as described in this paper, CVloo stability can account for the double descent curve in kernel interpolants (Belkin et al., 2019) under appropriate distributional assumptions. The main contribution of this paper is characterizing stability of interpolating solutions, in particular deriving excess risk bounds via a stability argument. In the process, we show that among all the interpolating solutions, the one with minimum norm also minimizes a bound on stability. Since the excess risk bounds of the minimum norm interpolant depend on the pseudoinverse of the kernel matrix, we establish here an elegant link between numerical and statistical stability. This also holds for solutions computed by gradient descent, since gradient descent converges to minimum norm solutions in the case of “linear” kernel methods. Our approach is simple and combines basic stability results with matrix inequalities." }, { "heading": "A EXCESS RISK, GENERALIZATION, AND STABILITY", "text": "We use the same notation as introduced in Section 2 for the various quantities considered in this section. That is in the supervised learning setup V (f, z) is the loss incurred by hypothesis f on the sample z, and I[f ] = Ez[V (f, z)] is the expected error of hypothesis f . Since we are interested in different forms of stability, we will consider learning problems over the original training set S = {z1, z2, . . . , zn}, the leave one out training set Si = {z1, . . . , zi−1, zi+1, . . . , zn}, and the replace one training set (Si, z) = {z1, . . . , zi−1, zi+1, . . . 
, zn, z}\nA.1 REPLACE ONE AND LEAVE ONE OUT ALGORITHMIC STABILITY\nSimilar to the definition of expected CVloo stability in equation (3) of the main paper, we say an algorithm is cross validation replace one stable (in expectation), denoted as CVro, if there exists βro > 0 such that\nES,z[V (fS , z)− V (f(Si,z), z)] ≤ βro.\nWe can strengthen the above stability definition by introducing the notion of replace one algorithmic stability (in expectation) Bousquet & Elisseeff (2001). There exists αro > such that for all i = 1, . . . , n,\nES,z[ ∥∥fS − f(Si,z)∥∥∞] ≤ αro.\nWe make two observations: First, if the loss is Lipschitz, that is if there exists CV > 0 such that for all f, f ′ ∈ H\n‖V (f, z)− V (f ′, z)‖ ≤ CV ‖f − f ′‖ ,\nthen replace one algorithmic stability implies CVro stability with βro = CV αro. Moreover, the same result holds if the loss is locally Lipschitz and there exists R > 0, such that ‖fS‖∞ ≤ R almost surely. In this latter case the Lipschitz constant will depend on R. Later, we illustrate this situation for the square loss.\nSecond, we have for all i = 1, . . . , n, S and z, ES,z[ ∥∥fS − f(Si,z)∥∥∞] ≤ ES,z[‖fS − fSi‖∞] + ES,z[∥∥f(Si,z) − fSi∥∥∞].\nThis observation motivates the notion of leave one out algorithmic stability (in expectation) Bousquet & Elisseeff (2001)]\nES,z[‖fS − fSi‖∞] ≤ αloo.\nClearly, leave one out algorithmic stability implies replace one algorithmic stability with αro = 2αloo and it implies also CVro stability with βro = 2CV αloo.\nA.2 EXCESS RISK AND CVloo, CVro STABILITY\nWe recall the statement of Lemma 5 in section 3 that bounds the excess risk using the CVloo stability of a solution.\nLemma 13 (Excess Risk & CVloo Stability) For all i = 1, . . . , n,\nES [I[fSi ]− inf f∈H I[f ]] ≤ ES [V (fSi , zi)− V (fS , zi)]. (16)\nIn this section, two properties of ERM are useful, namely symmetry, and a form of unbiasedeness.\nSymmetry. A key property of ERM is that it is symmetric with respect to the data set S, meaning that it does not depend on the order of the data in S.\nA second property relates the expected ERM with the minimum of expected risk.\nERM Bias. The following inequality holds.\nE[[IS [fS ]]−min f∈H I[f ] ≤ 0. (17)\nTo see this, note that IS [fS ] ≤ IS [f ]\nfor all f ∈ H by definition of ERM, so that taking the expectation of both sides ES [IS [fS ]] ≤ ES [IS [f ]] = I[f ]\nfor all f ∈ H. This implies ES [IS [fS ]] ≤ min\nf∈H I[f ]\nand hence (17) holds.\nRemark 14 Note that the same argument gives more generally that\nE[ inf f∈H [IS [f ]]− inf f∈H I[f ] ≤ 0. (18)\nGiven the above premise, the proof of Lemma 5 is simple.\nProof [of Lemma 5] Adding and subtracting ES [IS [fS ]] from the expected excess risk we have that ES [I[fSi ]−min\nf∈H I[f ]] = ES [I[fSi ]− IS [fS ] + IS [fS ]−min f∈H I[f ]], (19)\nand since ES [IS [fS ]]−minf∈H I[f ]] is less or equal than zero, see (18), then ES [I[fSi ]−min\nf∈H I[f ]] ≤ ES [I[fSi ]− IS [fS ]]. (20)\nMoreover, for all i = 1, . . . , n\nES [I[fSi ]] = ES [EziV (fSi , zi)] = ES [V (fSi , zi)] and\nES [IS [fS ]] = 1\nn n∑ i=1 ES [V (fS , zi)] = ES [V (fS , zi)].\nPlugging these last two expressions in (20) and in (19) leads to (4).\nWe can prove a similar result relating excess risk with CVro stability.\nLemma 15 (Excess Risk & CVro Stability) Given the above definitions, the following inequality holds for all i = 1, . . . , n,\nES [I[fS ]− inf f∈H I[f ]] ≤ ES [I[fS ]− IS [fS ]] = ES,z[V (fS , z)− V (f(Si,z), z)]. 
(21)\nProof The first inequality is clear from adding and subtracting IS [fS ] from the expected risk I[fS ] we have that\nES [I[fS ]−min f∈H I[f ]] = ES [I[fS ]− IS [fS ] + IS [fS ]−min f∈H I[f ]],\nand recalling (18). The main step in the proof is showing that for all i = 1, . . . , n,\nE[IS [fS ]] = E[V (f(Si,z), z)] (22)\nto be compared with the trivial equality, E[IS [fS ] = E[V (fS , zi)]. To prove Equation (22), we have for all i = 1, . . . , n,\nES [IS [fS ]] = ES,z[ 1\nn n∑ i=1 V (fS , zi)] = 1 n n∑ i=1 ES,z[V (f(Si,z), z)] = ES,z[V (f(Si,z), z)]\nwhere we used the fact that by the symmetry of the algorithm ES,z[V (f(Si,z), z)] is the same for all i = 1, . . . , n. The proof is concluded noting that ES [I[fS ]] = ES,z[V (fS , z)].\nA.3 DISCUSSION ON STABILITY AND GENERALIZATION\nBelow we discuss some more aspects of stability and its connection to other quantities in statistical learning theory.\nRemark 16 (CVloo stability in expectation and in probability) In Mukherjee et al. (2006), CVloo stability is defined in probability, that is there exists βPCV > 0, 0 < δ P CV ≤ 1 such that\nPS{|V (fSi , zi)− V (fS , zi)| ≥ βPCV } ≤ δPCV .\nNote that the absolute value is not needed for ERM since almost positivity holds Mukherjee et al. (2006), that is V (fSi , zi)− V (fS , zi) > 0. Then CVloo stability in probability and in expectation are clearly related and indeed equivalent for bounded loss functions. CVloo stability in expectation (3) is what we study in the following sections.\nRemark 17 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists βu > 0 such that for all i = 1, . . . , n, supz∈Z |V (fSi , z) − V (fS , z)| ≤ βu. Clearly this is a strong notion implying most other definitions of stability. We note that there are number of different notions of stability. We refer the interested reader to Kutin & Niyogi (2002) , Mukherjee et al. (2006).\nRemark 18 (CVloo Stability & Learnability) A natural question is to which extent suitable notions of stability are not only sufficient but also necessary for controlling the excess risk of ERM. Classically, the latter is characterized in terms of a uniform version of the law of large numbers, which itself can be characterized in terms of suitable complexity measures of the hypothesis class. Uniform stability is too strong to characterize consistency while CVloo stability turns out to provide a suitably weak definition as shown in Mukherjee et al. (2006), see also Kutin & Niyogi (2002), Mukherjee et al. (2006). Indeed, a main result in Mukherjee et al. (2006) shows that CVloo stability is equivalent to consistency of ERM:\nTheorem 19 Mukherjee et al. (2006) For ERM and bounded loss functions, CVloo stability in probability with βPCV converging to zero for n→∞ is equivalent to consistency and generalization of ERM.\nRemark 20 (CVloo stability & in-sample/out-of-sample error) Let (S, z) = {z1, . . . , zn, z}, (z is a data point drawn according to the same distribution) and the corresponding ERM solution f(S,z), then (4) can be equivalently written as,\nES [I[fS ]− inf f∈F I[f ]] ≤ ES,z[V (fS , z)− V (f(S,z), z)].\nThus CVloo stability measures how much the loss changes when we test on a point that is present in the training set and absent from it. 
In this view, it can be seen as an average measure of the difference between in-sample and out-of-sample error.\nRemark 21 (CVloo stability and generalization) A common error measure is the (expected) generalization gap ES [I[fS ]−IS [fS ]]. For non-ERM algorithms, CVloo stability by itself not sufficient to control this term, and further conditions are needed Mukherjee et al. (2006), since\nES [I[fS ]− IS [fS ]] = ES [I[fS ]− IS [fSi ]] + ES [IS [fSi ]− IS [fS ]].\nThe second term becomes for all i = 1, . . . , n,\nES [IS [fSi ]− IS [fS ]] = 1\nn n∑ i=1 ES [V (fSi , zi)− V (fS , zi)] = ES [V (fSi , zi)− V (fS , zi)]\nand hence is controlled by CV stability. The first term is called expected leave one out error in Mukherjee et al. (2006) and is controlled in ERM as n→∞, see Theorem 19 above.\nB CVloo STABILITY OF LINEAR REGRESSION\nWe have a dataset S = {(xi, yi)}ni=1 and we want to find a mapping w ∈ R d, that minimizes the empirical least squares risk. All interpolating solutions are of the form ŵS = y>X†+v>(I−XX†). Similarly, all interpolating solutions on the leave one out dataset Si can be written as ŵSi = y>i (Xi) † + v>i (I−Xi(Xi)†). Here X,Xi ∈ R d×n are the data matrices for the original and leave one out datasets respectively. We note that when v = 0 and vi = 0, we obtain the minimum norm interpolating solutions on the datasets S and Si.\nIn this section we want to estimate the CVloo stability of the minimum norm solution to the ERM problem in the linear regression case. This is the case outlined in Remark 9 of the main paper. In order to prove Remark 9, we only need to combine Lemma 10 with the linear regression analogue of Lemma 11. We state and prove that result in this section. This result predicts a double descent curve for the norm of the pseudoinverse as found in practice, see Figure 3.\nLemma 22 Let ŵS , ŵSi be any interpolating least squares solutions on the full and leave one out datasets S, Si, then ‖ŵS − ŵSi‖ ≤ BCV (X†,y,v,vi), and BCV is minimized when v = vi = 0, which corresponds to the minimum norm interpolating solutions w†S ,w † Si . Also, ∥∥∥w†S −w†Si∥∥∥ ≤ 3∥∥X†∥∥op × ‖y‖ (23) As mentioned before in section 2.1 of the main paper, linear regression can be viewed as a case of the kernel regression problem whereH = Rd, and the feature map Φ is the identity map. The inner product and norms considered in this case are also the usual Euclidean inner product and 2-norm for vectors in Rd. The notation ‖·‖ denotes the Euclidean norm for vectors both in Rd and Rn. The usage of the norm should be clear from the context. Also, ‖A‖op is the left operator norm for a matrix A ∈ Rn×d, that is ‖A‖op = supy∈Rn,||y||=1 ||y>A||.\nWe have n samples in the training set for a linear regression problem, {(xi, yi)}ni=1. We collect all the samples into a single matrix/vector X = [x1x2 . . .xn] ∈ Rd×n, and y = [y1y2 . . . yn]> ∈ Rn. Then any interpolating ERM solution wS satisfies the linear equation\nw>SX = y > (24)\nAny interpolating solution can be written as:\n(ŵS) > = y>X† + v>(I−XX†). (25)\nIf we consider the leave one out training set Si we can find the minimum norm ERM solution for Xi = [x1 . . .0 . . .xn] and yi = [y1 . . . 0 . . . yn]> as\n(ŵSi) > = y>i (Xi) † + v>i (I−Xi(Xi)†). (26) We can write Xi as:\nXi = X + ab > (27)\nwhere a ∈ Rd is a column vector representing the additive change to the ith column, i.e, a = −xi, and b ∈ Rn×1 is the i−th element of the canonical basis in Rn (all the coefficients are zero but the i−th which is one). 
Thus ab> is a d × n matrix composed of all zeros apart from the ith column which is equal to a.\nWe also have yi = y − yib. Now per Lemma 10 we are interested in bounding the quantity ||ŵSi − ŵS || = ||(ŵSi)> − (ŵS)>||. This simplifies to:\n||ŵSi − ŵS || = ||y>i (Xi)† − y>X† + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||(y> − yib>)(Xi)† − y>X† + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||y>((Xi)† −X†) + yib>(Xi)† + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||y>((Xi)† −X†) + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||y>((Xi)† −X†) + (v>i − v>)(I−XX†)− v>i (XX† −Xi(Xi)†)|| (28)\nIn the above equation we make use of the fact that b>(Xi)† = 0. We use an old formula (Meyer, 1973; Baksalary et al., 2003) to compute (Xi)† from X†. We use the development of pseudo-inverses of perturbed matrices in Meyer (1973). We see that a = −xi is a vector in the column space of X and b is in the range space of XT (provided X has full column rank), with β = 1 + b>X†a = 1− b>X†xi = 0. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al. (2003)) to obtain the expression for (Xi)†\n(Xi) † = X† − kk†X† −X†h†h + (k†X†h†)kh (29)\nwhere k = X†a, and h = b>X†, and u† = u >\n||u||2 for any non-zero vector u.\n(Xi) † −X† = (k†X†h†)kh− kk†X† −X†h†h\n= a>(X†)>X†(X†)>b× kh ||k||2||h||2 − kk†X† −X†h†h\n=⇒ ||(Xi)† −X†||op ≤ |a>(X†)>X†(X†)>b| ||X†a||||b>X†|| + 2||X†||op\n≤ ||X †||op||X†a||||b>X†|| ||X†a||||b>X†|| + 2||X†||op\n= 3||X†||op\n(30)\nThe above set of inequalities follows from the fact that the operator norm of a rank 1 matrix is given by ||uv>||op = ||u|| × ||v||\nAlso, from List 2 of Baksalary et al. (2003) we have that Xi(Xi)† = XX† − h†h. Plugging in these calculations into equation 28 we get:\n||ŵSi − ŵS || = ||y>((Xi)† −X†) + (v>i − v>)(I−XX†)− v>i (XX† −Xi(Xi)†)|| ≤ B0 + ||I−XX†||op||v − vi||+ ||vi|| × ||h†h||op ≤ B0 + 2||v − vi||+ ||vi|| (31)\nWe see that the right hand side is minimized when v = vi = 0. We can also compute B0 = 3||X†||op||y||, which concludes the proof of Lemma 22." } ]
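The rank-one pseudoinverse update at the heart of this proof can be checked numerically. The following sketch (ours) verifies the formula from Theorem 6 of Meyer (1973) in the setting above, where $\mathbf{a} = -\mathbf{x}_i$ and $\mathbf{b} = \mathbf{e}_i$ zero out the $i$-th column of a full-column-rank $\mathbf{X}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, i = 8, 5, 2
X = rng.standard_normal((d, n))            # full column rank with probability 1
a = -X[:, i]                               # a = -x_i
b = np.zeros(n); b[i] = 1.0                # b = e_i
X_i = X + np.outer(a, b)                   # X with its i-th column set to zero

Xp = np.linalg.pinv(X)
k = Xp @ a                                 # k = X† a
h = b @ Xp                                 # h = b^T X†
dagger = lambda u: u / (u @ u)             # u† = u^T / ||u||^2 for a nonzero vector
# Theorem 6 of Meyer (1973): (X_i)† = X† - k k† X† - X† h† h + (k† X† h†) k h
Xi_pinv = (Xp
           - np.outer(k, dagger(k) @ Xp)
           - np.outer(Xp @ dagger(h), h)
           + (dagger(k) @ Xp @ dagger(h)) * np.outer(k, h))
print(np.allclose(Xi_pinv, np.linalg.pinv(X_i)))  # True
```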
2020
null
SP:b80bc890180934092cde037b49d94d6e4e06fad9
[ "This paper presents a novel way of making full use of compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. Authors gave insightful analysis on the influence of gradient diversity to the performance of continual learning, and proposed a regularization that connects metric learning and continual learning. However, there are still some issues to be addressed as below." ]
The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting. In recent studies, several gradient-based approaches have been developed to make more efficient use of compact episodic memories, which constrain the gradients resulting from new samples with those from memorized samples, aiming to reduce the diversity of gradients from different tasks. In this paper, we reveal the relation between diversity of gradients and discriminativeness of representations, demonstrating connections between Deep Metric Learning and continual learning. Based on these findings, we propose a simple yet highly efficient method, Discriminative Representation Loss (DRL), for continual learning. In comparison with several state-of-the-art methods, DRL shows effectiveness with low computational cost on multiple benchmark experiments in the setting of online continual learning.
[]
[ { "authors": [ "Rahaf Aljundi", "Min Lin", "Baptiste Goujaud", "Yoshua Bengio" ], "title": "Gradient based sample selection for online continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Puneet K Dokania", "Thalaiyasingam Ajanthan", "Philip HS Torr" ], "title": "Riemannian walk for incremental learning: Understanding forgetting and intransigence", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-GEM", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Marcus Rohrbach", "Mohamed Elhoseiny", "Thalaiyasingam Ajanthan", "Puneet K Dokania", "Philip HS Torr", "Marc’Aurelio Ranzato" ], "title": "On tiny episodic memories in continual learning", "venue": "arXiv preprint arXiv:1902.10486,", "year": 2019 }, { "authors": [ "Yu Chen", "Tom Diethe", "Neil Lawrence" ], "title": "Facilitating bayesian continual learning by natural gradients and stein gradients", "venue": "Continual Learning Workshop of 32nd Conference on Neural Information Processing Systems (NeurIPS", "year": 2018 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Niannan Xue", "Stefanos Zafeiriou" ], "title": "Arcface: Additive angular margin loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tom Diethe", "Tom Borchert", "Eno Thereska", "Borja de Balle Pigem", "Neil Lawrence" ], "title": "Continual learning in practice", "venue": "In Continual Learning Workshop of 32nd Converence on Neural Information Processing Systems (NeurIPS", "year": 2018 }, { "authors": [ "Mehrdad Farajtabar", "Navid Azizan", "Alex Mott", "Ang Li" ], "title": "Orthogonal gradient descent for continual learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Ching-Yi Hung", "Cheng-Hao Tu", "Cheng-En Wu", "Chien-Hung Chen", "Yi-Ming Chan", "Chu-Song Chen" ], "title": "Compacting, picking and growing for unforgetting continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mahmut Kaya", "Hasan Şakir Bilge" ], "title": "Deep metric learning: A survey. Symmetry", "venue": null, "year": 2019 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Ya Le", "Xuan Yang" ], "title": "Tiny imagenet visual recognition challenge", "venue": "CS 231N,", "year": 2015 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "Christopher JC Burges" ], "title": "MNIST handwritten digit database", "venue": "AT&T Labs [Online]. Available: http://yann. lecun. 
com/exdb/mnist,", "year": 2010 }, { "authors": [ "Timothée Lesort", "Vincenzo Lomonaco", "Andrei Stoian", "Davide Maltoni", "David Filliat", "Natalia Dı́az-Rodrı́guez" ], "title": "Continual learning for robotics", "venue": "arXiv preprint arXiv:1907.00182,", "year": 2019 }, { "authors": [ "Jinlong Liu", "Yunzhi Bai", "Guoqing Jiang", "Ting Chen", "Huayan Wang" ], "title": "Understanding why neural networks generalize well through GSNR of parameters", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Sebastian Mika", "Gunnar Ratsch", "Jason Weston", "Bernhard Scholkopf", "Klaus-Robert" ], "title": "Mullers. Fisher discriminant analysis with kernels. In Neural networks for signal processing", "venue": "IX: Proceedings of the 1999 IEEE signal processing society workshop (cat. no", "year": 1999 }, { "authors": [ "Cuong V Nguyen", "Yingzhen Li", "Thang D Bui", "Richard E Turner" ], "title": "Variational continual learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matthew Riemer", "Ignacio Cases", "Robert Ajemian", "Miao Liu", "Irina Rish", "Yuhai Tu", "Gerald Tesauro" ], "title": "Learning to learn without forgetting by maximizing transfer and minimizing interference", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy Lillicrap", "Gregory Wayne" ], "title": "Experience replay for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Karsten Roth", "Timo Milbich", "Samarth Sinha", "Prateek Gupta", "Bjoern Ommer", "Joseph Paul Cohen" ], "title": "Revisiting training strategies and generalization performance in deep metric learning", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "arXiv preprint arXiv:1805.06370,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Huangshi Tian", "Minchen Yu", "Wei Wang" ], "title": "Continuum: A platform for cost-aware, low-latency continual learning", "venue": "In Proceedings of the ACM Symposium on Cloud Computing,", "year": 2018 }, { "authors": [ "Jian Wang", "Feng Zhou", "Shilei Wen", "Xiao Liu", "Yuanqing Lin" ], "title": "Deep metric learning with angular loss", "venue": "In Proceedings of the IEEE International 
Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Xun Wang", "Xintong Han", "Weilin Huang", "Dengke Dong", "Matthew R Scott" ], "title": "Multi-similarity loss with general pair weighting for deep metric learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kilian Q Weinberger", "John Blitzer", "Lawrence K Saul" ], "title": "Distance metric learning for large margin nearest neighbor classification", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Chao-Yuan Wu", "R Manmatha", "Alexander J Smola", "Philipp Krahenbuhl" ], "title": "Sampling matters in deep embedding learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In International Conference on Machine Learning,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the real world, we are often faced with situations where data distributions are changing over time, and we would like to update our models by new data in time, with bounded growth in system size. These situations fall under the umbrella of “continual learning”, which has many practical applications, such as recommender systems, retail supply chain optimization, and robotics (Lesort et al., 2019; Diethe et al., 2018; Tian et al., 2018). Comparisons have also been made with the way that humans are able to learn new tasks without forgetting previously learned ones, using common knowledge shared across different skills. The fundamental problem in continual learning is catastrophic forgetting (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017), i.e. (neural network) models have a tendency to forget previously learned tasks while learning new ones.\nThere are three main categories of methods for alleviating forgetting in continual learning: i) regularization-based methods which aim in preserving knowledge of models of previous tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018) ii) architecture-based methods for incrementally evolving the model by learning task-shared and task-specific components (Schwarz et al., 2018; Hung et al., 2019); iii) replay-based methods which focus in preserving knowledge of data distributions of previous tasks, including methods of experience replay by episodic memories or generative models (Shin et al., 2017; Rolnick et al., 2019), methods for generating compact episodic memories (Chen et al., 2018; Aljundi et al., 2019), and methods for more efficiently using episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Riemer et al., 2019; Farajtabar et al., 2020).\nGradient-based approaches using episodic memories, in particular, have been receiving increasing attention. The essential idea is to use gradients produced by samples from episodic memories to constrain the gradients produced by new samples, e.g. by ensuring the inner product of the pair of gradients is non-negative (Lopez-Paz & Ranzato, 2017) as follows:\n〈gt, gk〉 = 〈 ∂L(xt, θ)\n∂θ , ∂L(xk, θ) ∂θ\n〉 ≥ 0, ∀k < t (1)\nwhere t and k are time indices, xt denotes a new sample from the current task, and xk denotes a sample from the episodic memory. Thus, the updates of parameters are forced to preserve the performance on previous tasks as much as possible.\nIn Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017), gt is projected to a direction that is closest to it in L2-norm whilst also satisfying Eq. (1): ming̃ 12 ||gt − g̃|| 2 2, s.t.〈g̃, gk〉 ≥ 0, ∀k < t. Optimization of this objective requires a high-dimensional quadratic program and thus is computationally expensive. Averaged-GEM (A-GEM) (Chaudhry et al., 2019a) alleviates the computational burden of GEM by using the averaged gradient over a batch of samples instead of individual gradients of samples in the episodic memory. This not only simplifies the computation, but also obtains comparable performance with GEM. Orthogonal Gradient Descent (OGD) (Farajtabar et al., 2020) projects gt to the direction that is perpendicular to the surface formed by {gk|k < t}. Moreover, Aljundi et al. (2019) propose Gradient-based Sample Selection (GSS), which selects samples that produce most diverse gradients with other samples into episodic memories. Here diversity is measured by the cosine similarity between gradients. 
Although GSS suggests that the samples with the most diverse gradients are important for generalization across tasks, Chaudhry et al. (2019b) show that the average gradient over a small set of random samples may be able to obtain good generalization as well.

In this paper, we answer the following questions: i) Which samples tend to produce diverse gradients that strongly conflict with other samples, and why are such samples able to help with generalization? ii) Why does a small set of randomly chosen samples also help with generalization? iii) Can we reduce the diversity of gradients in a more efficient way? Our answers reveal the relation between the diversity of gradients and the discriminativeness of representations, and further show connections between Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) and continual learning. Drawing on these findings we propose a new approach, Discriminative Representation Loss (DRL), for classification tasks in continual learning. Our methods show improved performance with relatively low computational cost in terms of time and memory when compared to several state-of-the-art (SOTA) methods across multiple benchmark tasks in the setting of online continual learning." }, { "heading": "2 A NEW PERSPECTIVE OF REDUCING DIVERSITY OF GRADIENTS", "text": "According to Eq. (1), negative cosine similarities between gradients produced by current and previous tasks result in worse performance in continual learning. This can be interpreted from the perspective of constrained optimization, as discussed by Aljundi et al. (2019). Moreover, the diversity of gradients relates to the Gradient Signal to Noise Ratio (GSNR) (Liu et al., 2020), which plays a crucial role in the model's generalization ability. Intuitively, when more of the gradients point in diverse directions, the variance will be larger, leading to a smaller GSNR, which indicates that reducing the diversity of gradients can improve generalization. This leads to the conclusion that samples with the most diverse gradients contain the most critical information for generalization, which is consistent with the findings in Aljundi et al. (2019)." }, { "heading": "2.1 THE SOURCE OF GRADIENT DIVERSITY", "text": "We first conducted a simple experiment on classification tasks of 2-D Gaussian distributions, and tried to identify the samples with the most diverse gradients in the 2-D feature space. We trained a linear model on the first task to discriminate between two classes (blue and orange dots in Fig. 1a). We then applied the algorithm Gradient-based Sample Selection with Integer Quadratic Programming (GSS-IQP) (Aljundi et al., 2019) to select 10% of the samples of the training data that produce gradients with the lowest similarity (black dots in Fig. 1a), and denote this set of samples as M̂ = argmin_M ∑_{i,j∈M} 〈gi, gj〉/(||gi|| · ||gj||); a small sketch of this selection objective is given below.

It is clear from Fig. 1a that the samples in M̂ are mostly around the decision boundary between the two classes. Increasing the size of M̂ results in the inclusion of samples that trace the outer edges of the data distributions of each class. Clearly, the gradients can be strongly opposed when samples from different classes are very similar. Samples close to decision boundaries are most likely to exhibit this characteristic.
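For concreteness, here is a small NumPy sketch that scores per-sample gradients by pairwise cosine similarity and greedily picks a diverse subset. This greedy procedure is our own illustrative stand-in for the IQP solver used by GSS-IQP, under the assumption of nonzero, flattened gradients.

```python
import numpy as np

def gradient_cosine_matrix(G: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between per-sample gradients.
    G: (n, p) matrix of flattened, nonzero gradients."""
    Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
    return Gn @ Gn.T

def select_most_diverse(G: np.ndarray, k: int) -> list:
    """Greedy stand-in for GSS-IQP: pick k samples whose gradients have
    the lowest total cosine similarity to the already-selected set."""
    sims = gradient_cosine_matrix(G)
    chosen = [int(np.argmin(sims.sum(axis=1)))]   # globally least-similar seed
    while len(chosen) < k:
        rest = [i for i in range(len(G)) if i not in chosen]
        scores = [sims[i, chosen].sum() for i in rest]
        chosen.append(rest[int(np.argmin(scores))])
    return chosen

# Toy usage on random "gradients".
rng = np.random.default_rng(0)
print(select_most_diverse(rng.normal(size=(20, 6)), k=4))
```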
Intuitively, storing the decision boundaries of previously learned classes should be an effective way to preserve classification performance on those classes. However, if the episodic memory only includes samples representing the learned boundaries, it may miss important information when the model is required to incrementally learn new classes. We show this by introducing a second task: training the model above on a third class (green dots). We display the decision boundaries (which split the feature space in a one-vs-all manner) learned by the model after task 2, with M̂ (Fig. 1b) and a random set of samples (Fig. 1c) from task 1 as the episodic memory.

Figure 1: (a) Samples with the most diverse gradients (M̂) after learning task 1; the green line is the decision boundary. (b) Learned decision boundaries (purple lines) after task 2, where the episodic memory includes the samples in M̂. (c) Learned decision boundaries (purple lines) after task 2, where the episodic memory consists of random samples.

Figure 2: Illustration of how Pr(2β > α − δ) in Theorem 1 behaves in various cases, by drawing negative pairs from different subsets of a 3-class feature space defined in Fig. 2a. The classifier is a linear model. (a) Splitting samples into several subsets in a 3-class classification task; dots in different colors are from different classes. (b) Estimated distributions of β when drawing negative pairs from different subsets of samples. (c) Estimated distributions of α − δ when drawing negative pairs from different subsets of samples. The y-axis on the right side of (b) & (c) is for the case of x ∈ S1 ∪ S2. We see that α − δ behaves in a similar way to β but over a smaller range, which makes β the key quantity in studying Pr(2β > α − δ). In the case of x ∈ S3 the distribution of β has more mass on larger values than in the other cases because the predicted probabilities fall mostly on the two classes in a pair, and this causes all 〈gn, gm〉 to have the opposite sign of 〈xn, xm〉, as shown in Tab. 1.

The random episodic memory shows better performance than the one selected by GSS-IQP, since the new decision boundaries rely on samples not included in M̂. This explains why randomly selected memories may generalize better in continual learning. Ideally, with M̂ large enough, the model can remember all edges of each class and hence learn much more accurate decision boundaries sequentially. However, memory size is often limited in practice, especially for high-dimensional data. A more efficient way could be to learn more informative representations. The experimental results indicate that: 1) more similar representations in different classes result in more diverse gradients; 2) more diverse representations within the same class help with learning new boundaries incrementally.

Now we formalise the connection between the diversity of gradients and the discriminativeness of representations for the linear model (proofs are in Appx. A). Notations: a negative pair consists of two samples from different classes; a positive pair consists of two samples from the same class. Let L represent the softmax cross-entropy loss, W ∈ R^{D×K} the weight matrix of the linear model, and xn ∈ R^D the input data; yn ∈ R^K is a one-hot vector that denotes the label of xn, D is the dimension of the representations, and K is the number of classes.
Let pn = softmax(on), where on = W^T xn, and let the gradient be gn = ∇W L(xn, yn; W). xn, xm are two different samples when n ≠ m.

Lemma 1. Let εn = pn − yn; then: 〈gn, gm〉 = 〈xn, xm〉〈εn, εm〉.

Theorem 1. Suppose yn ≠ ym, and let cn denote the class index of xn (i.e. yn,cn = 1, yn,i = 0, ∀i ≠ cn). Let α ≜ ||pn||² + ||pm||², β ≜ pn,cm + pm,cn, and δ ≜ ||pn − pm||₂². Then:

Pr(sign(〈gn, gm〉) = sign(−〈xn, xm〉)) = Pr(2β > α − δ).

Theorem 2. Suppose yn = ym. When 〈gn, gm〉 ≠ 0, we have: sign(〈gn, gm〉) = sign(〈xn, xm〉).

For a better understanding of the theorems, we conduct an empirical study by partitioning the feature space of three classes into several subsets, as shown in Fig. 2a, and examine four cases of pairwise samples defined by these subsets: 1) x ∈ S0: both samples in a pair are near the intersection of the three classes; 2) x ∈ S0 ∪ S1: one sample is close to the decision boundaries and the other is far away from them; 3) x ∈ S3: both samples are close to the decision boundary between their true classes but away from the third class; 4) x ∈ S1 ∪ S2: both samples are far away from the decision boundaries. Theorem 1 says that for samples from different classes, 〈gn, gm〉 gets the opposite sign of 〈xn, xm〉 with a probability that depends on the predictions pn and pm. This probability of flipping the sign especially depends on β, which reflects how likely both samples are to be misclassified as each other's class. We show the empirical distributions of β and (α − δ) obtained by a linear model in Figs. 2b and 2c, respectively. In general, (α − δ) behaves similarly to β in the four cases but over a smaller range, which makes 2β > (α − δ) tend to be true except when β is around zero. Basically, a subset including more samples close to decision boundaries leads to more probability mass on large values of β, and the case of x ∈ S3 results in the largest mass on large values of β because the predicted probabilities mostly concentrate on the two classes in a pair. As shown in Tab. 1, more mass on large values of β leads to larger probabilities of flipping the sign. These results demonstrate that samples with the most diverse gradients (i.e., whose gradients have largely negative similarities with other samples) are close to decision boundaries, because they tend to have large β and 〈xn, xm〉 tends to be positive. In the case of x ∈ S1 ∪ S2 the probability of flipping the sign is zero because β concentrates around zero. According to Lemma 1, 〈gn, gm〉 is very close to zero in this case because the predictions are close to the true labels; hence, such samples are not considered to have the most diverse gradients.

Theorem 2 says that 〈gn, gm〉 has the same sign as 〈xn, xm〉 when the two samples are from the same class. We can see that the results for positive pairs in Tab. 1 match Theorem 2. In the case of S0 ∪ S1 the two probabilities do not add up to exactly 1 because the implementation of the cross-entropy loss in TensorFlow smooths the function by a small value to prevent numerical issues, which slightly changes the gradients. As 〈xn, xm〉 is mostly positive for positive pairs, 〈gn, gm〉 is hence also mostly positive, which explains why samples with the most diverse gradients are not sufficient to preserve information within classes in the experiments of Fig. 1. On the other hand, if 〈xn, xm〉 is negative then 〈gn, gm〉 will be negative, which indicates that representations within a class should not be too diverse.
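As a quick numerical sanity check of Lemma 1, the following NumPy sketch builds a random linear softmax classifier and verifies that the inner product of two per-sample gradients factorizes into 〈xn, xm〉〈εn, εm〉. The variable names and the random toy setup are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 5, 3                       # feature dimension, number of classes
W = rng.normal(size=(D, K))       # linear model weights

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def grad_and_eps(x, y):
    """Gradient of softmax cross-entropy w.r.t. W, and eps = p - y."""
    p = softmax(W.T @ x)
    eps = p - y
    g = np.outer(x, eps)          # dL/dW for a single sample
    return g, eps

x_n, x_m = rng.normal(size=D), rng.normal(size=D)
y_n, y_m = np.eye(K)[0], np.eye(K)[1]

g_n, eps_n = grad_and_eps(x_n, y_n)
g_m, eps_m = grad_and_eps(x_m, y_m)

lhs = np.sum(g_n * g_m)                      # <g_n, g_m>
rhs = (x_n @ x_m) * (eps_n @ eps_m)          # <x_n, x_m><eps_n, eps_m>
assert np.allclose(lhs, rhs)                 # Lemma 1 holds numerically
```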
Extending this theoretical analysis based on a linear model, we also provide an empirical study of non-linear models (Multi-Layer Perceptrons (MLPs)). As demonstrated in Tab. 1, the probability of flipping the sign in MLPs is very similar to that of the linear model, since it only depends on the predictions and all models have learned reasonable decision boundaries. The probability of getting negative 〈gn, gm〉 is also similar to the linear model, except in the case of S1 ∪ S2 for negative pairs, in which the MLP with ReLU gets much less negative 〈gn, gm〉. As the MLP with tanh activations is still consistent with the linear model in this case, we attribute the difference to the representations always being positive due to the ReLU activations. These results demonstrate that non-linear models exhibit similar behaviors to linear models, mostly aligning with the theorems.

Since only negative 〈gn, gm〉 may cause conflicts, reducing the diversity of gradients hence relies on reducing negative 〈gn, gm〉. We consider reducing negative 〈gn, gm〉 in two ways: 1) minimize the representation inner product of negative pairs, which pushes the inner product to be negative or zero (for positive representations); 2) optimize the predictions to decrease the probability of flipping the sign. In this sense, decreasing the representation similarity of negative pairs may help in both ways. In addition, according to Fig. 2b, x ∈ S3 gets larger prediction similarity than x ∈ S0 because the predictions put most of their probability mass on both classes of a pair, which indicates that decreasing the similarity of predictions may decrease the probability of flipping the sign. Hence, we include the logits in the representations. We verify this idea by training two binary classifiers for two groups of MNIST classes ({0, 1} and {7, 9}). The classifiers have two hidden layers, each with 100 hidden units and ReLU activations. We randomly chose 100 test samples from each group to compute the pairwise cosine similarities. Representations are obtained by concatenating the outputs of all layers (including logits) of the neural network; gradients are computed with respect to all parameters of the model. We display the similarities in Figs. 3a and 3b. The correlation coefficients between the gradient and representation similarities of negative pairs are -0.86 and -0.85; those of positive pairs are 0.71 and 0.79. In all cases, the similarities of representations show strong correlations with the similarities of gradients. The classifier for classes 0 and 1 gets smaller representation similarities and much less negative gradient similarities for negative pairs (blue dots), and it also achieves a higher accuracy than the other classifier (99.95% vs. 96.25%), which illustrates the potential of reducing the gradient diversity by decreasing the representation similarity of negative pairs." }, { "heading": "2.2 CONNECTING DEEP METRIC LEARNING TO CONTINUAL LEARNING", "text": "Reducing the representation similarity between classes shares the same concept as learning larger margins, which has been an active research area for decades. For example, Kernel Fisher Discriminant analysis (KFD) (Mika et al., 1999) and distance metric learning (Weinberger et al., 2006) aim to learn kernels that obtain larger margins in an implicit representation space, whereas Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) leverages deep neural networks to learn embeddings that maximize margins in an explicit representation space.
In this sense, DML has the potential to help with reducing the diversity of gradients in continual learning.

However, the usual concepts in DML may not be entirely appropriate for continual learning, as they also aim at learning compact representations within classes (Schroff et al., 2015; Wang et al., 2017; Deng et al., 2019). In continual learning, information that is unused for the current task might be important for a future task, e.g. in the experiments of Fig. 1 the y-dimension is not useful for task 1 but is useful for task 2. This indicates that learning compact representations in the current task might omit dimensions of the representation space that are important for a future task. In this case, even if we store diverse samples in the memory, the learned representations may be difficult to generalize to future tasks, as the omitted dimensions can only be relearned from the limited samples in the memory. We demonstrate this by training a model with and without L1 regularization on the first two tasks of split-MNIST and split-Fashion-MNIST. The results are shown in Tab. 2. We see that with L1 regularization the model learns much more compact representations and gives similar performance on task 1 but much worse performance on task 2 compared to training without L1 regularization. The results suggest that continual learning shares the interest of maximizing margins with DML but prefers a less compact representation space in order to preserve necessary information for future tasks. We suggest the opposite treatment of within-class compactness: minimizing the similarities within the same class to obtain a less compact representation space. Roth et al. (2020) proposed a ρ-spectrum metric to measure the information entropy contained in the representation space (details are provided in Appx. D) and introduced a ρ-regularization method to restrain over-compression of representations. The ρ-regularization method randomly replaces negative pairs by positive pairs with a pre-selected probability pρ. Nevertheless, switching pairs is inefficient and may be detrimental to performance in an online setting because some negative pairs may never be learned in this way. Thus, we propose a different way to restrain the compression of representations, introduced in the following." }, { "heading": "3 DISCRIMINATIVE REPRESENTATION LOSS", "text": "Based on our findings in the above section, we propose an auxiliary objective, Discriminative Representation Loss (DRL), for classification tasks in continual learning, which is straightforward, robust, and efficient. Instead of explicitly re-projecting gradients during the training process, DRL helps decrease gradient diversity by optimizing the representations. As defined in Eq. (2), DRL consists of two parts: one for minimizing the similarities of representations from different classes (Lbt), which reduces the diversity of gradients from different classes, and one for minimizing the similarities of representations from the same class (Lwi), which helps preserve discriminative information for future tasks in continual learning.

min_Θ LDRL = min_Θ (Lbt + αLwi), α > 0,
Lbt = (1/Nbt) ∑_{i=1}^{B} ∑_{j≠i, yj≠yi} 〈hi, hj〉, Lwi = (1/Nwi) ∑_{i=1}^{B} ∑_{j≠i, yj=yi} 〈hi, hj〉,   (2)

where Θ denotes the parameters of the model, B is the training batch size, Nbt and Nwi are the numbers of negative and positive pairs, respectively, α is a hyperparameter controlling the strength of Lwi, hi is the representation of xi, and yi is the label of xi.
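The following is a minimal NumPy sketch of how LDRL in Eq. (2) could be computed for one training batch. The function name drl_loss and the plain-array interface are our own illustrative assumptions rather than the authors' implementation; averaging over a pair mask is equivalent to dividing by Nbt and Nwi.

```python
import numpy as np

def drl_loss(h: np.ndarray, y: np.ndarray, alpha: float = 1.0) -> float:
    """Discriminative Representation Loss for one batch.

    h: (B, H) representations, y: (B,) integer class labels.
    Averages <h_i, h_j> over negative pairs (L_bt) and positive
    pairs (L_wi), then returns L_bt + alpha * L_wi.
    """
    sims = h @ h.T                               # pairwise inner products
    same = y[:, None] == y[None, :]              # same-class mask
    off_diag = ~np.eye(len(y), dtype=bool)       # exclude i == j
    neg_mask = off_diag & ~same
    pos_mask = off_diag & same
    l_bt = sims[neg_mask].mean() if neg_mask.any() else 0.0
    l_wi = sims[pos_mask].mean() if pos_mask.any() else 0.0
    return l_bt + alpha * l_wi

# Toy usage on random representations.
rng = np.random.default_rng(0)
h = rng.normal(size=(10, 8))
y = rng.integers(0, 3, size=10)
print(drl_loss(h, y, alpha=0.5))
```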
The final loss function combines the commonly used softmax cross-entropy loss for classification tasks (L) with DRL (LDRL) as shown in Eq. (3),

L̂ = L + λLDRL, λ > 0,   (3)

where λ is a hyperparameter controlling the strength of LDRL, which is larger for increased resistance to forgetting and smaller for greater elasticity. We provide experimental results verifying the effects of DRL and an ablation study on Lbt and Lwi (Tab. 7) in Appx. E, according to which Lbt and Lwi are effective at improving forgetting and the ρ-spectrum, respectively. We show the correlation between the ρ-spectrum and the model performance in Sec. 5.

The computational complexity of DRL is O(B²H), where B is the training batch size and H is the dimension of the representations. B is small (10 or 20 in our experiments) and commonly H ≪ W, where W is the number of network parameters. In comparison, the computational complexities of A-GEM and GSS-greedy are O(BrW) and O(BBmW), respectively, where Br is the reference batch size in A-GEM and Bm is the memory batch size in GSS-greedy. The computational complexity discussed here is in addition to the cost of common backpropagation. We compare the training time of all methods on MNIST tasks in Tab. 9 in Appx. H, which shows that representation-based methods require much lower computational cost than gradient-based approaches." }, { "heading": "4 ONLINE MEMORY UPDATE AND BALANCED EXPERIENCE REPLAY", "text": "We follow the online setting of continual learning as was done for other gradient-based approaches with episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Aljundi et al., 2019), in which the model is trained with only one epoch on the training data.

We update the episodic memories by the basic ring-buffer strategy: keep the last nc samples of class c in the memory buffer, where nc is the memory size of a seen class c. We deploy episodic memories with a fixed size, implying a fixed budget for the memory cost. Further, we maintain a uniform distribution over all seen classes in the memory. The buffer may not be evenly allocated to each class before enough samples are acquired for newly arriving classes. We show pseudo-code of the memory update strategy in Alg. 1 in Appx. B for a clearer explanation (a minimal code sketch follows at the end of this section). For class-incremental learning, this strategy can work without knowing task boundaries. Since DRL and the methods of DML depend on the pairwise similarities of samples, we would prefer the training batch to include as wide a variety of different classes as possible to obtain sufficient discriminative information. Hence, we adjust the Experience Replay (ER) strategy (Chaudhry et al., 2019b) for the needs of such methods. The idea is to uniformly sample from the seen classes in the memory buffer to form a training batch, so that this batch contains as many seen classes as possible. Moreover, we ensure the training batch includes at least one positive pair of each selected class (a minimum of 2 samples per class) to enable the parts of the loss computed from positive pairs. In addition, we ensure the training batch includes at least one class from the current task. We call this Balanced Experience Replay (BER); the pseudo-code is in Alg. 2 of Appx. B. Note that we update the memory and form the training batch based on the task ID instead of the class ID for instance-incremental tasks (e.g. permuted MNIST tasks), as in this case each task always includes the same set of classes."
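As referenced above, here is a minimal plain-Python sketch of the fixed-budget ring-buffer update (pseudo-code in Alg. 1 of Appx. B). The dictionary-of-lists memory layout and function name are our own illustrative choices; the memory is assumed to default new classes to empty lists (e.g. via defaultdict).

```python
from collections import defaultdict

def ring_buffer_update(memory, batch, buffer_size):
    """Add a batch of (x, label) pairs, then evict the oldest samples
    from the largest classes until the total budget is respected.

    memory: dict mapping class label -> list of samples (oldest first).
    """
    for x, c in batch:
        memory[c].append(x)
    total = sum(len(v) for v in memory.values())
    while total > buffer_size:
        largest = max(memory, key=lambda c: len(memory[c]))
        memory[largest].pop(0)   # drop the oldest sample of the largest class
        total -= 1
    return memory

# Toy usage with a budget of 4 samples.
mem = defaultdict(list)
mem = ring_buffer_update(mem, [(0.1, "a"), (0.2, "a"), (0.3, "b")], 4)
mem = ring_buffer_update(mem, [(0.4, "b"), (0.5, "c")], 4)
print({c: len(v) for c, v in mem.items()})  # roughly uniform over classes
```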
}, { "heading": "5 EXPERIMENTS", "text": "In this section we evaluate our methods on multiple benchmark tasks by comparing with several baseline methods in the setting of online continual learning.\nBenchmark tasks: We have conducted experiments on the following benchmark tasks: Permuted MNIST (10 tasks and each task includes the same 10 classes with different permutation of features), Split MNIST, Split Fashion-MNIST, and Split CIFAR-10 (all three having 5 tasks with two classes in each task), Split CIFAR-100 (10 tasks with 10 classes in each task), Split TinyImageNet (20 tasks with 10 classes in each task). All split tasks include disjoint classes. For tasks of MNIST (LeCun et al., 2010) and Fashion-MNIST (Xiao et al., 2017), the training size is 1000 samples per task, for CIFAR10 (Krizhevsky et al., 2009) the training size is 3000 per task, for CIFAR-100 and TinyImageNet (Le & Yang, 2015) it is 5000 per task. N.B.: We use single-head (shared output) models in all of our experiments, meaning that we do not require a task identifier at testing time. Such settings are more difficult for continual learning but more practical in real applications.\nBaselines: We compare our methods with: two gradient-based approaches (A-GEM (Chaudhry et al., 2019a) and GSS-greedy (Aljundi et al., 2019)), two standalone experience replay methods (ER (Chaudhry et al., 2019b) and BER), two SOTA methods of DML (Multisimilarity (Wang et al., 2019) and R-Margin (Roth et al., 2020)). We also trained a single task over all classes with one epoch for all benchmarks which performance can be viewed as a upper bound of each benchmark. N.B.: We deploy the losses of Multisimilarity and R-Margin as auxiliary objectives as the same as DRL\nbecause using standalone such losses causes difficulties of convergence in our experimental settings. We provide the definitions of these two losses in Appx. D.\nPerformance measures: We use the Average accuracy, Average forgetting, Average intransigence to evaluate the performance of all methods, the definition of these measures are provided in Appx. C\nExperimental settings: We use the vanilla SGD optimizer for all experiments without any scheduling. For tasks on MNIST and Fashion-MNIST, we use a MLP with two hidden layers and ReLU activations, and each layer has 100 hidden units. For tasks on CIFAR datasets and TinyImageNet, we use the same reduced Resnet18 as used in Chaudhry et al. (2019a). All networks are trained from scratch without regularization scheme. For the MLP, representations are the concatenation of outputs of all layers including logits; for reduced Resnet18, representations are the concatenation of the input of the final linear layer and output logits. We concatenate outputs of all layers as we consider they behave like different levels of representation, and when higher layers (layers closer to the input) generate more discriminative representations it would be easier for lower layers to learn more discriminative representations as well. This method also improves the performance of MLPs. For reduced ResNet18 we found that including outputs of all hidden layers performs almost the same as only including the final representations, so we just use the final layer for lower computational cost. We deploy BER as the replay strategy for DRL, Multisimilarity, and R-Margin. The memory size for tasks on MNIST and Fashion-MNIST is 300 samples. For tasks on CIFAR-10 and CIFAR-100 the memory size is 2000 and 5000 samples, respectively. For TinyImageNet it is also 5000 samples. 
The standard deviations shown in all results are evaluated over 10 runs with different random seeds. We use 10% of the training set as a validation set for choosing hyperparameters by cross-validation. More details of the experimental settings and hyperparameters are given in Appx. I.

Tabs. 3 to 5 give the average accuracy, forgetting, and intransigence of all methods on all benchmark tasks, respectively. As we can see, forgetting and intransigence often conflict with each other, which is the most common phenomenon in continual learning. Our method DRL is able to achieve a better trade-off between them and thus outperforms other methods on most benchmark tasks in terms of average accuracy. This could be because DRL facilitates good intransigence and a good ρ-spectrum through Lwi, and good forgetting through Lbt. In DRL the two terms are complementary to each other, and combining them brings benefits on both sides (an ablation study on the two terms is provided in Appx. E). According to Tabs. 4 and 5, Multisimilarity obtains better average intransigence and similar average forgetting on CIFAR-10 compared with DRL, which indicates that Multisimilarity learns better representations for generalizing to new classes in this case. Roth et al. (2020) also suggest that Multisimilarity is a very strong baseline in deep metric learning, which outperforms the proposed R-Margin on several datasets. We use the hyperparameters of Multisimilarity recommended in Roth et al. (2020), which generally perform well on multiple complex datasets. TinyImageNet gets much worse performance than the other benchmarks because it has more classes (200), a longer task sequence (20 tasks), and a larger feature space (64 × 64 × 3); the accuracy of the single task on it is only about 17.8%. According to Tab. 3, a longer task sequence, more classes, and a larger feature space all increase the gap between the performance of the single task and continual learning.

As shown in Tab. 6, the ρ-spectrum shows a high correlation with average accuracy on most benchmarks, since it may help with learning new decision boundaries across tasks. Split MNIST shows a low correlation between the ρ-spectrum and average accuracy because the ρ-spectrum highly correlates with average intransigence and consequently affects average forgetting in the opposite direction, causing a cancellation of effects on average accuracy. In addition, we found that GSS often obtains a smaller ρ than other methods without achieving better performance. In general, a smaller ρ-spectrum is better because it indicates that the representations are more informative. However, it may be detrimental to performance when ρ is too small, as the learned representations become too noisy. DRL is more robust to this issue because ρ remains relatively stable when α is larger than a certain value, as shown in Fig. 4c in Appx. E." }, { "heading": "6 CONCLUSION", "text": "The two fundamental problems of continual learning with small episodic memories are: (i) how to make the best use of episodic memories; and (ii) how to construct the most representative episodic memories. Gradient-based approaches have shown that the diversity of gradients computed on data from different tasks is key to generalization over these tasks. In this paper we demonstrate that the most diverse gradients come from samples that are close to class boundaries. We formally connect the diversity of gradients to the discriminativeness of representations, which leads to an alternative way to reduce the diversity of gradients in continual learning.
We subsequently exploit ideas from DML for learning more discriminative representations, and furthermore identify the shared and differing interests of continual learning and DML. In continual learning we would prefer larger margins between classes, the same as in DML. The difference is that continual learning requires less compact representations for better compatibility with future tasks. Based on these findings, we provide a simple yet efficient approach to solving the first problem listed above. Our findings also shed light on the second problem: it would be better for the memorized samples to preserve as much variance as possible. In most of our experiments, randomly chosen samples outperform those selected by gradient diversity (GSS) due to the limit on memory size in practice. It could be helpful to select memorized samples by separately considering the representativeness of inter- and intra-class samples, i.e., those representing margins and edges. We leave this for future work." }, { "heading": "A PROOF OF THEOREMS", "text": "Notations: Let L represent the softmax cross-entropy loss, W ∈ R^{D×K} the weight matrix of the linear model, and xn ∈ R^D the input data; yn ∈ R^K is a one-hot vector that denotes the label of xn, D is the dimension of the representations, and K is the number of classes. Let pn = softmax(on), where on = W^T xn, and let the gradient be gn = ∇W L(xn, yn; W). xn, xm are two different samples when n ≠ m.

Lemma 1. Let εn = pn − yn; then 〈gn, gm〉 = 〈xn, xm〉〈εn, εm〉.

Proof. Let ℓ′n = ∂L(xn, yn; W)/∂on. By the chain rule, we have:

〈gn, gm〉 = 〈xn, xm〉〈ℓ′n, ℓ′m〉.

By the definition of L, we find:

ℓ′n = pn − yn,   (4)

i.e., ℓ′n = εn, which proves the lemma.

Theorem 1. Suppose yn ≠ ym, and let cn denote the class index of xn (i.e. yn,cn = 1, yn,i = 0, ∀i ≠ cn). Let α ≜ ||pn||² + ||pm||², β ≜ pn,cm + pm,cn, and δ ≜ ||pn − pm||₂². Then:

Pr(sign(〈gn, gm〉) = sign(−〈xn, xm〉)) = Pr(2β > α − δ).

Proof. According to Lemma 1 and yn ≠ ym, we have

〈ℓ′n, ℓ′m〉 = 〈pn, pm〉 − pn,cm − pm,cn,

and

〈pn, pm〉 = ½(||pn||² + ||pm||² − ||pn − pm||²) = ½(α − δ),

which gives 〈ℓ′n, ℓ′m〉 = ½(α − δ) − β. Hence 〈ℓ′n, ℓ′m〉 < 0 exactly when 2β > α − δ. Combining this with Lemma 1 proves the theorem.

Theorem 2. Suppose yn = ym. When 〈gn, gm〉 ≠ 0, we have:

sign(〈gn, gm〉) = sign(〈xn, xm〉).

Proof. Because ∑_{k=1}^{K} pn,k = 1 with pn,k ≥ 0, ∀k, and cn = cm = c,

〈ℓ′n, ℓ′m〉 = ∑_{k≠c} pn,k pm,k + (pn,c − 1)(pm,c − 1) ≥ 0.   (5)

Combining this with Lemma 1 proves the theorem." }, { "heading": "B ALGORITHMS OF ONLINE MEMORY UPDATE", "text": "We provide the details of the online ring-buffer update and Balanced Experience Replay (BER) in Algs. 1 to 3. We directly load new data batches into the memory buffer without a separate buffer for the current task. The memory buffer works like a sliding window for each class in the data stream, and we draw training batches from the memory buffer instead of directly from the data stream. In this case, one sample may be seen more than once as long as it stays in the memory buffer. This strategy is a more efficient use of the memory when |B| < nc, where |B| is the loading batch size of the data stream (i.e. the number of new samples added into the memory buffer at each iteration); we set |B| to 1 in all experiments (see Appx. I for a discussion of this).

Algorithm 1 Ring Buffer Update with Fixed Buffer Size

Input: Bt - current data batch of the data stream, Ct - the set of classes in Bt, M - memory buffer, C - the set of classes in M, K - memory buffer size.
for c in Ct do
    Get Bt,c - the samples of class c in Bt, and Mc - the samples of class c in M
    if c in C then
        Mc = Mc ∪ Bt,c
    else
        Mc = Bt,c, C = C ∪ {c}
    end if
end for
R = |M| + |Bt| − K ▷ |M| is the memory size before inserting Bt
while R > 0 do
    c′ = argmax_c |Mc|
    remove the first (oldest) sample in Mc′, R = R − 1
end while
return M

Algorithm 2 Balanced Experience Replay

Input: M - memory buffer, C - the set of classes in M, B - training batch size, Θ - model parameters, LΘ - loss function, Bt - current data batch from the data stream, Ct - the set of classes in Bt, K - memory buffer size.

M ← MemoryUpdate(Bt, Ct, M, C, K)
nc, Cs, Cr ← ClassSelection(Ct, C, B)
Btrain = ∅
for c in Cs do
    if c in Cr then mc = nc + 1 else mc = nc end if
    Get Mc - the samples of class c in M; Bc ← sample mc samples from Mc
    Btrain = Btrain ∪ Bc
end for
Θ ← Optimizer(Btrain, Θ, LΘ)

Algorithm 3 Class Selection for BER

Input: Ct - the set of classes in the current data batch Bt, C - the set of classes in M, B - training batch size, mp - minimum number of positive pairs of each selected class (mp ∈ {0, 1}).

nc = ⌊B/|C|⌋, rc = B mod |C|
if nc > 1 or mp == 0 then
    Cr ← sample rc classes from C without replacement ▷ these classes receive one extra sample
    Cs = C
else
    Cr = ∅, nc = 2, ns = ⌊B/2⌋ − |Ct| ▷ we ensure the training batch includes samples from the current task
    Cs ← sample ns classes from C − Ct ▷ sample from all seen classes except the classes in Ct
    Cs = Cs ∪ Ct
    if B mod 2 > 0 then
        Cr ← sample one class from Cs to receive an extra sample
    end if
end if
Return: nc, Cs, Cr" }, { "heading": "C DEFINITION OF PERFORMANCE MEASURES", "text": "We use the following measures to evaluate the performance of all methods:

Average accuracy, which is evaluated after learning all tasks: āt = (1/t) ∑_{i=1}^{t} at,i, where t is the index of the latest task and at,i is the accuracy on task i after learning task t.

Average forgetting (Chaudhry et al., 2018), which measures the average accuracy drop over all tasks after learning the whole task sequence: f̄t = (1/(t−1)) ∑_{i=1}^{t−1} max_{j∈{i,...,t−1}} (aj,i − at,i).

Average intransigence (Chaudhry et al., 2018), which measures the inability of a model to learn new tasks: Īt = (1/t) ∑_{i=1}^{t} (a*_i − ai), where ai is the accuracy on task i at time i. We use the best accuracy among all compared models as a*_i instead of the accuracy obtained by an extra model trained solely on task i." }, { "heading": "D RELATED METHODS FROM DML", "text": "ρ-spectrum metric (Roth et al., 2020): ρ = KL(U || S_ΦX), proposed to measure the information entropy contained in the representation space. The ρ-spectrum computes the KL-divergence between a discrete uniform distribution U and the spectrum of data representations S_ΦX, where S_ΦX consists of the normalized and sorted singular values of Φ(X), Φ denotes the representation extractor (e.g. a neural network), and X is the set of input data samples. Lower values of ρ indicate higher variance of the representations and hence more information entropy retained.

Multisimilarity (Wang et al., 2019): we adopt the loss function of Multisimilarity as an auxiliary objective in classification tasks of continual learning; the batch-mining process is omitted because we use labels for choosing positive and negative pairs. The loss function is L̂ = L + λLmulti, with:

Lmulti = (1/B) ∑_{i=1}^{B} { (1/α) log[1 + ∑_{j≠i, yj=yi} exp(−α(sc(hi, hj) − γ))] + (1/β) log[1 + ∑_{yj≠yi} exp(β(sc(hi, hj) − γ))] },   (6)

where sc(·, ·) is the cosine similarity and α, β, γ are hyperparameters. In all of our experiments we set α = 2, β = 40, γ = 0.5, the same as in Roth et al. (2020).
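For concreteness, here is a minimal NumPy sketch of the ρ-spectrum metric defined at the start of this appendix; the small epsilon for numerical stability and the toy inputs are our own assumptions.

```python
import numpy as np

def rho_spectrum(reps: np.ndarray) -> float:
    """rho = KL(U || S), where S are the normalized, sorted singular
    values of the representation matrix and U is discrete uniform.

    reps: (num_samples, dim) matrix of data representations Phi(X).
    """
    s = np.linalg.svd(reps, compute_uv=False)
    s = np.sort(s / s.sum())[::-1]           # normalized, sorted spectrum
    u = np.full_like(s, 1.0 / s.size)        # discrete uniform distribution
    eps = 1e-12                              # stability constant (our assumption)
    return float(np.sum(u * np.log(u / (s + eps))))

# Toy usage: representations with more variance give a lower rho.
rng = np.random.default_rng(0)
diverse = rng.normal(size=(100, 16))
compact = diverse @ np.diag([1.0] + [0.01] * 15)  # squash most directions
print(rho_spectrum(diverse) < rho_spectrum(compact))  # True
```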
R-Margin (Roth et al., 2020): we similarly deploy R-Margin for continual learning, which uses the Margin loss (Wu et al., 2017) with the ρ-regularization (Roth et al., 2020) introduced in Sec. 2.2. The loss function is L̂ = L + λLmargin, with:

Lmargin = ∑_{i=1}^{B} ∑_{j=1}^{B} γ + 𝕀[j≠i, yj=yi](d(hi, hj) − β) − 𝕀[yj≠yi](d(hi, hj) − β),   (7)

where d(·, ·) is the Euclidean distance, 𝕀[·] is the indicator function, β is a trainable variable, and γ is a hyperparameter. We follow the settings in Roth et al. (2020): γ = 0.2 and β initialized to 0.6. We set pρ = 0.2 in the ρ-regularization." }, { "heading": "E ABLATION STUDY ON DRL", "text": "We verify the effects of LDRL by training a model with and without LDRL on Split-MNIST tasks: Fig. 4a shows that LDRL notably reduces the similarities of representations from different classes while making representations from the same class less similar; Fig. 4b shows the analogous effect on gradients from different classes and from the same class. Fig. 4c demonstrates that increasing α can effectively decrease the ρ-spectrum to a low value, where lower values of ρ indicate higher variance of the representations and hence more information entropy retained.

Figure 4: Effects of LDRL on reducing the diversity of gradients and the ρ-spectrum. (a) Distributions of pairwise similarities of representations with and without LDRL; (b) distributions of pairwise similarities of gradients with and without LDRL, where sDRh/sh denote similarities of representations and sDRg/sg denote similarities of gradients with and without LDRL, respectively; (c) the relation between α and ρ, demonstrating that increasing α in LDRL can reduce ρ effectively.

Tab. 7 provides the results of an ablation study on the effects of the two terms in DRL. In general, Lbt achieves better performance in terms of forgetting, Lwi achieves better performance in terms of intransigence and a lower ρ-spectrum, and both of them show improvements over BER (without any regularization terms). Overall, combining the two terms obtains better performance on forgetting than standalone Lbt while keeping the advantage on intransigence brought by Lwi. This indicates that preventing overly compact representations while maximizing margins improves the learned representations, making them easier to generalize over previous and new tasks. In addition, we found that with standalone Lbt we can only use a smaller λ, otherwise the gradients explode; using Lwi together stabilizes the gradients. We note that a lower ρ-spectrum does not necessarily lead to higher accuracy, as its correlation coefficient with accuracy depends on the dataset and is usually larger than -1." }, { "heading": "F COMPARING DIFFERENT MEMORY SIZES", "text": "Fig. 5 compares the average accuracy of DRL+BER on MNIST tasks with different memory sizes. It appears that a fixed memory size is more efficient than an incremental memory size. For example, the
Meanwhile, the fixed memory size (M = 300) gets much better performance than M = 50 per task in most tasks of Permuted MNIST and it takes less cost of the memory after task 6. Since the setting of fixed memory size takes larger memory buffer in early tasks, the results indicate better generalization of early tasks can benefit later tasks, especially for more homogeneous tasks such as Permuted MNIST. The results also align with findings about Reservoir sampling (which also has fixed buffer size) in Chaudhry et al. (2019b) and we also believe a hybrid memory strategy can obtain better performance as suggested in Chaudhry et al. (2019b)." }, { "heading": "G COMPARING DIFFERENT REPLAY STRATEGY", "text": "We compare DRL with different memory replay strategies in Tab. 8 to show DRL has general improvement based on the applied replay strategy." }, { "heading": "H COMPARING TRAINING TIME", "text": "Tab. 9 compares the training time of MNIST tasks. All representation-based methods are much faster than gradient-based methods and close to the replay-based methods." }, { "heading": "I HYPER-PARAMETERS IN EXPERIMENTS", "text": "To make a fair comparison of all methods, we use following settings: i) The configurations of GSS-greedy are as suggested in Aljundi et al. (2019), with batch size set to 10 and each batch receives 10 iterations. ii) For the other methods, we use the ring buffer memory as described in Alg. 1, the loading batch size is set to 1, following with one iteration, the training batch size is provided in Tab. 10. More hyperparameters are given in Tab. 10 as well.\nIn the setting of limited training data in online continual learning, we either use a small batch size or iterate on one batch several times to obtain necessary steps for gradient optimization. We chose a small batch size with one iteration instead of larger batch size with multiple iterations because by our memory update strategy (Alg. 1) it achieves similar performance with fewer hyperparameters. Since GSS-greedy has a different strategy for updating memories, we leave it at its default settings.\nRegarding the two terms in DRL, a larger weight on Lwi is for less compact representations within classes, but a too dispersed representation space may include too much noise. For datasets that present more difficulty in learning compact representations, we would prefer a smaller weight on Lwi, we therefore set smaller α for CIFAR datasets in our experiments. A larger weight on Lbt is more resistant to forgetting but may be less capable of transferring to a new task, for datasets that are less compatible between tasks a smaller weight on Lbt would be preferred, as we set the largest λ on Permuted MNIST and the smallest λ on CIFAR-100 in our experiments." } ]
2020
null
SP:09f2fe6a482bbd6f9bd2c62aa841f995171ba939
[ "This paper proposes a new framework that computes the task-specific representations to modulate the model parameters during the multi-task learning (MTL). This framework uses a single model with shared representations for learning multiple tasks together. Also, explicit task information may not be always available, in such cases the proposed framework is useful. The proposed framework is evaluated on various datasets spanning multiple modalities, where the MTL model even achieves state-of-the-art results on some datasets. " ]
Existing Multi-Task Learning (MTL) strategies like joint or meta-learning focus more on shared learning and leave little to no scope for task-specific learning. This creates the need for a distinct shared pretraining phase and a task-specific finetuning phase. The finetuning phase creates separate models for each task, where improving the performance of a particular task necessitates forgetting some of the knowledge garnered in other tasks. Humans, on the other hand, perform task-specific learning in synergy with general domain-based learning. Inspired by these learning patterns in humans, we suggest a simple yet generic task-aware framework to incorporate into existing MTL strategies. The proposed framework computes task-specific representations to modulate the model parameters during MTL. Hence, it performs both shared and task-specific learning in a single phase, resulting in a single model for all the tasks. The single model itself achieves significant performance gains over the existing MTL strategies. For example, we train a model on Speech Translation (ST), Automatic Speech Recognition (ASR), and Machine Translation (MT) tasks using the proposed task-aware multitask learning approach. This single model achieves a BLEU score of 28.64 on ST MuST-C English-German, a WER of 11.61 on ASR TEDLium v3, and a BLEU score of 23.35 on MT WMT14 English-German. This sets a new state-of-the-art (SOTA) on the ST task while outperforming the existing end-to-end ASR systems, with competitive performance on the MT task.
[]
[ { "authors": [ "Rosana Ardila", "Megan Branson", "Kelly Davis", "Michael Henretty", "Michael Kohler", "Josh Meyer", "Reuben Morais", "Lindsay Saunders", "Francis M. Tyers", "Gregor Weber" ], "title": "Common voice: A massivelymultilingual speech", "venue": null, "year": 2020 }, { "authors": [ "Craig Atkinson", "Brendan McCane", "Lech Szymanski", "Anthony V. Robins" ], "title": "Pseudo-recursal: Solving the catastrophic forgetting problem in deep neural networks", "venue": "CoRR, abs/1802.03875,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Computer Science Mathematics CoRR,", "year": 2015 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Brian Cheung", "Alexander Terekhov", "Yubei Chen", "Pulkit Agrawal", "Bruno Olshausen" ], "title": "Superposition of many models into one", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "L. Deng", "G. Hinton", "B. Kingsbury" ], "title": "New types of deep neural network learning for speech recognition and related applications: an overview", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Mattia A. Di Gangi", "Roldano Cattoni", "Luisa Bentivogli", "Matteo Negri", "Marco Turchi" ], "title": "MuSTC: a Multilingual Speech Translation Corpus", "venue": null, "year": 2019 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier" ], "title": "Understanding back-translation at scale", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "R. Girshick" ], "title": "Fast r-cnn", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Jiatao Gu", "Hany Hassan", "Jacob Devlin", "Victor O.K. Li" ], "title": "Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Jiatao Gu", "Yong Wang", "Yun Chen", "Victor O.K. 
Li", "Kyunghyun Cho" ], "title": "Meta-learning for low-resource neural machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Kazuma Hashimoto", "Caiming Xiong", "Yoshimasa Tsuruoka", "Richard Socher" ], "title": "A joint many-task model: Growing a neural network for multiple NLP", "venue": "tasks. CoRR,", "year": 2016 }, { "authors": [ "Tianxing He", "Jun Liu", "Kyunghyun Cho", "Myle Ott", "Bing Liu", "James Glass", "Fuchun Peng" ], "title": "Analyzing the forgetting problem in the pretrain-finetuning of dialogue response", "venue": null, "year": 2020 }, { "authors": [ "François Hernandez", "Vincent Nguyen", "Sahar Ghannay", "Natalia Tomashenko", "Yannick Estève" ], "title": "Ted-lium 3: Twice as much data and corpus repartition for experiments on speaker adaptation", "venue": "Lecture Notes in Computer Science,", "year": 2018 }, { "authors": [ "S. Indurthi", "H. Han", "N.K. Lakumarapu", "B. Lee", "I. Chung", "S. Kim", "C. Kim" ], "title": "End-end speech-to-text translation with modality agnostic meta-learning", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Sathish Reddy Indurthi", "Insoo Chung", "Sangha Kim" ], "title": "Look harder: A neural machine translation model with hard attention", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Javier Iranzo-Sánchez", "Joan Albert Silvestre-Cerdà", "Javier Jorge", "Nahuel Roselló", "Adrià Giménez", "Albert Sanchis", "Jorge Civera", "Alfons Juan" ], "title": "Europarl-st: A multilingual corpus for speech translation of parliamentary debates", "venue": null, "year": 1911 }, { "authors": [ "Nikhil Kumar Lakumarapu", "Beomseok Lee", "Sathish Reddy Indurthi", "Hou Jeung Han", "Mohd Abbas Zaidi", "Sangha Kim" ], "title": "End-to-end offline speech translation system for IWSLT 2020 using modality agnostic meta-learning", "venue": "In Proceedings of the 17th International Conference on Spoken Language Translation,", "year": 2020 }, { "authors": [ "Z. Li", "D. Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2018 }, { "authors": [ "Pierre Lison", "Jörg Tiedemann", "Milen Kouylekov" ], "title": "Open subtitles 2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018", "venue": "Eleventh International Conference on Language Resources and Evaluation. 
European Language Resources Association (ELRA),", "year": 2019 }, { "authors": [ "Xiaodong Liu", "Jianfeng Gao", "Xiaodong He", "Li Deng", "Kevin Duh", "Ye-yi Wang" ], "title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval", "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2015 }, { "authors": [ "Xiaodong Liu", "Kevin Duh", "Liyuan Liu", "Jianfeng Gao" ], "title": "Very deep transformers for neural machine translation", "venue": "arXiv preprint arXiv:2008.07772,", "year": 2020 }, { "authors": [ "Yuchen Liu", "Hao Xiong", "Zhongjun He", "Jiajun Zhang", "Hua Wu", "Haifeng Wang", "Chengqing Zong" ], "title": "End-to-end speech translation with knowledge distillation", "venue": "CoRR, abs/1904.08075,", "year": 2019 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D. Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "V. Panayotov", "G. Chen", "D. Povey", "S. Khudanpur" ], "title": "Librispeech: An asr corpus based on public domain audio books", "venue": "In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Ngoc-Quan Pham", "Thai-Son Nguyen", "Jan Niehues", "Markus Müller", "Alex Waibel" ], "title": "Very deep self-attention networks for end-to-end speech recognition", "venue": "CoRR, abs/1904.13377,", "year": 2019 }, { "authors": [ "Juan Pino", "Qiantong Xu", "Xutai Ma", "Mohammad Javad Dousti", "Yun Tang" ], "title": "Self-training for end-to-end speech translation", "venue": "arXiv preprint arXiv:2006.02490,", "year": 2020 }, { "authors": [ "Matt Post" ], "title": "A call for clarity in reporting BLEU scores", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Tomasz Potapczyk", "Pawel Przybysz", "Marcin Chochowski", "Artur Szumaczuk" ], "title": "Samsung’s system for the iwslt 2019 end-to-end speech translation task", "venue": "In 16th International Workshop on Spoken Language Translation (IWSLT). Zenodo,", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Bharath Ramsundar", "Steven Kearnes", "Patrick Riley", "Dale Webster", "David Konerding", "Vijay Pande" ], "title": "Massively multitask networks for drug", "venue": null, "year": 2015 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Gjorgji Strezoski", "Nanne van Noord", "Marcel Worring" ], "title": "Many task learning with task", "venue": "routing. 
CoRR,", "year": 2019 }, { "authors": [ "Shubham Toshniwal", "Tara N Sainath", "Ron J Weiss", "Bo Li", "Pedro Moreno", "Eugene Weinstein", "Kanishka Rao" ], "title": "Multilingual speech recognition with a single end-to-end model", "venue": "In ICASSP,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Lijun Wu", "Yiren Wang", "Yingce Xia", "Fei Tian", "Fei Gao", "Tao Qin", "Jianhuang Lai", "Tie-Yan Liu" ], "title": "Depth growing for neural machine translation", "venue": null, "year": 1907 }, { "authors": [ "Zhanpeng Zhang", "Ping Luo", "Chen Change Loy", "Xiaoou Tang" ], "title": "Facial landmark detection by deep multi-task learning", "venue": "Computer Vision – ECCV", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "The process of Multi-Task Learning (MTL) on a set of related tasks is inspired by the patterns displayed by human learning. It involves a pretraining phase over all the tasks, followed by a finetuning phase. During pretraining, the model tries to grasp the shared knowledge of all the tasks involved, while in the finetuning phase, task-specific learning is performed to improve the performance. However, as a result of the finetuning phase, the model forgets the information about the other tasks that it learnt during pretraining. Humans, on the other hand, are less susceptible to forgetfulness and retain existing knowledge/skills while mastering a new task. For example, a polyglot who masters a new language learns to translate from this language without losing the ability to translate other languages. Moreover, the lack of task-based flexibility and having different finetuning/pretraining phases cause gaps in the learning process due to the following reasons:\nRole Mismatch: Consider the MTL system being trained to perform the Speech Translation(ST), Automatic Speech Recognition(ASR) and Machine Translation(MT) tasks. The Encoder block has a very different role in the standalone ASR, MT and ST models and hence we cannot expect a single encoder to perform well on all the tasks without any cues to identify/use task information. Moreover, there is a discrepancy between pretraining and finetuning hampering the MTL objective.\nTask Awareness: At each step in the MTL, the model tries to optimize over the task at hand. For tasks like ST and ASR with the same source language, it is impossible for the model to identify the task and alter its parameters accordingly, hence necessitating a finetuning phase. A few such examples have been provided in Table 1. Humans, on the other hand, grasp the task they have to perform by means of context or explicit cues.\nAlthough MTL strategies help the finetuned models to perform better than the models directly trained on those tasks, their applicability is limited to finding a good initialization point for the finetuning phase. Moreover, having a separate model for each task increases the memory requirements, which is detrimental in low resource settings.\nIn order to achieve the goal of jointly learning all the tasks, similar to humans, we need to perform shared learning in synergy with task-specific learning. Previous approaches such as Raffel et al. (2019) trained a joint model for a set of related text-to-text tasks by providing the task information along with the inputs during the joint learning phase. However, providing explicit task information is not always desirable, e.g., consider the automatic multilingual speech translation task. In order to ensure seamless user experience, it is expected that the model extracts the task information implicitly.\nThus, a holistic joint learning strategy requires a generic framework which learns task-specific information without any explicit supervision.\nIn this work, we propose a generic framework which can be easily integrated into the MTL strategies which can extract task-based characteristics. The proposed approach helps align existing MTL approaches with human learning processes by incorporating task information into the learning process and getting rid of the issues related to forgetfulness. We design a modulation network for learning the task characteristics and modulating the parameters of the model during MTL. 
As discussed above, the task information may or may not be explicitly available during the training. Hence, we propose two different designs of the task modulation network to learn the task characteristics; one uses explicit task identities while the other uses examples from the task as input. The model, coupled with the modulation network, jointly learns on all the tasks and, at the same time, performs the task-specific learning. The proposed approach tackles issues related to forgetfulness by keeping a single model for all the tasks, and hence avoiding the expensive finetuning phase. Having a single model for all the tasks also reduces memory constraints, improving suitability for low resource devices.\nTo evaluate the proposed framework, we conduct two sets of experiments. First, we include the task information during MTL on text-to-text tasks to show the effect of task information. Secondly, we train a model on tasks with different modalities and end goals, including highly confounding tasks. Our proposed framework allows the model to learn the task characteristics without any explicit supervision, and hence to train a single model which performs well on all the tasks. The main contributions of this work are as follows:\n• We propose an approach to tackle the issue of forgetfulness which occurs during the finetuning phase of existing MTL strategies.\n• Our model, without any finetuning, achieves superior performance on all the tasks, which alleviates the need to keep separate task-specific models.\n• Our proposed framework is generic enough to be used with any MTL strategy involving tasks with multiple modalities." }, { "heading": "2 TASK-AWARE MULTITASK LEARNING", "text": "An overview of our proposed approach is shown in Figure 1." }, { "heading": "2.1 BASE MODEL", "text": "In general, the sequence-to-sequence architecture consists of two components: (1) an encoder which computes a set of representations X = {x1, · · · , xm} ∈ Rm×d corresponding to the input sequence x, and (2) a decoder coupled with an attention mechanism (Bahdanau et al., 2015) that dynamically reads the encoder’s output and predicts the target sequence Y = {y1, · · · , yn} ∈ Rn×d. It is trained on a dataset D to maximize p(Y |X; θ), where θ denotes the parameters of the model. We use the Transformer (Vaswani et al., 2017) as our base model. Based on the task modalities, we choose the preprocessing layer in the Transformer, i.e., a speech or text (text-embedding) preprocessing layer. The speech preprocessing layer consists of a stack of k CNN layers with stride 2 for both time and frequency dimensions. This layer compresses the speech sequence and produces the output sequence such that input sequences corresponding to all the tasks have similar dimensions, d. The overview of the base sequence-to-sequence model is shown in the rightmost part of Figure 1." }, { "heading": "2.2 TASK MODULATION NETWORK", "text": "The task modulation network performs two operations. In the first step, it computes the task characteristics (te) using the task characteristics layer. It then modulates the model parameters θ using te in the second step." }, { "heading": "2.2.1 TASK CHARACTERISTICS NETWORK:", "text": "We propose two types of Task Characteristics Networks (TCN) to learn the task characteristics, where one uses explicit task identities while the other uses source-target sequences as input.\nExplicit Task Information: In this approach, the tasks involved are represented using different task identities and fed as input to the TCN as one-hot vectors. 
This network consists of a feed-forward layer which produces the task embedding used for modulating the model parameters.\nte = FFN(e), (1)\nwhere e ∈ Rs is a one-hot encoding of the s tasks used during joint learning. Implicit Task Information: The implicit TCN computes the task embeddings using example sequences from the tasks without any external supervision. It consists of four sub-layers: (1) Sequence Representation Layer, (2) Bi-directional Attention Layer, (3) Sequence Summary Layer, and (4) Task Embedding Layer.\nThe sequence representation sub-layer consists of uni-directional Transformer Encoder (TE) blocks (Vaswani et al., 2017). It takes the source and target sequences from the tasks as input and produces self-attended source and target sequences.\nXsa = TE(X), Y sa = TE(Y ), (2)\nwhere Xsa ∈ RM×d, Y sa ∈ RN×d. This sub-layer computes the contextual representation of the sequences.\nThe Bi-directional Attention (BiA) sub-layer takes the self-attended source and target sequences from the previous layer as input and computes the relation between them using Dot-Product Attention (Luong et al., 2015). As a result, we get target-aware source (Xat ∈ RM×d) and source-aware target (Y as ∈ RN×d) representations as outputs.\nXat = BiA(Xsa, Y sa), Y as = BiA(Y sa, Xsa). (3)\nThe sequence summary sub-layer is similar to the sequence representation sub-layer and summarizes the sequences. The sequence summaries are given by:\nXs = TEu(Xat), Y s = TEu(Y as), (4)\nwhere Xs ∈ RM×d, Y s ∈ RN×d. Equation 4 summarizes the sequences Xat and Y as, which contain the contextual and attention information. We take the last tokens xs ∈ Rd and ys ∈ Rd from the two outputs, since the last token can see the whole sequence and acts as its summary. The task embedding layer computes te by taking the outputs of the sequence summary sub-layer and applying a feed-forward network:\nte = FFN([xs : ys]). (5)" }, { "heading": "2.2.2 MODULATING MODEL PARAMETERS", "text": "We modulate the parameters (θ) of the network (Section 2.1) to account for the task-specific variation during MTL over a set of tasks. We achieve this by scaling (γ) and shifting (β) the outputs of each layer (e.g., transformer block), including any preprocessing layers in the model, based on Feature-wise Linear Modulation (FiLM; Perez et al., 2018). The γ and β parameters are obtained from the task embedding te computed using either Equation 1 or Equation 5.\nγ = te[: d], β = te[d :], (6)\nwhere te ∈ R2d and d is the hidden dimension of the model. Once we have γ and β, we apply the feature-wise linear modulation (Perez et al., 2018) to compute the modulated output (Ol) for each block of the model (a minimal sketch is given below).\nOl = γ ∗ fl(vl; θl) + β, l = 1, · · · , L, (7)\nwhere L is the total number of blocks in the model and fl represents the lth block of the model with parameters θl ∈ θ and inputs vl." }, { "heading": "2.3 TRAINING", "text": "MTL has been successfully applied across different applications of machine learning such as natural language processing (Hashimoto et al., 2016; Collobert & Weston, 2008), speech recognition (Liu et al., 2019; Deng et al., 2013), computer vision (Zhang et al., 2014; Liu et al., 2015; Girshick, 2015), and drug discovery (Ramsundar et al., 2015). It comes in many forms: joint learning, learning to learn, and learning with auxiliary tasks. We consider two MTL strategies, (1) joint learning and (2) learning to learn, to train on a set of S tasks, T = {τ1, · · · , τS} with corresponding datasets D = {D1, · · · , DS}. 
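To make Equations 1 and 6-7 concrete, the following is a minimal NumPy sketch of the explicit task embedding and the FiLM-style modulation. The two-layer FFN, the ReLU non-linearity, and all shapes and names (e.g., film_modulate) are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def task_embedding_explicit(e, W1, b1, W2, b2):
    # Eq. 1: te = FFN(e) for a one-hot task id e of shape [s]
    h = np.maximum(0.0, W1 @ e + b1)   # hidden layer with ReLU (assumed)
    return W2 @ h + b2                 # te has shape [2d]

def film_modulate(block_output, te):
    # Eqs. 6-7: split te into scale/shift and modulate feature-wise
    d = block_output.shape[-1]
    gamma, beta = te[:d], te[d:]
    return gamma * block_output + beta  # broadcast over sequence positions

# toy usage: s = 3 tasks, d = 4 features, sequence length 5
rng = np.random.default_rng(0)
s, d, hdim = 3, 4, 8
W1, b1 = rng.normal(size=(hdim, s)), np.zeros(hdim)
W2, b2 = rng.normal(size=(2 * d, hdim)), np.zeros(2 * d)
e = np.eye(s)[1]                       # one-hot id of the second task
te = task_embedding_explicit(e, W1, b1, W2, b2)
out = film_modulate(rng.normal(size=(5, d)), te)  # modulated block output
```

In training, te would be computed from a shuffled copy of the batch (Section 2.3), so that each example is modulated using task characteristics inferred from another example of the same task.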
As our first training strategy, we use Joint Learning (JL) (Caruana, 1997), which is the most commonly used training strategy for MTL. In JL, the model parameters, including the output layer, are shared across all the tasks involved in the training. For the second training strategy, under the learning-to-learn approach, we use a variant of meta-learning, Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017a). Even though MAML is mostly used in few-shot learning settings, we use it since it allows for task-specific learning during the meta-train step and it has also been shown to provide improvements in the field of speech translation (Indurthi et al., 2020).\nWe resolve the source-target vocabulary mismatch across different tasks in MTL by using a vocabulary of subwords (Sennrich et al., 2016) computed from all the tasks. We sample a batch of examples from Ds and use this as input to the TCN and the Transformer model. To ensure that each training example uses the task embedding computed from another example, we randomly shuffle this batch while using it as input to the TCN. This random shuffling improves the generalization performance by forcing the network to learn task-specific characteristics (te) in Equation 1 or 5. We compute the task embedding in the meta-train step as well; however, the parameters of the TCN are updated only during the meta-test step. At inference time, we use task embeddings precomputed from a batch of examples randomly sampled from the training set." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 TASKS AND DATASETS", "text": "We conduct two sets of experiments, one with tasks having the same input modality, i.e., text, and another over tasks having different input modalities, i.e., speech and text. The main motivation behind the text-based experiments is to establish the importance of providing task information in MTL. Our main experiments, involving different input modalities, contain highly confusable tasks. These experiments help us demonstrate the effectiveness of our approach in a generic setup. We incorporate the proposed task modulation framework into the joint and meta-learning strategies and analyze its effects." }, { "heading": "3.1.1 SINGLE MODALITY EXPERIMENTS", "text": "We perform the small-scale text-to-text machine translation task over three language pairs, English-German/Romanian/Turkish (En-De/Ro/Tr). We keep English as the source language, which makes it crucial to use task information to produce different outputs from the same input. Since it is easy to provide the task identity through one-hot vectors in text, we provide the task information by simply prepending the task identity to the source sequence of each task, e.g., ”translate from English to German”, ”translate from English to Turkish”, similar to Raffel et al. (2019). We also train models using our proposed framework to learn the task information and shared knowledge jointly.\nFor En-De, we use 1.9M training examples from the Europarl v7 dataset. Europarl dev2006 and News Commentary nc-dev2007 are used as the dev sets, and Europarl devtest2006, Europarl test2006 and News Commentary nc-devtest2007 as the test sets. For En-Tr, we train using 200k training examples from the setimes2 dataset. We use newsdev2016 as the dev and newstest2017 as the test set. For En-Ro, we use 600k training examples from the Europarl v8 and setimes2 datasets. We use newsdev2016 as dev and newstest2016 as the test set." 
}, { "heading": "3.1.2 MULTIPLE MODALITY EXPERIMENTS", "text": "To alleviate the data scarcity issue in Speech Translation (ST), several MTL strategies have been proposed to jointly train the ST task with Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks. These MTL approaches lead to significant performance gains on both ST and ASR tasks after the finetuning phase. We evaluate our proposed framework based on this multimodal MTL setting since passing the task information explicitly via prepending labels(like the text-to-text case) in the source sequence is not possible. We use the following datasets for ST English-German, ASR English, MT English-German tasks:\nMT En-De: We use the Open Subtitles (Lison et al., 2019) and WMT 19 corpora. WMT 19 consists of Common Crawl, Europarl v9, and News Commentary v14 datasets(22M training examples).\nASR English: We used five different datasets namely LibriSpeech (Panayotov et al., 2015), MuST-C (Di Gangi et al., 2019), TED-LIUM (Hernandez et al., 2018), Common Voice (Ardila et al., 2020) and filtered IWSLT 19 (IWS, 2019) to train the English ASR task.\nST Task: We use the Europarl ST (Iranzo-Sánchez et al., 2019), IWSLT 2019 (IWS, 2019) and MuST-C (Di Gangi et al., 2019) datasets. Since ST task has lesser training examples, we use data augmentation techniques (Lakumarapu et al., 2020) to increase the number of training examples.\nPlease refer to the appendix for more details about the data statistics and data augmentation techniques used. All the models reported in this work use the same data settings for training and evaluation." }, { "heading": "3.2 IMPLEMENTATION DETAILS AND METRICS", "text": "We implemented all the models using Tensorflow 2.2 framework. For all our experiments, we use the Transformer(Vaswani et al., 2017) as our base model. The hyperparameter settings such as learning rate, scheduler, optimization algorithm, and dropout have been kept similar to the Transformer, other than the ones explicitly stated to be different. The ASR performance is measured using Word Error Rate (WER) while ST and MT performances are calculated using the detokenized cased BLEU score (Post, 2018). We generate word-piece based universal vocabulary (Gu et al., 2018a) of size 32k using source and target text sequences of all the tasks. For the task aware MTL strategies, we choose a single model to report the results rather than finding the best model for each task separately.\nWe train the text-to-text translation models using 6 Encoder and Decoder layers with a batch size of 2048 text tokens. The training is performed using NVIDIA P40 GPU for 400k steps.\nIn multi-modality experiments, the speech signals are represented using 80-dimensional log-Mel features and use 3 CNN layers in the preprocessing layer described in Section 2.1. We use 12 Encoder and Decoder layers and train for 600k steps using 8 NVIDIA V100 GPUs. For the systems without TCN, we perform finetuning for 10k steps on each task." }, { "heading": "3.3 RESULTS", "text": "" }, { "heading": "3.3.1 SINGLE MODALITY EXPERIMENTS", "text": "The results for the text-to-text translation models trained with different MTL strategies have been provided in Table 2. The MTL models with prepended task label (Raffel et al., 2019) are referred to as OHV (One Hot Vector). Unlike T5, we don’t initialize the models with the text embeddings from large pretrained language model (Devlin et al., 2018). 
Instead, we focus on establishing the importance of task information during MTL and having a single model for all the tasks. As we can see from the results, providing the task information via text labels, or implicitly using the proposed task aware MTL, leads to significant performance improvements compared to MTL without the task information. The models trained using OHV have better performance than those trained using the implicit TCN. However, providing OHV via text labels is not always possible for tasks involving non-text modalities such as speech and images." }, { "heading": "3.3.2 MULTI MODALITY EXPERIMENTS", "text": "We evaluate the two proposed TCNs and compare them with the vanilla MTL strategies. The performance of all the models is reported in Table 3. We also extended the T5 (Raffel et al., 2019) approach to the multi-modality experiments and compare it with our approach.\nEffect of Task Information: The models trained using task aware MTL achieve significant performance gains over the models trained using the vanilla MTL approach. Our single model achieves superior performance compared to the vanilla MTL models even after their finetuning. This shows that the task information is not only essential to identify the task, but also helps to extract the shared knowledge better. Our JL and MAML models trained with task aware MTL achieve improvements of (+2.65, +2.52) for ST, (-1.34, -1.18) for ASR, and (+0.72, +1.26) for the MT task. MAML has some scope for task-specific learning during its meta-train step, which explains why the improvements for MAML are slightly smaller than those for JL on the ST and ASR tasks.\nWe also report results using the Direct Learning (DL) approach, where separate models are trained for each task, to compare with the MTL models. All the MTL models outperform the DL models on the ST and ASR tasks and have comparable performance on the MT task.\nExplicit v/s Implicit TCN: Our proposed implicit TCN learns the task characteristics directly from the examples of each task and achieves a performance comparable to the models trained using the explicit TCN. This indicates that it is better to learn the task information implicitly, specifically for tasks having overlapping characteristics. Figure 2 contains the t-SNE plots for task embeddings obtained from the implicit TCN for the single and multi-modality experiments. We can observe that the implicit TCN is able to separate all three tasks effectively without any external supervision.\nSingle model for all tasks: We select one single model for reporting the results for our approach, since having a single model for multiple tasks is favourable in low resource settings. However, we also report the best models corresponding to each task (rows 8 and 11 of Table 3). We observe that choosing a single model over task-specific models did not result in any significant performance loss.\nFeature-wise v/s Input based modulation: We also implemented the input based conditioning (Toshniwal et al., 2018; Raffel et al., 2019), where we prepend the TCN output, i.e., the task information, to the source and target sequences. As compared to our approach, this approach provides a comparable performance on the ASR task. However, the ST performance is erratic, and the output is mixed between the ST and ASR tasks. This shows that the feature-wise modulation is a more efficient way to carry out task-based conditioning for highly confusing tasks like ST and ASR.\nNumber of parameters added: The explicit TCN is a single dense layer, adding roughly 1,500 new parameters. 
For the implicit TCN, roughly 1 million additional parameters are added. However, simply increasing the number of parameters is not sufficient to improve the performance. For example, we trained several models by increasing the number of encoder and decoder layers up to 16. However, these models gave inferior performance as compared to the reported models with 12 encoder and decoder layers.\nScaling with a large number of tasks: The t-SNE plots in Figure 2b are drawn using the three test datasets. However, we used multiple datasets for each of the ASR (LibriSpeech, Common Voice, TED-LIUM, MuST-C ASR), ST (MuST-C, IWSLT20, Europarl), and MT (WMT19, OpenSubtitles) tasks in the multi-modality experiments. We analyze whether or not our proposed approach is able to separate the data coming from these different distributions. As compared to data coming from different tasks, separating data coming from the same task (but generated from different distributions) is more difficult. Earlier, in Figure 2b, we observed that the output is clustered based on the tasks. Figure 2c shows that within these task-based clusters, there are sub-clusters based on the source dataset. Hence, the model is able to identify each sub-task based on the source dataset. The model also gives decent performance on all of them. For example, the single model achieves a WER of 7.5 on LibriSpeech test-clean, 10.35 on MuST-C, 11.65 on TED-LIUM v3, and 20.36 on the Common Voice test set. For the ST task, the same model gives a BLEU score of 28.64 on the MuST-C test set, 27.61 on IWSLT tst-2010, and 27.57 on the Europarl test set. This shows that our proposed approach scales well with the total number of tasks.\nComparison with existing works: The design of our system, i.e., the parameters and the related tasks, was fixed keeping the ST task in mind. We compare the results of our best systems (after checkpoint averaging) with recent works in Table 4. We set a new state-of-the-art (SOTA) on the ST En-De MuST-C task. For the ASR task, we outperform the very deep Transformer-based model of Pham et al. (2019). We achieve a 19.2% improvement in WER as compared to the model with the same number of encoder and decoder blocks. The best Transformer-based MT model achieves a BLEU score of 30.10; however, it uses 60 encoder blocks. The performance drop on the MT task is attributed to simply training a bigger model without using the additional initialization techniques proposed in Liu et al. (2015); Wu et al. (2019). However, the MT task helps the other tasks and improves the overall performance of the system." }, { "heading": "4 RELATED WORK", "text": "Various MTL techniques have been widely used to improve the performance of end-to-end neural networks. These techniques are known to mitigate issues like overfitting and data scarcity. Joint learning (Caruana, 1997) improves the generalization by leveraging the shared information contained in the training signals of related tasks. MAML (Finn et al., 2017b) was proposed for training a joint model on a variety of tasks, such that it can quickly adapt to new tasks. Both learning approaches require a finetuning phase, resulting in a different model for each task. Moreover, during the finetuning phase the model substantially forgets the knowledge acquired during the large-scale pretraining.\nOne of the original solutions to this problem is pseudo-rehearsal, which involves learning the new task while rehearsing generated items representative of the previous task. 
This has been investigated and addressed to a certain extent in Atkinson et al. (2018) and Li & Hoiem (2018). He et al. (2020) address this by using a mix-review finetuning strategy, where they include the pretraining objective during the finetuning phase. Raffel et al. (2019) take a different approach by providing the task information to the model and achieve performance improvements on different text-to-text tasks. Although this alleviates the need for finetuning, it cannot be extended to tasks involving complex modalities. In our work, we propose a generic framework on top of MTL to provide task information to the model, which can be applied irrespective of the task modalities. It also removes the need for finetuning, tackling the issue of forgetfulness at its root cause.\nA few approaches have also tried to train multiple tasks with a single model. Cheung et al. (2019) project the input to orthogonal sub-spaces based on the task information. In the approach proposed by Li & Hoiem (2018), the model is trained on various image classification tasks having the same input modality. They preserve the output of the model on the training example so that the parameters do not deviate much from the original tasks. This is useful when the tasks share the same goal, e.g., classification. However, we train on a much more varied set of tasks, which might also have the same inputs with different end goals. Strezoski et al. (2019) propose to apply a fixed mask based on the task identity. Our work can be seen as a generalization of this work. As compared to all these approaches, our model is capable of performing both task identification and the corresponding task learning simultaneously. It learns to control the interactions among various tasks based on the inter-task similarity without any explicit supervision.\nIn the domain of neural machine translation, several MTL approaches have been proposed (Gu et al., 2018a;b). Similarly, recent works have shown that jointly training the ST, ASR, and MT tasks improves the overall performance (Liu et al., 2019; Indurthi et al., 2020). However, all of these require a separate finetuning phase." }, { "heading": "5 CONCLUSION", "text": "This work proposes a task-aware framework which helps to improve the learning ability of existing multitask learning strategies. It addresses the issues faced during vanilla multitask learning, which include forgetfulness during finetuning and the problems associated with having separate models for each task. The proposed approach helps to better align existing multitask learning strategies with human learning. It achieves significant performance improvements with a single model on a variety of tasks, which is favourable in low resource settings." }, { "heading": "6 APPENDIX", "text": "" }, { "heading": "6.1 DATASETS", "text": "" }, { "heading": "6.1.1 DATA AUGMENTATION FOR SPEECH TRANSLATION", "text": "Table 5 provides details about the datasets used for the multi-modality experiments. Since the En-De ST task has relatively fewer training examples compared to the ASR and MT tasks, we augment the ST dataset with synthetic training examples. We generate synthetic speech sequences and pair them with synthetic German text sequences obtained by using the top two beam-search results of two trained English-to-German NMT models. For the speech sequences, we use the Sox library to generate speech signals with different values of the speed, echo, and tempo parameters, similar to Potapczyk et al. (2019). 
The parameter values are uniformly sampled from the following ranges: tempo ∈ (0.85, 1.3), speed ∈ (0.95, 1.05), echo delay ∈ (20, 200), and echo decay ∈ (0.05, 0.2) (an illustrative sampling sketch is given at the end of this appendix). We also train two NMT models on the En-De language pair to generate synthetic German sequences. The first model is based on Edunov et al. (2018) and the second model (Indurthi et al., 2019) is trained on the WMT’18 En-De and OpenSubtitles datasets. We increase the size of the IWSLT 19 (filtered) ST dataset to five times its original size by augmenting 4x data: four text sequences using the top two beam results from each En-De NMT model, and four speech signals using the Sox parameter ranges. For Europarl-ST, we augment 2x examples to triple the size. The TED-LIUM 3 dataset does not contain speech-to-text translation examples originally; hence, we create 2x synthetic speech-to-text translations using the speech-to-text transcripts. Finally, for the MuST-C dataset, we only create synthetic speech and pair it with the original translation to increase the dataset size to 4x. Overall, we created synthetic training data of size approximately equal to four times the original data for the ST task." }, { "heading": "6.1.2 TASK IDENTIFICATION WITHOUT TASK INFORMATION", "text": "Under the multi-modality setting, we conducted smaller-scale experiments using only one dataset for each of the ST, ASR, and MT tasks. The details of the datasets used are provided in Table 7. We trained on a single P40 GPU for 400k steps. The corresponding results are reported in Table 6. All the results have been obtained without any finetuning. Even though our task-aware MTL model achieves a significant performance improvement over the vanilla MTL models, we can observe that the vanilla MTL models are also able to give a decent performance on all tasks without any finetuning. An explanation for this is that we used the MuST-C dataset for the En-De ST task and TED-LIUM v3 for the ASR task, which means that the source speech is coming from two different sources. However, if we use the same datasets for both tasks (after data augmentation), the MTL models get confused and the ST and ASR outputs are mixed. The MTL models might be able to learn the task identities simply based on the source speech sequences, since these sequences come from a different dataset for each task type (MuST-C for ST and TED-LIUM v3 for ASR). However, this does not mean that vanilla MTL models perform joint learning effectively. A human who can perform multiple tasks from the same input is aware of the task they have to perform beforehand. Similarly, it is unreasonable to expect different outputs (translation, transcription) from a model for the same type of input (English speech) without any explicit task information." }, { "heading": "6.1.3 IMPLEMENTATION DETAILS", "text": "The detailed hyperparameter settings used for the single-modality and multi-modality experiments are provided in Table 8.\nS No. | MTL Strategy | MT BLEU (↑) Test | ASR WER (↓) Dev / Test | ST BLEU (↑) Dev / Test\n1 | Joint Learning | 14.77 | 29.56 / 30.87 | 13.10 / 12.70\n2 | Meta Learning | 14.74 | 28.58 / 29.92 | 13.89 / 13.67\nThis Work" } ]
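As referenced in Section 6.1.1, the following is a minimal Python sketch of the parameter sampling for the Sox-based augmentation. The fixed echo gains (0.8/0.9) and the exact SoX invocation are assumptions, since the paper specifies only the sampling ranges.

```python
import random
import subprocess

def sample_sox_params(rng=random):
    # Uniform sampling over the ranges reported in Section 6.1.1
    return {
        "tempo": rng.uniform(0.85, 1.3),
        "speed": rng.uniform(0.95, 1.05),
        "echo_delay": rng.uniform(20, 200),   # milliseconds
        "echo_decay": rng.uniform(0.05, 0.2),
    }

def augment(in_wav, out_wav, p):
    # Illustrative SoX command; the echo effect also needs input/output
    # gains, set here to 0.8/0.9 as an assumption not stated in the paper.
    cmd = ["sox", in_wav, out_wav,
           "tempo", f"{p['tempo']:.3f}",
           "speed", f"{p['speed']:.3f}",
           "echo", "0.8", "0.9",
           f"{p['echo_delay']:.0f}", f"{p['echo_decay']:.2f}"]
    subprocess.run(cmd, check=True)

# e.g., four augmented copies per utterance, as used for the IWSLT 19 data
params = [sample_sox_params() for _ in range(4)]
```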
2020
null
SP:a1e2218e6943bf138aeb359e23628676b396ed66
[ "This work proposes a deep reinforcement learning-based optimization strategy to the fuel optimization problem for the hybrid electric vehicle. The problem has been formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A continuous time representation of the problem is also used compared to conventional techniques which mostly use a discrete time formulation. " ]
This paper deals with the fuel optimization problem for hybrid electric vehicles in a reinforcement learning framework. Firstly, considering the hybrid electric vehicle as a completely observable non-linear system with uncertain dynamics, we solve an open-loop deterministic optimization problem to determine a nominal optimal state. This is followed by the design of a deep reinforcement learning based optimal controller for the non-linear system using a concurrent learning based system identifier, such that the actual states and the control policy are able to track the optimal state and optimal policy autonomously, even in the presence of external disturbances, modeling errors, uncertainties and noise, while significantly reducing the computational complexity at the same time. This is in sharp contrast to conventional methods like PID and Model Predictive Control (MPC), as well as traditional RL approaches like ADP, DDP and DQN, which mostly depend on a set of pre-defined rules and provide sub-optimal solutions under similar conditions. The low value of the H-infinity (H∞) performance index of the proposed optimization algorithm addresses the robustness issue. The optimization technique thus proposed is compared with the traditional fuel optimization strategies for hybrid electric vehicles to illustrate the efficacy of the proposed method.
[]
[ { "authors": [ "R. Akrour", "A. Abdolmaleki", "H. Abdulsamad", "G. Neumann" ], "title": "Model Free Trajectory Optimization for Reinforcement Learning", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "A. Barto", "R. Sutton", "C. Anderson" ], "title": "Neuron-like adaptive elements that can solve difficult learning control problems", "venue": "IEEE Transaction on Systems, Man, and Cybernetics,", "year": 1983 }, { "authors": [ "R. Bellman" ], "title": "The theory of dynamic programming", "venue": "DTIC Document, Technical Representations", "year": 1954 }, { "authors": [ "D. Bertsekas" ], "title": "Dynamic Programming and Optimal Control", "venue": "Athena Scientific,", "year": 2007 }, { "authors": [ "S. Bhasin", "R. Kamalapurkar", "M. Johnson", "K. Vamvoudakis", "F.L. Lewis", "W. Dixon" ], "title": "A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems", "venue": null, "year": 2013 }, { "authors": [ "R.P. Bithmead", "V. Wertz", "M. Gerers" ], "title": "Adaptive Optimal Control: The Thinking Man’s G.P.C", "venue": "Prentice Hall Professional Technical Reference,", "year": 1991 }, { "authors": [ "A. Bryson", "H.Y.-C" ], "title": "Applied Optimal Control: Optimization, Estimation and Control. Washington: Hemisphere", "venue": "Publication Corporation,", "year": 1975 }, { "authors": [ "G.V. Chowdhary", "E.N. Johnson" ], "title": "Theory and flight-test validation of a concurrent-learning adaptive controller", "venue": "Journal of Guidance Control and Dynamics,34:,", "year": 2011 }, { "authors": [ "G. Chowdhary", "T. Yucelen", "M. Mühlegg", "E.N. Johnson" ], "title": "Concurrent learning adaptive control of linear systems with exponentially convergent bounds", "venue": "International Journal of Adaptive Control and Signal Processing,", "year": 2013 }, { "authors": [ "P. Garcı́a", "J.P. Torreglosa", "L.M. Fernández", "F. Jurado" ], "title": "Viability study of a FC-batterySC tramway controlled by equivalent consumption minimization strategy", "venue": "International Journal of Hydrogen Energy,", "year": 2012 }, { "authors": [ "A. Gosavi" ], "title": "Simulation-based optimization: Parametric optimization techniques and reinforcement learning", "venue": null, "year": 2003 }, { "authors": [ "J. Han", "Y. Park" ], "title": "A novel updating method of equivalent factor in ECMS for prolonging the lifetime of battery in fuel cell hybrid electric vehicle", "venue": "In IFAC Proceedings,", "year": 2012 }, { "authors": [ "J. Han", "J.F. Charpentier", "T. Tang" ], "title": "An Energy Management System of a Fuel Cell/Battery", "venue": "Hybrid Boat. Energies,", "year": 2014 }, { "authors": [ "J. Han", "Y. Park", "D. Kum" ], "title": "Optimal adaptation of equivalent factor of equivalent consumption minimization strategy for fuel cell hybrid electric vehicles under active state inequality constraints", "venue": "Journal of Power Sources,", "year": 2014 }, { "authors": [ "R. Kamalapurkar", "L. Andrews", "P. Walters", "W.E. Dixon" ], "title": "Model-based reinforcement learning for infinite-horizon approximate optimal tracking", "venue": "In Proceedings of the IEEE Conference on Decision and Control (CDC),", "year": 2014 }, { "authors": [ "R. Kamalapurkar", "H. Dinh", "S. Bhasin", "W.E. Dixon" ], "title": "Approximate optimal trajectory tracking for continuous-time nonlinear systems", "venue": null, "year": 2015 }, { "authors": [ "S.G. 
Khan" ], "title": "Reinforcement learning and optimal adaptive control: An overview and implementation examples", "venue": "Annual Reviews in Control,", "year": 2012 }, { "authors": [ "M.J. Kim", "H. Peng" ], "title": "Power management and design optimization of fuel cell/battery hybrid vehicles", "venue": "Journal of Power Sources,", "year": 2007 }, { "authors": [ "D. Kirk" ], "title": "Optimal Control Theory: An Introduction", "venue": "Mineola, NY,", "year": 2004 }, { "authors": [ "V. Konda", "J. Tsitsiklis" ], "title": "On actor-critic algorithms", "venue": "SIAM Journal on Control and Optimization,", "year": 2004 }, { "authors": [ "S. Levine", "P. Abbeel" ], "title": "Learning Neural Network Policies with Guided Search under Unknown Dynamics", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "F.L. Lewis", "S. Jagannathan", "A. Yesildirak" ], "title": "Neural network control of robot manipulators and nonlinear systems", "venue": null, "year": 1998 }, { "authors": [ "F.L. Lewis", "D. Vrabie", "V.L. Syrmos" ], "title": "Optimal Control, 3rd edition", "venue": null, "year": 2012 }, { "authors": [ "H. Li", "A. Ravey", "A. N’Diaye", "A. Djerdir" ], "title": "A Review of Energy Management Strategy for Fuel Cell Hybrid Electric Vehicle", "venue": "In IEEE Vehicle Power and Propulsion Conference (VPPC),", "year": 2017 }, { "authors": [ "W.S. Lin", "C.H. Zheng" ], "title": "Energy management of a fuel cell/ultracapacitor hybrid power system using an adaptive optimal control method", "venue": "Journal of Power Sources,", "year": 2011 }, { "authors": [ "P. Mehta", "S. Meyn" ], "title": "Q-learning and pontryagin’s minimum principle", "venue": "In Proceedings of IEEE Conference on Decision and Control,", "year": 2009 }, { "authors": [ "D. Mitrovic", "S. Klanke", "S. Vijayakumar" ], "title": "Adaptive Optimal Feedback Control with Learned Internal Dynamics Models", "venue": null, "year": 2010 }, { "authors": [ "H. Modares", "F.L. Lewis" ], "title": "Optimal tracking control of nonlinear partially-unknown constrainedinput systems using integral reinforcement", "venue": "learning. Automatica,", "year": 2014 }, { "authors": [ "S.J. Moura", "D.S. Callaway", "H.K. Fathy", "J.L. Stein" ], "title": "Tradeoffs between battery energy capacity and stochastic optimal power management in plug-in hybrid electric vehicles", "venue": "Journal of Power Sources,", "year": 1959 }, { "authors": [ "S.N. Motapon", "L. Dessaint", "K. Al-Haddad" ], "title": "A Comparative Study of Energy Management Schemes for a Fuel-Cell Hybrid Emergency Power System of More-Electric Aircraft", "venue": "IEEE Transactions on Industrial Electronics,", "year": 2014 }, { "authors": [ "G. Paganelli", "S. Delprat", "T.M. Guerra", "J. Rimaux", "J.J. Santin" ], "title": "Equivalent consumption minimization strategy for parallel hybrid powertrains", "venue": "In IEEE 55th Vehicular Technology Conference, VTC Spring 2002 (Cat. No.02CH37367),", "year": 2002 }, { "authors": [ "F. Segura", "J.M. Andújar" ], "title": "Power management based on sliding control applied to fuel cell systems: A further step towards the hybrid control concept", "venue": "Applied Energy,", "year": 2012 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 1998 }, { "authors": [ "E. Theoddorou", "Y. Tassa", "E. 
Todorov" ], "title": "Stochastic Differential Dynamic Programming", "venue": "In Proceedings of American Control Conference,", "year": 2010 }, { "authors": [ "E. Todorov", "Y. Tassa" ], "title": "Iterative Local Dynamic Programming", "venue": "In Proceedings of the IEEE International Symposium on ADP and RL,", "year": 2009 }, { "authors": [ "J.P. Torreglosa", "P. Garcı́a", "L.M. Fernández", "F. Jurado" ], "title": "Predictive Control for the Energy Management of a Fuel-Cell-Battery-Supercapacitor Tramway", "venue": "IEEE Transactions on Industrial Informatics,", "year": 2014 }, { "authors": [ "D. Vrabie" ], "title": "Online adaptive optimal control for continuous-time systems", "venue": "Ph.D. dissertation, University of Texas at Arlington,", "year": 2010 }, { "authors": [ "Dan Yu", "Mohammadhussein Rafieisakhaei", "Suman Chakravorty" ], "title": "Stochastic Feedback Control of Systems with Unknown Nonlinear Dynamics", "venue": "In IEEE Conference on Decision and Control,", "year": 2017 }, { "authors": [ "M.K. Zadeh" ], "title": "Stability Analysis Methods and Tools for Power Electronics-Based DC Distribution Systems: Applicable to On-Board Electric Power Systems and Smart Microgrids", "venue": null, "year": 2016 }, { "authors": [ "X. Zhang", "C.C. Mi", "A. Masrur", "D. Daniszewski" ], "title": "Wavelet transform-based power management of hybrid vehicles with multiple on-board energy sources including fuel cell, battery and ultracapacitor", "venue": "Journal of Power Sources,", "year": 2008 }, { "authors": [ "C. Zheng", "S.W. Cha", "Y. Park", "W.S. Lim", "G. Xu" ], "title": "PMP-based power management strategy of fuel cell hybrid vehicles considering multi-objective optimization", "venue": "International Journal Precision Engineering and Manufacturing,", "year": 2013 }, { "authors": [ "C.H. Zheng", "G.Q. Xu", "Y.I. Park", "W.S. Lim", "S.W. Cha" ], "title": "Prolonging fuel cell stack lifetime based on Pontryagin’s Minimum Principle in fuel cell hybrid vehicles and its economic influence evaluation", "venue": "Journal Power Sources,", "year": 2014 }, { "authors": [ "X. Zhong", "H. He", "H. Zhang", "Z. Wang" ], "title": "Optimal Control for Unknown Diiscrete-Time Nonlinear Markov Jump Systems Using Adaptive Dynamic Programming", "venue": "IEEE Transactions on Neural networks and learning systems,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Hybrid electric vehicles powered by fuel cells and batteries have attracted great enthusiasm in modern days as they have the potential to eliminate emissions from the transport sector. Now, both the fuel cells and batteries have got several operational challenges which make the separate use of each of them in automotive systems quite impractical. HEVs and PHEVs powered by conventional diesel engines and batteries merely reduce the emissions, but cannot eliminate completely. Some of the drawbacks include carbon emission causing environmental pollution from fuel cells and long charging times, limited driving distance per charge, non-availability of charging stations along the driving distance for the batteries. Fuel Cell powered Hybrid Electric Vehicles (FCHEVs) powered by fuel cells and batteries offer emission-free operation while overcoming the limitations of driving distance per charge and long charging times. So, FCHEVs have gained significant attention in recent years. As we find, most of the existing research which studied and developed several types of Fuel and Energy Management Systems (FEMS) for transport applications include Sulaiman et al. (2018) who has presented a critical review of different energy and fuel management strategies for FCHEVs. Li et al. (2017) has presented an extensive review of FMS objectives and strategies for FCHEVs. These strategies, however can be divided into two groups, i.e., model-based and modelfree. The model-based methods mostly depend on the discretization of the state space and therefore suffers from the inherent curse of dimensionality. The coumputational complexity increases in an exponential fashion with the increase in the dimension of the state space. This is quite evident in the methods like state-based EMS (Jan et al., 2014; Zadeh et al., 2014; 2016), rule-based fuzzy logic strategy (Motapon et al., 2014), classical PI and PID strategies (Segura et al., 2012), Potryagin’s minimum principle (PMP) (Zheng et al., 2013; 2014), model predictive control (MPC) (Kim et al., 2007; Torreglosa et al., 2014) and differential dynamic programming (DDP) (Kim et al., 2007). Out of all these methods, differential dynamic programming is considered to be computationally quite\nefficient which rely on the linearization of the non-linear system equations about a nominal state trajectory followed by a policy iteration to improve the policy. In this approach, the control policy for fuel optimization is used to compute the optimal trajectory and the policy is updated until the convergence is achieved.\nThe model-free methods mostly deal with the Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014) and Reinforcement Learning (RL) based strategies (Mitrovic et al., 2010; Khan et al., 2012) icluding DDP (Mayne et al., 1970). Here, they tend to compute the control policy for fuel optimization by continous engagement with the environment and measuring the system response thus enabling it to achieve at a solution of the DP equation recursively in an online fashion. In deep reinforcement learning, multi-layer neural networks are used to represent the learning function using a non-linear parameterized approximation form. 
Although a compact parameterized form does exist for the learning function, the inability to know it a priori makes the method suffer from the curse of dimensionality (O(d^2), where d is the dimension of the state space), thus making it infeasible to apply to a high-dimensional fuel management system.\nThe computational complexity problem of the traditional RL methods like policy iteration (PI) and value iteration (VI) (Bellman et al., 1954; 2003; Barto et al., 1983; Bertsekas, 2007) can be overcome by a simulation based approach (Sutton et al., 1998), where the policy or the value function can be parameterized with sufficient accuracy using a small number of parameters. Thus, we will be able to transform the optimal control problem into an approximation problem in the parameter space (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), sidestepping the need for model knowledge and excessive computation. However, the convergence requires sufficient exploration of the state-action space, and the optimality of the obtained policy depends primarily on the accuracy of the parameterization scheme.\nAs a result, a good approximation of the value function is of utmost importance to the stability of the closed-loop system, and it requires convergence of the unknown parameters to their optimal values. Hence, this sufficient exploration condition manifests itself as a persistence of excitation (PE) condition when RL is implemented online (Mehta et al., 2009; Bhasin et al., 2013; Vrabie, 2010), which is impossible to guarantee a priori.\nMost of the traditional approaches for fuel optimization are unable to address the robustness issue. The methods described in the literature, including those of PID (Segura et al., 2012), Model Predictive Control (MPC) (Kim et al., 2007; Torreglosa et al., 2014) and Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014), as well as the simulation based RL strategies (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), suffer from the drawback of providing a suboptimal solution in the presence of external disturbances and noise. As a result, the application of these methods for fuel optimization in hybrid electric vehicles, which are plagued by various disturbances in the form of sudden charge and fuel depletion, changes in the environment, and changes in the values of parameters like remaining useful life, internal resistance, voltage and temperature of the battery, is quite impractical.\nThe fuel optimization problem for the hybrid electric vehicle has therefore been formulated as a fully observed stochastic Markov Decision Process (MDP). Instead of using Trajectory-optimized LQG (T-LQG) or Model Predictive Control (MPC), which provide a sub-optimal solution in the presence of disturbances and noise, we propose a deep reinforcement learning-based optimization strategy using concurrent learning (CL) that uses the state-derivative-action-reward tuples to present a robust optimal solution. The convergence of the weight estimates of the policy and the value function to their optimal values justifies our claim. The two major contributions of the proposed approach can therefore be summarized as follows:\n1) The popular methods in the RL literature, including policy iteration and value iteration, suffer from the curse of dimensionality owing to the use of a simulation based technique which requires sufficient exploration of the state space (PE condition). 
Therefore, the proposed model-based RL scheme aims to relax the PE condition by using a concurrent learning (CL)-based system identifier to reduce the computational complexity. Generally, an estimate of the true controller designed using the CL-based method introduces an approximate estimation error which makes the stability analysis of the system quite intractable. The proposed method, however, has been able to establish the stability of the closed-loop system by introducing the estimation error and analyzing the augmented system trajectory obtained under the influence of the control signal.\n2) The proposed optimization algorithm implemented for fuel management in hybrid electric vehicles will nullify the limitations of the conventional fuel management approaches (PID, Model Predictive Control, ECMS, PMP) and traditional RL approaches (Adaptive Dynamic Programming, DDP, DQN), all of which suffer from the problem of sub-optimal behaviour in the presence of external disturbances, model uncertainties, frequent charging and discharging, changes of environment and other noise sources. The H-infinity (H∞) performance index, defined as the ratio of the output energy to the disturbance energy, has been established for the RL based optimization technique and compared with the traditional strategies to address the robustness issue of the proposed design scheme.\nThe rest of the paper is organised as follows: Section 2 presents the problem formulation, including the open-loop optimization and the reinforcement learning-based optimal controller design, which are described in subsections 2.1 and 2.2 respectively. The parametric system identification and value function approximation are detailed in subsections 2.2.1 and 2.2.2. This is followed by the stability and robustness analysis (using the H-infinity (H∞) performance index) of the closed-loop system in subsection 2.2.4. Section 3 provides the simulation results and discussion, followed by the conclusion in Section 4." }, { "heading": "2 PROBLEM FORMULATION", "text": "Considering the fuel management system of a hybrid electric vehicle as a continuous-time affine non-linear dynamical system:\nẋ = f(x, w) + g(x)u, y = h(x, v), (1)\nwhere x ∈ R^{nx}, y ∈ R^{ny}, u ∈ R^{nu} are the state, output and control vectors respectively, f(.) denotes the drift dynamics and g(.) denotes the control effectiveness matrix. The functions f and h are assumed to be locally Lipschitz continuous functions such that f(0) = 0 and ∇f(x) is continuous for every bounded x ∈ R^{nx}. The process noise w and measurement noise v are assumed to be zero-mean, uncorrelated Gaussian white noise with covariances W and V, respectively.\nAssumption 1: We consider the system to be fully observed:\ny = h(x, v) = x. (2)\nRemark 1: This assumption is made to provide a tractable formulation of the fuel management problem and to sidestep the need for the complex treatment required when a stochastic control problem is treated as a partially observed MDP (POMDP).\nOptimal Control Problem: For a continuous-time system with unknown nonlinear dynamics f(.), we need to find an optimal control policy πt over a finite time horizon [0, t], where πt is the control policy at time t such that πt = u(t), to minimize the cost function given by J = ∫_0^t (x^T Qx + u^T Ru) dτ + x^T Fx, where Q, F > 0 and R ≥ 0.
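As a small illustration of this finite-horizon cost, the following Python sketch evaluates J along a sampled trajectory. The helper name quadratic_cost, the uniform time grid, and the trapezoidal quadrature are assumptions made for illustration only.

```python
import numpy as np

def quadratic_cost(xs, us, dt, Q, R, F):
    """J = int_0^t (x^T Q x + u^T R u) dtau + x(t)^T F x(t), approximated
    on a uniform grid (xs and us sampled at the same instants)."""
    running = np.array([x @ Q @ x + u @ R @ u for x, u in zip(xs, us)])
    J_run = np.trapz(running, dx=dt)   # trapezoidal rule over the horizon
    x_final = xs[-1]
    return J_run + x_final @ F @ x_final
```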
}, { "heading": "2.1 OPEN LOOP OPTIMIZATION", "text": "Considering a noise-free non-linear stochastic dynamical system with unknown dynamics:\nẋ = f(x, 0) + g(x)u, y = h(x, v) = x (3)\nwhere, x0 ∈ Rnx , y ∈ Rny , u ∈ Rnu are the initial state, output and the control vectors respectively, f(.) have their usual meanings and the corresponding cost function is given by Jd (x0, ut) =∫ t\n0\n(xTQx+ uRuT )dt+ xTFx.\nRemark: We have used piecewise convex function to approximate the non-convex fuel function globally which has been used to formulate the cost function for the fuel optimization.\nThe open loop optimization problem is to find the control sequence ut such that for a given initial state x0,\nūt = arg min Jd(x0, ut),\nsubject to ẋ = f(x, 0) + g(x)u,\ny = h(x, v) = x.\n(4)\nThe problem is solved using the gradient descent approach (Bryson et al., 1962; Gosavi et al., 2003), and the procedure is illustrated as follows: Starting from a random initial value of the control sequence U(0) = [ut(0)] the control policy is updated iteratively as U (n+1) = U (n) − α∇UJd(x0, U (n)), (5) until the convergence is achieved upto a certain degree of accuracy where U (n) denotes the control value at the nth iteration and α is the step size parameter. The gradient vector is given by:\n∇UJd(x0, U (n)) = ( ∂Jd ∂u0 , ∂Jd ∂u1 , ∂Jd ∂u2 , ....., ∂Jd ∂ut )|(x0,ut) (6)\nThe Gradient Descent Algorithm showing the approach has been detailed in the Appendix A.1.\nRemark 2: The open loop optimization problem is thus solved using the gradient descent approach considering a black-box model of the underlying system dynamics using a sequence of input-output tests without having the perfect knowlegde about the non-linearities in the model at the time of the design. This method proves to be a very simple and useful strategy for implementation in case of complex dynamical systems with complicated cost-to-go functions and suitable for parallelization." }, { "heading": "2.2 REINFORCEMENT LEARNING BASED OPTIMAL CONTROLLER DESIGN", "text": "Considering the affine non-linear dynamical system given by equation (1), our objective is to design a control law to track the optimal time-varying trajectory x̄(t) ∈ Rnx . A novel cost function is formulated in terms of the tracking error defined by e = x(t) − x̄(t) and the control error defined by the difference between the actual control signal and the desired optimal control signal. This formulation helps to overcome the challenge of the infinte cost posed by the cost function when it is defined in terms of the tarcking error e(t) and the actual control signal signal u(t) only (Zhang et al., 2011; Kamalapurkar et al., 2015). The following assumptions is made to determine the desired steady state control.\nAssumption 2: (Kamalapurkar et al., 2015) The function g(x) in equation (1) is bounded, the matrix g(x) has full column rank for all x(t) ∈ Rnx and the function g+ : Rn→ RmXn which is defined as g+ = (gT g)−1 is bounded and locally Lipschitz.\nAssumption 3: (Kamalapurkar et al., 2015) The optimal trajectory is bounded by a known positive constant b R such that ‖x̄‖ ≤ b and there exists a locally Lipschitz function hd such that ˙̄x = hd (x̄) and g(x̄) g+(x̄)(hd(x̄) - f(x̄)) = hd(x̄) - f(x̄).\nUsing the Assumption 2 and Assumption 3, the control signal ud required to track the desired trajectory x̄(t) is given as ud(x̄) = g+d (hd(x̄) − fd) where fd = f(x̄) and g + d = g\n+(x̄). The control error is given by µ = u(t) - ud(x̄). 
The system dynamics can now be expressed as\nζ̇ = F(ζ) + G(ζ)µ, (7)\nwhere the merged state ζ(t) ∈ R^{2n} is given by ζ(t) = [e^T, x̄^T]^T, and the functions F(ζ) and G(ζ) are defined as F(ζ) = [f^T(e + x̄) − hd^T + ud^T(x̄) g^T(e + x̄), hd^T]^T and G(ζ) = [g^T(e + x̄), 0_{m×n}]^T, where 0_{m×n} denotes a matrix of zeroes. The control error µ is treated hereafter as the design variable. The control objective is to solve a finite-horizon optimal tracking problem online, i.e., to design a control signal µ that minimizes, while tracking the desired trajectory, the cost-to-go function given by J(ζ, µ) = ∫_0^t r(ζ(τ), µ(τ)) dτ, where the local cost r : R^{2n} × R^m → R is given as r(ζ, µ) = Q(e) + µ^T Rµ, R ∈ R^{m×m} is a positive definite symmetric matrix, and Q : R^n → R is a continuous positive definite function. Based on the assumption of the existence of an optimal policy, it can be characterized in terms of the value function V∗ : R^{2n} → R, defined as V∗(ζ) = min_{µ(τ)∈U, τ≥t} ∫_t r(φ^µ(τ; t, ζ), µ(τ)) dτ, where U ⊂ R^m is the action space and φ^µ(t; t0, ζ0) is the trajectory of the system defined by equation (7) under the control effort µ : R≥0 → R^m with the initial condition ζ0 ∈ R^{2n} and the initial time t0 ∈ R≥0. Taking into consideration that an optimal policy exists and that V∗ is continuously differentiable everywhere, the closed-form solution (Kirk, 2004) is given as µ∗(ζ) = −(1/2) R^{-1} G^T(ζ) (∇ζ V∗(ζ))^T, where ∇ζ(.) = ∂(.)/∂ζ. This satisfies the Hamilton-Jacobi-Bellman (HJB) equation (Kirk, 2004), given as\n∇ζV∗(ζ)(F(ζ) + G(ζ)µ∗(ζ)) + Q̄(ζ) + µ∗^T(ζ) R µ∗(ζ) = 0, (8)\nwith the initial condition V∗ = 0, where the function Q̄ : R^{2n} → R is defined as Q̄([e^T, x̂^T]^T) = Q(e) for all (e(t), x̂(t)) ∈ R^n. Since a closed-form solution of the HJB equation is generally infeasible to obtain, we seek an approximate solution. Therefore, an actor-critic based method is used to obtain the parametric estimates of the optimal value function and the optimal policy, given as V̂(ζ, Ŵc) and µ̂(ζ, Ŵa), where Ŵc ∈ R^L and Ŵa ∈ R^L define the vector parameter estimates. The task of the actor and critic is to learn the corresponding parameters. Substituting the estimates V̂ and µ̂ for V∗ and µ∗ in the HJB equation, we obtain the residual error, also known as the Bellman error (BE), as δ(ζ, Ŵc, Ŵa) = Q̄(ζ) + µ̂^T(ζ, Ŵa) R µ̂(ζ, Ŵa) + ∇ζ V̂(ζ, Ŵc)(F(ζ) + G(ζ) µ̂(ζ, Ŵa)), where δ : R^{2n} × R^L × R^L → R. The solution of the problem requires the actor and the critic to find a set of parameters Ŵa and Ŵc respectively such that δ(ζ, Ŵc, Ŵa) = 0 and µ̂(ζ, Ŵa) = −(1/2) R^{-1} G^T(ζ) (∇ζ V̂(ζ, Ŵc))^T for all ζ ∈ R^{2n}. As the exact basis function for the approximation is not known a priori, we seek to find a set of approximate parameters that minimizes the BE. However, a uniform approximation of the value function and the optimal control policy over the entire operating domain requires finding parameters that minimize the error Es : R^L × R^L → R, defined as Es(Ŵc, Ŵa) = sup_ζ |δ(ζ, Ŵc, Ŵa)|, thus making it necessary to have exact knowledge of the system model. Two of the most popular methods used to render the design of the control strategy robust to system uncertainties in this context are integral RL (Lewis et al., 2012; Modares et al., 2014) and state derivative estimation (Bhasin et al., 2013; Kamalapurkar et al., 2014). 
Both of these methods suffer from the persistence of excitation (PE) condition, which requires the state trajectory φ^û(t; t0, ζ0) to cover the entire operating domain for the convergence of the parameters to their optimal values. We relax this condition by augmenting the integral technique with experience replay, where every evaluation of the BE is intuitively formalized as a gained experience; these experiences are kept in a history stack so that they can be iteratively reused by the learning algorithm to improve data efficiency.\nTherefore, to relax the PE condition, we have developed a CL-based system identifier which models a parametric estimate of the system drift dynamics and is used to simulate experience by extrapolating the Bellman error (BE) over the unexplored territory of the operating domain, thereby prompting an exponential convergence of the parameters to their optimal values." }, { "heading": "2.2.1 PARAMETRIC SYSTEM IDENTIFICATION", "text": "Over any compact set C ⊂ R^n, the function f can be represented using a neural network (NN) as f(x) = θ^T σf(Y^T x1) + ε_θ(x), where x1 = [1, x^T]^T ∈ R^{n+1}, θ ∈ R^{p+1×n} and Y ∈ R^{n+1×p} denote the constant unknown output-layer and hidden-layer NN weights, σf : R^p → R^{p+1} denotes a bounded NN activation function, ε_θ : R^n → R^n is the function reconstruction error, and p ∈ N denotes the number of NN neurons. Using the universal function approximation property of single layer NNs, given a constant matrix Y such that the rows of σf(Y^T x1) form a proper basis, there exist constant ideal weights θ and known constants θ̄, ε̄_θ, ε̄′_θ ∈ R such that ‖θ‖ ≤ θ̄ < ∞, sup_{x∈C} ‖ε_θ(x)‖ ≤ ε̄_θ, and sup_{x∈C} ‖∇x ε_θ(x)‖ ≤ ε̄′_θ, where ‖·‖ denotes the Euclidean norm for vectors and the Frobenius norm for matrices (Lewis et al., 1998). Given an estimate θ̂ ∈ R^{p+1×n} of the weight matrix θ, the function f can be approximated by the function f̂ : R^{2n} × R^{p+1×n} → R^n, defined as f̂(ζ, θ̂) = θ̂^T σθ(ζ), where σθ : R^{2n} → R^{p+1} is defined as σθ(ζ) = σf(Y^T [1, (e + x̄)^T]^T). An estimator for online identification of the drift dynamics is developed as\nẋ̂ = θ̂^T σθ(ζ) + g(x)u + k x̃, (9)\nwhere x̃ = x − x̂ and k ∈ R is a positive constant learning gain.\nAssumption 4: A history stack containing recorded state-action pairs {xj, uj}_{j=1}^M, along with numerically computed state derivatives {ẋ̄j}_{j=1}^M, that satisfies λ_min(Σ_{j=1}^M σfj σfj^T) = σ_θ > 0 and ‖ẋ̄j − ẋj‖ < d̄ for all j, is available a priori, where σfj ≜ σf(Y^T [1, xj^T]^T), d̄ ∈ R is a known positive constant, ẋj = f(xj) + g(xj)uj, and λ_min(·) denotes the minimum eigenvalue.\nThe weight estimates θ̂ are updated using the following CL-based update law:\nθ̂̇ = Γθ σf(Y^T x1) x̃^T + kθ Γθ Σ_{j=1}^M σfj (ẋ̄j − gj uj − θ̂^T σfj)^T, (10)\nwhere kθ ∈ R is a constant positive CL gain, and Γθ ∈ R^{p+1×p+1} is a constant, diagonal, and positive definite adaptation gain matrix. Using the identifier, the BE can be approximated as\nδ̂(ζ, θ̂, Ŵc, Ŵa) = Q̄(ζ) + µ̂^T(ζ, Ŵa) R µ̂(ζ, Ŵa) + ∇ζ V̂(ζ, Ŵc)(Fθ(ζ, θ̂) + F1(ζ) + G(ζ) µ̂(ζ, Ŵa)), (11)\nwhere Fθ(ζ, θ̂) = [(θ̂^T σθ(ζ) − g(x) g^+(xd) θ̂^T σθ([0_{n×1}^T, xd^T]^T))^T, 0_{1×n}]^T and F1(ζ) = [(−hd + g(e + xd) g^+(xd) hd)^T, hd^T]^T.
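The update law in Equation 10 can be illustrated with a short Python sketch. The tanh activation, the Euler integration of the weight dynamics, and the variable names are assumptions, and the history stack is assumed to have been populated according to Assumption 4.

```python
import numpy as np

def sigma_theta(zeta, Y):
    # sigma_theta(zeta) = sigma_f(Y^T [1, (e + x_bar)^T]^T); tanh is an
    # assumed bounded activation
    n = zeta.size // 2
    x1 = np.concatenate(([1.0], zeta[:n] + zeta[n:]))
    return np.tanh(Y.T @ x1)

def cl_update(theta_hat, x_tilde, sigma_now, history, Gamma, k_theta, dt):
    """One Euler step of the CL-based update law (Eq. 10).
    history holds tuples (sigma_fj, xdot_j, gj_uj) recorded per Assumption 4;
    theta_hat has shape [p+1, n], Gamma has shape [p+1, p+1]."""
    # instantaneous estimation-error term driven by x_tilde = x - x_hat
    dtheta = np.outer(sigma_now, x_tilde)
    # concurrent-learning term: replay recorded data from the history stack
    for sigma_fj, xdot_j, gj_uj in history:
        resid = xdot_j - gj_uj - theta_hat.T @ sigma_fj  # identifier residual
        dtheta += k_theta * np.outer(sigma_fj, resid)
    return theta_hat + dt * Gamma @ dtheta
```

The history-stack sum is what removes the need for PE: even when the current trajectory is not exciting, the recorded pairs keep the regressor matrix full rank (Assumption 4), so the weight error keeps decaying.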
}, { "heading": "2.2.2 VALUE FUNCTION APPROXIMATION", "text": "As V ∗ and µ∗ are functions of the state ζ, the optimization problem as defined in Section 2.2 is quite an intractable one, so the optimal value function is now represented as C ⊂ R2n using a NN as V ∗(ζ) = WTσ(ζ)+ (ζ),whereW ∈ RL denotes a vector of unknown NN weights, σ : R2n →RL indicates a bounded NN activation function, : R2n → R defines the function reconstruction error, and L ∈ N denotes the number of NN neurons. Considering the universal function approximation property of single layer NNs, for any compact set C ⊂ R2n, there exist constant ideal weights W and known positive constants W̄ , ̄, and ′ ∈ R such that ‖W‖ ≤ W̄ <∞ supζ∈C ‖ (ζ)‖ ≤ ̄, and supζ∈C ‖∇ζ (ζ)‖ ≤ ̄′ (Lewis et al., 1998). A NN representation of the optimal policy is obtained as\nµ∗(ζ) = −1 2 R−1GT (ζ)\n( ∇ζσT (ζ)W +∇ζ T (ζ) ) (13)\nTaking the estimates Ŵc and Ŵa for the ideal weightsW , the optimal value function and the optimal policy are approximated as V̂ ( ζ, Ŵc ) = ŴTc σ(ζ), µ̂ ( ζ, Ŵa ) = − 12R\n−1GT (ζ)∇ζσT (ζ)Ŵa. The optimal control problem is therefore recast as to find a set of weights Ŵc and Ŵa online to minimize the error Êθ̂ ( Ŵc, Ŵa ) = supζ∈χ\n∣∣∣δ̂ (ζ, θ̂, Ŵc, Ŵa)∣∣∣ for a given θ̂, while simultaneously improving θ̂ using the CL-based update law and ensuring stability of the system using the control law\nu = µ̂ ( ζ, Ŵa ) + ûd(ζ, θ̂) (14)\nwhere, ûd(ζ, θ̂) = g+d ( hd − θ̂Tσθd ) , and σθd = σθ ([ 01×n x T d ]T) . σθ ([ 01×n x T d ]T) . The error between ud and ûd is included in the stability analysis based on the fact that the error trajectories generated by the system ė = f(x)+g(x)u− ẋd under the controller in (14) are identical to the error trajectories generated by the system ζ̇ = F (ζ) + G(ζ)µ under the control law µ = µ̂ ( ζ, Ŵa ) + g+d θ̃ Tσθd + g + d θd, where θd , θ (xd)." }, { "heading": "2.2.3 EXPERIENCE SIMULATION", "text": "The simulation of experience is implemented by minimizing a squared sum of BEs over finitely many points in the state space domain as the calculation of the extremum (supremum) in Êθ̂ is not tractable. The details of the analysis has been explained in Appendix A.2 which facilitates the aforementioned approximation." }, { "heading": "2.2.4 STABILITY AND ROBUSTNESS ANALYSIS", "text": "To perform the stability analysis, we take the non-autonomous form of the value function (Kamalapurkar et al., 2015) defined by V ∗t : Rn X R → R which is defined as V ∗t (e, t) = V ∗ ([ eT , xTd (t) ]T) ,∀e ∈ Rn, t ∈ R, is positive definite and decrescent. Now, V ∗t (0, t) = 0,∀t ∈ R and there exist class K functions v : R → R and v̄ : R → R such that v(‖e‖) ≤ V ∗t (e, t) ≤ v̄(‖e‖), for all e ∈ Rn and for all t ∈ R. We take an augemented state given as Z ∈ R2n+2L+n(p+1) is defined as\nZ = [ eT , W̃Tc , W̃ T a , x̃ T , (vec(θ̃))T ]T\n(15)\nand a candidate Lyapunov function is defined as\nVL(Z, t) = V ∗ t (e, t) +\n1 2 W̃Tc Γ −1W̃c + 1 2 W̃Ta W̃a 1 2 x̃T x̃+ 1 2 tr ( θ̃TΓ−1θ θ̃ ) (16)\nwhere, vec (·) denotes the vectorization operator. From the weight update in Appendix A.2 we get positive constants γ, γ̄ ∈ R such that γ ≤ ∥∥Γ−1(t)∥∥ ≤ γ̄,∀t ∈ R. Taking the bounds on Γ and V ∗t and the fact that tr ( θ̃TΓ−1θ θ̃ ) = (vec(θ̃))T ( Γ−1θ ⊗ Ip+1 ) (vec(θ̃)) the candidate Lyapunov function be bounded as vl(‖Z‖) ≤ VL(Z, t) ≤ v̄l(‖Z‖) (17) for all Z ∈ R2n+2L+n(p+1) and for all t ∈ R, where vl : R → R and vl : R → R are class K functions. 
Now, using (1) and the fact that V̇*_t(e(t), t) = V̇*(ζ(t)) for all t ∈ R, the time-derivative of the candidate Lyapunov function is given by

V̇_L = ∇_ζV*(F + Gµ*) − W̃_c^T Γ^{−1} ˙Ŵ_c − (1/2) W̃_c^T Γ^{−1} Γ̇ Γ^{−1} W̃_c − W̃_a^T ˙Ŵ_a + V̇_0 + ∇_ζV* Gµ − ∇_ζV* Gµ* (18)

Under sufficient gain conditions (Kamalapurkar et al., 2014), using (9), (10)-(13) and the update laws for Ŵ_c, Γ and Ŵ_a, the time-derivative of the candidate Lyapunov function can be bounded as

V̇_L ≤ −v_l(‖Z‖), ∀‖Z‖ ≥ v_l^{−1}(ι), ∀Z ∈ χ (19)

where ι is a positive constant and χ ⊂ R^{2n+2L+n(p+1)} is a compact set. Considering (13) and (15), Theorem 4.18 in (Khalil, 2002) can be used to establish that every trajectory Z(t) satisfying ‖Z(t_0)‖ ≤ v̄_l^{−1}(v_l(ρ)), where ρ is a positive constant, is bounded for all t ∈ R and satisfies lim sup_{t→∞} ‖Z(t)‖ ≤ v_l^{−1}(v̄_l(v_l^{−1}(ι))). This analysis addresses the stability of the closed-loop system.

The robustness criterion requires the algorithm to satisfy the following inequality (Gao et al., 2014) in the presence of external disturbances, with a pre-specified performance index γ known as the H-infinity (H∞) performance index:

∫_0^t ‖y(t)‖² dt < γ² ∫_0^t ‖w(t)‖² dt (20)

where y(t) is the output of the system, w(t) accounts for the modeling errors, parameter uncertainties and external disturbances, and γ is the ratio of the output energy to the disturbance energy in the system.

Reusing the expression for V̇_L in (18), Gao et al. (2014) have shown that if the Lyapunov conditions above are satisfied, then

0 < V_L(T) = ∫_0^t V̇_L(t) dt ≤ −∫_0^t y^T(t) y(t) dt + γ² ∫_0^t w^T(t) w(t) dt (22)

Thus, the performance inequality constraint (20) in terms of γ is satisfied." }, { "heading": "3 SIMULATION RESULTS AND DISCUSSION", "text": "Here, we present simulation results to demonstrate the performance of the proposed method on the fuel management system of a hybrid electric vehicle. The proposed concurrent learning based RL optimization architecture is shown in Figure 1.

In this architecture, the simulated state-action-derivative triplets perform concurrent learning to approximate the value function weight estimates that minimize the Bellman error (BE). The history stack stores each evaluation of the BE, which is carried out by a dynamic system identifier, as a gained experience so that it can be used iteratively to reduce the computational burden.

A simple two-dimensional model of the fuel management system is considered for simulation purposes, to provide a generalized solution that can be extended to higher-dimensional systems.

We consider a two-dimensional non-linear model given by

f = [ x₁ x₂ 0 0 ; 0 0 x₁ x₂(1 − (cos(2x₁) + 2)²) ] [a, b, c, d]^T, g = [0, cos(2x₁) + 2]^T, w(t) = sin(t) (23)

where a, b, c, d ∈ R are unknown parameters whose values are selected as a = −1, b = 1, c = −0.5, d = −0.5; x₁ and x₂ are the two states of the hybrid electric vehicle, given by the charge present in the battery and the amount of fuel in the car respectively; and w(t) = sin(t) is a sinusoidal disturbance that is used to model the external disturbance.
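A minimal simulation sketch of the dynamics in (23) follows; the reconstruction (cos(2x₁) + 2) of the garbled trigonometric term, the Euler integrator and the matched disturbance channel are assumptions.

```python
import numpy as np

# Sketch of xdot = f(x) + g(x) u + disturbance for the benchmark model (23).

a, b, c, d = -1.0, 1.0, -0.5, -0.5

def f(x):
    x1, x2 = x
    basis = np.array([[x1, x2, 0.0, 0.0],
                      [0.0, 0.0, x1, x2 * (1.0 - (np.cos(2.0 * x1) + 2.0) ** 2)]])
    return basis @ np.array([a, b, c, d])

def g(x):
    return np.array([0.0, np.cos(2.0 * x[0]) + 2.0])

def step(x, u, t, dt=1e-3):
    """One Euler step; w(t) = sin(t) is assumed to enter the actuated channel."""
    return x + dt * (f(x) + g(x) * u + np.array([0.0, np.sin(t)]))

x = np.array([-1.0, -1.0])                 # initial state used in the simulations
u = -(np.cos(2.0 * x[0]) + 2.0) * x[1]     # the known optimal policy u*(x)
print(step(x, u, t=0.0))
```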
The control objective is to minimize the cost function J(ζ, µ) = ∫_0^t r(ζ(τ), µ(τ)) dτ, where the local cost r : R^{2n} × R^m → R is given by r(ζ, µ) = Q(e) + µ^T R µ, R ∈ R^{m×m} is a positive definite symmetric matrix and Q : R^n → R is a continuous positive definite function, while following the desired trajectory x̄. We choose Q = I_{2×2} and R = 1. The optimal value function and optimal control for the system in (23) are V*(x) = (1/2)x₁² + x₂² and u*(x) = −(cos(2x₁) + 2)x₂. The basis function σ : R² → R³ for value function approximation is σ = [x₁², x₁²x₂², x₂²]. The ideal weights are W = [0.5, 0, 1]. The initial values of the policy and value function weight estimates are Ŵ_c = Ŵ_a = [1, 1, 1]^T, the least-squares gain is Γ(0) = 100 I_{3×3}, and the initial system state is x(0) = [−1, −1]^T. The state estimates x̂ and θ̂ are initialized to 0 and 1 respectively, while the history stack for CL is updated online. Figures 2 and 3 show the state trajectories obtained by traditional RL methods and by the CL-based RL optimization technique, respectively, in the presence of disturbances. The settling time of the trajectories obtained by the proposed method is significantly lower (by almost 40 percent) than that of the conventional RL strategies, justifying the uniqueness of the method and yielding a saving in fuel consumption of about 40-45 percent. Figure 4 shows the corresponding control inputs, whereas Figures 5 and 6 show the convergence of the NN weights to their optimal values. The H∞ performance index in Figure 7 shows a value of 0.3 for the CL-based RL method, in comparison to 0.45 for the traditional RL-based control design, which clearly establishes the robustness of our proposed design." }, { "heading": "4 CONCLUSION", "text": "In this paper, we have proposed a robust concurrent learning based deep RL optimization strategy for hybrid electric vehicles. The uniqueness of this method lies in the use of a concurrent learning based RL optimization strategy that reduces the computational complexity significantly in comparison to the traditional RL approaches used for the fuel management systems mentioned in the literature. Also, the use of the H-infinity (H∞) performance index for RL optimization, for the first time, addresses the robustness problems that most fuel optimization methods suffer from. The simulation results validate the efficacy of the method over conventional PID, MPC, as well as traditional RL based optimization techniques. Future work will generalize the approach to large-scale partially observed uncertain systems and will also incorporate the movement of neighbouring RL agents." }, { "heading": "A APPENDIX", "text": "A.1 THE GRADIENT DESCENT ALGORITHM

The gradient descent algorithm is as follows:

Algorithm: Gradient Descent

Input: design parameters U^(0) = u_t^0, α, h, ε ∈ R

Output: optimal control sequence {ū_t}

1. n ← 0, ∇_U J_d(x₀, U^(0)) ← ∞
2. while ‖∇_U J_d(x₀, U^(n))‖ ≥ ε do
3. Evaluate the cost function with control U^(n)
4. Perturb each control variable u_i^(n) by h, i = 0, · · ·, t, and calculate the gradient vector ∇_U J_d(x₀, U^(n)) using (7) and (8)
5. Update the control policy: U^(n+1) ← U^(n) − α ∇_U J_d(x₀, U^(n))
6. n ← n + 1
7. end
8. {ū_t} ← U^(n)
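A runnable version of the listing above follows, using finite differences for step 4; the quadratic stand-in cost J_d and the default gains are illustrative assumptions.

```python
import numpy as np

# Runnable sketch of the gradient descent algorithm in A.1: the gradient is
# formed by perturbing each control variable by h (steps 3-4), the policy is
# updated with step size alpha (step 5), and iteration stops once the gradient
# norm falls below eps (step 2).

def gradient_descent(J_d, U0, alpha=0.1, h=1e-4, eps=1e-6, max_iter=10_000):
    U = np.asarray(U0, dtype=float)
    for _ in range(max_iter):
        grad = np.array([(J_d(U + h * e) - J_d(U)) / h for e in np.eye(U.size)])
        if np.linalg.norm(grad) < eps:          # step 2: stopping test
            break
        U = U - alpha * grad                    # step 5: policy update
    return U                                    # step 8: optimal sequence

# toy usage: minimize a quadratic cost over a length-3 control sequence
J_d = lambda U: float(U @ U + U.sum())
print(gradient_descent(J_d, np.zeros(3)))       # -> approx [-0.5, -0.5, -0.5]
```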
A.2 EXPERIENCE SIMULATION

Assumption 5: (Kamalapurkar et al., 2014) There exists a finite set of points {ζ_i ∈ C | i = 1, · · ·, N} and a constant c ∈ R such that 0 < c = (1/N) inf_{t∈R≥t₀} λ_min{∑_{i=1}^N ω_i ω_i^T / ρ_i}, where ρ_i = 1 + ν ω_i^T Γ ω_i ∈ R and ω_i = ∇_ζσ(ζ_i)(F_θ(ζ_i, θ̂) + F₁(ζ_i) + G(ζ_i) µ̂(ζ_i, Ŵ_a)).

Using Assumption 5, simulation of experience is implemented by the weight update laws

˙Ŵ_c = −η_{c1} Γ (ω/ρ) δ̂_t − (η_{c2}/N) Γ ∑_{i=1}^N (ω_i/ρ_i) δ̂_{ti} (24)

Γ̇ = (βΓ − η_{c1} Γ (ωω^T/ρ²) Γ) 1_{‖Γ‖≤Γ̄}, ‖Γ(t₀)‖ ≤ Γ̄ (25)

˙Ŵ_a = −η_{a1}(Ŵ_a − Ŵ_c) − η_{a2} Ŵ_a + (η_{c1} G_σ^T Ŵ_a ω^T/(4ρ) + ∑_{i=1}^N η_{c2} G_{σi}^T Ŵ_a ω_i^T/(4Nρ_i)) Ŵ_c (26)

where ω = ∇_ζσ(ζ)(F_θ(ζ, θ̂) + F₁(ζ) + G(ζ)µ̂(ζ, Ŵ_a)), Γ ∈ R^{L×L} is the least-squares gain matrix, Γ̄ ∈ R denotes a positive saturation constant, β ∈ R indicates a constant forgetting factor, η_{c1}, η_{c2}, η_{a1}, η_{a2} ∈ R are constant positive adaptation gains, 1_{·} denotes the indicator function of the set {·}, G_σ = ∇_ζσ(ζ) G(ζ) R^{−1} G^T(ζ) ∇_ζσ^T(ζ), and ρ = 1 + ν ω^T Γ ω, where ν ∈ R is a positive normalization constant. In the above weight update laws, for any function ξ(ζ, ·), the notation ξ_i is defined as ξ_i = ξ(ζ_i, ·), and the instantaneous BEs δ̂_t and δ̂_{ti} are given as δ̂_t = δ̂(ζ, Ŵ_c, Ŵ_a, θ̂) and δ̂_{ti} = δ̂(ζ_i, Ŵ_c, Ŵ_a, θ̂)." } ]
2020
A ROBUST FUEL OPTIMIZATION STRATEGY FOR HYBRID ELECTRIC VEHICLES: A DEEP REINFORCEMENT LEARNING BASED CONTINUOUS TIME DESIGN APPROACH
SP:43e525fb3fa611df7fd44bd3bc9843e57b154c66
[ "This paper proposes 3 deep generative models based on VAEs (with different encoding schemes for RNA secondary structure) for the generation of RNA secondary structures. They test each model on 3 benchmark tasks: unsupervised generation, semi-supervised learning and targeted generation. This paper has many interesting contributions — a comparison of VAE models that use different RNA secondary structure encoding schemes, including traditional dot-bracket notation and a more complex hierarchical encoding, and they also introduce various decoding schemes to encourage valid secondary structures. " ]
Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence their cellular activities and functions. The design of large scale and complex biological structures spurs dedicated graph-based deep generative modeling techniques, which represents a key but underappreciated aspect of computational drug discovery. In this work, we investigate the principles behind representing and generating different RNA structural modalities, and propose a flexible framework to jointly embed and generate these molecular structures along with their sequence in a meaningful latent space. Equipped with a deep understanding of RNA molecular structures, our most sophisticated encoding and decoding methods operate on the molecular graph as well as the junction tree hierarchy, integrating strong inductive bias about RNA structural regularity and folding mechanism such that high structural validity, stability and diversity of generated RNAs are achieved. Also, we seek to adequately organize the latent space of RNA molecular embeddings with regard to the interaction with proteins, and targeted optimization is used to navigate in this latent space to search for desired novel RNA molecules.
[ { "affiliations": [], "name": "Zichao Yan" }, { "affiliations": [], "name": "William L. Hamilton" } ]
[ { "authors": [ "Bronwen L Aken", "Premanand Achuthan", "Wasiu Akanni", "M Ridwan Amode", "Friederike Bernsdorff", "Jyothish Bhai", "Konstantinos Billis", "Denise Carvalho-Silva", "Carla Cummins", "Peter Clapham" ], "title": "Ensembl 2017", "venue": "Nucleic Acids Research,", "year": 2016 }, { "authors": [ "Yuri Burda", "Roger B. Grosse", "Ruslan Salakhutdinov" ], "title": "Importance Weighted Autoencoders", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural Ordinary Differential Equations", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Xi Chen", "Diederik P. Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Variational Lossy Autoencoder", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Xinshi Chen", "Yu Li", "Ramzan Umarov", "Xin Gao", "Le Song" ], "title": "RNA secondary structure prediction by learning unrolled algorithms", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "venue": "In Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Alexander Churkin", "Matan Drory Retwitzer", "Vladimir Reinharz", "Yann Ponty", "Jérôme Waldispühl", "Danny Barash" ], "title": "Design of RNAs: comparing programs for inverse RNA folding", "venue": "Briefings in Bioinformatics, 19(2):350–358,", "year": 2017 }, { "authors": [ "K.B. Cook", "S. Vembu", "K.C.H. Ha", "H. Zheng", "K.U. Laverty", "T.R. Hughes", "D. Ray", "Q.D. Morris" ], "title": "RNAcompete-S: Combined RNA sequence/structure preferences for RNA binding proteins derived from a single-step in vitro", "venue": "selection. Methods,", "year": 2017 }, { "authors": [ "Robin D. Dowell", "Sean R. Eddy" ], "title": "Evaluation of several lightweight stochastic context-free grammars for RNA secondary structure prediction", "venue": "BMC Bioinformatics,", "year": 2004 }, { "authors": [ "David Duvenaud", "Dougal Maclaurin", "Jorge Aguilera-Iparraguirre", "Rafael Gómez-Bombarelli", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P. Adams" ], "title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2015 }, { "authors": [ "Sean R. Eddy", "Richard Durbin" ], "title": "RNA sequence analysis using covariance models", "venue": "Nucleic Acids Research, 22(11):2079–2088,", "year": 1994 }, { "authors": [ "Ahmed Elnaggar", "Michael Heinzinger", "Christian Dallago", "Ghalia Rehawi", "Yu Wang", "Llion Jones", "Tom Gibbs", "Tamas Feher", "Christoph Angerer", "Martin Steinegger", "DEBSINDHU BHOWMIK", "Burkhard Rost" ], "title": "ProtTrans: Towards Cracking the Language of Life’s Code Through SelfSupervised Deep Learning and High Performance Computing", "venue": "bioRxiv,", "year": 2020 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. 
Dahl" ], "title": "Neural Message Passing for Quantum Chemistry", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamı́n Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "Will Grathwohl", "Ricky T.Q. Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "FFJORD: free-form continuous dynamics for scalable reversible generative models", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computing,", "year": 1997 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi S. Jaakkola" ], "title": "Junction Tree Variational Autoencoder for Molecular Graph Generation", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Ioanna Kalvari", "Joanna Argasinska", "Natalia Quinones-Olvera", "Eric P Nawrocki", "Elena Rivas", "Sean R Eddy", "Alex Bateman", "Robert D Finn", "Anton I Petrov" ], "title": "Rfam 13.0: shifting to a genome-centric resource for non-coding RNA families", "venue": "Nucleic Acids Research, 46(D1):D335–D342,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Matt J. Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar Variational Autoencoder", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard S. Zemel" ], "title": "Gated Graph Sequence Neural Networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander L. Gaunt" ], "title": "Constrained Graph Variational Autoencoders for Molecule Design", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "R. Lorenz", "S.H. Bernhart", "C. Honer Zu Siederdissen", "H. Tafer", "C. Flamm", "P.F. Stadler", "I.L. Hofacker" ], "title": "ViennaRNA Package 2.0", "venue": "Algorithms for Molecular Biology,", "year": 2011 }, { "authors": [ "David H. Mathews", "Matthew D. Disney", "Jessica L. Childs", "Susan J. Schroeder", "Michael Zuker", "Douglas H. 
Turner" ], "title": "Incorporating chemical modification constraints into a dynamic programming algorithm for prediction of RNA secondary structure", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 2004 }, { "authors": [ "Eric P. Nawrocki", "Sean R. Eddy" ], "title": "Infernal 1.1: 100-fold faster RNA homology searches", "venue": "Bioinformatics, 29(22):2933–2935,", "year": 2013 }, { "authors": [ "Carlos Oliver", "Vincent Mallet", "Roman Sarrazin Gendron", "Vladimir Reinharz", "William L Hamilton", "Nicolas Moitessier", "Jérôme Waldispühl" ], "title": "Augmented base pairing networks encode RNA-small molecule binding preferences", "venue": "Nucleic Acids Research, 48(14):7690–7699,", "year": 2020 }, { "authors": [ "Norbert Pardi", "Michael J. Hogan", "Frederick W. Porter", "Drew Weissman" ], "title": "mRNA vaccines — a new era in vaccinology", "venue": "Nature Reviews Drug Discovery,", "year": 2018 }, { "authors": [ "Lorena G. Parlea", "Blake A. Sweeney", "Maryam Hosseini-Asanjan", "Craig L. Zirbel", "Neocles B. Leontis" ], "title": "The RNA 3D Motif Atlas: Computational methods for extraction, organization and evaluation of RNA motifs", "venue": null, "year": 2016 }, { "authors": [ "D. Ray", "H. Kazan", "E.T. Chan", "L. Pena Castillo", "S. Chaudhry", "S. Talukder", "B.J. Blencowe", "Q. Morris", "T.R. Hughes" ], "title": "Rapid and systematic analysis of the RNA recognition specificities of RNAbinding proteins", "venue": "Nature Biotechnology,", "year": 2009 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Vladimir Reinharz", "Antoine Soulé", "Eric Westhof", "Jérôme Waldispühl", "Alain Denise" ], "title": "Mining for recurrent long-range interactions in RNA structures reveals embedded hierarchies in network families", "venue": "Nucleic Acids Research, 46(8):3841–3851,", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational Inference with Normalizing Flows", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Alexander Rives", "Joshua Meier", "Tom Sercu", "Siddharth Goyal", "Zeming Lin", "Demi Guo", "Myle Ott", "C. Lawrence Zitnick", "Jerry Ma", "Rob Fergus" ], "title": "Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences", "venue": "bioRxiv,", "year": 2019 }, { "authors": [ "Frederic Runge", "Danny Stoll", "Stefan Falkner", "Frank Hutter" ], "title": "Learning to Design RNA", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Roman Sarrazin-Gendron", "Hua-Ting Yao", "Vladimir Reinharz", "Carlos G. Oliver", "Yann Ponty", "Jérôme Waldispühl" ], "title": "Stochastic Sampling of Structural Contexts Improves the Scalability and Accuracy of RNA 3D Module Identification", "venue": "In Russell Schwartz (ed.), Research in Computational Molecular Biology,", "year": 2020 }, { "authors": [ "Thomas Schlake", "Andreas Thess", "Mariola Fotin-Mleczek", "Karl-Josef Kallen" ], "title": "Developing mRNAvaccine technologies", "venue": "RNA Biology,", "year": 2012 }, { "authors": [ "Michael Sejr Schlichtkrull", "Thomas N. 
Kipf", "Peter Bloem", "Rianne van den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling Relational Data with Graph Convolutional Networks", "venue": "In The Semantic Web 15th International Conference,", "year": 2018 }, { "authors": [ "Jaswinder Singh", "Jack Hanson", "Kuldip Paliwal", "Yaoqi Zhou" ], "title": "RNA secondary structure prediction using an ensemble of two-dimensional deep neural networks and transfer learning", "venue": "Nature Communications,", "year": 2019 }, { "authors": [ "Richard Stefl", "Lenka Skrisovska", "Frédéric H.T. Allain" ], "title": "RNA sequence- and shape-dependent recognition by proteins in the ribonucleoprotein particle", "venue": "EMBO reports,", "year": 2005 }, { "authors": [ "Teague Sterling", "John J. Irwin" ], "title": "ZINC 15 – Ligand Discovery for Everyone", "venue": "Journal of Chemical Information and Modeling,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph Attention Networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Zichao Yan", "William L. Hamilton", "Mathieu Blanchette" ], "title": "Graph neural representational learning of RNA secondary structures for predicting RNA-protein interactions", "venue": "Bioinformatics, 36 (Supplement 1):i276–i284,", "year": 2020 }, { "authors": [ "Guandao Yang", "Xun Huang", "Zekun Hao", "Ming-Yu Liu", "Serge J. Belongie", "Bharath Hariharan" ], "title": "PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Zhitao Ying", "Vijay S. Pande", "Jure Leskovec" ], "title": "Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Ĝk ∈ N(Ĝi" ], "title": "The internal structure of T-GRU is equivalent to the tree encoder employed in Jin et al. (2018), which is essentially a neural analogue of the belief propagation algorithm on junction trees. Nevertheless, we write down the message passing formulas of T-GRU here: sĜi,Ĝj", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "There is an increasing interest in developing deep generative models for biochemical data, especially in the context of generating drug-like molecules. Learning generative models of biochemical molecules can facilitate the development and discovery of novel treatments for various diseases, reducing the lead time for discovering promising new therapies and potentially translating in reduced costs for drug development (Stokes et al., 2020). Indeed, the study of generative models for molecules has become a rich and active subfield within machine learning, with standard benchmarks (Sterling & Irwin, 2015), a set of well-known baseline approaches (Gómez-Bombarelli et al., 2018; Kusner et al., 2017; Liu et al., 2018; Jin et al., 2018), and high-profile cases of real-world impact 1.\nPrior work in this space has focused primarily on the generation of small molecules (with less than 100 atoms), leaving the development of generative models for larger and more complicated biologics and biosimilar drugs (e.g., RNA and protein peptides) an open area for research. Developing generative models for larger biochemicals is critical in order to expand the frontiers of automated treatment design. More generally, developing effective representation learning for such complex biochemicals will allow machine learning systems to integrate knowledge and interactions involving these biologically-rich structures.\nIn this work, we take a first step towards the development of deep generative models for complex biomolecules, focusing on the representation and generation of RNA structures. RNA plays a crucial\n1e.g. LambdaZero project for exascale search of drug-like molecules.\nrole in protein transcription and various regulatory processes within cells which can be influenced by its structure (Crick, 1970; Stefl et al., 2005), and RNA-based therapies are an increasingly active area of research (Pardi et al., 2018; Schlake et al., 2012), making it a natural focus for the development of deep generative models. The key challenge in generating RNA molecules—compared to the generation of small molecules—is that RNA involves a hierarchical, multi-scale structure, including a primary sequential structure based on the sequence of nucleic acids as well as more complex secondary and tertiary structures based on the way that the RNA strand folds onto itself. An effective generative model for RNA must be able to generate sequences that give rise to these more complex emergent structures.\nThere have been prior works on optimizing or designing RNA sequences—using reinforcement learning or blackbox optimization—to generate particular RNA secondary structures (Runge et al., 2019; Churkin et al., 2017). However, these prior works generally focus on optimizing sequences to conform to a specific secondary structure. In contrast, our goal is to define a generative model, which can facilitate the sampling and generation of diverse RNA molecules with meaningful secondary structures, while also providing a novel avenue for targeted RNA design via search over a tractable latent space.\nKey contributions. We propose a series of benchmark tasks and deep generative models for the task of RNA generation, with the goal of facilitating future work on this important and challenging problem. We propose three interrelated benchmark tasks for RNA representation and generation:\n1. Unsupervised generation: Generating stable, valid, and diverse RNAs that exhibit complex secondary structures.\n2. 
Semi-supervised learning: Learning latent representations of RNA structure that correlate with known RNA functional properties.
3. Targeted generation: Generating RNAs that exhibit particular functional properties.
These three tasks build upon each other, with the first task only requiring the generation of stable and valid molecules, while the latter two tasks involve representing and generating RNAs that exhibit particular properties. In addition to proposing these novel benchmarks for the field, we introduce and evaluate three generative models for RNA. All three models build upon variational autoencoders (VAEs) (Kingma & Welling, 2014) augmented with normalizing flows (Rezende & Mohamed, 2015; Kingma et al., 2016), and they differ in how they represent the RNA structure. To help readers better understand RNA structures and properties, a self-contained explanation is provided in appendix B.
The simplest model (termed LSTMVAE) learns using a string-based representation of RNA structure. The second model (termed GraphVAE) leverages a graph-based representation and graph neural network (GNN) encoder approach (Gilmer et al., 2017). Finally, the most sophisticated model (termed HierVAE) introduces and leverages a novel hierarchical decomposition of the RNA structure. Extensive experiments on our newly proposed benchmarks highlight how the hierarchical approach allows more effective representation and generation of complex RNA structures, while also highlighting important challenges for future work in the area." }, { "heading": "2 TASK DESCRIPTION", "text": "Given a dataset of RNA molecules, i.e. sequences of nucleotides and corresponding secondary structures, our goals are to: (a) learn to generate structurally stable, diverse, and valid RNA molecules that reflect the distribution in this training dataset; (b) learn latent representations that reflect the functional properties of RNA. A key factor in both these representation and generation processes is that we seek to jointly represent and generate both the primary sequence structure as well as the secondary structure conformation. Together, these two goals lay the foundations for generating novel RNAs that satisfy certain functional properties. To meet these goals, we create two types of benchmark datasets, each one focusing on one aspect of the above-mentioned goals:
Unlabeled and variable-length RNA. The first dataset contains unlabeled RNA with moderate and highly-variable length (32-512 nts), obtained from the human transcriptome (Aken et al., 2016), through which we focus on the generation aspect of structured RNA and evaluate the validity, stability and diversity of generated RNA molecules. In particular, our goal with this dataset is to jointly generate RNA sequences and secondary structures that are biochemically feasible (i.e., valid), have low free energy (i.e., stable), and are distinct from the training data (i.e., diverse). We will give an extended assessment of the generation aspect under different circumstances, e.g., when constraining the generation procedures with explicit rules.
Labeled RNA. The second dataset is pulled and processed from a previous study on in vitro RNA-protein interaction, which features labeled RNAs with shorter and uniform length (40 nts) (Cook et al., 2017). With this dataset, our objective is slightly expanded (to include obj. (a)), so that the latent space is adequately organized and reflective of the interaction with proteins.
Therefore, a key assessment for the latent space is the AUROC for the classification of protein binding, which is crucial for the design of desired novel RNA molecules.
Essentially, this creates slight variations in the task formulation, with the first dataset suited to unsupervised learning of a generative model, while the second dataset involves additional supervision (e.g., for a semi-supervised model or targeted generation). Our specific modeling choices, to be introduced in section 3, are invariant to the different task formulations, and flexible enough to handle different representations of RNA secondary structures. We refer readers to appendix C for a detailed explanation of the datasets and of the evaluation metrics on the generated molecules and latent embeddings." }, { "heading": "3 METHODS", "text": "In this section, we introduce three different generative models for RNA. All three models are based upon the variational autoencoder (VAE) framework, involving three key components:
1. A probabilistic encoder network qφ(z|x), which generates a distribution over latent states given an input representation of an RNA. We experiment with three different types of input encodings for RNA sequence and secondary structures (see Figure S1): a dot-bracket annotated string, a graph with an adjacency matrix representing base-pairings, and a graph augmented with a hierarchical junction tree annotation for the secondary structure.
2. A probabilistic decoder network pθ(x|z), which defines a joint distribution over RNA sequences and secondary structures, conditioned on a latent input. As with the encoder network, we design architectures based on a linearized string decoding and a graph-based hierarchical junction-tree decoding approach.
3. A parameterized prior pψ(z), which defines a prior distribution over latent states and is learned based on a continuous normalizing flow (CNF) (Chen et al., 2018).
For all the approaches we propose, the model is optimized via stochastic gradient descent to minimize the evidence lower bound (ELBO): L = −E_{qφ(z|x)}[log pθ(x|z)] + β KL(qφ(z|x) ‖ pψ(z)), where β is a term that allows KL-annealing over the strength of the prior regularization.
In the following sections, we explain our three different instantiations of the encoder (section 3.1), decoder (section 3.2), as well as our procedures to structurally constrain the decoding process using domain knowledge (section 3.3) and our procedures to avoid posterior collapse (section 3.4)." }, { "heading": "3.1 ENCODING RNA SECONDARY STRUCTURES", "text": "The input to the encoder is a structured RNA molecule, with its sequence given by an ordered array of nucleotides x1 . . . xL, with xi ∈ {A,C,G,U}, where L is the length of the sequence, and its secondary structure, either represented as (1) a dot-bracket string S = ẋ1 . . . ẋL with ẋi ∈ {., (, )}; (2) or as a graph G with two types of edges — covalent bonds along the RNA backbone, and hydrogen bonds between the base-pairs 2. We use x_{uv} to denote edge features between nucleotides u and v; (3) or as a hypergraph T — a depth-first ordered array of subgraphs Ĝ1 . . . ĜD with L(Ĝi) ∈ {S,H,I,M} indicating the subgraph label, and I(Ĝi) = {j | j ∈ {1 . . . L}} indicating the assignment of nucleotides to each subgraph.
Encoding RNA secondary structure as sequence. First, we obtain a joint encoding over the nucleotide and the dot-bracket annotation, using the joint sequence-structure vocabulary {A,C,G,U} × {., (, )}.
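As a concrete illustration of this joint encoding, here is a minimal sketch; the helper names and the toy hairpin example are purely illustrative.

```python
import numpy as np

# Each position is a (nucleotide, dot-bracket symbol) pair drawn from the
# 12-token vocabulary {A,C,G,U} x {., (, )}.

NUC = "ACGU"
DB = ".()"
VOCAB = {(n, s): i for i, (n, s) in enumerate((n, s) for n in NUC for s in DB)}

def joint_one_hot(seq, dotbracket):
    """Return an L x 12 one-hot matrix for a sequence and its structure."""
    assert len(seq) == len(dotbracket)
    X = np.zeros((len(seq), len(VOCAB)))
    for i, (n, s) in enumerate(zip(seq, dotbracket)):
        X[i, VOCAB[(n, s)]] = 1.0
    return X

X = joint_one_hot("GGGAAACCC", "(((...)))")   # a toy hairpin
print(X.shape)                                # (9, 12)
```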
Then, these one-hot encodings are processed by a stacked bidirectional LSTM (Hochreiter & Schmidhuber, 1997), followed by a multi-head self-attention module (Vaswani et al., 2017) to weigh different positions along the RNA backbone. A global max-pooling is used to aggregate the information into hS, and then we obtain the mean µS and log variance log σS from hS through linear transformations, and draw the latent encoding zS from N(µS, σS) using the reparameterization trick (Kingma & Welling, 2014).
Learning graph representations of RNA secondary structure. To encode the graph view G of an RNA secondary structure, we pass rounds of neural messages along the RNA structure, which falls into the framework of Message Passing Neural Networks (MPNN) as originally discussed in Gilmer et al. (2017) and similarly motivated by Jin et al. (2018).
For much longer RNAs, it is conceptually beneficial to pass more rounds of messages so that a nucleotide may receive information on its broader structural context. However, this may introduce undesired effects such as training instability and over-smoothing. Therefore, we combine our MPNN network with a gating mechanism, which is collectively referred to as the G-MPNN:
v̂^{t−1}_{uv} = σ(W^g_{local}[x_u || x_{uv}] + W^g_{msg} ∑_{w∈N(u)} v^{t−1}_{wu}) (1)
v^t_{uv} = GRU(v̂^{t−1}_{uv}, v^{t−1}_{uv}) (2)
where [. . . || . . .] denotes concatenation, σ denotes the activation function and GRU indicates the gated recurrent unit (Cho et al., 2014). Then, after T iterations of message passing, the final nucleotide-level embedding is given by h_u = σ(W^g_{emb}[x_u || ∑_{v∈N(u)} v^T_{vu}]). Before pooling the nucleotide-level embeddings into the graph level, we pass h1 . . . hL through a single bidirectional LSTM layer, obtaining ĥ1 . . . ĥL at each step, and h_G = max({ĥi | i ∈ 1...L}). The latent encoding z_G is similarly obtained from h_G using the reparameterization trick.
Hierarchical encoding of the RNA hypergraph. To encode the junction tree T of an RNA, we employ a type of GRU specifically suited to tree-like structures, which has previously been applied in works such as GGNN (Li et al., 2016) and JTVAE (Jin et al., 2018). We refer to this tree encoding network as T-GRU, and the format of its input is shown in Figure 1.
One major distinction between our RNA junction tree and the one used for chemical compounds (Jin et al., 2018) is that an RNA subgraph assumes a more variable nucleotide composition, such that it is impossible to enumerate based on the observed data. Therefore, we need to dynamically compute the features for each node in an RNA junction tree based on its contained nucleotides, in a hierarchical manner, to leverage the nucleotide-level embeddings learnt by the G-MPNN.
Considering a subgraph Ĝi in the junction tree T, we initialize its node feature with x_{Ĝi} = [L(Ĝi) || max_{u∈I(Ĝi)} h_u]. Notably, max_{u∈I(Ĝi)} h_u is a max-pooling over all nucleotides assigned to Ĝi, and the nucleotide embedding h_u comes from the G-MPNN. To compute and pass neural messages between adjacent subgraphs in the RNA junction tree T, we use the T-GRU network in Eq. 3, and compute the embeddings for subgraphs with Eq. 4:
v^t_{Ĝi,Ĝj} = T-GRU(x_{Ĝi}, {v^{t−1}_{Ĝk,Ĝi} | Ĝk ∈ N(Ĝi)}) (3)
h_{Ĝi} = σ(W^t_{emb}[x_{Ĝi} || ∑_{Ĝ∈N(Ĝi)} h_{Ĝ}]) (4)
with the details of T-GRU provided in appendix D. Further, we obtain a depth-first traversal of the subgraph embeddings h_{Ĝ1} . . . h_{ĜD}, which is also the order for the hierarchical decoding to be discussed later.
This ordered array of embeddings is processed by another bidirectional LSTM, and the final tree-level representation h_T is again given by the max-pooling over the bi-LSTM outputs. Likewise, the latent encoding z_T is obtained from h_T.
2We do not differentiate the number of hydrogen bonds, which can be different depending on the base-pairs. For example, G-C has three hydrogen bonds whereas A-U only contains two." }, { "heading": "3.2 RNA MOLECULAR GENERATION", "text": "Decoding linearized sequence and structure. In this setting, the decoder simply autoregressively decodes a token at each step, from the joint sequence-structure vocabulary mentioned before in section 3.1, plus one additional symbol to signal the end of decoding. To simplify the design choice, we use a single-layered forward-directional LSTM, whose hidden state is initialized with the latent encoding z, which can be either z_S, z_G or z_T.
Hierarchically decoding the hypergraph and nucleotide segments. The inputs to this more sophisticated hierarchical decoder are the latent encodings z_G, which contains the order and basic connectivity information of the nucleotides, and z_T, which contains higher-order information about the arrangements of nucleotide branches and their interactions. We give a concise description of the decoding procedures here, along with a detailed algorithm in appendix E. On a high level, we hierarchically decode the tree structure in a depth-first manner, and autoregressively generate a nucleotide segment for each visited tree branch. For these purposes, we interleave three types of prediction (Figure 2).
Denote the current tree node at decode step t and at the i-th visit as Ĝ_{t,i}, whose features include (1) its node label L(Ĝ_{t,i}) and (2) a summary over the already existing i − 1 nucleotide segments, max{h^{l,j}_u | u ∈ Ĝ_{t,i} and l < t and j < i}, with l denoting that the nucleotide was decoded at step l, and j indicating that the nucleotide belongs to the j-th branch (this feature is simply zeros when i = 1). Then, its local feature x_{Ĝt,i} is defined as the concatenation of (1) and (2).
We make use of a notion called the node state, h_{Ĝt,i}, which is obtained by h_{Ĝt,i} = T-GRU(x_{Ĝt,i}, {v_{Ĝ,Ĝt,i} | Ĝ ∈ N(Ĝ_{t,i})}). Note its similarity to Eq. 3; h_{Ĝt,i} is used to make:
• a topological prediction in Figure 2 (A), to determine if the decoder should expand to a new tree node or backtrack to its parent node, based on MLP_topo(h_{Ĝt,i});
• a tree node prediction in Figure 2 (B), on condition that a new tree node is needed due to a topological expansion. This procedure determines the label of the new tree node from the set {S,H,I,M}, based on MLP_node(h_{Ĝt,i});
• nucleotide segment decoding in Figure 2 (C), using a single-layered LSTM, whose initial hidden state is MLP_dec([h_{Ĝt,i} || z_T || z_G]). The start token is the last nucleotide from the last segment.
Our hierarchical decoder starts off by predicting the label of the root node using z_T, followed by a topological prediction on the root node and the decoding of the first nucleotide segment. The algorithm terminates upon revisiting the root node, predicting topologically to backtrack, and finishing the last segment of the root node. The decoded junction tree naturally represents an RNA secondary structure that can easily be transformed to the dot-bracket annotation, and the RNA sequence is simply recovered by connecting the nucleotide segments along the depth-first traversal of the tree nodes."
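To make the interleaved predictions concrete, here is a structural sketch of the depth-first decoding loop; the predictor callables stand in for the learned MLP_topo, MLP_node and segment LSTM of the paper, and the step/length thresholds mirror (but do not reproduce) those in Algorithm 1.

```python
MAX_STEPS, MAX_SEG = 100, 30   # thresholds analogous to those in Algorithm 1

def hierarchical_decode(z_T, z_G, predict_topo, predict_label, decode_segment):
    """Depth-first decoding skeleton: each visit decodes a nucleotide segment,
    then either expands to a new child node or backtracks to the parent."""
    root = {"label": predict_label(None, z_T), "children": [], "segments": []}
    stack, steps = [root], 0
    while stack and steps < MAX_STEPS:
        steps += 1
        node = stack[-1]
        node["segments"].append(decode_segment(node, z_T, z_G, MAX_SEG))  # Fig. 2 (C)
        if predict_topo(node, z_T, z_G):                                  # Fig. 2 (A)
            child = {"label": predict_label(node, z_T),                   # Fig. 2 (B)
                     "children": [], "segments": []}
            node["children"].append(child)
            stack.append(child)
        else:
            stack.pop()   # backtrack; termination = backtracking past the root
    return root           # junction tree, convertible to dot-bracket + sequence
```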
}, { "heading": "3.3 STRUCTURALLY CONSTRAINED DECODING", "text": "To better regulate the decoding process so that generated RNAs have valid secondary structures, a set of constraints can be added to the decoding procedures at the inference stage. Essentially, a valid RNA secondary structure needs to observe the following rules: (1) base-pairing complementarity,\nwhich means only the canonical base-pairs and Wobble base-pairs are allowed, i.e. [A-U], [G-C] and [G-U]; (2) hairpin loop should have a minimum of three unpaired nucleotides, i.e. for any two paired bases at position i and j, |i− j| > 3; (3) each nucleotide can only be paired once, and overlapping pairs are disallowed.\nWe will translate the above rules into specific and applicable constraints, depending on specific decoders. For the sake of space, we only give a broad remark and leave more details in the appendix.\nLinearized decoding constraints. Since the linearized decoder simply proceeds in an autoregressive fashion, constraints can be easily enforced in a way that at each step, a nucleotide with an appropriate structural annotation is sampled by making use of masks and re-normalizing the probabilities. Likewise, a stop token can only sampled when all opening nucleotides have been closed. More details to follow in appendix F.\nHierarchical decoding constraints. The specific set of constraints for hierarchical decoding is discussed in appendix G. Overall, considering the different natures of the three associated types of prediction, each one should require a set of different strategies, which are once again applicable by adding proper masks before sampling. As shown in the algorithm in appendix E, the set of constraints are applied to line 13, 24 and 14 with marked asterisk." }, { "heading": "3.4 AVOIDING POSTERIOR COLLAPSE", "text": "As discussed in a line of previous works, VAEs with strong autoregressive decoders are susceptible to posterior collapse, an issue where the decoder simply ignores the latent encoding of the encoder (He et al., 2019). Therefore, to avoid posterior collapsing, we make use of a carefully chosen KL annealing schedule during training to help the encoder adapt its information content in the latent encoding and in coordination with the decoder. This schedule is detailed in section 4. We also learn a parameterized prior as suggested in Chen et al. (2017), but using a CNF instead, following a similar implementation to Yang et al. (2019), with details given in appendix H.\nOur KL annealing schedule is chosen based on empirical observations, as to our knowledge, there has yet to exist any principled methods of selecting such schedule. We have used diagnostic metrics such as mutual information (He et al., 2019) and active units (Burda et al., 2016) along with a validation set to select a proper KL annealing schedule which is to be described later in section 4" }, { "heading": "4 RESULTS", "text": "We consider three modes of evaluation: (1) unsupervised RNA generation; (2) generation using semi-supervised VAE models and (3) targeted RNA design from an organized latent space. Results are presented below, and relevant hyperparameters can be found in Table S1.\nUnsupervised RNA generation. Here, we evaluate generated RNAs from models trained on the unlabeled RNA dataset for 20 epochs using a KL annealing schedule including 5 epochs of warm-up,\nfollowed by gradually increasing the KL annealing term to 3e-3 (for LSTMVAE and GraphVAE), or 2e-3 (for HierVAE). 
The KL annealing schedule was chosen using a validation set of 1,280 RNAs.\nTable 1 compares the generation capability of different models, from the posterior as well as the prior distribution, and in scenarios such as applying structural constraints to the decoding process or not. It clearly shows that our most advanced model, HierVAE which employs a hierarchical view of the structure in its encoding/decoding aspects, achieves the best performance across different evaluation regimes, generating valid and stable RNAs even when the decoding processed is unconstrained. It is also observed that despite having structural constraints, the validity of our generated RNAs are always slightly below 100%. This can be explained by the threshold hyperparameter which sets the maximum number of steps for topological prediction as well as the maximal length of each nucleotide segment, as shown in Algorithm 1 in appendix E.\nTo further demonstrate the benefits of model training from structural constraints, we sample RNAs from the prior of an untrained HierVAE model. With structural constraints, the validity amounts to 66.34% with an extremely high free energy deviation of 22.613. Without structural constraints, the validity translates to a mere 9.37% and the model can only decode short single stranded RNAs as it lacks the knowledge of constructing more complex structures. This comparison illustrates that model training is essential for obtaining stable RNA folding.\nThe junction tree hierarchy of RNAs developed in our work shares certain modelling similarities with the probabilistic context free grammar (Dowell & Eddy, 2004) used by covariance models (CM) (Eddy & Durbin, 1994). Infernal (Nawrocki & Eddy, 2013) is one of the representative works based on CM, which is capable of sampling RNA secondary structures from a CM built around a consensus secondary structure for a conserved RNA family. However, due to the lack of homologous sequences in our dataset, Infernal is seriously limited and can only sample single stranded RNAs.\nFigure 3 illustrate RNA structures generated using HierVAE from a randomly chosen short path through the latent space. Notably, latent encoding provided by HierVAE translates smoothly in the RNA structure domain: nearby points in the latent space result in highly similar, yet different, structures. The generated structures are particularly stable for short and medium-size RNAs, and slightly less so for longer RNAs with highly complex structures. A side-by-side comparison between generated RNA secondary structures and MFE structures in Figure S3 shows that generated structures can evolve smoothly in the latent space along with their corresponding MFE structures. We also visualize neighborhoods of a Cysteine-carrying transfer RNA and a 5S ribosomal RNA in figure S4 and S5.\nSupervised RNA generation. We then evaluate our generative approaches in a semi-supervised setting using seven RBP binding data sets from RNAcompete-S. First, we compare the efficacy of different representational choices while excluding the generative components, i.e. we jointly train VAE encoders followed by simple MLP classifiers on top of the latent encodings for binary classification on RBP binding.\nTable S3 shows that incorporating RNA secondary structures is overall beneficial for the classification accuracy, except for RBMY where a model with access to RNA sequence alone (LSTM-SeqOnly) has the best performance. 
Notably, different choices for representing RNA secondary structures do not lead to large variation in performance, with the exception of HuR and SLBP, where graph based representations have an advantage over the linearized structural representation. On the other hand, sequence based models often have comparable performance, possibly due to the capability of inferring RNA secondary structures from short RNA sequences. It is also worth exploring other in-vitro selection protocols such as HTR-SELEX which can select RNAs with higher binding affinities than RNAcompete-S that only involves a single selection step.\nNext, we train full generative models (encoder, decoder, latent CNF and MLP embedding classifier), and show the results in Table 2. Since our strategy for targeted RNA design makes use of seed molecules in the latent space, we mainly sample RNAs from the posterior distribution of these semi-supervised VAE models. Therefore, we select a KL annealing schedule that tends to retain more information in the latent encodings, i.e. setting maximum β to 5e-4 and training 10 epochs.\nResults are promising in that classification AUROC measured by the held-out test set is comparable to the fully supervised classification models in Table S3, and much better compared to models only using fixed and pretrained VAE embeddings as shown in Table S2. Also, RNA structures generated from the posterior distribution, even under the setting of unconstrained and deterministic decoding, have high success rates, very stable conformation and good reconstruction accuracy.\nTargeted RNA design. We next studied the task of designing RNAs with high RBP binding affinity. Starting from the latent encodings of 10,000 randomly chosen RNA molecules that have negative labels in each RNAcompeteS test set, and use activation maximization to gradually alter the latent encodings so that the predicted binding probability from the embedding classifiers increases. These embedding classifiers have been trained jointly with the VAE models with accuracy reported earlier (Table 2). Then, we use separately trained full classifiers (also earlier shown in Table S3) as proxy of oracles for evaluating the “ground truth” probability of RBP binding. Table 3, report the\nsuccess rate (fraction of RNAs whose “ground truth” RBP binding probability was improved), along with the average improvement in binding probabilities. An example of a trajectory of optimized RNAs is shown in Fig. S6." }, { "heading": "5 RELATED WORK", "text": "Over the years, the field of computational drug discovery has witnessed the emergence of graphcentric approaches. One of the earliest method, proposed in Gómez-Bombarelli et al. (2018), is defined on the linearized format of molecular structures and represents a family of methods that\nrely on sequential models to represent and generate SMILES strings of chemical compounds. Later methods have sought to construct more chemical priors into the model, via (1) leveraging graph based representation and generation techniques, (2) enforcing direct chemical constraints to the decoding process, (3) considering a multi-scale view of the molecular structures, or (4) using reinforcement learning to integrate more training signal of the molecular structure and function. As a result, greater success has been achieved by models such as Kusner et al. (2017); Liu et al. (2018); Jin et al. (2018); You et al. 
(2018) at generating and searching valid and more useful chemical compounds.\nGraph representation learning is at the heart of these more recent approaches, to help understand the rules governing the formation of these molecular structures, as well as the correspondence between structures and functions. Duvenaud et al. (2015) were among the first to apply GNN to learn molecular fingerprints, and the general neural message passing framework for molecules is proposed in Gilmer et al. (2017), which demonstrate the power of MPNN across various molecular benchmarking tasks. These prior works on molecular MPNN, together with other GNN architectures developed in other areas, such as considering relational edges (Schlichtkrull et al., 2018) and attention (Velickovic et al., 2018), have laid the foundation for the success of these deep generative models.\nDespite the fact that RNA molecules can adopt complex structures, dedicated graph representation learning techniques have been scarce, with some recent works beginning to leverage graph related learning techniques to predict RNA folding (Chen et al., 2020; Singh et al., 2019) and to represent RNA molecular structures (Yan et al., 2020; Oliver et al., 2020). Prior to our work, the design of RNA has mostly focused on the inverse design problem, which is to conditionally generate an RNA sequence whose MFE secondary structure corresponds to an input secondary structure. Therefore, the line of prior works have predominantly relied on sequential techniques, with some representative methods based on reinforcement learning (Runge et al., 2019), or more classically framed as a combinatorial optimization problem and solved with sampling based techniques (Churkin et al., 2017). These prior works are mainly concerned with querying from an energy model with fixed thermodynamic parameters and fixed dynamics of RNA folding, which is in itself limited compared to learning based approaches (Chen et al., 2020; Singh et al., 2019), and are unable to model a joint distribution over RNA sequences and possible folds." }, { "heading": "6 CONCLUSION AND FUTURE WORKS", "text": "In this work we propose the first graph-based deep generative approach for jointly embedding and generating RNA sequence and structure, along with a series of benchmarking tasks. Our presented work has demonstrated impressive performance at generating diverse, valid and stable RNA secondary structures with useful properties.\nFor future works, there are several important directions to consider. First, it would be beneficial to obtain non-coding RNA families from the RFAM database (Kalvari et al., 2017) which would help our models learn more biologically-meaningful representation indicative of RNA homology and functions, in addition to the evolutionarily conserved RNA structural motifs that would enable the generation of more stable RNA secondary structures. In that context, a detailed comparison to Infernal and other probabilistic context-free grammar models would be meaningful.\nOn the methodological aspect, in light of the recent advances in protein sequences pretraining across a large evolutionary-scale (Rives et al., 2019; Elnaggar et al., 2020), our models for RNAs may similarly benefit by such a procedure with the data collected from RFAM. 
After the pretraining step, reinforcement learning can be used to finetune the generative component of our model with customizable rewards defined jointly on RNA structural validity, folding stability and functions such as binding to certain proteins.\nOn the evaluation side, it would be of great interest to analyze our models for any potential RNA tertiary structural motifs and to compare them with those deposited in the CaRNAval (Reinharz et al., 2018) or RNA 3D motifs database (Parlea et al., 2016). Our models would also need modifications to allow non-canonical interactions and pseudoknots, which are common in RNA tertiary structures.\nAll in all, the representation, generation and design of structured RNA molecules represent a rich, promising, and challenging area for future research in computational biology and drug discovery, and an opportunity to develop fundamentally new machine learning approaches." }, { "heading": "A ACKNOWLEDGEMENTS", "text": "We would like to thank all members of the Hamilton lab, Blanchette lab, and the four anonymous reviewers for their insightful suggestions. This work was funded by a Genome Quebec/Canada grant to MB and by the Institut de Valorisation des Données (IVADO) PhD excellence scholarship to ZY. WLH is supported by a Canada CIFAR AI Chair. We also thank Compute Canada for providing the computational resources." }, { "heading": "B BACKGROUND: RNA STRUCTURE AND KEY PROPERTIES", "text": "Figure S1: A nested RNA secondary structure can be represented by: (A) dot-bracket annotation, where base-pairs correspond to matching parentheses, or (B) a molecular planar graph with two types of edges, corresponding to consecutive nucleotides (backbone) and base-pairing interactions, or (C) a junction tree where nodes are labeled as stems (S), hairpins (H), internal loops (I), or multiloops (M), and edges correspond to the connections between these elements. All three forms are equivalent.\nThe representation of an RNA molecule starts from its primary sequence structure—i.e., a single chain of nucleotides (adenine (A), cytosine (C), guanine (G) and uracil (U)). RNA sequences are flexible and can fold onto themselves, enabling the formation of bonds between complementary nucleotides (Watson-Crick base-pairs [A-U, G-C], and Wobble base-pairs [G-U]), hence stabilizing the molecule.3 The set of pairs of interacting nucleotides in an RNA forms its so-called RNA secondary structure. In computational analyses of RNA, it is standard to assume that a secondary structure is nested: if [i, j] and [k, l] form base pairs with i < k, then either l < j (nesting) or k > j (non-overlapping). This enables simple string or planar graph representations (Figure S1 a, b).\nThe nested structure assumption means that secondary structures can be modelled by a probabilistic context-free grammar (Dowell & Eddy, 2004), or by the closely related junction tree structure (Figure S1 c) (Sarrazin-Gendron et al., 2020), where each hypernode corresponds to a particular secondary substructure element: (1) stem: consecutive stacked base-pairs locally forming a double-stranded structure; (2) hairpin loop: unpaired regions closed by a base-pair; (3) internal loop: unpaired regions located between two stems; (4) multiloop: unpaired regions at the junction of at least three stems. Edges link elements that are adjacent in the structure.
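To make the dot-bracket representation above concrete, here is a minimal stack-based parser — our own illustrative sketch, not code from the paper — that recovers the base-pairs of a nested structure and enforces the hard validity constraints (complementarity, minimum hairpin span) that reappear in the decoding discussion below:

```python
# Illustrative sketch (ours, not the paper's code): parse a nested
# dot-bracket string into base-pairs with a stack, rejecting unbalanced
# structures, non-complementary pairs, and too-short hairpins.
CANONICAL = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}

def dot_bracket_to_pairs(sequence, structure, min_loop=3):
    assert len(sequence) == len(structure)
    stack, pairs = [], []
    for j, symbol in enumerate(structure):
        if symbol == "(":
            stack.append(j)
        elif symbol == ")":
            if not stack:
                raise ValueError("unbalanced structure")
            i = stack.pop()
            if (sequence[i], sequence[j]) not in CANONICAL:
                raise ValueError(f"non-complementary pair ({i}, {j})")
            if j - i <= min_loop:
                raise ValueError("hairpin loop shorter than 3 nt")
            pairs.append((i, j))
    if stack:
        raise ValueError("unbalanced structure")
    return pairs

# e.g., dot_bracket_to_pairs("GGGGAAAACCCC", "((((....))))")
# -> [(3, 8), (2, 9), (1, 10), (0, 11)]
```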
\nValidity and stability of RNA folding. The notion of free energy of RNA secondary structures can be used to characterize the stability of a particular conformation. Given an RNA sequence, there are combinatorially many valid RNA secondary structures, which all need to obey a set of constraints (summarized in section 3.3). However, some structures are more stable than others, having lower free energy. Therefore, these structures are more likely to exist (hence more useful) in reality due to the stochastic nature of RNA folding. The free energy of an RNA secondary structure can be estimated by an energy-based model with thermodynamic parameters obtained from experiments (Mathews et al., 2004), wherein the minimum free energy (MFE) structure can be predicted, up to a reasonable approximation (Lorenz et al., 2011).4\n3There exist other non-canonical base-pairs, which are excluded from our current work. 4Throughout this work, we use RNAfold (Lorenz et al., 2011) to compute free energy as well as the MFE structure, due to its interpretability and acceptable accuracy for moderately sized RNAs." }, { "heading": "C DATASET AND METRICS", "text": "The unlabeled dataset is obtained from the complete human transcriptome, downloaded from the Ensembl database (Aken et al. (2016); version GRCh38). We slice the transcripts into snippets with lengths randomly drawn between 32 and 512 nts, and use RNAfold to obtain the MFE structures. We randomly split the dataset into a training set that contains 1,149,859 RNAs, and 20,000 held-out RNAs for evaluating decoding from the posterior distribution. More information on the structural diversity and complexity of this dataset is shown in Figure S2; it should present significant challenges for our algorithms.\nThe labeled dataset is pulled from a previous study on the sequence and structural binding preferences of RNA-binding proteins (RBPs), using an in vitro selection protocol called RNAcompete-S (Cook et al., 2017), which generates synthesized RNA sequences bound or unbound to a given RBP. RNAs in this experiment are of uniform length, i.e. 40 nts, and offer a rich abundance of RNA secondary structures compared to predecessor protocols such as RNAcompete (Ray et al., 2009; 2013). Since no benchmark has ever been established since its publication, we randomly sample 500,000 positive sequences bound to an RBP, and the same number of negative sequences from the pool of unbound sequences, to curate a dataset for each of the seven RBPs investigated in the paper. Then, 80% of all RNAs are randomly assigned to the train split, and the rest go to the test split.\nOur evaluation scheme for the generated RNA secondary structures includes the following metrics:\n• validity: percentage of generated RNA secondary structures that conform to the structural constraints specified in section 3.3.\n• free energy deviation (FE DEV): difference in free energy between the generated RNA secondary structure and the MFE structure of the corresponding sequence, which quantifies the gap between both structures from an energy perspective. A lower FE DEV should indicate higher stability of generated RNAs.\n• free energy deviation normalized by length (Normed FE DEV): FE DEV divided by the length of the generated RNA, which distributes the contribution of the total FE DEV to each base.\n• 5-mer sequence diversity: entropy of the normalized counts of 5-mer substrings, which directly measures the diversity of RNA sequences, and indirectly that of RNA secondary structures when this metric is combined with FE DEV, since monolithic structures of diverse sequences would lead to high FE DEV."
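As an illustration, the 5-mer sequence diversity metric above can be computed as follows (a minimal sketch of our own, not the paper's evaluation code):

```python
import math
from collections import Counter

def kmer_diversity(sequences, k=5):
    """Entropy (in nats) of the normalized k-mer counts over a set of sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# A diverse sample set yields higher entropy than a monolithic one, e.g.
# kmer_diversity(["ACGUACGUACGU", "GGCCAAUUGGCC"]) > kmer_diversity(["AAAAAAAAAAAA"])
```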
}, { "heading": "D TREE ENCODING GRU", "text": "Following Eq.3, T-GRU computes a new message vtĜi,Ĝj from Ĝi and Ĝj , based on the features in Ĝi denoted by xĜi , as well as neural messages from neighboring subgraphs to Ĝi, i.e. {v t−1 Ĝk,Ĝi\n| Ĝk ∈ N(Ĝi)}. The internal structure of T-GRU is equivalent to the tree encoder employed in Jin et al. (2018), which is essentially a neural analogue of the belief propagation algorithm on junction trees. Nevertheless, we write down the message passing formulas of T-GRU here:\nsĜi,Ĝj = ∑\nĜk∈N(Ĝi)\nvt−1 Ĝk,Ĝi\n(S1)\nzĜi,Ĝj = σ(W z[xĜi || sĜi,Ĝj ] + b z) (S2)\nrĜk,Ĝi = σ(W r[xĜi || v t−1 Ĝk,Ĝi ] + br) (S3) v̂Ĝi,Ĝj = Tanh(W [xĜi || ∑\nĜk∈N(Ĝi)\nrĜk,Ĝi · v t−1 Ĝk,Ĝi ]) (S4)\nvtĜi,Ĝj = (1− zĜi,Ĝj ) sĜi,Ĝj + zĜi,Ĝj v̂Ĝi,Ĝj (S5)" }, { "heading": "E ALGORITHM FOR HIERARCHICALLY DECODING STRUCTURED RNA", "text": "Algorithm 1: DFS decode RNA secondary structure 1 Given: zT , zG , M TI, M SI a 2 Initialize: stack ← [ ] 3 function decode(zT , zG) 4 root← sample(MLPnode(zT )) ; 5 root.add incoming message(zT ) ; 6 stack.push((root, 0)) ; 7 t← 0 ; 8 while t ≤ M TI and stack.size() ≥ 1 do 9 c node, last nuc← stack.get last item();\n10 all msg ← {msg | ∀msg ∈ c node.get incoming message()} ; 11 local field← [c node.label() || c node.get segment features()] ; 12 new msg ← T-GRU(local field, all msg) ;\n// topological prediction 13 is backtrack ← sample(MLPtopo(zT ))∗ ; // nucleotide segment prediction 14 new msg, last nuc, decoded segment, segment features← decode segment(new msg, last nuc, zT , zG ,M SI)∗ ; 15 c node.add decoded segment(decoded segment) ; 16 c node.add segment features(segment features) ; 17 if is backtrack = True then\n// backtrack to the parent node 18 c node.add incoming message(new msg) ; 19 p node, ← stack.get penultimate item(); 20 p node.add neighbor(c node) ; 21 stack.update penultimate item((p node, last nuc)); 22 stack.pop() ; 23 else\n// predict and expand to new tree node 24 new node← sample(MLPnode(new msg))∗ ; 25 new node.add incoming message(new msg) ; 26 new node.add neighbor((c node, last nuc)) ; 27 stack.push(new node) ; 28 end 29 t← t+ 1 ; 30 end 31 return root ;\naM TI refers to the threshold which set the maximum allowed number of topological prediction steps; M SI is another threshold to limit the length of each decoded nucleotide segment." }, { "heading": "F DETAILS FOR APPLYING RNA STRUCTURAL CONSTRAINTS TO LINEARIZED DECODING PROCEDURES", "text": "When decoding from the joint vocabulary of sequence and dot-bracket structure ({A,C,G,U} × {., (, )}), whenever a nucleotide nuci with a left bracket is sampled at step i, we append them to a stack, i.e. {(nuci0 , i0) . . . (nuci, i)}. Then, at decode step j,\n• if |i − j| ≤ 3, a proper mask will be added to the categorical logits of the vocabulary, to avoid sampling any nucleotides with right brackets, which means only an unpaired nucleotide or one that comes with a left bracket can be sampled;\n• if |i− j| > 3, a mask will be applied to make sure that only a nucleotide complementary to nuci can be sampled with the right bracket. Sampling nucleotides with other forms of structures are allowed.\nAs soon as a nucleotide with a closing right bracket is sampled, we pop out (nuci, i) from the stack. The special symbol for stop decoding can only be sampled when the stack has become empty." 
}, { "heading": "G DETAILS FOR APPLYING RNA STRUCTURAL CONSTRAINTS TO HIERARCHICAL DECODING PROCEDURES", "text": "Additional constraints to be enforced during the hierarchical decoding process to ensure the validity of the decoded RNA secondary structure. Recall in section 3.2 that three types of predictions are involved with the hierarchical decoding, therefore, each type is associated with its own set of rules. All set of rules can be observed by adding proper masks to the categorical logits before sampling, which are detailed below.\nConstraints for making topological prediction, when the current node is\n• stem node, then the algorithm always expands to a new node upon its first visit, or backtracks to its parent node upon re-visit;\n• hairpin node, then the algorithm always backtracks; • internal loop, then the algorithm acts similarly as for stem node; • multi-loop, then the algorithm always expands upon first visit and the next re-visit. Further re-visits\nto the same multi-loop node are not regulated.\nConstraints for predicting new tree node, when the current node is\n• stem node, then its child node when exists can be either a hairpin loop, an internal loop, or a multi-loop;\n• hairpin node, internal loop or multi-loop, then its child node must be a stem node.\nConstraints for decoding nucleotide segment. Due to the property of non-empty intersection between adjacent subgraphs, the start token for decoding a segment at the current node, is always the last nucleotide decoded at the last node. Therefore, without explicitly mentioning, the algorithm needs to decode at least one new nucleotide at each segment. When the current node is\n• stem node, and if it is upon its first visit (i.e. decoding the first segment of a stem), then there is no for constraints. Otherwise, upon its re-visit, the algorithm needs to decode exactly the complementary bases and in the reverse order, according to the first decoded segment;\n• hairpin node, then the decoder needs to decode at least four nucleotides before seeing the stop symbol, unless the hairpin is also the root node.\n• internal loop node, and if it is upon its first, then constraint is not necessary. Otherwise, upon its revisit, the algorithm needs to decode at least one unpaired nucleotide on condition that the first decoded internal loop segment does not contain any unpaired nucleotides;\n• multi-loop node, then there is no need for constraints." }, { "heading": "H DETAILS FOR PARAMETERIZING PRIOR DISTRIBUTION USING NORMALIZING FLOW", "text": "A normalizing flow involves a series of bijective transformation with tractable Jacobian logdeterminant, to map an observed datapoint x ∼ pθ(x) from a complex distribution to a simpler one, such as the standard normal distribution.\nConsidering the simplified case where we have a single bijective function fθ : Z → X to map some simple latent variables z to observed datapoint x, then, using the change of variable theorem, the likelihood of the observed datapoint can be evaluated as:\npθ(x) = pz(f −1 θ (x))|det\n∂f−1θ (x)\n∂x | (S6)\nwhere pz(.) denotes some simple base distribution, e.g. N (0; I). Then, it becomes clear the efficiency of this scheme heavily relies on the efficiency of inverting the forward mapping fθ as well as computing its Jacobian log-determinant.\nIn this project, we use a type of continuous normalizing flow (CNF) which simplifies the above mentioned computation (Chen et al., 2018). 
, { "heading": "H DETAILS FOR PARAMETERIZING PRIOR DISTRIBUTION USING NORMALIZING FLOW", "text": "A normalizing flow involves a series of bijective transformations with tractable Jacobian log-determinants, to map an observed datapoint x ∼ p_θ(x) from a complex distribution to a simpler one, such as the standard normal distribution.\nConsider the simplified case where we have a single bijective function $f_\theta : \mathcal{Z} \to \mathcal{X}$ mapping some simple latent variables z to an observed datapoint x. Then, using the change of variables theorem, the likelihood of the observed datapoint can be evaluated as:\n$p_\theta(x) = p_z(f_\theta^{-1}(x)) \left| \det \frac{\partial f_\theta^{-1}(x)}{\partial x} \right|$ (S6)\nwhere $p_z(\cdot)$ denotes some simple base distribution, e.g. $\mathcal{N}(0, I)$. It then becomes clear that the efficiency of this scheme heavily relies on the efficiency of inverting the forward mapping $f_\theta$ as well as computing its Jacobian log-determinant.\nIn this project, we use a type of continuous normalizing flow (CNF) which simplifies the above-mentioned computation (Chen et al., 2018). Consider a time-continuous dynamics $f_\psi(z(t), t)$ of some intermediate data representation z(t), with $z(t_0) \sim p_z(\cdot)$; the transformation of variables, along with its inverse mapping, can be expressed as:\n$z \triangleq z(t_1) = z(t_0) + \int_{t_0}^{t_1} f_\psi(z(t), t)\,dt$ (S7)\n$z(t_0) = z(t_1) + \int_{t_1}^{t_0} f_\psi(z(t), t)\,dt$ (S8)\nand the change of probability density can be expressed as:\n$\log p_\psi(z) = \log p_z(z(t_0)) - \int_{t_0}^{t_1} \mathrm{tr}\left(\frac{\partial f_\psi}{\partial z(t)}\right) dt$ (S9)\nNote that the invertibility issue is no longer a concern under some mild constraints (Chen et al., 2018). Also, Eq. S9 only involves a more lightweight trace operation on the Jacobian rather than evaluating its log-determinant.\nWe therefore learn a parameterized prior using a CNF, and observe the decomposition of the KL term in the VAE objective:\n$\mathrm{KL}(q_\phi(z|x)\,\|\,p_\psi(z)) = -\mathbb{E}_{z \sim q_\phi(z|x)}[\log p_\psi(z)] - H[q_\phi(z|x)]$ (S10)\nDuring training, our CNF parameterized with ψ transforms complex latent encodings $z \sim q_\phi(z|x)$ to some simple $z(t_0) \sim \mathcal{N}(0, I)$, with an exact likelihood described by Eq. S9 and integrated into Eq. S10 for the complete training objective. During inference, we simply sample $z(t_0) \sim \mathcal{N}(0, I)$ and use our CNF to reversely transform it to $z \sim p_\psi(\cdot)$, which should be closer to the approximate posterior.\nOur specific parameterization of the CNF follows Yang et al. (2019) and Grathwohl et al. (2019), interleaving two hidden concatsquash layers of dimensionality 256 with Tanh non-linearity." }
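For illustration, Eqs. S8-S9 can be approximated with a fixed-step Euler integrator and an exact Jacobian trace (affordable for small latent dimensions). This is our own sketch, with f standing in for any dynamics network mapping a batch of states and a time to derivatives; it is not the ODE solver used by Chen et al. (2018) or in the paper:

```python
import torch

def cnf_log_prob(f, z1, t0=0.0, t1=1.0, n_steps=50):
    """log p_psi(z1): integrate f backward from t1 to t0 (Eq. S8)
    while accumulating the Jacobian trace (Eq. S9)."""
    dt = (t1 - t0) / n_steps
    z = z1.detach().clone()
    trace_integral = torch.zeros(z.shape[0])
    for step in range(n_steps):
        t = t1 - step * dt
        for b in range(z.shape[0]):          # exact trace, one sample at a time
            jac = torch.autograd.functional.jacobian(
                lambda u: f(u.unsqueeze(0), t).squeeze(0), z[b])
            trace_integral[b] += torch.trace(jac) * dt
        z = (z - f(z, t) * dt).detach()      # reverse-time Euler step
    base = torch.distributions.Normal(0.0, 1.0)
    # Eq. S9: log p(z(t1)) = log p_z(z(t0)) - integral of tr(df/dz) dt
    return base.log_prob(z).sum(dim=-1) - trace_integral
```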
, { "heading": "I INFORMATION OF THE UNLABELED RNA DATASET", "text": "Figure S2: This figure contains information on the unlabeled RNA dataset. (A) The number of hypernodes appears to grow linearly with the length of the RNA, and (B) the junction tree height also grows as the length increases, but on a more moderate scale. (C) and (D) show bar-plots of the number of hypernodes and of the tree height, indicating that the junction tree of an RNA can take on significant depth, hence contributing to the diversity and complexity of the RNA secondary structures represented in this dataset." }, { "heading": "J HYPERPARAMETERS", "text": "Table S1: Hyperparameters for training VAE and full classifier models. Note that hidden units refers to the dimensionality of the encoders and decoders of the LSTMVAE, GraphVAE and HierVAE models. Dropout is applied to the embedding MLP classifier (which contains one hidden layer) when training semi-supervised VAEs.\nfor VAE models:\n  latent dimensionality: 128\n  hidden units: 512\n  G-MPNN iterations: 5\n  T-GRU iterations: 10\n  learning rate: 1e-3\n  batch size: 32\n  optimizer: AMSGrad (Reddi et al., 2018)\n  dropout ratio: 0.2\n  M_TI: 300\n  S_TI (hierarchical decoder): 100\n  S_TI (linearized decoder): 1000\nfor full classifier models (overriding some of the above hyperparameters):\n  learning rate: 2e-4\n  epochs: 200\n  early stopping epochs: 5" }, { "heading": "K RNACOMPETE-S CLASSIFIERS ON PRETRAINED AND FIXED VAE EMBEDDINGS", "text": "Table S2: Performance of simple MLP classifiers on top of fixed latent embeddings from VAE models, which have been pretrained on the unlabeled RNA dataset as originally shown in Table 1.\nRBP | LSTMVAE | GraphVAE | HierVAE\nHuR | 0.867 | 0.858 | 0.860\nPTB | 0.886 | 0.878 | 0.883\nQKI | 0.748 | 0.756 | 0.746\nVts1 | 0.775 | 0.758 | 0.774\nRBMY | 0.734 | 0.725 | 0.731\nSF2 | 0.867 | 0.862 | 0.866\nSLBP | 0.749 | 0.737 | 0.747" }, { "heading": "L END-TO-END RNACOMPETE-S CLASSIFIERS", "text": "Table S3: We use the same encoding architectures as in the generative models, and report their AUROC averaged across 6 runs, for each RNAcompete-S RBP dataset.\nRBP | LSTM-SeqOnly | LSTM | Graph | Hierarchical\nHuR | 0.880 ± 0.000 | 0.880 ± 0.000 | 0.880 ± 0.000 | 0.888 ± 0.002\nPTB | 0.900 ± 0.000 | 0.910 ± 0.000 | 0.910 ± 0.000 | 0.910 ± 0.000\nQKI | 0.820 ± 0.000 | 0.830 ± 0.000 | 0.825 ± 0.002 | 0.830 ± 0.000\nVts1 | 0.900 ± 0.000 | 0.908 ± 0.002 | 0.637 ± 0.079 | 0.910 ± 0.000\nRBMY | 0.905 ± 0.002 | 0.880 ± 0.003 | 0.802 ± 0.055 | 0.870 ± 0.002\nSF2 | 0.890 ± 0.000 | 0.900 ± 0.000 | 0.900 ± 0.000 | 0.900 ± 0.000\nSLBP | 0.777 ± 0.002 | 0.790 ± 0.000 | 0.797 ± 0.002 | 0.797 ± 0.002" }, { "heading": "M ALTERNATIVE HIERVAE TRAINING ON RNACOMPETE-S", "text": "Table S4: Training HierVAE on the supervised RNAcompete-S datasets. All models are trained for 20 epochs, including 5 epochs of warm-up, 6 epochs to linearly raise beta from 0 to 3e-3, and 9 remaining epochs with beta fixed at 3e-3. On the test set we measure AUROC and posterior decoding of the final model.\nDataset | Test AUROC | Post R&S: Valid, FE DEV, RECON ACC | Post NR&D: Valid, FE DEV, RECON ACC\nHuR | 0.871 | 100%, 0.951, 18.97% | 99.34%, 0.702, 31.52%\nPTB | 0.899 | 100%, 0.826, 21.17% | 98.64%, 0.674, 31.28%\nQKI | 0.822 | 100%, 0.867, 17.82% | 99.40%, 0.627, 30.62%\nVts1 | 0.874 | 100%, 1.056, 13.71% | 99.39%, 0.770, 24.97%\nRBMY | 0.872 | 100%, 0.963, 11.86% | 98.68%, 0.690, 22.91%\nSF2 | 0.874 | 100%, 0.921, 14.44% | 99.32%, 0.668, 25.99%\nSLBP | 0.764 | 100%, 1.033, 14.84% | 99.44%, 0.743, 26.93%" }, { "heading": "N COMPARISON OF GENERATED RNA SECONDARY STRUCTURES TO MFE STRUCTURES", "text": "Figure S3: A comparison of generated RNAs (left) to their corresponding MFE structures (right). RNAs are generated with structural constraints from HierVAE along three random axes. The ground truth MFE structures are predicted by RNAfold, and the generated RNAs are shown to evolve smoothly in the latent space, along with their corresponding MFE structures, which also show relatively smooth transitions." }, { "heading": "O NEIGHBORHOOD VISUALIZATION OF A CYSTEINE-CARRYING TRANSFER-RNA", "text": "Figure S4: Neighborhood visualization of tRNA-Cys6, which is marked by the red bounding box in the center; the walk in the latent space takes place on two random orthogonal axes.
Note that the actual secondary structure of tRNA-Cys plotted in the figure is different from the one deposited online, due to the prediction of RNAfold.\n6https://rnacentral.org/rna/URS00001F47B5/9606" }, { "heading": "P NEIGHBORHOOD VISUALIZATION OF A 5S RIBOSOMAL RNA", "text": "Figure S5: Neighborhood visualization of a 5S ribosomal RNA8, which is marked by the red bounding box in the center; the walk in the latent space takes place on two random orthogonal axes. Note that the actual secondary structure of this 5S ribosomal RNA plotted in the figure is different from the one deposited online, due to the prediction of RNAfold.\n8https://rnacentral.org/rna/URS000075B93F/9606" }, { "heading": "Q TARGETED RNA GENERATION — AN EXAMPLE", "text": "Figure S6: An example of searching for novel structured RNAs with a higher chance of binding to HuR. The optimization takes place in the latent space of HierVAE, starting from the initial encoding of a random RNA molecule in the test set, and at each step altering the latent encoding using activation maximization on the embedding classifier. The trajectory of generated RNAs is shown in order from left to right and top to bottom, and the field PRED indicates that the probability of binding, as predicted by another external full classifier on the decoded molecular structure, is overall increasing as the decoded RNA structures smoothly evolve." } ]
2021
RNA SECONDARY STRUCTURES
SP:0bd749fe44c37b521bd40f701e1428890aaa9c95
[ "This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty lies in the relatively large scale, spanning three translation directions, four discourse phenomena, and 150-5000 data points per language and phenomenon. A relatively large number of systems from previous work is benchmarked on each test set, and agreement with human judgments is measured." ]
Despite increasing instances of machine translation (MT) systems including extra-sentential context information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like BLEU are not expressive or sensitive enough to capture quality improvements or drops that are minor in size but significant in perception. We introduce the first-of-their-kind MT benchmark testsets that aim to track and hail improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks, and evaluate several competitive baseline MT systems on the curated datasets. Surprisingly, we find that the complex context-aware models that we test do not improve discourse-related translations consistently across languages and phenomena. Our evaluation benchmark is available as a leaderboard at <dipbenchmark1.github.io>.
[ { "affiliations": [], "name": "MARKS FOR" }, { "affiliations": [], "name": "DISCOURSE PHENOMENA" } ]
[ { "authors": [ "Rachel Bawden", "Rico Sennrich", "Alexandra Birch", "Barry Haddow" ], "title": "Evaluating discourse phenomena in neural machine translation", "venue": null, "year": 2018 }, { "authors": [ "Peter Bourgonje", "Manfred Stede" ], "title": "The potsdam commentary corpus 2.2: Extending annotations for shallow discourse parsing", "venue": "In LREC,", "year": 2020 }, { "authors": [ "Marine Carpuat" ], "title": "One translation per discourse", "venue": "SEW@NAACL-HLT,", "year": 2012 }, { "authors": [ "Mauro Cettolo", "Niehues Jan", "Stüker Sebastian", "Luisa Bentivogli", "R. Cattoni", "Marcello Federico" ], "title": "The iwslt 2016 evaluation campaign", "venue": null, "year": 2016 }, { "authors": [ "Mauro Cettolo", "Marcello Federico", "Luisa Bentivogli", "Niehues Jan", "Stüker Sebastian", "Sudoh Katsuitho", "Yoshino Koichiro", "Federmann Christian" ], "title": "Overview of the iwslt 2017 evaluation campaign", "venue": null, "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Liane Guillou" ], "title": "Improving pronoun translation for statistical machine translation", "venue": "In EACL,", "year": 2012 }, { "authors": [ "Liane Guillou" ], "title": "Analysing lexical consistency in translation", "venue": "In Proceedings of the Workshop on Discourse in Machine Translation,", "year": 2013 }, { "authors": [ "Liane Guillou", "Christian Hardmeier" ], "title": "PROTEST: A test suite for evaluating pronouns in machine translation", "venue": "In Proceedings of the Tenth International Conference on Language Resources and Evaluation", "year": 2016 }, { "authors": [ "Liane Guillou", "Christian Hardmeier" ], "title": "Automatic reference-based evaluation of pronoun translation misses the point", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Najeh Hajlaoui", "Andrei Popescu-Belis" ], "title": "Assessing the accuracy of discourse connective translations: Validation of an automatic metric", "venue": "In CICLing,", "year": 2013 }, { "authors": [ "Christian Hardmeier", "Marcello Federico" ], "title": "Modelling pronominal anaphora in statistical machine translation", "venue": "In Proceedings of the 2010 International Workshop on Spoken Language Translation, IWSLT", "year": 2010 }, { "authors": [ "Hany Hassan", "Anthony Aue", "Chang Chen", "Vishal Chowdhary", "Jonathan R. Clark", "Christian Federmann", "Xuedong Huang", "Marcin Junczys-Dowmunt", "William Lewis", "Mu Li", "Shujie Liu", "T.M. Liu", "Renqian Luo", "Arul Menezes", "Tao Qin", "Frank Seide", "Xu Tan", "Fei Tian", "Lijun Wu", "Shuangzhi Wu", "Yingce Xia", "Dongdong Zhang", "Zhirui Zhang", "Ming Zhou" ], "title": "Achieving human parity on automatic chinese to english news", "venue": "translation. ArXiv,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Kyle P. 
Johnson", "Patrick Burns", "John Stewart", "Todd Cook" ], "title": "Cltk: The classical language toolkit, 2014–2020", "venue": "URL https://github.com/cltk/cltk", "year": 2020 }, { "authors": [ "Prathyusha Jwalapuram", "Shafiq Joty", "Irina Temnikova", "Preslav Nakov" ], "title": "Evaluating pronominal anaphora in machine translation: An evaluation measure and a test suite", "venue": "EMNLP-IJCNLP,", "year": 2019 }, { "authors": [ "Yunsu Kim", "Thanh Tran", "Hermann Ney" ], "title": "When and why is document-level context useful in neural machine translation? ArXiv", "venue": null, "year": 1910 }, { "authors": [ "Ekaterina Lapshinova-Koltunski", "Christian Hardmeier", "Pauline Krielke" ], "title": "ParCorFull: a parallel corpus annotated with full coreference", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018),", "year": 2018 }, { "authors": [ "Samuel Läubli", "Rico Sennrich", "Martin Volk" ], "title": "Has machine translation achieved human parity? a case for document-level evaluation", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Kazem Lotfipour-Saedi" ], "title": "Lexical cohesion and translation equivalence", "venue": null, "year": 1997 }, { "authors": [ "Sameen Maruf", "Gholamreza Haffari" ], "title": "Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Thomas Meyer", "Andrei Popescu-Belis", "N. Hajlaoui", "Andrea Gesmundo" ], "title": "Machine translation of labeled discourse connectives", "venue": "AMTA", "year": 2012 }, { "authors": [ "Lesly Miculicich", "Dhananjay Ram", "Nikolaos Pappas", "James Henderson" ], "title": "Document-level neural machine translation with hierarchical attention networks", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Lesly Miculicich Werlen", "Andrei Popescu-Belis" ], "title": "Validation of an automatic metric for the accuracy of pronoun translation (APT)", "venue": "In Proceedings of the Third Workshop on Discourse in Machine Translation,", "year": 2017 }, { "authors": [ "Han Cheol Moon", "Tasnim Mohiuddin", "Shafiq R. Joty", "Xiaofei Chi" ], "title": "A unified neural coherence model", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Jane Morris", "Graeme Hirst" ], "title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text", "venue": "Computational Linguistics,", "year": 1991 }, { "authors": [ "Maria Nadejde", "Alexandra Birch", "Philipp Koehn" ], "title": "Proceedings of the first conference on machine translation, volume 1: Research papers. 
The Association for Computational Linguistics, 2016", "venue": null, "year": 2016 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Emily Pitler", "Ani Nenkova" ], "title": "Revisiting readability: A unified framework for predicting text quality", "venue": "In EMNLP,", "year": 2008 }, { "authors": [ "M. Popel", "M. Tomková", "J. Tomek", "Łukasz Kaiser", "Jakob Uszkoreit", "Ondrej Bojar", "Z. Žabokrtský" ], "title": "Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals", "venue": "Nature Communications,", "year": 2020 }, { "authors": [ "Rashmi Prasad", "Bonnie L. Webber", "Aravind K. Joshi" ], "title": "Reflections on the penn discourse treebank, comparable corpora, and complementary annotation", "venue": "Computational Linguistics,", "year": 2014 }, { "authors": [ "Rashmi Prasad", "Bonnie L. Webber", "Alan Lee" ], "title": "Annotation in the pdtb : The next generation", "venue": null, "year": 2018 }, { "authors": [ "Rico Sennrich" ], "title": "Why the time is ripe for discourse in machine translation", "venue": "http://homepages. 
inf.ed.ac.uk/rsennric/wnmt2018.pdf,", "year": 2018 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016).,", "year": 2016 }, { "authors": [ "Karin Sim Smith", "Wilker Aziz", "Lucia Specia" ], "title": "A proposal for a coherence corpus in machine translation", "venue": "In DiscoMT@EMNLP,", "year": 2015 }, { "authors": [ "Karin Sim Smith", "Wilker Aziz", "Lucia Specia" ], "title": "The trouble with machine translation coherence", "venue": "In Proceedings of the 19th Annual Conference of the European Association for Machine Translation,", "year": 2016 }, { "authors": [ "Swapna Somasundaran", "Jill Burstein", "Martin Chodorow" ], "title": "Lexical chaining for measuring discourse coherence quality in test-taker essays", "venue": "In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers,", "year": 2014 }, { "authors": [ "Jörg Tiedemann", "Yves Scherrer" ], "title": "Neural machine translation with extended context", "venue": "In Proceedings of the Third Workshop on Discourse in Machine Translation,", "year": 2017 }, { "authors": [ "Jörg Tiedemann" ], "title": "Parallel data, tools and interfaces in opus", "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12),", "year": 2012 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Elena Voita", "Pavel Serdyukov", "Rico Sennrich", "Ivan Titov" ], "title": "Context-aware neural machine translation learns anaphora resolution", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Elena Voita", "Pavel Serdyukov", "Rico Sennrich", "Ivan Titov" ], "title": "Context-aware neural machine translation learns anaphora resolution", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Elena Voita", "Rico Sennrich", "Ivan Titov" ], "title": "When a Good Translation is Wrong in Context: ContextAware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence,", "year": 2019 }, { "authors": [ "W. Wagner. Steven" ], "title": "bird, ewan klein and edward loper: Natural language processing with python, analyzing text with the natural language toolkit", "venue": "Language Resources and Evaluation,", "year": 2010 }, { "authors": [ "KayYen Wong", "Sameen Maruf", "Gholamreza Haffari" ], "title": "Contextual neural machine translation improves translation of cataphoric pronouns", "venue": "In ACL,", "year": 2020 }, { "authors": [ "H. Xiong", "Zhongjun He", "Hua Wu", "H. 
Wang" ], "title": "Modeling coherence for discourse neural machine translation", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Jiacheng Zhang", "Huanbo Luan", "Maosong Sun", "Feifei Zhai", "Jingfang Xu", "Min Zhang", "Yang Liu" ], "title": "Improving the transformer translation model with document-level context", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Y. Zhou", "N. Xue" ], "title": "The chinese discourse treebank: a chinese corpus annotated with discourse relations", "venue": "Language Resources and Evaluation,", "year": 2015 }, { "authors": [ "Michał Ziemski", "Marcin Junczys-Dowmunt", "Bruno Pouliquen" ], "title": "The united nations parallel corpus v1.0", "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION AND RELATED WORK", "text": "The advances in neural machine translation (NMT) systems have led to great achievements in terms of state-of-the-art performance in automatic translation tasks. There have even been claims that their translations are no worse than what an average bilingual human may produce (Wu et al., 2016) or that the translations are on par with professional translators (Hassan et al., 2018). However, extensive studies conducting evaluations with professional translators (Läubli et al., 2018; Popel et al., 2020) have shown that there is a statistically strong preference for human translations in terms of fluency and overall quality when evaluations are conducted monolingually or at the document level.\nDocument (or discourse) level phenomena (e.g., coreference, coherence) may not seem lexically significant, but contribute significantly to readability and understandability of the translated texts (Guillou, 2012). Targeted datasets for evaluating phenomena like coreference (Guillou et al., 2014; Guillou & Hardmeier, 2016; Lapshinova-Koltunski et al., 2018; Bawden et al., 2018; Voita et al., 2018b), or ellipsis and lexical cohesion (Voita et al., 2019), have been proposed.\nThe NMT framework such as the Transformer (Vaswani et al., 2017) provides more flexibility to incorporate larger context. This has spurred a great deal of interest in developing context-aware NMT systems that take advantage of source or target contexts, e.g., Miculicich et al. (2018), Maruf & Haffari (2018), Voita et al. (2018b; 2019), Xiong et al. (2019), Wong et al. (2020), to name a few.\nMost studies only report performance on specific testsets, often limited to improvements in BLEU (Papineni et al., 2002). Despite being the standard MT evaluation metric, BLEU has been criticised for its inadequacy; the scores are not interpretable, and are not sensitive to small improvements in lexical terms that may lead to big improvements in fluency or readability (Reiter, 2018). There is no framework for a principled comparison of MT quality beyond mere lexical matching as done in BLEU: there are no standard corpora and no agreed-upon evaluation measures.\nTo address these shortcomings, we propose the DiP benchmark tests (for Discourse Phenomena), that will enable the comparison of machine translation models across discourse task strengths and source languages. We create diagnostic testsets for four diverse discourse phenomena, and also propose automatic evaluation methods for these tasks. However, discourse phenomena in translations can be tricky to identify, let alone evaluate. A fair number of datasets proposed thus far have been manually curated, and automatic evaluation methods have often failed to agree with human\njudgments (Guillou & Hardmeier, 2018). To mitigate these issues, we use trained neural models for identifying and evaluating complex discourse phenomena and conduct extensive user studies to ensure agreements with human judgments. Our methods for automatically extracting testsets can be applied to multiple languages, and find cases that are difficult to translate without having to resort to synthetic data. Moreover, our testsets are extracted in a way that makes them representative of current challenges. They can be easily updated to reflect future challenges, preventing the pitfall of becoming outdated, which is a common failing of many benchmarking testsets.\nWe also benchmark established MT models on these testsets to convey the extent of the challenges they pose. 
Although discourse phenomena can and do occur at the sentence level (e.g., between clauses), we would expect MT systems that model extra-sentential context (Voita et al., 2018b; Zhang et al., 2018; Miculicich et al., 2018) to be more successful on these tasks. However, we observe significant differences in system behavior and quality across languages and phenomena, emphasizing the need for more extensive evaluation as a standard procedure. We propose to maintain a leaderboard that tracks and highlights advances in MT quality that go beyond BLEU improvement.\nOur main contributions in this paper are as follows:\n• Benchmark testsets for four discourse phenomena: anaphora, coherence & readability, lexical consistency, and discourse connectives.\n• Automatic evaluation methods and agreements with human judgments.\n• Benchmark evaluation and analysis of four context-aware systems contrasted with baselines, for German/Russian/Chinese-English language pairs." }, { "heading": "2 MACHINE TRANSLATION MODELS", "text": "Model Architectures. We first introduce the MT systems that we will be benchmarking on our testsets. We evaluate a selection of established models of various complexities (from simple sentence-level to complex context-aware models), taking care to include both source- and target-side context-aware models. We briefly describe the model architectures here:\n• S2S: A standard 6-layer base Transformer model (Vaswani et al., 2017) which translates sentences independently.\n• CONCAT: A 6-layer base Transformer whose input is two sentences (previous and current sentence) merged, with a special character as a separator (Tiedemann & Scherrer, 2017).\n• ANAPH: Voita et al. (2018b) incorporate source context by encoding it with a separate encoder, then fusing it into the last layer of a standard Transformer encoder using a gate. They claim that their model explicitly captures anaphora resolution.\n• TGTCON: To model target context, we implement a version of ANAPH with an extra multi-head attention operation in the decoder, computed between representations of the target sentence and the target context. The architecture is described in detail in the Appendix (A.5).\n• SAN: Zhang et al. (2018) use a source attention network: a separate Transformer encoder to encode source context, which is incorporated into the source encoder and target decoder using gates.\n• HAN: Miculicich et al. (2018) introduce a hierarchical attention network (HAN) into the Transformer framework to dynamically attend to the context at two levels: word and sentence. They achieve the highest BLEU when hierarchical attention is applied separately to both the encoder and decoder.\nDatasets and Training. The statistics for the datasets used to train the models are shown in Table 1. We tokenize the data using Jieba1 for Zh and the Moses scripts2 for the other languages, lowercase the text, and apply BPE encodings3 from Sennrich et al. (2016). We learn the BPE encodings with the command learn-joint-bpe-and-vocab -s 40000. The scores reported are BLEU-4, computed either through fairseq or NLTK (Wagner, 2010). Further details about dataset composition, training settings and hyperparameters can be found in the Appendix (A.7).\n1https://github.com/fxsjy/jieba 2https://www.statmt.org/moses/ 3https://github.com/rsennrich/subword-nmt/
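The preprocessing just described can be scripted end-to-end; the following sketch uses the subword-nmt CLI quoted above (file names are placeholders, and the apply step follows standard subword-nmt usage rather than anything stated in the paper):

```python
import subprocess

def learn_and_apply_bpe(l1="train.de", l2="train.en", merges=40000):
    # learn a joint BPE model and per-language vocabularies
    subprocess.run(
        ["subword-nmt", "learn-joint-bpe-and-vocab",
         "--input", l1, l2, "-s", str(merges), "-o", "bpe.codes",
         "--write-vocabulary", l1 + ".vocab", l2 + ".vocab"],
        check=True)
    # segment each side with the learned codes
    for path in (l1, l2):
        with open(path) as src, open(path + ".bpe", "w") as out:
            subprocess.run(
                ["subword-nmt", "apply-bpe", "-c", "bpe.codes",
                 "--vocabulary", path + ".vocab"],
                stdin=src, stdout=out, check=True)
```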
BLEU scores. The BLEU scores on the WMT-14 (De-En, Ru-En) and on the WMT-17 (Zh-En) testsets for each of the six trained models are shown in Table 2. We were unable to train HAN for Zh-En, as the model was not optimized for training with large datasets. In contrast to the increases in BLEU for selected language pairs and datasets reported in published work, incorporating context within elaborate context-dependent models decreases BLEU scores for the Zh-En and De-En tasks. However, the simple concatenation-based model CONCAT performs better than S2S for De-En and Ru-En; this shows that context knowledge is indeed helpful for improving BLEU." }, { "heading": "3 BENCHMARK TESTSETS", "text": "We construct our benchmark testsets based on four main principles:\nSelectivity. The testsets need to provide hard-to-translate contexts for MT models. We ensure this by looking at translation errors made by system submissions to campaigns like WMT and IWSLT.\nAuthenticity. The testsets cannot contain artificial or synthetic data, but only natural text. Rather than generating testset samples using heuristics, we extract hard contexts from existing human-generated source text.\nMultilinguality. The testset extraction method should be automatic and applicable to multiple languages. Our framework can be used to extract testsets for all source languages that are part of the considered MT campaigns.\nAdaptability. The testsets should be easy to update frequently, making them adaptable to improvements in newer systems. Since we automatically extract hard contexts based on MT errors, our testsets are easy to update; they adapt to errors in newer (and possibly more accurate) systems, making the tasks harder over time.\nWe use the system outputs released by WMT and IWSLT for the most recent years (Nadejde et al., 2016; Bojar et al., 2017; 2018; 2019; Cettolo et al., 2016; 2017) to build our testsets. For De-En, Ru-En and Zh-En, these consist of translation outputs from 68, 41 and 47 unique systems respectively. Since the data comes from a wide variety of systems, our testsets representatively aggregate different types of errors from several (arguably SOTA) models. Also note that the MT models we are benchmarking are not part of these system submissions to WMT, so there is no potential bias in the testsets." }, { "heading": "3.1 ANAPHORA", "text": "Anaphora are references to entities that occur elsewhere in a text; mishandling them can result in ungrammatical sentences or in the reader inferring the wrong antecedent, leading to misunderstanding of the text (Guillou, 2012). We focus specifically on the aspect of incorrect pronoun translations.\nTestset. To obtain hard contexts for pronoun translation, we look for source texts that lead to erroneous pronoun translations in system outputs. We align the system translations with their references, and collect the cases in which the translated pronouns do not match the reference.4\nOur anaphora testset is an updated version of the one proposed by Jwalapuram et al. (2019). We filter the system translations based on their list of cases where the translations can be considered wrong, rather than acceptable variants. The corresponding source texts are extracted as a test suite for pronoun translation. This gives us a pronoun benchmark testset of 2564 samples for De-En, 2368 for Ru-En and 1540 for Zh-En.\nEvaluation. Targeted evaluation of pronouns in MT has been challenging, as it is not fair to expect an exact match with the reference.
Evaluation methods like APT (Miculicich Werlen & Popescu-Belis, 2017) or AutoPRF (Hardmeier & Federico, 2010) are specific to language pairs or lists of pronouns, requiring extensive manual intervention. They have also been criticised for failing to produce evaluations that are consistent with human judgments (Guillou & Hardmeier, 2018).\nJwalapuram et al. (2019) propose a pairwise ranking model that scores “good\" pronoun translations (like in the reference) higher than “poor\" pronoun translations (like in the MT output) in context, and show that their model is good at making this distinction, along with having high agreement with human judgments. However, they do not rank multiple system translations against each other, which is our main goal; the absolute scores produced by their model are not useful, since it is trained in a pairwise fashion.\nWe devise a way to use their model to score and rank system translations in terms of pronouns. First, we re-train their model with more up-to-date WMT data (more details in Appendix A.1). We obtain a score for each benchmarked MT system (S2S, CONCAT, etc.) translation using the model, plus the corresponding reference sentence. We then normalize the score for each translated sentence by calculating the difference with the reference. To get an overall score for an MT system, the assigned scores are summed across all sentences in the testset:\n$\mathrm{Score}_{\mathrm{sys}} = \sum_i \rho_i(\mathrm{ref} \mid \theta) - \rho_i(\mathrm{sys} \mid \theta)$ (1)\nwhere $\rho_i(\cdot \mid \theta)$ denotes the score given to sentence i by the pronoun model θ. The systems are ranked based on this overall score, where a lower score indicates a better performance. We conduct a user study to confirm that the model rankings correspond with human judgments, obtaining an agreement of 0.91 between four participants who annotated 100 samples. Appendix A.1 gives details (e.g., interface, participants, agreement) about the study.\n4This process requires the pronouns in the target language to be separate morphemes, as in English." }
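Eq. 1 translates into a few lines of scoring code; the sketch below is ours, with rho standing in for the re-trained pronoun model's sentence scorer:

```python
def system_score(rho, references, system_outputs):
    """Eq. 1: sum over the testset of rho(ref) - rho(sys); lower is better."""
    return sum(rho(ref) - rho(sys)
               for ref, sys in zip(references, system_outputs))

def rank_systems(rho, references, outputs_by_system):
    scores = {name: system_score(rho, references, outs)
              for name, outs in outputs_by_system.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])  # best system first
```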
Pronouns can also be ambiguous in the source language; e.g., in German, the pronoun sie can mean both she and you, depending on capitalization, sentence structure, and context.\nOverall, we observe that the advantages of contextual models are not consistent across languages. They seem to use context well in Ru-En, but fail to outperform S2S or CONCAT in De-En, while Zh-En is inconclusive. The TGTCON model is consistently poor in this task. The partial success of the S2S model can be explained by its tendency to use it as the default pronoun, which statistically appears most often due to the lack of grammatical gender in English. More variability in pronouns occurs in the outputs of the context-aware models, but this does not contribute to a greater success." }, { "heading": "3.2 COHERENCE AND READABILITY", "text": "Pitler & Nenkova (2008) define coherence as the ease with which a text can be understood, and view readability as an equivalent property that indicates whether it is well-written.\nTestset. To test for coherence and readability, we try to find documents that can be considered hard to translate. We use the coherence model proposed by Moon et al. (2019), which is trained in a pairwise ranking fashion on WSJ articles, where a negative document is formed by shuffling the sentences of an original (positive) document. It models syntax, inter-sentence coherence relations and global topic structures. It has been shown in some studies that MT outputs are incoherent (Smith et al., 2015; 2016; Läubli et al., 2018). We thus re-train the coherence model with reference translations as positive and MT outputs as negative documents to better capture the coherence issues that are present in MT outputs (more details in Appendix A.2). We use older WMT submissions from 2011-2015 to ensure that the training data does not overlap with the benchmark testset data.\nThe coherence model takes a system translation (multi-sentential) and its reference as input and produces a score for each. Similar to Eq. 1, we consider the difference between the scores produced by the model for the reference and the translated text as the coherence score for the translated text.\nFor a given source text (document) in the WMT testsets, we obtain the coherence scores for each of the translations (i.e., WMT/IWSLT submissions) and average them. The source texts are sorted based on the average coherence scores of their translations. The texts that have lower average coherence scores can be considered to have been hard to translate coherently. We extract the source texts with scores below the median. These source texts form our benchmark testset for coherence and readability. This yields 272 documents (5,611 sentences) for De-En, 330 documents (4,427 sentences) for Ru-En and 210 documents (3,050 sentences) for Zh-En.\nEvaluation. Coherence and readability is also a hard task to evaluate, as it can be quite subjective. We resort to model-based evaluation here as well, to capture the different aspects of coherence in translations. We use our re-trained coherence model to score the benchmarked MT system translations and modify the scores for use in the same way as the anaphora evaluation (Eq. 1) to obtain a\nrelative ranking. As mentioned before (Sec. 3), the benchmarked MT systems do not overlap with the WMT system submissions, so there is no potential bias in evaluation since the testset extraction and the evaluation processes are independent. 
To confirm that the model does in fact produce rankings that humans would agree with, and to validate our model re-training, we conduct a user study, and obtain an agreement of 0.82 between three participants who annotated 100 samples. More details about the study can be found in the Appendix (A.2).\n3.2.1 RESULTS AND ANALYSIS\nTable 5: Coherence and Readability evaluation: Rankings of the different models for each language pair, obtained from our evaluation procedure.\nRk De-En Ru-En Zh-En\n1 CONCAT ANAPH ANAPH 2 SAN CONCAT CONCAT 3 S2S TGTCON S2S 4 ANAPH SAN TGTCON 5 TGTCON S2S SAN 6 HAN HAN -\nWe identified some frequent coherence and readability errors (more examples in Appendix A.8):\n(i) Inconsistency. As in (Somasundaran et al., 2014), we observe that inconsistent translation of words across sentences (in particular named entities) breaks the continuity of meaning.\n(ii) Translation error. Errors at various levels spanning from ungrammatical fragments to model hallucinations introduce phrases which bear little relation to the whole text (Smith et al., 2016):\nReference: There is huge applause for the Festival Orchestra, who appear on stage for the first time – in casual leisurewear in view of the high heat. Translation: There is great applause for the solicitude orchestra , which is on the stage for the first time, with the heat once again in the wake of an empty leisure clothing.\nFrom the rankings in Table 5, we can see that ANAPH is the most coherent model for Zh-En and Ru-En but performs poorly in De-En, similar to the pronoun benchmark. Generally CONCAT is better than complex contextual models in this task." }, { "heading": "3.3 LEXICAL CONSISTENCY", "text": "Lexical consistency in translation was first defined as ‘one translation per discourse’ by Carpuat (2009), i.e., the translation of a particular source word consistently to the same target word in that context. Guillou (2013) analyze different human-generated texts and conclude that human translators tend to maintain lexical consistency to support the important elements in a text. The consistent usage of lexical items in a discourse can be formalized by computing the lexical chains (Morris & Hirst, 1991; Lotfipour-Saedi, 1997).\nTestset. To extract a testset for lexical consistency evaluation, we first align the translations from the system submissions with their references. In order to get a reasonable lexical chain formed by a consistent translation, we consider translations of blocks of 3-5 sentences in which the (lemmatized) word we are considering occurs at least twice in the reference. For each such word, we check if the corresponding system translation produces the same (lemmatized) word at least once, but fewer than the number of times the word occurs in the reference. In such cases, the system translation has failed to be lexically consistent in translation (see Table 3 for an example). We limit the errors considered to nouns and adjectives. The source texts of these cases form the benchmark testset. This gives us a testset with 618 sets (i.e., text blocks) for De-En (3058 sentences), 732 sets for Ru-En (3592 sentences) and 961 sets for Zh-En (4683 sentences).\nEvaluation. For lexical consistency, we adopt a simple evaluation method. For each block of 3-5 sentences, we either have a consistent translation of the word in focus, or the translation is inconsistent. We simply count the instances of consistency and rank the systems based on the percentage. 
, { "heading": "3.3.1 RESULTS AND ANALYSIS", "text": "The rankings of the MT systems based on the percentage of samples with consistent translations on the lexical consistency benchmark testsets are given in Table 6 (first four columns), along with our findings from a manual analysis on a subset of the translations (last five columns).\n[Table 6, fragment recovered from the source:]\nRk | Model | %Con | %Full | Syn | Rel | Om | NE | Rd\n1 | ANAPH | 30.94 | 17.48 | 43 | 14 | 29 | 0 | 14\n2 | TGTCON | 29.84 | 16.02 | 14 | 0 | 29 | 43 | 14\n3 | S2S | 28.27 | 18.21 | 0 | 25 | 38 | 25 | 13\n4 | CONCAT | 28.17 | 16.65 | 0 | 66 | 0 | 16 | 33\n5 | SAN | 26.33 | 13.42 | 0 | 13 | 38 | 25 | 25\nOur manual inspection of the lexical chains shows the following tendencies:\n(i) Synonyms & related words. Words are exchanged for their synonyms (poll - survey), hypernyms/hyponyms (ambulance - car) or related concepts (wine - vineyard).\n(ii) Named entities. Models tend to distort proper names and translate them inconsistently. For example, the original name Füchtorf (the name of a town) gets translated to feeding-community.\n(iii) Omissions. These occur when words are omitted altogether from the lexical chain.\nThe overall low quality of the Russian translations contributes to the prevalence of Random translations, and the necessity to transliterate named entities increases NE errors for both Ru-En and Zh-En. Here we see some complex contextual models performing well; TGTCON leads the board across De-En, Ru-En and Zh-En, with ANAPH performing similarly well for Ru-En and Zh-En. Generally, we should expect to see a consistent advantage for target-side context models, which should be able to “remember” their own translation of a word from previous sentences; however, this only materializes for TGTCON and not for HAN." }, { "heading": "3.4 DISCOURSE CONNECTIVES", "text": "Discourse connectives are used to link the contents of texts together by signaling coherence relations that are essential to the understanding of the texts (Prasad et al., 2014). Failing to translate a discourse connective correctly can result in texts that are hard to understand or ungrammatical. Finding errors in discourse connective translations can be quite tricky, since there are often many acceptable variants. To mitigate confusion, we limit the errors we consider in discourse connectives to the setting where the reference contains a connective but the translations fail to produce any (see Table 3 for an example).\nUser Study. To confirm that missing connectives are problematic, we conduct a user study. Participants are shown two previous sentences from the reference for context, and asked to choose between two candidate options for the sentence that may follow. These options consist of the reference translation, which includes a connective, and an MT output that is missing the connective translation.
{ "heading": "3.4 DISCOURSE CONNECTIVES", "text": "Discourse connectives are used to link the contents of texts together by signaling coherence relations that are essential to the understanding of the texts (Prasad et al., 2014). Failing to translate a discourse connective correctly can result in texts that are hard to understand or ungrammatical. Finding errors in discourse connective translations can be quite tricky, since there are often many acceptable variants. To mitigate confusion, we limit the errors we consider in discourse connectives to the setting where the reference contains a connective but the translations fail to produce any (see Table 3 for an example).\nUser Study. To confirm that missing connectives are problematic, we conduct a user study. Participants are shown two previous sentences from the reference for context, and asked to choose between two candidate options for the sentence that may follow. These options consist of the reference translation which includes a connective, and an MT output that is missing the connective translation. Participants are asked to choose the sentence which more accurately conveys the intended meaning. See Figure 4b in Appendix A.4 for an example interface.\nWe obtain an agreement of 0.82 between two participants who annotated 200 samples, that translations with connectives are preferred. If the MT outputs with missing connectives were structured in such a way as to have implicit discourse relations, the agreements that favoured the references should be significantly lower. However, the strong agreements favouring the reference with the connective indicate that the missing connectives in MT outputs are indeed an issue. More details about the study can be found in the Appendix (A.4).\nTestset. It would not be appropriate to simply extract connectives using a list of candidates, since those words may not always act in the capacity of a discourse connective. In order to identify the discourse connectives, we build a simple explicit connective classifier (a neural model) using annotated data from the Penn Discourse Treebank or PDTB (Prasad et al., 2018) (details in Appendix A.4). The classifier achieves an average cross-validation F1 score of 93.92 across the 25 sections of PDTBv3, proving that it generalizes well.\nAfter identifying the explicit connectives in the reference translations, we align them with the corresponding system translations and extract the source texts of cases with missing connective translations. We only use the classifier on the reference text, but consider all possible candidates in the system translations to give them the benefit of the doubt. This gives us a discourse connective benchmark testset with 172 samples for De-En, 147 for Ru-En and 362 for Zh-En.\nEvaluation. There has been some work on semi-automatic evaluation of translated discourse connectives (Meyer et al., 2012; Hajlaoui & Popescu-Belis, 2013); however, it is limited to only En-Fr, based on a dictionary list of equivalent connectives, and requires using potentially noisy alignments and other heuristics. In the interest of evaluation simplicity, we expect the model to produce the same connective as the reference. Since the nature of the challenge is that connectives tend to be omitted altogether, we report both the accuracy of connective translations with respect to the reference, and the percentage of cases where any candidate connective is produced." }, { "heading": "", "text": "1 ANAPH 49.42 75.58 75 25 0\n2 SAN 48.25 72.67 67 33 0\n3 TGTCON 48.25 72.09 40 53 6\n4 S2S 47.67 73.84 76 24 0\n\n1 ANAPH 40.81 68.70 63 30 7\n2 S2S 37.41 73.47 59 28 12\n3 TGTCON 36.05 63.95 73 19 8\n4 SAN 35.37 60.54 62 28 9" }, { "heading": "3.4.1 RESULTS AND ANALYSIS", "text": "The rankings of MT systems based on their accuracy of connective translations are given in Table 7, along with our findings from a manual analysis on a subset of the translations. In benchmark outputs, we observed mostly omissions of connectives (the connective disappears in the translation), synonymous translations (e.g., Naldo is also a great athlete on the bench - Naldo’s “great sport” on the bank, too.), and mistranslations. More examples can be found in the Appendix (A.8).
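The evaluation described above reduces to two counts per testset; a small illustrative sketch follows (the identifiers and the candidate-list input are our assumptions, not the authors' code).

```python
def connective_scores(samples, candidate_connectives):
    """samples: (ref_connective, sys_tokens) pairs for the benchmark.
    Returns (accuracy of producing the reference connective,
             percentage of outputs containing any candidate connective)."""
    exact = any_conn = 0
    cands = {c.lower() for c in candidate_connectives}
    for ref_conn, sys_tokens in samples:
        toks = [t.lower() for t in sys_tokens]
        if ref_conn.lower() in toks:
            exact += 1  # same connective as the reference
        if any(t in cands for t in toks):
            any_conn += 1  # any connective at all was produced
    n = len(samples)
    return 100.0 * exact / n, 100.0 * any_conn / n
```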
The ranking shows that the S2S model performs well for Ru-En and Zh-En but not for De-En. ANAPH continues its high performance in Ru-En and this time also De-En, doing poorly for Zh-En, while HAN is consistently poor with a lot of omissions." }, { "heading": "4 DISCUSSION", "text": "Our benchmarking re-emphasizes the gap between BLEU scores and translation quality at the discourse level. The overall BLEU scores for De-En and Ru-En are higher than the BLEU scores for Zh-En; however, we see that Zh-En models have higher accuracies in the discourse connective task, and also outperform Ru-En in lexical consistency. Similarly, for Ru-En, both SAN and HAN have higher BLEU scores than the S2S and CONCAT models, but are unable to outperform these simpler models consistently in the discourse tasks, often ranking last.\nWe also reveal a gap in performance consistency across language pairs. Models may be tuned for a particular language pair, such as ANAPH trained for En-Ru (Voita et al., 2018a). For the same language pair (Ru-En), we show results consistent with what is reported; the model ranks first or second for all phenomena. However, it is not consistently successful in other languages, e.g., ranking close to the bottom for almost all cases in De-En. In general, our findings match the conclusions from Kim et al. (2019) regarding the lack of satisfactory performance gains in context-aware models.\nAlthough our testsets and evaluation procedures have their limitations, like only checking for missing connectives or being unable to detect consistently translated synonyms of reference translations, they are a first step toward a standardized, comprehensive evaluation framework for MT models that spans multiple languages. They are useful for measuring basic model proficiency and performance consistency, and for discovering MT deficiencies. Discourse-aware models have been advocated for improving MT (Sennrich, 2018); as more models are proposed, our framework will be a valuable resource that provides a better picture of model capabilities. With advances in NMT models and also in evaluation models for complex phenomena, harder challenges can be added and evaluated." }, { "heading": "5 GENERALIZABILITY TO OTHER LANGUAGES", "text": "Procedures used to create our testsets can be generalized to create testsets in other languages. We briefly describe the possibilities here:\n• Anaphora: The pronouns need to be separate morphemes (and not attached to verbs etc.). If there are several equivalent pronoun translations, a list may be needed so they can be excluded from being considered translation errors; e.g., Miculicich Werlen & Popescu-Belis (2017) has such a list for French; a list can also be collected through user studies as in Jwalapuram et al. (2019).\n• Coherence & Readability: The coherence model (Moon et al., 2019) used to find poorly translated texts was re-trained on reference vs. MT outputs. It is also possible to do this for other languages for which WMT (or IWSLT) system outputs are available. The coherence model from Moon et al. (2019) is an end-to-end neural model that does not rely on any language-specific features, and thus can be trained on any target language. However, language-specific or multilingual coherence models could also be used since Moon et al.
(2019) primarily train and test their model on English (WSJ) data.\n• Lexical Consistency: A lemmatizer was used to reduce common suffixes for detecting lexical consistency (e.g., “box” and “boxes” should not be detected as inconsistent words), so a similar tool will be needed for any other target language; e.g., CLTK (Johnson et al., 2014–2020) provides a lemmatizer for several languages.\n• Discourse Connectives: Discourse connectives also need to be separate morphemes. We built a classifier trained on PDTB data to identify connectives since they are ambiguous in English. Datasets analogous to PDTB in other languages, e.g., PCC (German) (Bourgonje & Stede, 2020) and CDTB (Chinese) (Zhou & Xue, 2015), are available." }, { "heading": "6 CONCLUSIONS", "text": "We presented the first-of-their-kind discourse-phenomena-based benchmarking testsets, called the DiP tests, designed to be challenging for NMT systems. Our main goal is to emphasize the need for comprehensive MT evaluations across phenomena and language pairs, which we do by highlighting the performance inconsistencies of complex context-aware models. We will release the discourse benchmark testsets and evaluation frameworks for public use, and also propose to accept translations from MT systems to maintain a leaderboard for the described phenomena." }, { "heading": "", "text": "1 CONCAT 31.96 112.583\n2 S2S 31.65 113.783\n3 SAN 29.32 117.838\n4 HAN 29.69 118.067\n\n1 HAN 25.11 160.411\n2 ANAPH 27.66 164.603\n3 CONCAT 24.56 168.092\n4 SAN 24.34 176.143" }, { "heading": "A.2 COHERENCE", "text": "Re-trained model. We re-train the pairwise coherence model in Moon et al. (2019) to suit the MT setting, with reference translations as the positive documents and the MT outputs as the negative documents. The results are shown in Table 10.\nUser study. Figure 3 shows our user study interface. The participants are shown three candidate English translations of the same source text, and asked to rank the texts on how coherent and readable they are. To optimize annotation time, participants are only shown the first four sentences of the document; they annotate 100 such samples. We also include the reference as one of the candidates for control, and to confirm that we are justified in re-training the evaluation model to assign a higher score to the reference. Three participants took part in the study. Our control experiment results in an AC1 agreement of 0.84.\nThe agreement between the human judgements and the rankings obtained by using the original coherence model, trained on permuted WSJ articles (also news domain, like the WMT data), is 0.784. The fact that the original model performs no worse than 0.784 shows that there are definitely coherence issues in such (MT output vs reference) data that are being picked up.\nThe agreement between the human judgements and the re-trained coherence evaluation model’s rankings is 0.82. Our re-trained model is therefore also learning useful task-specific features in addition to general coherence features. The high agreement validates our proposal to use the modified coherence model to evaluate the benchmarked MT systems.\n1 CONCAT 31.96 5038.057\n2 SAN 29.32 5059.811\n3 S2S 31.65 5120.633\n4 ANAPH 29.94 5166.320
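For clarity, the scoring scheme used for these rankings (summed reference-minus-translation score differences, as described under Results below) can be sketched as follows; `score` stands in for the re-trained coherence model, and all names are illustrative.

```python
def rank_by_coherence(score, testsets):
    """score(document) -> float is the re-trained pairwise coherence model.

    For each system we sum score(reference) - score(translation) over its
    testset samples; a lower total means the system's outputs score closer
    to the references, so systems are ranked by ascending total.
    """
    totals = {
        system: sum(score(ref) - score(hyp) for ref, hyp in samples)
        for system, samples in testsets.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1])
```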
Results. The total assigned scores (difference between reference and translation scores) obtained for each system, after summing over the samples in the respective testsets, are given in Table 11. The models are ranked based on these scores from lowest score (best performing) to highest score (worst performing).\nA.3 LEXICAL CONSISTENCY\nDataset extraction. One possible issue with our method could be that reference translations may contain forced consistency, i.e., human translators introduce consistency to make the text more readable, despite inconsistent word usage in the source. It may not be reasonable to expect consistency in a system translation if there is none in the source. To confirm, we conducted a manual analysis where we compared the lexical chains of nouns and adjectives in Russian source texts against the lexical chains in the English reference. We find that in a majority (77%) of the cases, the lexical chains in the source are reflected accurately in the reference, and there are relatively few cases where humans force consistency.\nEvaluation. It is possible that the word used in the system translation is not the same as the word in the reference, but the MT output is still consistent (e.g., a synonym used consistently). We tried to use alignments coupled with similarity obtained from ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) embeddings to evaluate such cases to avoid unfairly penalizing the system translations, but we found this to be noisy and unreliable. Thus, we check consistency against the reference; if at least one translation matches the reference but the translated word occurs fewer times than it does in the reference, the translation is considered inconsistent. For samples where there is no common translation between the system output and the reference, we cannot be sure if it is consistent or not, so we exclude it when calculating the primary ranking percentage. We therefore report both the percentage of consistent samples as a fraction of consistent + inconsistent samples, and the percentage of consistent samples as a fraction of the full dataset for comparison purposes.\nA.4 DISCOURSE CONNECTIVES\nConnective Classification model. We build an explicit connective classifier to identify candidates that are acting in the capacity of a discourse connective. The model consists of an LSTM layer (Hochreiter & Schmidhuber, 1997) followed by a linear layer for binary classification, initialized by ELMo embeddings (Peters et al., 2018). We use annotated data from the Penn Discourse Treebank (PDTBv3) (Prasad et al., 2018) and conduct cross-validation experiments across all 25 sections. The average Precision, Recall and F1 scores from the cross-validation experiments are reported in Table 12. Our classifier achieves an average cross-validation F1 of 93.92, which shows that it generalizes very well. The high precision also provides certainty that the model is classifying discourse connectives reliably.
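A minimal sketch of a classifier with this shape is given below; we use a plain embedding layer as a stand-in for ELMo, and the hyperparameters and names are illustrative rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ConnectiveClassifier(nn.Module):
    """Sketch of the explicit connective classifier described in A.4:
    contextual token embeddings feed an LSTM, and a linear layer makes a
    binary decision for the candidate connective token."""

    def __init__(self, vocab_size, emb_dim=256, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # stand-in for ELMo
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # connective or not

    def forward(self, token_ids, candidate_pos):
        # token_ids: (batch, seq_len); candidate_pos: (batch,) index of the
        # candidate connective token within each sentence.
        states, _ = self.lstm(self.embed(token_ids))
        batch_idx = torch.arange(token_ids.size(0))
        return self.classifier(states[batch_idx, candidate_pos])
```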
User Study. To confirm that the presence of the connective conveys information and contributes to the readability and understanding of the text, we conducted two user studies. For the first study, as presented in Figure 4a, participants are shown two previous sentences from the reference for context, and asked to choose between two candidate options for the sentence that may follow. These options consist of the reference translation with the connective highlighted, and the same text with the connective deleted.\nParticipants are asked to choose the sentence which more accurately conveys the intended meaning. There were two participants who annotated 200 such samples. The reference with the connective was chosen over the version without the connective with an AC1 agreement of 0.98. Table 13 shows the connective-wise breakdown.\nIn the second study, the participants were shown the reference along with the system translation that was missing the connective (Figure 4b). In this study, the setup has no artificially constructed data; the idea is to check if there is a possibility that the system translation is structured in such a way as to require no connective. However, the AC1 agreement for preferring the reference was 0.82 (2 annotators for 200 samples; different annotators from the first study) for this study as well, which is still quite high. Table 13 has the connective-wise breakdown; here we see that the results are slightly different for certain connectives, but overall the strong preference for the reference with the connective is retained. Our assumption that connectives must be translated is validated through both studies.\nNote that participants may not prefer the version without the connective due to loss of grammaticality or loss of sense information. Although the two are indistinguishable in this setting, we argue that since both affect translation quality, it is reasonable to expect a translation for the connectives.\nNote that for both studies, participants were also given options to choose ‘Neither’ in case they didn’t prefer either choice, or ‘Invalid’ in case there was an issue with the data itself (e.g., transliteration issues, etc.); data that was marked as such was excluded from further consideration.\nA.5 TGTCON MODEL ARCHITECTURE\nHere we describe the model architecture for TGTCON. The decoder introduced in Vaswani et al. (2017) is used for encoding the target sentence, and an encoder adopted from the original encoder of the Transformer architecture is used to encode the context of the target side. Each part, the target decoder and the context encoder, is repeated 6 times (N=6). In the last layer, two multi-head attention operations are performed, followed by layer normalization (similar to Vaswani et al. (2017)). The first multi-head attention is the self-attention on the target sentence, whereas the second multi-head attention is a cross-attention between the representation of the target and the target context. These two representations are fused by a gated linear sum, which decides the amount of information from each representation that is to be passed on. Figure 5 shows the model architecture in detail.
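A hedged sketch of such a gated linear sum is shown below; the exact parameterization of the gate is our assumption, since the text only states that a gate decides how much of each representation is passed on.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of the gated linear sum in A.5 that fuses the target
    self-attention output with the target-context cross-attention output."""

    def __init__(self, d_model=512):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, self_attn_out, ctx_attn_out):
        # g in (0, 1) per dimension, computed from both representations.
        g = torch.sigmoid(
            self.gate(torch.cat([self_attn_out, ctx_attn_out], dim=-1)))
        return g * self_attn_out + (1.0 - g) * ctx_attn_out
```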
A.6 MODEL TRAINING\nTraining Data. It is essential to provide the models with training data that contains adequate amounts of discourse phenomena, if we expect them to learn such phenomena. To construct such datasets, we first manually investigated the standard WMT corpora consisting of UN (Ziemski et al., 2016), Europarl (Tiedemann, 2012) and News Commentary, as well as the standard IWSLT dataset (Cettolo et al., 2012). We analyzed 100 randomly selected pairs of consecutive English sentences from each dataset, where the first sentence was treated as the context. Table 14 shows the percentage of cases containing the respective discourse phenomena.\nIn accordance with intuition, data sources based on narrative texts such as IWSLT exhibit increased amounts of discourse phenomena compared to strictly formal texts such as the UN corpus. On the other hand, the UN corpus consists of largely unrelated sentences, where only lexical consistency is well-represented due to the usage of very specific and strict naming of political concepts. We decided to exclude the UN corpus and combine the other datasets that have more discourse phenomena for De-En and Ru-En; for Zh-En, we keep UN and add WikiTitles to bolster the BLEU scores. Our training dataset is therefore a combination of the Europarl, IWSLT and News Commentary datasets, plus UN and WikiTitles for Zh-En. The development set is a combination of WMT-2016 and older WMT data (excluding 2014). Note that our validation set does not have any data in common with the benchmark testsets. We test on WMT-2014 (De/Ru-En)/WMT-2017 (Zh-En) data. We tokenize the data using Jieba for Zh and the Moses software5 for the other languages, lowercase the text, and apply BPE encodings6 from Sennrich et al. (2016). We learn the BPE encodings with the command learn-joint-bpe-and-vocab -s 40000.\nTraining. For the context-aware models, we use the implementations from the official author repositories. As the official code for ANAPH (Voita et al., 2018b) has not been released, we implement the model in the Fairseq framework (Ott et al., 2019).7 For training the S2S and CONCAT models, we used the Transformer implementation from fairseq. We confirmed with the authors of HAN and SAN that our configurations were correct, and we took the best configuration directly from the ANAPH paper.\nA.7 MODEL PARAMETERS\nParameters used to train HAN are displayed in Table 15, and parameters for the S2S, CONCAT, ANAPH, and SAN models are displayed in Table 16.\nA.8 ERROR EXAMPLES\nExamples for the different types of errors encountered across the tasks are given in Table 17.\n5https://www.statmt.org/moses/ 6https://github.com/rsennrich/subword-nmt/ 7https://github.com/pytorch/fairseq\nTable 15: Configuration parameters for training HAN model, taken from the authors’ repository https://github.com/idiap/HAN_NMT/\nStep 1: sentence-level NMT\n-encoder_type transformer\n-decoder_type transformer\n-enc_layers 6\n-dec_layers 6\n-label_smoothing 0.1\n-rnn_size 512\n-position_encoding\n-dropout 0.1\n-batch_size 4096\n-start_decay_at 20\n-epochs 20\n-max_generator_batches 16\n-batch_type tokens\n-normalization tokens\n-accum_count 4\n-optim adam\n-adam_beta2 0.998\n-decay_method noam\n-warmup_steps 8000\n-learning_rate 2\n-max_grad_norm 0\n-param_init 0\n-param_init_glorot\n-train_part sentences\nStep 2: HAN encoder (others - see Step 1)\n-batch_size 1024\n-start_decay_at 2\n-epochs 10\n-max_generator_batches 32\n-train_part all\n-context_type HAN_enc\n-context_size 3\nStep 3: HAN joint (others - see Step 1)\n-batch_size 1024\n-start_decay_at 2\n-epochs 10\n-max_generator_batches 32\n-train_part all\n-context_type HAN_join\n-context_size 3\n-train_from [HAN_enc_model]" } ]
2020
DIP BENCHMARK TESTS: EVALUATION BENCH-
SP:b2fc6ca65add04fb32bcf7622d9098de9004ca2b
[ "The authors present a framework that uses a combination of VAE and GAN to recover private user images using Side channel analysis of memory access . A VAE-LP model first reconstructs a coarse image from side channel information which is reshaped and processed using a convolutional network. The output of the VAE-LP model is refined using a GAN to add fine details. Compelling results are demonstrated for recovery of private information and state of art metrics are reported. " ]
System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to infer program secrets based on observed side channel logs. Given the ever-growing adoption of machine learning as a service (MLaaS), image analysis software on cloud platforms has been exploited by reconstructing private user images from system side channels. Nevertheless, to date, SCA is still highly challenging, requiring technical knowledge of victim software’s internal operations. For existing SCA attacks, comprehending such internal operations requires heavyweight program analysis or manual efforts. This research proposes an attack framework to reconstruct private user images processed by media software via system side channels. The framework forms an effective workflow by incorporating convolutional networks, variational autoencoders, and generative adversarial networks. Our evaluation of two popular side channels shows that the reconstructed images consistently match user inputs, making privacy leakage attacks more practical. We also show surprising results that even one-bit data read/write pattern side channels, which are deemed minimally informative, can be used to reconstruct quality images using our framework.
[ { "affiliations": [], "name": "Yuanyuan Yuan" }, { "affiliations": [], "name": "Shuai Wang" }, { "affiliations": [], "name": "Junping Zhang" } ]
[ { "authors": [ "Onur Aciicmez", "Cetin Kaya Koc" ], "title": "Trace-driven cache attacks on AES", "venue": "In ICICS,", "year": 2006 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Eleonora Cagli", "Cécile Dumas", "Emmanuel Prouff" ], "title": "Convolutional neural networks with data augmentation against jitter-based countermeasures", "venue": "In International Conference on Cryptographic Hardware and Embedded Systems,", "year": 2017 }, { "authors": [ "R.C. Chiang", "S. Rajasekaran", "N. Zhang", "H.H. Huang" ], "title": "Swiper: Exploiting virtual machine vulnerability in third-party clouds with competition for i/o resources", "venue": "IEEE Transactions on Parallel and Distributed Systems,", "year": 2015 }, { "authors": [ "Alex Clark. Pillow" ], "title": "Python image analysis library, 2020", "venue": "URL https://pillow. readthedocs.io/en/stable/", "year": 2020 }, { "authors": [ "Bart Coppens", "Ingrid Verbauwhede", "Koen De Bosschere", "Bjorn De Sutter" ], "title": "Practical mitigations for timing-based side-channel attacks on modern x86 processors", "venue": "In IEEE SP,", "year": 2009 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Xiaowan Dong", "Zhuojia Shen", "John Criswell", "Alan L Cox", "Sandhya Dwarkadas" ], "title": "Shielding software from privileged side-channel attacks", "venue": "In 27th USENIX Security Symposium,", "year": 2018 }, { "authors": [ "Goran Doychev", "Dominik Feld", "Boris Kopf", "Laurent Mauborgne", "Jan Reineke" ], "title": "CacheAudit: A tool for the static analysis of cache side channels", "venue": "In USENIX Sec.,", "year": 2013 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In Theory of cryptography conference,", "year": 2006 }, { "authors": [ "Daniel Gruss", "Julian Lettner", "Felix Schuster", "Olya Ohrimenko", "Istvan Haller", "Manuel Costa" ], "title": "Strong and efficient cache side-channel protection using hardware transactional memory", "venue": "In USENIX Sec.,", "year": 2017 }, { "authors": [ "David Gullasch", "Endre Bangerter", "Stephan Krenn" ], "title": "Cache games—bringing access-based cache attacks on AES to practice", "venue": "In Proc. IEEE Symp. on Security and Privacy (S&P),", "year": 2011 }, { "authors": [ "Yong Guo", "Qi Chen", "Jian Chen", "Qingyao Wu", "Qinfeng Shi", "Mingkui Tan" ], "title": "Auto-embedding generative adversarial networks for high resolution image synthesis", "venue": "IEEE Transactions on Multimedia,", "year": 2019 }, { "authors": [ "Marcus Hähnel", "Weidong Cui", "Marcus Peinado" ], "title": "High-resolution side channels for untrusted operating systems", "venue": "USENIX Annual Technical Conference,", "year": 2017 }, { "authors": [ "Y. Han", "J. Chan", "T. Alpcan", "C. 
Leckie" ], "title": "Using virtual machine allocation policies to defend against co-resident attacks in cloud computing", "venue": "IEEE Transactions on Dependable and Secure Computing,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Benjamin Hettwer", "Stefan Gehrer", "Tim Güneysu" ], "title": "Profiled power analysis attacks using convolutional neural networks with domain knowledge", "venue": "In International Conference on Selected Areas in Cryptography,", "year": 2018 }, { "authors": [ "Benjamin Hettwer", "Tobias Horn", "Stefan Gehrer", "Tim Güneysu" ], "title": "Encoding power traces as images for efficient side-channel analysis", "venue": "arXiv preprint arXiv:2004.11015,", "year": 2020 }, { "authors": [ "Annelie Heuser", "Michael Zohner" ], "title": "Intelligent machine homicide", "venue": "In International Workshop on Constructive Side-Channel Analysis and Secure Design,", "year": 2012 }, { "authors": [ "Sanghyun Hong", "Michael Davinroy", "Yiǧitcan Kaya", "Stuart Nevans Locke", "Ian Rackow", "Kevin Kulda", "Dana Dachman-Soled", "Tudor Dumitraş" ], "title": "Security analysis of deep neural networks operating in the presence of cache side-channel attacks", "venue": "arXiv preprint arXiv:1810.03487,", "year": 2018 }, { "authors": [ "Sanghyun Hong", "Michael Davinroy", "Yiǧitcan Kaya", "Dana Dachman-Soled", "Tudor Dumitraş" ], "title": "How to 0wn the nas in your spare time", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Seunghoon Hong", "Dingdong Yang", "Jongwook Choi", "Honglak Lee" ], "title": "Inferring semantic layout for hierarchical text-to-image synthesis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Gabriel Hospodar", "Benedikt Gierlichs", "Elke De Mulder", "Ingrid Verbauwhede", "Joos Vandewalle" ], "title": "Machine learning in side-channel analysis: a first study", "venue": "Journal of Cryptographic Engineering,", "year": 2011 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Jaehun Kim", "Stjepan Picek", "Annelie Heuser", "Shivam Bhasin", "Alan Hanjalic" ], "title": "Make some noise. unleashing the power of convolutional neural networks for profiled side-channel analysis", "venue": "IACR Transactions on Cryptographic Hardware and Embedded Systems,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Ivan Laptev", "Tony Lindeberg" ], "title": "Velocity adaptation of space-time interest points", "venue": "In Proceedings of the 17th International Conference on Pattern Recognition,", "year": 2004 }, { "authors": [ "F. Liu", "Q. Ge", "Y. Yarom", "F. Mckeen", "C. Rozas", "G. Heiser", "R.B. Lee" ], "title": "Catalyst: Defeating last-level cache side channel attacks in cloud computing", "venue": null, "year": 2016 }, { "authors": [ "Fangfei Liu", "Y. Yarom", "Qian Ge", "G. Heiser", "R.B. 
Lee" ], "title": "Last-level cache side-channel attacks are practical", "venue": "In 2015 IEEE Symposium on Security and Privacy,", "year": 2015 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Chi-Keung Luk", "Robert Cohn", "Robert Muth", "Harish Patil", "Artur Klauser", "Geoff Lowney", "Steven Wallace", "Vijay Janapa Reddi", "Kim Hazelwood" ], "title": "Pin: building customized program analysis tools with dynamic instrumentation", "venue": "In Proceedings of the 2005 ACM SIGPLAN conference on Programming language design and implementation (PLDI’05),", "year": 2005 }, { "authors": [ "Houssem Maghrebi", "Thibault Portigliatti", "Emmanuel Prouff" ], "title": "Breaking cryptographic implementations using deep learning techniques", "venue": "In International Conference on Security, Privacy, and Applied Cryptography Engineering,", "year": 2016 }, { "authors": [ "Tae-Hyun Oh", "Tali Dekel", "Changil Kim", "Inbar Mosseri", "William T Freeman", "Michael Rubinstein", "Wojciech Matusik" ], "title": "Speech2face: Learning the face behind a voice", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yossef Oren", "Vasileios P Kemerlis", "Simha Sethumadhavan", "Angelos D Keromytis" ], "title": "The spy in the sandbox: Practical cache attacks in javascript and their implications", "venue": "In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security,", "year": 2015 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In Proceedings of 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Himanshu Raj", "Ripal Nathuji", "Abhishek Singh", "Paul England" ], "title": "Resource management for isolation enhanced cloud services", "venue": "In CCSW,", "year": 2009 }, { "authors": [ "Scott Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Michael Schwarz", "Moritz Lipp", "Daniel Gruss", "Samuel Weiser", "Clémentine Maurice", "Raphael Spreitzer", "Stefan Mangard" ], "title": "KeyDrown: Eliminating software-based keystroke timing side-channel attacks", "venue": null, "year": 2018 }, { "authors": [ "Shuai Wang", "Pei Wang", "Xiao Liu", "Danfeng Zhang", "Dinghao Wu" ], "title": "CacheD: Identifying cachebased timing channels in production software", "venue": "In 26th USENIX Security Symposium,", "year": 2017 }, { "authors": [ "Shuai Wang", "Yuyan Bao", "Xiao Liu", "Pei Wang", "Danfeng Zhang", "Dinghao Wu" ], "title": "Identifying cachebased side channels through secret-augmented abstract interpretation", "venue": "In 28th USENIX Security Symposium,", "year": 2019 }, { "authors": [ "Zhenghong Wang", "Ruby B. Lee" ], "title": "Covert and side channels due to processor architecture", "venue": "In ACSAC,", "year": 2006 }, { "authors": [ "Zhenghong Wang", "Ruby B. 
Lee" ], "title": "New cache designs for thwarting software cache-based side channel attacks", "venue": "In ISCA,", "year": 2007 }, { "authors": [ "Zhenghong Wang", "Ruby B Lee" ], "title": "A novel cache architecture with enhanced performance and security", "venue": "In MICRO,", "year": 2008 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "Yandong Wen", "Bhiksha Raj", "Rita Singh" ], "title": "Face reconstruction from voice using generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zhenyu Wu", "Zhang Xu", "Haining Wang" ], "title": "Whispers in the hyper-space: High-speed covert channel attacks in the cloud", "venue": "In Presented as part of the 21st USENIX Security Symposium (USENIX Security", "year": 2012 }, { "authors": [ "Yuanzhong Xu", "Weidong Cui", "Marcus Peinado" ], "title": "Controlled-channel attacks: Deterministic side channels for untrusted operating systems", "venue": "In 2015 IEEE Symposium on Security and Privacy,", "year": 2015 }, { "authors": [ "Yuval Yarom", "Katrina Falkner" ], "title": "FLUSH+RELOAD: A high resolution, low noise, L3 cache side-channel attack", "venue": "In Proceedings of the 23rd USENIX Conference on Security Symposium,", "year": 2014 }, { "authors": [ "Yuval Yarom", "Daniel Genkin", "Nadia Heninger" ], "title": "Cachebleed: a timing attack on openssl constanttime rsa", "venue": "Journal of Cryptographic Engineering,", "year": 2017 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Side channel analysis (SCA) recovers program secrets based on the victim program’s nonfunctional characteristics (e.g., its execution time) that depend on the values of program secrets. SCA constitutes a major threat in today’s system and hardware security landscape. System side channels, such as CPU cache accesses and operating system (OS) page table accesses made by the victim software, are widely used to recover program secrets under various real-world scenarios (Gullasch et al., 2011; Aciicmez & Koc, 2006; Wu et al., 2012; Hähnel et al., 2017; Xu et al., 2015; Yarom et al., 2017).\nTo conduct SCA, attackers first conduct an online phase to log a trace of side channel data points made by the victim software (e.g., its accessed CPU cache lines). Then, attackers launch an offline phase to analyze the logged trace and infer secrets (e.g., private inputs). Enabled by advances in system research, the online phase can be performed smoothly (Xu et al., 2015). Nevertheless, the offline phase is challenging, requiring comprehension of victim software’s input-relevant operations and how such operations influence side channels. The influence is program-specific and obscure (see an example in Fig. 1). Even worse, side channel data points made by real-world software are usually highly noisy. For instance, executing libjpeg (libjpeg, 2020) to decompress one unknown JPEG image produces a trace of over 700K side channel data points, where only a small portion depends on the image content. Identifying such input-dependent data points from over 700K records is extremely difficult.\nLaunching SCA to recover images processed by media software constitutes a common threat in the era of cloud computing (Xu et al., 2015; Hähnel et al., 2017), especially when machine learning as a service (MLaaS) is substantially offered (e.g., for face recognition). When envisioning the high\n∗Corresponding Author\nrisk of violating user privacy, there is a demanding need to understand the adversarial capability of reconstructing private images with SCA. To date, the offline inference phase of existing SCA attacks requires lots of manual efforts with heuristics (Xu et al., 2015; Hähnel et al., 2017). While some preliminary studies explore to use AI models to infer secrets (Hospodar et al., 2011; Kim et al., 2019; Cagli et al., 2017; Hettwer et al., 2018), their approaches are primarily driven by classification, i.e., predicting whether a particular bit of crypto key is 0 or 1. In contrast, reconstructing user private images requires to synthesize and enhance images from a more holistic perspective.\nRecent advances in generative models, such as generative adversarial network (GAN) and variational autoencoder (VAE), have enabled a major thrust in image reconstruction, given subtle signals in even cross-modal settings, e.g., voice-to-face or text-to-image (Radford et al., 2016; Reed et al., 2016; Wen et al., 2019; Hong et al., 2018b). Inspired by this breakthrough, we propose an SCA framework using generative models. Given a trace of side channel data points made by image analysis software (e.g., libjpeg) when processing a user input, we reconstruct an image visually similar to the input. Each logged side channel trace, containing around a million records, is first encoded into a matrix and pre-processed by a convolutional neural network (CNN) for feature extraction. 
Then, a VAE network with a learned prior (referred to as VAE-LP) is employed to reconstruct an image with a holistic visual appearance. We further supplement VAE-LP with a GAN model to enhance the recovered image with vivid details. The GAN generator yields the final output.\nOur attack exploits media libraries, libjpeg (libjpeg, 2020) and uPNG (Middleditch, 2010), using two popular side channels, CPU cache line accesses and OS page table accesses. Our attack is independent of the underlying computing infrastructure (i.e., OS, hardware, image library implementation). We require enough side channel logs for training, an assumption consistently made by previous works (Heuser & Zohner, 2012; Maghrebi et al., 2016). While existing attacks particularly target libjpeg and leverage domain knowledge, system hacking, and manual efforts to infer pixel values (Xu et al., 2015; Hähnel et al., 2017), we show that images with many details can be reconstructed in an end-to-end manner. We also show the surprising result that, enabled by our framework, side channel traces composed of one-bit data read/write patterns, which prima facie seem minimally informative, suffice to recover images. We conduct qualitative and quantitative evaluations on specific and general datasets representing daily images that can violate privacy if leaked. The recovered images manifest consistent visual appearances with private inputs. The recovered images also exhibit high discriminability: each recovered image (e.g., a face) can be matched to its reference input among many candidates with high accuracy. In summary, we make the following contributions:\nAt the conceptual level, we present the first generative model-based SCA. Our novel approach learns how program inputs influence system side channels from historical side channel logs to reconstruct user private images automatically. We, for the first time, demonstrate surprisingly effective attacks toward even low-resolution side channels like one-bit data read/write access patterns.\nAt the technical level, we design an effective framework by incorporating various design principles to facilitate image reconstruction from side channels. Our framework pipelines 2D CNN, VAE-LP, and GAN models to systematically enhance the quality of generated images.\nAt the empirical level, our evaluations show that the proposed framework can generate images with vivid details that are closely similar to reference inputs. The reconstructed images show high discriminability, making privacy leakage attacks more practical.\nThis is the first paper to conduct SCA with generative models, revealing new SCA opportunities and unknown threats. Our code is at https://github.com/genSCA/genSCA." }, { "heading": "2 BACKGROUND", "text": "To formulate SCA, let the attacked program be P and its input domain be I. For a deterministic and terminating program P, the program execution can be modeled as a mapping P : I → E, where E represents program runtime behavior (e.g., memory access). As a common assumption (Hähnel et al., 2017), program inputs are private and profitable for attackers. Since different inputs i, i′ ∈ I can likely induce different e, e′ ∈ E, using input-dependent e ∈ E enables inferring i. Modern computer architectures have essentially eliminated the possibility for adversaries to log e ∈ E directly. Nevertheless, an attacker’s view on P can be modeled as a function view : E → O that maps E to side channel observations O.
Hence, the composition (view ◦ P) : I → O maps inputs to side channel data points that can be logged by attackers. The view indicates the attacker’s capability, and for typical system security scenarios, the view is formulated as view : E_mem → O_cache ∪ O_page, where E_mem denotes a trace of accessed memory locations when executing P with i, and O_cache and O_page represent CPU cache and OS page table side channels, respectively. Despite being unable to monitor E_mem, attackers can log accessed cache lines O_cache or page table entries O_page derived from E_mem. Attackers then infer E_mem and recover i. We now concretize the procedure by introducing how SCA is used to exploit cloud platforms in a two-step approach as follows: Online Phase to Record O. Consider a cloud environment in Fig. 1(a), where two users, one normal and one malicious, deploy two virtual machine (VM) instances on the host. Private images i ∈ I uploaded by users are processed by media library P within the left VM. Modern computer design, e.g., Intel SGX (Intel, 2014), guarantees that i ∈ I and the execution of P cannot be viewed from outside the VM. However, when processing i, P usually imposes a large volume of CPU cache and page table accesses, which, as shown in Fig. 1(a), can be recorded by the co-located malicious VM or the malicious host OS in a fully automated manner (Han et al., 2017; Chiang et al., 2015; Liu et al., 2015a; Xu et al., 2015; Hähnel et al., 2017). Offline Phase to Infer i. Once side channel traces o ∈ O are collected, an offline phase is conducted to infer (view ◦ P)⁻¹ : O → I and recover i. Fig. 1(b) presents a code sample where, depending on the value of input i, different memory locations (and cache lines) will be visited. Fig. 1(c) shows the corresponding trace of logged cache side channel records. To infer i, attackers eliminate the second record (since it is input-independent), and infer i as 1 according to the first record.\nAttackers aim to 1) pinpoint a subset of records o* ⊆ o that depend on i, and 2) recover the mapping from o* to i. However, real-world side channel traces (e.g., generated by uPNG) could contain over one million records, where only a tiny portion o* is input-dependent. Even worse, constructing the mapping between i and o* requires a deep understanding of program control flows (e.g., how i affects program execution and induces cache accesses in Fig. 1(b)). To date, these tasks require either manual effort (Xu et al., 2015; Hähnel et al., 2017) or formal analysis (Doychev et al., 2013; Wang et al., 2017; 2019), which is program-specific and error-prone with low scalability.\nExisting research tackles the offline phase challenge by proposing profiling-based SCA (Maghrebi et al., 2016; Hettwer et al., 2018; Kim et al., 2019), where models are trained to approximate (view ◦ P)⁻¹ : O → I. However, existing work focuses on predicting particular bits of crypto keys from succinct side channel traces, e.g., a few hundred records (Hettwer et al., 2020). In contrast, this is the first work to show that, by incorporating generative models, SCA can be conducted to exploit real-world media libraries and holistically reconstruct high-quality and discriminable images." }, { "heading": "3 THE PROPOSED FRAMEWORK", "text": "A common assumption shared by SCA (Heuser & Zohner, 2012; Hähnel et al., 2017; Xu et al., 2015) is that the attackers can profile the victim software locally or remotely with training inputs and collect corresponding side channel traces.
We train a model to learn how different inputs can influence side channel traces. Then, given a side channel trace logged when processing an unknown image, our framework reconstructs an image that is visually similar to the unknown input.\nOur framework has two pipelined modules (see Fig. 2). Given a side channel trace T_i corresponding to processing an image i, we first encode T_i into a matrix. The encoded matrix is fed to the VAE-LP module to generate an image î_trace, and we further use a GAN to denoise î_trace and yield the final output î_GAN. We now elaborate on each module. More details are given in Appendix B." }, { "heading": "3.1 SIDE CHANNEL TRACE ENCODING", "text": "Real-world software is highly complex, and processing one image could generate a huge number of records, where only a few records are secret-dependent. For previous attacks, processing overlong traces is very difficult and requires considerable domain-specific knowledge, expertise, and even manual effort to locate and remove irrelevant records (Xu et al., 2015; Hähnel et al., 2017).\nDespite the general difficulty of processing overlong traces (each trace contains about 700K to 1.3M data points in our evaluation), we note that adjacent records on a side channel trace are often derived from the same or related modules (e.g., functions) of the victim software. Hence, we “fold” each side channel trace into an N × N × K matrix to approximate spatial locality, which can be further exploited by CNNs. A trace is first divided into K segments, where every N adjacent points in a segment are put into one row, forming N rows in total; we apply zero padding where necessary. CNNs are deployed in the trace encoder of VAE-LP to process the encoded matrices. Overall, we make no assumption about the access pattern or the convolutional structure of the inputs. Side channel traces are generally sparse, where only a small portion is privacy-related (see Appendix J for experimental information). To smoothly process the side channel traces with generative models, we thus employ CNN models to pre-process them." },
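A minimal sketch of this folding step, assuming the K segments become the channel axis of the N × N × K matrix (the exact axis layout is not specified in the text):

```python
import numpy as np

def fold_trace(trace, N, K):
    """Fold a 1-D side channel trace into an N x N x K matrix as described
    in Sec. 3.1: the trace is split into K segments, every N adjacent
    points in a segment form one row (N rows per segment), and missing
    entries are zero-padded. Normalization to [0, 1] is assumed to have
    been applied already."""
    trace = np.asarray(trace, dtype=np.float32)
    padded = np.zeros(N * N * K, dtype=np.float32)
    padded[:min(trace.size, padded.size)] = trace[:N * N * K]
    # Segment-major layout: one N x N plane per segment.
    return padded.reshape(K, N, N).transpose(1, 2, 0)  # -> (N, N, K)
```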
{ "heading": "3.2 THE VAE-LP MODULE", "text": "VAE-LP extends the standard VAE by replacing its fixed Gaussian prior with a learned prior (Denton & Fergus, 2018), which represents the latent distribution of side channel traces. VAE-LP is trained using both real-life images and their corresponding side channel traces. By incorporating the side channel trace encoder, we can extract latent representations from the logged side channel data points. Simultaneously, by integrating the corresponding reference images during training, we provide a guideline to help the image decoder generate quality images. As shown in Fig. 2(a), the trace encoder Enc_trace (marked in blue) employs 2D CNNs and extracts features from side channel traces T_i in the encoded matrices. The output of Enc_trace constitutes the learned prior distribution of the latent variable, namely p(z_{T_i}), rather than the fixed Gaussian distribution of a standard VAE. The decoder Dec takes the mean of p(z_{T_i}) as its input and outputs the image generated from side channel traces. The training phase also employs the image encoder Enc_image (marked in red), which accepts reference images i and outputs q(z_i | i). We train the VAE-LP network by performing forward propagation separately for the two different data sources. We then use the two generated images to compute the reconstruction loss and perform one iteration of backward propagation. Let î_trace and Dec_trace be the generated image and the decoder Dec when conducting forward propagation with only T_i. Similarly, let î_image and Dec_image be the generated image and Dec when conducting forward propagation with only i. Parameters of Dec_trace and Dec_image are shared in the training phase, and the loss of the VAE-LP module is defined as follows:\nLoss_{VAE-LP} = L_1(i, î_image) + L_1(i, î_trace) + β · D_KL(q(z_i | i) ‖ p(z_{T_i}))\nwhere three terms are subsumed: i) a reconstruction loss L_1(i, î_image) derived from the reference input, ii) a reconstruction loss L_1(i, î_trace) derived from the side channel trace, and iii) a KL-divergence that forces q(z_i | i) to be close to p(z_{T_i}).
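A sketch of this loss in PyTorch is given below, assuming both q(z_i | i) and p(z_{T_i}) are diagonal Gaussians parameterized by mean and log-variance; this parameterization is a common choice that the text does not spell out.

```python
import torch
import torch.nn.functional as F

def vae_lp_loss(i, i_hat_image, i_hat_trace,
                q_mu, q_logvar, p_mu, p_logvar, beta):
    """Two L1 reconstruction terms plus a KL term pulling the image
    posterior q(z_i|i) toward the learned trace prior p(z_{T_i})."""
    recon = F.l1_loss(i_hat_image, i) + F.l1_loss(i_hat_trace, i)
    # KL(N(q_mu, q_var) || N(p_mu, p_var)) for diagonal Gaussians.
    kl = 0.5 * torch.sum(
        p_logvar - q_logvar
        + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
        - 1.0
    )
    return recon + beta * kl
```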
Overall, while arbitrary 48-bit integers have a large range, side channel data points indeed vary within a small range. For instance, for cache based side channels, the possible values are limited by the total number of CPU cache units. In all, side channel data points are large values ranging in a relatively small range. Attack Target. We attack two media libraries, libjpeg and uPNG, to reconstruct private user images of JPEG and PNG formats. Previous image reconstruction SCA (Xu et al., 2015; Hähnel et al., 2017) only exploit libjpeg. PNG and JPEG denote very popular image compression standards, and given an image in JPEG/PNG format, libjpeg and uPNG can reverse the compression to generate a bitmap image as the basis of many image analysis tools, e.g., the Python Pillow library (Clark, 2020). The decompression process introduces many input-dependent memory accesses which, from the attacker’s perspective, can be reflected on side channels according to Table 1.\nWe share common assumptions with existing profiling-based SCA (Hospodar et al., 2011; Heuser & Zohner, 2012) that side channel traces have been well prepared for use. For our experiments, we use Pin (Luk et al., 2005), a runtime monitoring tool, to intercept memory accesses of victim software. A logged memory access trace will be converted into three separate side channel traces according to Table 1, denoting the attacker’s view on i. Each side channel trace generated by libjpeg or uPNG contains 700K to 1.3M records. See Appendix A for attack setup details. We evaluate traces logged via different side channels separately. Evaluating the composition of side channels (i.e., a “mega-side channel”) is not aligned with how real-world SCA is typically launched." }, { "heading": "5 EVALUATION", "text": "We present the first systematic approach to reconstructing images from side channels. There is no previous research for empirical comparison. Two closely related works provide no tools for use (Xu et al., 2015; Hähnel et al., 2017). As disclosed in their papers, manual efforts are extensively used to reconstruct images. For instance, both methods treat image color recovery as a separate task, by iterating multiple reconstruction trials and manually picking one with relatively better visual effect. Xu et al. (2015) exploit page table side channels and colors are rarely recovered. Hähnel et al. (2017) recover adequate image colors but only exploit finer-grained cache side channels. Also, domain-specific knowledge on libjpeg is required to locate a tiny portion of secret-dependent side channel data points for use. In contrast, we present an end-to-end approach to recovering colorful images with high quality, by directly analyzing a page table or cache side channel trace of up to 1.3M records. Our attack treats victim software (libjpeg or uPNG) as a “black-box” (no need for source code) and is independent of any underlying computing infrastructure details.\nBenchmarks. Three datasets are primarily used in the evaluation, containing typical daily images that could violate privacy if leaked to adversaries. Consistent with existing research reconstructing images from audio recording (Wen et al., 2019; Oh et al., 2019), we accelerate the model training using images of 3 × 128 × 128 pixels. Wen et al. (2019) use images of an even smaller size (3 × 64× 64). See Appendix B for model implementation and training details. (i) Large-scale CelebFaces Attributes (CelebA) (Liu et al., 2015b) contains about 200K celebrity face images. 
We randomly select 80K images for training and 20K images for testing. (ii) KTH Human Actions (KTH) (Laptev & Lindeberg, 2004) contains videos of six actions made by 25 persons in 4 directions. For each action, we randomly select videos of 20 persons for training and use the rest for testing. We have 40K images for training and 10K images for testing. (iii) LSUN Bedroom Scene (LSUN) (Yu et al., 2015) contains images of typical bedroom scenes. We randomly select 80K images for training and 20K images for testing." }, { "heading": "5.1 QUALITATIVE EVALUATION RESULTS", "text": "Fig. 3 shows the reconstructed images in different settings. In addition to reporting the final outputs (i.e., the “VAE-LP & GAN” column), we also report images generated only from the VAE-LP module in the “VAE-LP” column for comparison. More results are given in Appendix C. For most of the cases, the reconstructed images and reference images show consistent visual appearances,\nsuch as gender, skin color, bedroom window, and human gesture. Images in the LSUN dataset contain many subtle bedroom details, imposing relatively higher challenge for reconstruction.\nRealistic and recognizable images can be recovered using cache side channels (especially for LSUN) while images recovered from the other side channels are relatively blurry. As explained in Table 1, cache line indices (addr 6) are closer to the memory address addr (only missing the lowest 6 bits), while page table indices eliminate the lowest 12 bits from addr (typically lower bits in addr are informative and likely influenced by inputs), and each read/write pattern has only one bit.\nCompared with images generated by VAE-LP, the GAN module enhances the image quality by adding details and sharpening the blurred regions. GAN may overly enhance the image quality (e.g., the first LSUN case with jungle green wallpaper). However, GAN is indeed vital to exploit user privacy. For example, considering the first human gesture case in KTH, where the image reconstructed from cache side channels contains a “black bar” when using VAE-LP. The GAN module enhances this obscure image and reconstructs the human gesture, thus violating user privacy." }, { "heading": "5.2 QUANTITATIVE EVALUATION RESULTS", "text": "To assess the generated images w.r.t. discriminability, we first study the accuracy of matching a reconstructed image î to its reference input i. To do so, we form a set of N images which include the reference input i andN−1 images randomly selected from our testing dataset. We then measure whether i appears in the top-k most similar images of î. Conceptually, we mimic a de-anonymization attack of user identity, where N scopes the search space attackers are facing (e.g., all subscribers of a cloud service). We use a perceptually-based similarity metric, SSIM (Wang et al., 2004), to quantify structural-level similarity between the reconstructed images and reference inputs.\nTable 2 reports the evaluation results of six practical settings. Consistent with Fig. 3, cache side channels help to reconstruct î with better discriminability (highest accuracy for all settings in Table 2). LSUN images have lower accuracy. As shown in Fig. 3, images in LSUN contain many subtle bedroom details and deem challenging for discrimination. We achieve the highest accuracy when k = 20 and N = 100, while the accuracy, as expected, decreases in more challenging settings (e.g., when k = 1 and N = 500). Evaluations consistently outperform the baseline — random guess. 
Table 2 reports the evaluation results of six practical settings. Consistent with Fig. 3, cache side channels help to reconstruct î with better discriminability (highest accuracy for all settings in Table 2). LSUN images have lower accuracy. As shown in Fig. 3, images in LSUN contain many subtle bedroom details and are challenging to discriminate. We achieve the highest accuracy when k = 20 and N = 100, while the accuracy, as expected, decreases in more challenging settings (e.g., when k = 1 and N = 500). Our evaluations consistently outperform the random-guess baseline. For instance, while the accuracy of random guess when k = 1 and N = 500 is 0.2%, we achieve higher accuracy (ranging from 0.46% to 28.68%) across all settings. Appendix D also conducts this evaluation using images generated from only VAE-LP. We report that better discriminability can be achieved for all datasets when supplementing VAE-LP with GAN.\nFor face images in CelebA, we also study how well different facial attributes are captured in the reconstructed images. We use Face++ (fac, 2020), a commercial image analysis service, to classify reconstructed images and reference inputs w.r.t. age and gender attributes. Fig. 4 reports the confusion matrices for age and gender attributes and the distributions of training data for reference. The reconstructed images are produced using cache side channels. Overall, we achieve a good agreement for both male and female labels. We also observe correlated classification results for most age groups. The age distribution indicates that early adulthood (20–40) and middle age (40–60) mostly dominate the dataset, which presumably induces biases in the age confusion matrix. Similarly, “male” has a smaller representation in the training set, potentially explaining its lower agreement in the confusion matrix. Appendix D also conducts this evaluation using only VAE-LP or using other side channels, where comparable results are achieved." }, { "heading": "5.3 GENERALIZABILITY", "text": "This section explores the generalizability of our SCA. We launch attacks against uPNG to illustrate that our method is independent of a specific software implementation or image format. For the uPNG experiments, we evaluate attacks on the CelebA dataset using cache side channels. As shown in Table 3, our attack can recover discriminable images and largely outperforms the baseline (random guess) in terms of privacy inference. See Appendix E for the reconstructed images. We also benchmark our attack's ability to synthesize arbitrary images without using a specific type of image to train the model. We instead use a general training dataset, mini-Imagenet. We use cache side channels to exploit libjpeg. Table 3 illustrates that considerable privacy is leaked using mini-Imagenet as the training set, and for all settings, our SCA largely outperforms the baseline.\nThe recovered images when using mini-Imagenet as the training data are visually worse than images recovered using specialized datasets. See Appendix F for the recovered images. This observation reveals a potential trade-off of our approach: generality versus reconstruction quality. Overall, training generative models using a general dataset without knowledge of image classes seems “unconventional.” A generative model is typically trained using a dataset of only one class (Guo et al., 2019), or it requires image class information to be explicitly provided during both training and generation phases (Brock et al., 2019). Nevertheless, we still evaluate our approach using a general dataset to explore the full potential of our attack. We attribute the adequate results on general datasets to the discriminable features extracted by our trace encoder from images of different classes. See Appendix G for further evaluations.\nFrom a holistic perspective, the adopted training image sets constitute a predictive model of user privacy. While a particular user input is private, we assume that the functionality of victim software (e.g., a human face analysis service) is usually known to the public or can be probed prior to attacks."
}, { "heading": "6 DISCUSSION", "text": "This is the first paper that provides a practical solution to reconstruct images from system side channels. Proposing a novel generative model design is not our key focus. Also, despite the encouraging results, the reconstructed images show room for improvement. For instance, not all image colors were well recovered. Our manual inspection shows that compared with libjpeg, uPNG does not impose informative side channel dependency on pixel colors (i.e., different colors can likely induce identical side channel logs). Nevertheless, user privacy is presumably leaked as long as the image skeleton is recovered. Colors (or other details) can be recovered if the system community discovered more powerful (finer-grained) side channels." }, { "heading": "7 RELATED WORK", "text": "Exploiting System Side Channels. System side channels have been used to exploit various real-life software systems (Dong et al., 2018; Wu et al., 2012; Hähnel et al., 2017; Xu et al., 2015). The CPU cache is shown to be a rich source for SCA attacks on cloud computing environments and web browsers (Hähnel et al., 2017; Oren et al., 2015). In addition to cache line side channels analyzed in this research, other cache storage units, including cache bank and cache set, are also leveraged for SCA attacks (Yarom et al., 2017; Liu et al., 2015a). Overall, while most attacks in this field perform dedicated SCA attacks toward specific side channels, our approach is general and orthogonal to particular side channels.\nProfiling-based SCA. Machine learning techniques have substantially boosted profiling-based SCA by learning from historical data. DNN models have been used to recover secret keys from crypto libraries under different scenarios (Heuser & Zohner, 2012; Maghrebi et al., 2016; Cagli et al., 2017; Hettwer et al., 2018; Kim et al., 2019; Hettwer et al., 2020). Nevertheless, the success of existing AIbased SCA attacks is primarily driven by the model classification capability, e.g., deciding whether a particular bit of AES secret key is 0 or 1. This paper advocates the new focus on reconstructing images with generative models, leveraging another major breakthrough in DNN.\nSCA Mitigation. Existing SCA mitigation techniques can be categorized into system-based and software-based approaches. For system-based approaches, previous works have proposed to randomize the cache storage units or enforce fine-grained isolation schemes (Wang & Lee, 2006; 2008; 2007; Liu et al., 2016). Some recent advances propose to leverage new hardware features to mitigate side channel attacks (Gruss et al., 2017). In addition, software-level approaches, including designing secret-independent side channel accesses, randomizing memory access patterns (Coppens et al., 2009; Raj et al., 2009; Schwarz et al., 2018), have also been proposed. Compared with system- and hardware-based mitigations, software-based approaches usually do not require a customized hardware design, and are generally more flexible. Nevertheless, software-based approaches can usually incur extra performance penalty." }, { "heading": "8 CONCLUSION", "text": "This paper has presented a general and effective SCA framework. The framework is trained with side channels to exploit media software like libjpeg and uPNG. Our evaluation shows that reconstructed images manifest close similarity with user inputs, making privacy leakage attacks practical. 
We also show the surprising finding that, enabled by our framework, attacks using low-resolution side channels become feasible." }, { "heading": "9 ETHICS STATEMENT", "text": "We present a systematic and effective pipeline for recovering private images using system side channels. It is generally acknowledged that studying attack schemes helps eliminate false trust in modern computing infrastructures and promotes building secure systems (Athalye et al., 2018). While there is a risk that SCA could become easier using our methods, we believe that our work will also promote rapid detection of SCA before security breaches occur. As will be shown in Appendix J, our proposed technique can serve as a “bug detector” to isolate certain code blocks in image processing software that induce SCA opportunities. Developers can thus refer to our findings to patch their software.\nOur efforts could help the ever-growing CV community build side channel-free image analysis tools. Despite the algorithm-level efforts to address privacy concerns, e.g., via differential privacy (Dwork et al., 2006), infrastructure-level vulnerabilities have not yet received enough attention, especially in real-world scenarios like MLaaS. Our research will serve as a critical incentive to re-think trade-offs (e.g., cost vs. security guarantee) currently made in this field." }, { "heading": "10 ACKNOWLEDGEMENTS", "text": "We thank the ICLR anonymous reviewers and area chairs for their valuable feedback. Junping Zhang is supported by National Key Research and Development Project (2018YFB1305104)." }, { "heading": "A ATTACK SETUP DETAILS", "text": "In this section, we provide detailed information regarding our attack setup, including the three employed side channels, the attacked libjpeg and uPNG libraries (libjpeg, 2020; Middleditch, 2010), and how we log side channel information. The three side channels are taken into account as follows:\n• Cache Line. The cache line side channel is a popular hardware side channel that has enabled the exploitation of real-world crypto, image, and text libraries (Hähnel et al., 2017; Yarom & Falkner, 2014). The CPU cache, a key component of modern computer architectures, stores data so that future requests for that data are served much faster. Data are stored in fixed-size cache blocks, called cache lines. Each memory access made by victim software is projected to a cache line access. On typical cloud platforms, an attacker can monitor cache line accesses made by victim software, leading to a powerful side channel. For modern Intel architectures, the cache line index of a memory address addr can be computed as addr ≫ 6. Therefore, an access to a particular cache line can be mapped back to 2^6 memory addresses.\n• Page Table. The OS kernel uses the page table to track mappings between virtual and physical memory addresses. Every memory access made by the victim software is converted into its physical address by querying a page table entry. On cloud computing platforms, a malicious OS on the host can observe page table accesses made by the victim software to infer its memory accesses (Xu et al., 2015). Given a virtual address addr, we calculate the accessed page table index by masking addr with the page mask M: addr & (∼M), where M is 4095 on modern x86 architectures (pag, 2020).\n• Data Read/Write Access.
Our preliminary study shows the surprising result that, enabled by powerful generative models, low-resolution side channels of only one-bit data read/write access records can be used to recover quality images. That is, given a memory access made by the victim software, we use one bit to note whether the access is a read or a write operation. Such data read/write accesses can be easily observed by monitoring either cache or page table accesses.\nSCA attacks using cache lines and page tables are well-known and have enabled real-life exploitations in various scenarios. In contrast, to our knowledge, no real-world attacks are designed to exploit read/write patterns. This work shows that quality images can be synthesized using such low-resolution read/write access side channels.\nPreparing Victim Software Consistent with existing SCA exploiting media software (Xu et al., 2015; Hähnel et al., 2017), we use a widely-used JPEG image processing library, libjpeg, as the attack target. Attacking libjpeg, which has been exploited in the literature, makes it easier to (conceptually) compare our approach with existing works. As mentioned in our paper, we contacted the authors of both papers to inquire about their tools; we had not received any response by the time of writing. As disclosed in their papers, manual efforts are primarily used to recover images. On the other hand, our approach can analyze other image processing libraries as long as different inputs adequately influence the side channel logs. To demonstrate the generalizability of our approach, we also attacked another image library, uPNG, which takes images of PNG format as input.\nJPEG and PNG are two popular image compression standards. Given an image in JPEG/PNG format, both image processing libraries reverse the compression step to generate a bitmap image as the prerequisite of many image analysis applications. The decompression process introduces a considerable amount of input-dependent side channel accesses for both libraries. We compile both libjpeg and uPNG on a 64-bit Ubuntu 18.04 machine using gcc with optimization level -O0, which disables all optimizations.\nWe measure the complexity of libjpeg by counting the lines of code of the attacked module. The attacked module, which conducts JPEG image decompression under various settings, has approximately 46K lines of code. Similarly, the uPNG software has about 1.2K lines of code. In contrast, the crypto software typically attacked by previous profiling-based SCA (Hettwer et al., 2020; Gullasch et al., 2011; Aciicmez & Koc, 2006; Yarom et al., 2017) is much simpler. For instance, the x86 implementation of the Advanced Encryption Standard (AES) in OpenSSL has about 600 lines of code (excluding global data structures like permutation boxes).\nPreparing Side Channel Logs To prepare side channel traces, we use Pin (Luk et al., 2005), a runtime monitoring framework developed by Intel, to intercept all memory accesses of our test programs when processing an input image i. Every virtual address addr on the logged trace is translated into its corresponding cache line and page table indices following the aforementioned methods. Similarly, we intercept all memory accesses, and for each memory access, we use one bit to denote whether it is a data read or write access. All these runtime monitoring tasks can be done by writing two Pin plugins. We report that processing each image with libjpeg can generate a trace of 730K to 760K side channel records. Processing an image with uPNG can generate a trace of about 1.3M side channel records. Recall that, as introduced in Sec. 3.1, each side channel trace is encoded into an N × N × K matrix and then processed by CNNs. A libjpeg trace is encoded into a 512 × 512 × 3 matrix. A uPNG trace is encoded into a 512 × 512 × 6 matrix. We zero-pad the matrices.
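Below is a minimal sketch of this logging-and-encoding pipeline, assuming one record per matrix cell and a trace of (address, is_write) pairs; the helper names (to_side_channels, encode_trace) are ours and only illustrate the conversions described above.

```python
import numpy as np

PAGE_MASK = 4095  # M on modern x86: the low 12 bits address a byte within a page

def to_side_channels(trace):
    """Convert a logged trace of (addr, is_write) pairs into the three
    side channel views of Table 1."""
    cache_lines = [addr >> 6 for addr, _ in trace]           # drop lowest 6 bits
    page_indices = [addr & ~PAGE_MASK for addr, _ in trace]  # drop lowest 12 bits
    rw_bits = [int(is_write) for _, is_write in trace]       # one bit per access
    return cache_lines, page_indices, rw_bits

def encode_trace(records, n=512, k=3):
    """Zero-pad one side channel view and fold it into an n x n x k matrix;
    512 x 512 x 3 holds up to 786,432 records, enough for a libjpeg trace."""
    mat = np.zeros(n * n * k, dtype=np.float64)
    data = np.asarray(records[: n * n * k], dtype=np.float64)
    mat[: data.size] = data
    return mat.reshape(n, n, k)
```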
In comparison, exploiting crypto libraries (e.g., AES decryption) generates much more succinct side channel traces with only a few hundred records (Hettwer et al., 2020).\nAttacking Other Image Processing Software and DNN Models We pick libjpeg since this is the only media software attacked by existing SCA (Xu et al., 2015; Hähnel et al., 2017). We also attacked uPNG to demonstrate the generalizability of our approach. Note that libjpeg is commonly used in the image analysis pipeline, e.g., it is a prerequisite of the popular Python image processing library Pillow (Clark, 2020).\nAlso, careful readers may wonder about the feasibility of directly exploiting DNN-based image analysis tools. However, as pointed out in previous research (Hong et al., 2018a), memory access of typical DNN operations like matrix multiplications is not input-dependent. That is, while the same authors have demonstrated that cache side channels can recover DNN model architectures (Hong et al., 2020), SCA is generally not able to recover inputs to DNN models." }, { "heading": "B MODEL ARCHITECTURE AND EXPERIMENT SETUP", "text": "We now report the architecture and parameters of our framework. The Enctrace of the VAE-LP module is reported in Table 5. Encimage and Decimage are reported in Table 4. The generator G and discriminator D of our GAN module are listed in Table 6 and Table 7, respectively.\nWe implement our framework in PyTorch (ver. 1.5.0). We use the Adam optimizer (Kingma & Ba, 2014) with learning rate η_VAE-LP = 0.0001 for the VAE-LP module, and learning rate η_GAN = 0.0002 for the GAN module. We set β1 = 0.5 and β2 = 0.999 for both modules. β in Loss_VAE-LP is 0.0001, and γ in Loss_GAN is 100. The minibatch size is 50.\nWe ran our experiments on an Intel Xeon CPU E5-2678 with 256 GB of RAM and one Nvidia GeForce RTX 2080 GPU. The training is completed at 200 iterations (100 iterations for the VAE-LP module, and 100 iterations for the GAN module).\nC SAMPLE OUTPUTS WHEN ATTACKING LIBJPEG\nThis section provides more images generated by our framework when attacking libjpeg. Overall, we interpret the results as promising and highly consistent across all three datasets. As discussed in Sec. 5.1, the reconstructed and the corresponding reference images show highly consistent identities, such as gender, face orientation, human gesture, and hair style.\n[Figures: sample reconstructed images for each dataset; columns show the VAE-LP output, the VAE-LP & GAN output, and the reference input.]" }, { "heading": "D QUANTITATIVE EVALUATION", "text": "Besides the quantitative evaluation of the discriminability of reconstructed images reported in the paper, we also analyze images reconstructed by using only the VAE-LP module and present the results in Table 8.
Accordingly, Table 9 reproduces the quantitative data already reported in our paper for cross comparison.\nComparing the results reported in Table 8 and Table 9, we observe that using the VAE-LP & GAN modules together yields a substantial accuracy improvement on the KTH human gesture dataset. The average accuracy of the KTH dataset is 41.9% in Table 8, while it increases to 49.8% in Table 9. This observation is consistent with our findings in Sec. 5.2 and the cases demonstrated in Fig. 7 and Fig. 8. Recall that we observed an obscure “black bar” in KTH images reconstructed using only the VAE-LP module, while a human gesture can be clearly identified after the “black bar” is enhanced by the GAN module. We also observed improved accuracy for the CelebA (41.0% to 41.4%) and LSUN (12.6% to 12.9%) datasets when supplementing VAE-LP with GAN. Overall, we interpret that better discriminability can be achieved when using GAN, implying a higher success rate for attackers to de-anonymize user identity and privacy.\nWe also present a fine-grained facial attribute comparison between the reconstructed images and reference inputs. In Sec. 5.2 we reported the gender and age confusion matrix evaluation using cache side channels (also presented in Fig. 11 for reference and cross comparison); we report the other settings in Fig. 12, Fig. 13, and Fig. 14. To quantitatively evaluate the confusion matrices, we measure and report the weighted-average F1 score in Table 10. Besides one case with a notably increased F1 score (the gender matrix using the page table), VAE-LP and VAE-LP & GAN have comparable weighted-average F1 scores.\nE SAMPLE OUTPUTS WHEN ATTACKING UPNG\nThis section provides images generated by our framework when attacking uPNG. While the side channel traces induced by uPNG are generally less informative than those of libjpeg (as shown in Table 3 and discussed in Sec. 6), we still observed highly consistent visual identities between the reconstructed images and their reference inputs, including gender, face orientation, hair style, hair length, whether the subject wears glasses, and many other factors." }, { "heading": "F SAMPLE OUTPUTS WHEN TRAINING WITH MINI-IMAGENET", "text": "To measure our attack's ability to reconstruct arbitrary images without knowing the type of images being processed, Sec. 5.3 reports model training and attack performance using a general dataset, mini-Imagenet. The mini-Imagenet dataset has in total 60K images of 100 classes, and we divide each class (with 600 images) into 480 training images and 120 testing images. As a result, we have in total 48K images for training and take the other 12K images for testing. While training generative models with a general dataset is not the common practice unless image class information is explicitly provided (Brock et al., 2019), Table 3 still reports highly encouraging discriminability results that largely outperform the random-guess baseline. In this section, we provide sample images generated at this step.\nThe synthesized images from the mini-Imagenet dataset are visually much worse than images synthesized from specific datasets (e.g., CelebA in Fig. 5). Nevertheless, by comparing the synthesized images and their corresponding references (i.e., user inputs), we interpret that the images still exhibit high discriminability, in the sense that many visually consistent image skeletons and colors are recovered, indicating adequate leakage of user privacy."
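A minimal sketch of the per-class split described above, assuming images are grouped by class; the helper name and the fixed seed are our illustrative choices.

```python
import random

def split_per_class(images_by_class, n_train=480, seed=0):
    """Split each class (600 images in mini-Imagenet) into 480 training and
    120 testing images, yielding 48K/12K images over 100 classes."""
    rng = random.Random(seed)
    train, test = [], []
    for cls, images in images_by_class.items():
        shuffled = list(images)
        rng.shuffle(shuffled)
        train += [(img, cls) for img in shuffled[:n_train]]
        test += [(img, cls) for img in shuffled[n_train:]]
    return train, test
```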
}, { "heading": "G CLASSIFYING OUTPUTS OF THE TRACE ENCODER", "text": "By training our framework with mini-Imagenet and assessing the discriminability, Sec. 5.3 has demonstrated that our attack can largely outperform the baseline even with no prior knowledge on the class information of user private images. We attribute the promising evaluation results to the discriminable features successfully extracted by our trace encoder (trained with mini-Imagenet; see Appendix F). This section presents empirical results by assessing to what extent the latent representations derived from images of two randomly selected classes are distinguishable.\nTo this end, we build a binary classifier, by piggybacking our trace encoder with a fully-connected (FC) layer and using Sigmoid as the activation function. As mentioned in Appendix F, the miniImagenet dataset has in total 60K images of 100 classes, and we divide each class into 480 images for training and 120 images for testing. At this step, we randomly select two classes of images, and use their training sets (in total 960 images) to train the proposed binary classifier. We then use their testing sets, including in total 240 images, to measure the binary classification accuracy. It is worth mentioning that we only train the classifier for one epoch to highlight that the latent representations extracted by the trace encoder already exhibit high quality and discriminability. We only tune the parameters of FC layer but preserve the parameters of our trace encoder.\nWe iterate this process for 100 times. Fig. 21 reports the classification accuracy across all 100 binary classification tasks. While the baseline accuracy for our binary classification task is 50%, most tasks exhibit much higher classification accuracy. We report the average accuracy is 81.6% and 32 cases exhibit a classification accuracy above 90%." }, { "heading": "H ROBUSTNESS TO NOISE", "text": "This section provides experiences on the robustness of our framework by inserting noise into the side channel traces. To this end, we evaluated the following three settings:\n• Gaussian noise insertion: For each side channel data point input on the side channel trace, input = x×n+(1−x)× input, where x ∈ [0.1, 0.2, 0.5], and n denotes randomly generated noise following the Gaussian Distribution.\n• Zero replacement: Randomly set x% of the data points on the side channel trace to 0, where x ∈ [10, 20, 50].\n• Round shifting: Round shifting the side channel trace for x steps, where x ∈ [1, 10, 100].\nTable 11: Discriminability evaluation by adding Gaussian noise in the side channel trace.\nConfiguration x = 0 x = 0.1 x = 0.2 x = 0.5 k = 1 N = 100 49.98% 48.32% 39.14% 5.66% k = 5 N = 100 78.00% 76.28% 67.08% 19.62% k = 20 N = 100 91.98% 90.56% 86.22% 45.20%\nTable 12: Discriminability evaluation by randomly replacing x% side channel data points with zero.\nWe present the corresponding qualitative evaluation results in Fig. 22, Fig. 23, and Fig. 24, respectively. Accordingly, we present the quantitative results in Table 11, Table 12, and Table 13.\nOverall, despite the challenging settings, we still observed considerable visually consistent features (e.g., face orientation, hair style, gender) between the reconstructed images and their reference inputs. Fig. 24 shows that round shifting seems to impose relatively low impact on the reconstructed images (e.g., comparing x = 0 with x = 100). 
Table 11: Discriminability evaluation by adding Gaussian noise to the side channel trace.\nConfiguration | x = 0 | x = 0.1 | x = 0.2 | x = 0.5\nk = 1, N = 100 | 49.98% | 48.32% | 39.14% | 5.66%\nk = 5, N = 100 | 78.00% | 76.28% | 67.08% | 19.62%\nk = 20, N = 100 | 91.98% | 90.56% | 86.22% | 45.20%\nTable 12: Discriminability evaluation by randomly replacing x% of the side channel data points with zero.\nWe present the corresponding qualitative evaluation results in Fig. 22, Fig. 23, and Fig. 24, respectively. Accordingly, we present the quantitative results in Table 11, Table 12, and Table 13.\nOverall, despite the challenging settings, we still observed considerably consistent visual features (e.g., face orientation, hair style, gender) between the reconstructed images and their reference inputs. Fig. 24 shows that round shifting seems to have a relatively low impact on the reconstructed images (e.g., comparing x = 0 with x = 100). In contrast, a more challenging setting, adding Gaussian noise to each side channel data point, causes an observable effect on the reconstructed images (e.g., comparing x = 0 with x = 0.5)." }, { "heading": "I ABLATION EXPERIMENTS", "text": "This section provides more ablation experiments. We aim to demonstrate the necessity of the image encoder and a learned prior. To this end, we launch experiments to synthesize images without using the image encoder (see the 4th row of Fig. 25), and also synthesize images with a fixed Gaussian prior (the 3rd row of Fig. 25). It is easy to see that the reconstructed images manifest much lower quality compared with images synthesized by our framework (the 2nd row of Fig. 25). In particular, images synthesized without using the image encoder contain grid artifacts (the last row of Fig. 25). It is also observed that when feeding the decoder a fixed Gaussian prior, the synthesized images are of low quality as well. The outputs become unrecognizable since the fixed prior carries essentially no information about the side channel traces. This also indicates that our model is not a simple image generator, and the trace encoder plays a vital role in the pipeline. Overall, we interpret these ablation evaluations as highlighting the importance of the image encoder and a learned prior in SCA. Our framework, by incorporating the side channel trace encoder, can effectively extract latent representations from the logged side channel data points. Simultaneously, by integrating the corresponding reference images during training, we guide the image decoder to generate quality images.\nIn addition, we conduct another ablation experiment regarding image-level metrics. To do so, we use LPIPS (Zhang et al., 2018), an image-level perceptual loss, to calculate the similarity between the reconstructed image and the ground truth image.\nThe results are given in Table 14. Compared with our results reported in Sec. 5, the accuracy on GAN output is reduced by around 10%, and reduced even more on the output of VAE-LP. Nevertheless, the results are reasonable since the model is not trained using this perceptual loss. Overall, we interpret the evaluation results as showing that, from the perspective of “human adversaries”, the GAN module is indeed necessary." }, { "heading": "J MAPPING SIDE CHANNEL TRACES BACK TO INFORMATIVE FUNCTIONS", "text": "In this section, we explore side channel traces and aim to answer the question “what information in the side channel is used for image reconstruction?” To this end, we identify which data points on the side channel trace affect the output most by calculating gradients. Due to limited time, we tested cache side channel traces logged for the libjpeg software.\nGiven an image i which is not in the training data, we first collect its corresponding cache side channel trace Ti when it is processed by libjpeg. The VAE-LP module then takes Ti as input and reconstructs itrace. As shown in Fig. 26, we then calculate the loss between itrace and i, and perform backward propagation to calculate the gradient w.r.t. Ti, namely gTi. Since gTi has the same size as Ti, we can pinpoint which part of Ti contributes most to itrace by localizing which part of gTi has higher values. More specifically, we normalize gTi to [0, 1] and only keep values greater than a threshold T (T is set to 0.8 in our current study).
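A minimal PyTorch sketch of this attribution step; the model handle, the choice of an L2 reconstruction loss, and taking the absolute gradient before normalization are our assumptions, as the text does not fix them.

```python
import torch
import torch.nn.functional as F

def informative_points(vae_lp, trace_tensor, reference_image, threshold=0.8):
    """Return indices of side channel data points whose normalized gradient
    magnitude exceeds `threshold`, following the procedure described above."""
    trace = trace_tensor.clone().detach().requires_grad_(True)
    i_trace = vae_lp(trace)                      # reconstruct i_trace from T_i
    loss = F.mse_loss(i_trace, reference_image)  # loss between i_trace and i
    loss.backward()                              # gradient w.r.t. the trace: g_Ti
    g = trace.grad.detach().abs()
    g = (g - g.min()) / (g.max() - g.min() + 1e-12)  # normalize to [0, 1]
    return (g > threshold).nonzero(as_tuple=False)
```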
Overall, we report that from the employed cache side channel trace with 754,139 data points, we successfully pinpoint a set of 31 data points that primarily contribute to the private image reconstruction (see Fig. 28 and Fig. 29).\nSince each side channel data point is converted from a memory address (see Table 1), our retained “informative” side channel data points can thus be mapped back to certain functions in libjpeg. That is, we use the informative side channel records to isolate functions in libjpeg that potentially leak privacy. Fig. 27 depicts how this backward mapping and isolation are conducted. For instance, given a side channel record 0x55dba1628e62 that is marked as informative and notably contributes to the gradient, we use the address of the corresponding memory access instruction, 0x7f38daafd6b5, to isolate the function jsimd_convsamp_float. That is, certain input-dependent memory accesses in jsimd_convsamp_float induce this cache line access and eventually contribute to the reconstruction of the user private input.\nFig. 28 reports a part of the logged side channel trace and marks in red several side channel data points which largely affect the gradient. We show their corresponding functions in libjpeg in Fig. 29. Overall, we report that this evaluation successfully pinpoints multiple critical functions in the libjpeg software that have input-dependent cache accesses. In particular, we note that this evaluation helps to “re-discover” some functions that have been reported by previous research (mostly with manual effort) as vulnerable to SCA: e.g., the functions write_ppm, put_rgb and output_ppm, which dump the decompressed raw image to the disk.\nMore importantly, this evaluation helps to pinpoint new functions that contribute to the private image reconstruction (and hence become vulnerable to SCA), such as jsimd_can_fdct_islow, jsimd_can_fdct_ifast, jsimd_convsamp and jsimd_convsamp_float. These functions primarily conduct image discrete cosine transformation (DCT) and decompression. We interpret this as a highly encouraging finding, in particular:\n• As reviewed in Sec. 2, previous research uses manual effort (Xu et al., 2015; Hähnel et al., 2017) or formal methods (Doychev et al., 2013; Wang et al., 2017) to pinpoint program components that depend on inputs, which is program-specific and error-prone with low scalability.\n• This research and our study in this section reveal a procedure where we leverage gradients to directly highlight which part of the logged side channel trace contributes to the synthesis of outputs. Then, we map the highlighted records back to where they are derived in the victim software to isolate vulnerable components (i.e., a bug detection tool).\nWe view this as a promising finding: the approach depicted in this section is general and can be launched fully automatically, without requiring manual effort or formal methods, which are usually not scalable. As shown in this section, our tentative study not only re-discovers vulnerabilities that were found by previous research, but also helps to identify, to the best of our knowledge, previously unknown program components that are vulnerable to SCA. Looking ahead, we would like to explore this direction as a follow-up of the present work." } ]
2021
null
SP:7fb11c941e8d79248ce5ff7caa0535a466303395
[ "This paper proposes a method of learning ensembles that adhere to an \"ensemble version\" of the information bottleneck principle. Whereas the information bottleneck principle says the representation should avoid spurious correlations between the representation (Z) and the training data (X) that is not useful for predicting the labels (Y), i.e. I(X;Z) or I(X;Z|Y), this paper proposes that ensembles should additionally avoid spurious correlations between the ensemble members that aren't useful for predicting Y, i.e. I(Z_i; Z_j| Y). They show empirically that the coefficient on this term increases diversity at the expense of decreasing accuracy of individual members of the ensemble." ]
Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information, we introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features. The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant. Therefore, besides the classification loss with information bottleneck, we adversarially prevent features from being conditionally predictable from each other. We manage to reduce simultaneous errors while protecting class information. We obtain state-of-the-art accuracy results on CIFAR-10/100: for example, an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently. We further analyze the consequences on calibration, uncertainty estimation, out-of-distribution detection and online co-distillation.
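Based only on this abstract and the reviewer summary above, one plausible reading of the resulting training objective for an ensemble of M members (our notation and coefficients, not necessarily the paper's exact formulation) is:

$$\mathcal{L} \;=\; \sum_{i=1}^{M} \Big[ \mathcal{L}_{\mathrm{CE}}\big(f_i(Z_i), Y\big) + \beta\, I(X; Z_i \mid Y) \Big] \;+\; \delta \sum_{i \neq j} I(Z_i; Z_j \mid Y),$$

where the bracketed part is each member's classification loss with an information bottleneck, and the last term penalizes conditional redundancy between members' features (estimated adversarially, per the abstract); β and δ are trade-off coefficients introduced here for illustration.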
[ { "affiliations": [], "name": "Alexandre Rame" } ]
[ { "authors": [ "Arturo Hernández Aguirre", "Carlos A Coello Coello" ], "title": "Mutual information-based fitness functions for evolutionary circuit synthesis", "venue": "In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753),", "year": 2004 }, { "authors": [ "Matti Aksela" ], "title": "Comparison of classifier selection methods for improving committee performance", "venue": "In International Workshop on Multiple Classifier Systems,", "year": 2003 }, { "authors": [ "Alex Alemi", "Ian Fischer", "Josh Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "In In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon" ], "title": "Uncertainty in the variational information bottleneck", "venue": "arXiv preprint arXiv:1807.00906,", "year": 2018 }, { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Roberto Battiti" ], "title": "Using mutual information for selecting features in supervised neural net learning", "venue": "IEEE Transactions on neural networks,", "year": 1994 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "Devon Hjelm" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Anthony J Bell", "Terrence J Sejnowski" ], "title": "An information-maximization approach to blind separation and blind deconvolution", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Hedi Ben-Younes", "Remi Cadene", "Nicolas Thome", "Matthieu" ], "title": "Cord. Block: Bilinear superdiagonal fusion for visual question answering and visual relationship detection", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Sangnie Bhardwaj", "Ian Fischer", "Johannes Ballé", "Troy Chinen" ], "title": "An unsupervised informationtheoretic perceptual quality metric", "venue": "arXiv preprint arXiv:2006.06752,", "year": 2020 }, { "authors": [ "Michael Blot", "Thomas Robert", "Nicolas Thome", "Matthieu Cord" ], "title": "Shade: Information-based regularization for deep learning", "venue": "In 2018 25th IEEE International Conference on Image Processing (ICIP),", "year": 2018 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "In Proceedings of the 32nd International Conference on International Conference on Machine Learning-Volume", "year": 2015 }, { "authors": [ "Nicholas A Bowman" ], "title": "How much diversity is enough? 
the curvilinear relationship between college diversity interactions and first-year student outcomes", "venue": "Research in Higher Education,", "year": 2013 }, { "authors": [ "Glenn W Brier" ], "title": "Verification of forecasts expressed in terms of probability", "venue": "Monthly weather review,", "year": 1950 }, { "authors": [ "Gavin Brown" ], "title": "A new perspective for information theoretic feature selection", "venue": "In Artificial intelligence and statistics,", "year": 2009 }, { "authors": [ "Gavin Brown", "Jeremy Wyatt", "Ping Sun" ], "title": "Between two extremes: Examining decompositions of the ensemble objective function", "venue": "In International workshop on multiple classifier systems,", "year": 2005 }, { "authors": [ "Gavin Brown", "Jeremy L Wyatt", "Peter Tiňo" ], "title": "Managing diversity in regression ensembles", "venue": "Journal of machine learning research,", "year": 2005 }, { "authors": [ "Changrui Chen", "Xin Sun", "Yang Hua", "Junyu Dong", "Hongwei Xv" ], "title": "Learning deep relations to promote saliency detection", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Defang Chen", "Jian-Ping Mei", "Can Wang", "Yan Feng", "Chun Chen" ], "title": "Online knowledge distillation with diverse peers", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Nadezhda Chirkova", "Ekaterina Lobacheva", "Dmitry Vetrov" ], "title": "Deep ensembles on a fixed memory budget: One wide network or several thinner ones", "venue": "arXiv preprint arXiv:2005.07292,", "year": 2020 }, { "authors": [ "Inseop Chung", "SeongUk Park", "Jangho Kim", "Nojun Kwak" ], "title": "Feature-map-level online adversarial knowledge distillation", "venue": "arXiv preprint arXiv:2002.01775,", "year": 2020 }, { "authors": [ "Pierre Comon" ], "title": "Independent component analysis, a new concept", "venue": "Signal processing,", "year": 1994 }, { "authors": [ "Thomas M Cover" ], "title": "Elements of information theory", "venue": null, "year": 1999 }, { "authors": [ "Ali Dabouei", "Sobhan Soleymani", "Fariborz Taherkhani", "Jeremy Dawson", "Nasser M. Nasrabadi" ], "title": "Exploiting joint robustness to adversarial perturbations", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Learning confidence for out-of-distribution detection in neural networks", "venue": "arXiv preprint arXiv:1802.04865,", "year": 2018 }, { "authors": [ "Thomas G Dietterich" ], "title": "Ensemble methods in machine learning", "venue": "In International workshop on multiple classifier systems,", "year": 2000 }, { "authors": [ "Monroe D Donsker", "SR Srinivasa Varadhan" ], "title": "Asymptotic evaluation of certain markov process expectations for large time", "venue": "i. 
Communications on Pure and Applied Mathematics,", "year": 1975 }, { "authors": [ "Nikita Dvornik", "Cordelia Schmid", "Julien Mairal" ], "title": "Diversity with cooperation: Ensemble methods for few-shot classification", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Bradley Efron", "Robert J Tibshirani" ], "title": "An introduction to the bootstrap", "venue": "CRC press,", "year": 1994 }, { "authors": [ "Ian Fischer" ], "title": "The conditional entropy bottleneck", "venue": "arXiv preprint arXiv:2002.05379,", "year": 2020 }, { "authors": [ "Ian Fischer", "Alexander A Alemi" ], "title": "Ceb improves model robustness", "venue": "arXiv preprint arXiv:2002.05380,", "year": 2020 }, { "authors": [ "François Fleuret" ], "title": "Fast binary feature selection with conditional mutual information", "venue": "Journal of Machine learning research,", "year": 2004 }, { "authors": [ "Yoav Freund", "Robert Schapire" ], "title": "A short introduction to boosting", "venue": "Journal-Japanese Society For Artificial Intelligence,", "year": 1999 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Weihao Gao", "Sewoong Oh", "Pramod Viswanath" ], "title": "Demystifying fixed k-nearest neighbor information estimators", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tilmann Gneiting", "Adrian E Raftery" ], "title": "Strictly proper scoring rules, prediction, and estimation", "venue": "Journal of the American statistical Association,", "year": 2007 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Qiushan Guo", "Xinjiang Wang", "Yichao Wu", "Zhipeng Yu", "Ding Liang", "Xiaolin Hu", "Ping Luo" ], "title": "Online knowledge distillation via collaborative learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Lars Kai Hansen", "Peter Salamon" ], "title": "Neural network ensembles", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1990 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "Proceedings of International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "David Hin" ], "title": "Stackoverflow vs 
kaggle: A study of developer discussions about data science", "venue": "arXiv preprint arXiv:2006.08334,", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "stat, 1050:9,", "year": 2015 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. Weinberger" ], "title": "Snapshot ensembles: Train 1, get m for free", "venue": null, "year": 2017 }, { "authors": [ "Kirthevasan Kandasamy", "Akshay Krishnamurthy", "Barnabas Poczos", "Larry Wasserman" ], "title": "Nonparametric von mises estimators for entropies, divergences and mutual informations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Sanjay Kariyappa", "Moinuddin K. Qureshi" ], "title": "Improving adversarial robustness of ensembles with diversity training", "venue": "arXiv preprint arXiv:1901.09981,", "year": 2019 }, { "authors": [ "Mete Kemertas", "Leila Pishdad", "Konstantinos G. Derpanis", "Afsaneh Fazly" ], "title": "Rankmi: A mutual information maximizing ranking loss", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Hyoungseok Kim", "Jaekyeom Kim", "Yeonwoo Jeong", "Sergey Levine", "Hyun Oh Song" ], "title": "Emi: Exploration with mutual information", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jangho Kim", "Minsung Hyun", "Inseop Chung", "Nojun Kwak" ], "title": "Feature fusion for online mutual knowledge distillation", "venue": "arXiv preprint arXiv:1904.09058,", "year": 2019 }, { "authors": [ "Wonsik Kim", "Bhavya Goyal", "Kunal Chawla", "Jungmin Lee", "Keunjoo Kwon" ], "title": "Attention-based ensemble for deep metric learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Andreas Kirsch", "Clare Lyle", "Yarin Gal" ], "title": "Unpacking information bottlenecks: Unifying information-theoretic objectives in deep learning", "venue": "arXiv preprint arXiv:2003.12537,", "year": 2020 }, { "authors": [ "Ron Kohavi", "David H Wolpert" ], "title": "Bias plus variance decomposition for zero-one loss functions", "venue": null, "year": 1996 }, { "authors": [ "John F Kolen", "Jordan B Pollack" ], "title": "Back propagation is sensitive to initial conditions", "venue": "In Advances in neural information processing systems,", "year": 1991 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Anders Krogh", "Jesper Vedelsby" ], "title": "Neural network ensembles, cross validation, and active learning", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Ludmila I Kuncheva", "Christopher J Whitaker" ], "title": "Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy", "venue": "Machine learning,", "year": 2003 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Xu Lan", "Xiatian Zhu", "Shaogang Gong" ], "title": "Knowledge distillation by on-the-fly native ensemble", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "Marc’Aurelio Ranzato", "Fu Jie Huang" ], "title": "A tutorial on energy-based learning", "venue": "To appear in “Predicting Structured Data,", "year": 2006 }, { "authors": [ "Stefan Lee", "Senthil Purushwalkam", "Michael Cogswell", "David J. 
Crandall", "Dhruv Batra" ], "title": "Why M heads are better than one: Training a diverse ensemble of deep networks", "venue": null, "year": 2015 }, { "authors": [ "Stefan Lee", "Michael Cogswell", "Viresh Ranjan", "David Crandall", "Dhruv Batra" ], "title": "Stochastic multiple choice learning for training diverse deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "Rayadurgam Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ralph Linsker" ], "title": "Self-organization in a perceptual network", "venue": null, "year": 1988 }, { "authors": [ "Cheng-Lin Liu", "Masaki Nakagawa" ], "title": "Evaluation of prototype learning algorithms for nearestneighbor classifier in application to handwritten character recognition", "venue": "Pattern Recognition,", "year": 2001 }, { "authors": [ "Yong Liu", "Xin Yao" ], "title": "Ensemble learning via negative correlation", "venue": "Neural networks,", "year": 1999 }, { "authors": [ "Yong Liu", "Xin Yao" ], "title": "Simultaneous training of negatively correlated neural networks in an ensemble", "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),", "year": 1999 }, { "authors": [ "Wesley J Maddox", "Pavel Izmailov", "Timur Garipov", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andres R. Masegosa" ], "title": "Learning under model misspecification: Applications to variational and ensemble methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Sina Molavipour", "Germán Bassi", "Mikael Skoglund" ], "title": "On neural estimators for conditional mutual information using nearest neighbors sampling", "venue": "arXiv preprint arXiv:2006.07225,", "year": 2020 }, { "authors": [ "Sudipto Mukherjee", "Himanshu Asnani", "Sreeram Kannan" ], "title": "Ccmi: Classifier based conditional mutual information estimation", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Ryan Muldoon" ], "title": "Social contract theory for a diverse world: Beyond tolerance", "venue": null, "year": 2016 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Nils J. 
Nilsson" ], "title": "Learning machines: Foundations of trainable pattern-classifying systems", "venue": null, "year": 1965 }, { "authors": [ "Jeremy Nixon", "Michael W Dusenberry", "Linchuan Zhang", "Ghassen Jerfel", "Dustin Tran" ], "title": "Measuring calibration in deep learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2019 }, { "authors": [ "Jana Novovičová", "Petr Somol", "Michal Haindl", "Pavel Pudil" ], "title": "Conditional mutual information based feature selection for classification task", "venue": "In Iberoamerican Congress on Pattern Recognition,", "year": 2007 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "David Sculley", "Sebastian Nowozin", "Joshua Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Improving adversarial robustness via promoting ensemble diversity", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hanchuan Peng", "Fuhui Long", "Chris Ding" ], "title": "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2005 }, { "authors": [ "Zhenyue Qin", "Dongwoo Kim" ], "title": "Rethinking softmax with cross-entropy: Neural network classifier as mutual information estimator", "venue": "arXiv preprint arXiv:1911.10688,", "year": 2019 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "A scalable laplace approximation for neural networks", "venue": "In 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings,", "year": 2018 }, { "authors": [ "Lior Rokach" ], "title": "Ensemble-based classifiers", "venue": "Artificial intelligence review,", "year": 2010 }, { "authors": [ "Andrew Slavin Ross", "Weiwei Pan", "Leo Anthony Celi", "Finale Doshi-Velez" ], "title": "Ensembles of locally independent prediction models", "venue": "In Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Adrià Ruiz", "Jakob Verbeek" ], "title": "Distilled Hierarchical Neural Ensembles with Adaptive Inference Cost. 
working paper or preprint, March 2020", "venue": "URL https://hal.inria.fr/ hal-02500660", "year": 2020 }, { "authors": [ "Antoine Saporta", "Yifu Chen", "Michael Blot", "Matthieu Cord" ], "title": "REVE: Regularizing Deep Learning with Variational Entropy Bound", "venue": "IEEE International Conference on Image Processing (ICIP)", "year": 2019 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning factorial codes by predictability minimization", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Claude E Shannon" ], "title": "A mathematical theory of communication", "venue": "The Bell system technical journal,", "year": 1948 }, { "authors": [ "Changjian Shui", "Azadeh Sadat Mozafari", "Jonathan Marek", "Ihsen Hedhli", "Christian Gagné" ], "title": "Diversity regularization in deep ensembles", "venue": null, "year": 2018 }, { "authors": [ "Demetrio Sierra-Mercado", "Gabriel Lázaro-Muñoz" ], "title": "Enhance diversity among researchers to promote participant trust in precision medicine research", "venue": "The American Journal of Bioethics,", "year": 2018 }, { "authors": [ "Harshinder Singh", "Neeraj Misra", "Vladimir Hnizdo", "Adam Fedorowicz", "Eugene Demchuk" ], "title": "Nearest neighbor estimates of entropy", "venue": "American journal of mathematical and management sciences,", "year": 2003 }, { "authors": [ "Saurabh Singh", "Derek Hoiem", "David Forsyth" ], "title": "Swapout: Learning an ensemble of deep architectures", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Samarth Sinha", "Homanga Bharadhwaj", "Anirudh Goyal", "Hugo Larochelle", "Animesh Garg", "Florian Shkurti" ], "title": "Dibs: Diversity inducing information bottleneck in model ensembles", "venue": "arXiv preprint arXiv:2003.04514,", "year": 2020 }, { "authors": [ "Stefano Soatto", "Alessandro Chiuso" ], "title": "Visual representations: Defining properties and deep approximations", "venue": null, "year": 2014 }, { "authors": [ "Casper Kaae Sønderby", "Jose Caballero", "Lucas Theis", "Wenzhe Shi", "Ferenc Huszár" ], "title": "Amortised map inference for image super-resolution", "venue": null, "year": 2016 }, { "authors": [ "Guocong Song", "Wei Chai" ], "title": "Collaborative learning for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jiaming Song", "Stefano Ermon" ], "title": "Understanding the limitations of variational mutual information estimators", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Asa Cooper Stickland", "Iain Murray" ], "title": "Diverse ensembles improve calibration", "venue": "arXiv preprint arXiv:2007.04206,", "year": 2020 }, { "authors": [ "Talia H Swartz", "Ann-Gel S Palermo", "Sandra K Masur", "Judith A Aberg" ], "title": "The science and value of diversity: Closing the gaps in our understanding of inclusion and diversity", "venue": "The Journal of infectious diseases,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": null, "year": 2020 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Naftali Tishby" ], "title": "The information bottleneck method", "venue": "In Proc. 
37th Annual Allerton Conference on Communications, Control and Computing,", "year": 1999 }, { "authors": [ "Naonori Ueda", "Ryohei Nakano" ], "title": "Generalization error of ensemble estimators", "venue": "In Proceedings of International Conference on Neural Networks (ICNN’96),", "year": 1996 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": null, "year": 2018 }, { "authors": [ "Bogdan Vasilescu", "Daryl Posnett", "Baishakhi Ray", "Mark GJ van den Brand", "Alexander Serebrenik", "Premkumar Devanbu", "Vladimir Filkov" ], "title": "Gender and tenure diversity in github teams", "venue": "In Proceedings of the 33rd annual ACM conference on human factors in computing systems,", "year": 2015 }, { "authors": [ "David H Wolpert" ], "title": "Stacked generalization", "venue": "Neural networks,", "year": 1992 }, { "authors": [ "A Wu", "S Nowozin", "E Meeds", "RE Turner", "JM Hernández-Lobato", "AL Gaunt" ], "title": "Deterministic variational inference for robust bayesian neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Guile Wu", "Shaogang Gong" ], "title": "Peer collaborative learning for online knowledge distillation", "venue": "arXiv preprint arXiv:2006.04147,", "year": 2020 }, { "authors": [ "Tailin Wu", "Ian Fischer" ], "title": "Phase transitions for the information bottleneck in representation learning", "venue": null, "year": 2020 }, { "authors": [ "Tailin Wu", "Ian Fischer", "Isaac L Chuang", "Max Tegmark" ], "title": "Learnability for the information", "venue": "bottleneck. Entropy,", "year": 2019 }, { "authors": [ "Pingmei Xu", "Krista A Ehinger", "Yinda Zhang", "Adam Finkelstein", "Sanjeev R. Kulkarni", "Jianxiong Xiao" ], "title": "Turkergaze: Crowdsourcing saliency with webcam based eye tracking", "venue": "arXiv preprint arXiv:1504.06755,", "year": 2015 }, { "authors": [ "Yanchao Yang", "Stefano Soatto" ], "title": "Fda: Fourier domain adaptation for semantic segmentation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "R.W. 
Yeung" ], "title": "A new outlook on shannon’s information measures", "venue": "IEEE Transactions on Information Theory,", "year": 1991 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 }, { "authors": [ "Tianyuan Yu", "Da Li", "Yongxin Yang", "Timothy M Hospedales", "Tao Xiang" ], "title": "Robust person reidentification by modelling feature uncertainty", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "Proceedings of the British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "Ruqi Zhang", "Chunyuan Li", "Jianyi Zhang", "Changyou Chen", "Andrew Gordon Wilson" ], "title": "Cyclical stochastic gradient mcmc for bayesian deep learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ying Zhang", "Tao Xiang", "Timothy M Hospedales", "Huchuan Lu" ], "title": "Deep mutual learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Han Zhao", "Amanda Coston", "Tameem Adel", "Geoffrey J. Gordon" ], "title": "Conditional learning of fair representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhi-Hua Zhou", "Jianxin Wu", "Wei Tang" ], "title": "Ensembling neural networks: many could be better than all", "venue": "Artificial intelligence,", "year": 2002 }, { "authors": [ "Sinha" ], "title": "Diversity inducing Information Bottleneck in Model Ensembles", "venue": null, "year": 2020 }, { "authors": [ "E[(fi − E[fi])(fj − E[fj" ], "title": "The estimation improves when the covariance between members is zero: the reduction factor of the variance component equals to M when errors are uncorrelated. Compared to the Bias-Variance Decomposition (Kohavi et al., 1996), it leads to a variance reduction", "venue": null, "year": 1996 }, { "authors": [ "M . Brown" ], "title": "in addition to the bias and variance of the individual estimators, the generalisation error of an ensemble also depends on the covariance between the individuals. This raises the interesting issue of why we should ever train ensemble members separately; why shouldn’t we try to find some way to capture the effect of the covariance in the error function?", "venue": null, "year": 2005 } ]
[ { "heading": null, "text": "Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information, we introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features. The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant. Therefore, besides the classification loss with information bottleneck, we adversarially prevent features from being conditionally predictable from each other. We manage to reduce simultaneous errors while protecting class information. We obtain state-of-the-art accuracy results on CIFAR-10/100: for example, an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently. We further analyze the consequences on calibration, uncertainty estimation, out-of-distribution detection and online co-distillation." }, { "heading": "1 INTRODUCTION", "text": "Averaging the predictions of several models can significantly improve the generalization ability of a predictive system. Due to its effectiveness, ensembling has been a popular research topic (Nilsson, 1965; Hansen & Salamon, 1990; Wolpert, 1992; Krogh & Vedelsby, 1995; Breiman, 1996; Dietterich, 2000; Zhou et al., 2002; Rokach, 2010; Ovadia et al., 2019) as a simple alternative to fully Bayesian methods (Blundell et al., 2015; Gal & Ghahramani, 2016). It is currently the de facto solution for many machine learning applications and Kaggle competitions (Hin, 2020).\nEnsembling reduces the variance of estimators (see Appendix E.1) thanks to the diversity in predictions. This reduction is most effective when errors are uncorrelated and members are diverse, i.e., when they do not simultaneously fail on the same examples. Conversely, an ensemble of M identical networks is no better than a single one. In deep ensembles (Lakshminarayanan et al., 2017), the weights are traditionally trained independently: diversity among members only relies on the randomness of the initialization and of the learning procedure. Figure 1 shows that the performance of this procedure quickly plateaus with additional members.\nTo obtain more diverse ensembles, we could adapt the training samples through bagging (Breiman, 1996) and bootstrapping (Efron & Tibshirani, 1994), but a reduction of training samples has a negative impact on members with multiple local minima (Lee et al., 2015). Sequential boosting does not scale well for time-consuming deep learners that overfit their training dataset. Liu & Yao (1999a;b); Brown et al. (2005b) explicitly quantified the diversity and regularized members into having negatively correlated errors. However, these ideas have not significantly improved accuracy when applied to deep learning (Shui et al., 2018; Pang et al., 2019): while members should predict the same target, they force disagreements among strong learners and therefore increase their bias. 
It highlights the main objective and challenge of our paper: finding a training strategy to reach an improved trade-off between ensemble diversity and individual accuracies (Masegosa, 2020).\none input ( , ) should not share more information than features from two inputs in the same class ( , ): i.e., ( ,-) should not be able to differentiate (-, ) and (-, ).\nOur core approach is to encourage all members to predict the same thing, but for different reasons. Therefore the diversity is enforced in the features space and not on predictions. Intuitively, to maximize the impact of a new member, extracted features should bring information about the target that is absent at this time so unpredictable from other members’ features. It would remove spurious correlations, e.g. information redundantly shared among features extracted by different members but useless for class prediction. This redundancy may be caused by a detail in the image background and therefore will not be found in features extracted from other images belonging to the same class. This could make members predict badly simultaneously, as shown in Figure 2.\nOur new learning framework, called DICE, is driven by Information Bottleneck (IB) (Tishby, 1999; Alemi et al., 2017) principles, that force features to be concise by forgetting the task-irrelevant factors. Specifically, DICE leverages the Minimum Necessary Information criterion (Fischer, 2020) for deep ensembles, and aims at reducing the mutual information (MI) between features and inputs, but also information shared between features. We prevent extracted features from being redundant. As mutual information can detect arbitrary dependencies between random variables (such as symmetry, see Figure 2), we increase the distance between pairs of members: it promotes diversity by reducing predictions’ covariance. Most importantly, DICE protects features’ informativeness by conditioning mutual information upon the target. We build upon recent neural approaches (Belghazi et al., 2018) based on the Donsker-Varadhan representation of the KL formulation of MI.\nWe summarize our contributions as follows:\n• We introduce DICE, a new adversarial learning framework to explicitly increase diversity in ensemble by minimizing the conditional redundancy between features.\n• We rationalize our training objective by arguments from information theory. • We propose an implementation through neural estimation of conditional redundancy.\nWe consistently improve accuracy on CIFAR-10/100 as summarized in Figure 1, with better uncertainty estimation and calibration. We analyze how the two components of our loss modify the accuracy-diversity trade-off. We improve out-of-distribution detection and online co-distillation." }, { "heading": "2 DICE MODEL", "text": "Notations Given an input distribution X , a network θ is trained to extract the best possible dense features Z to model the distribution pθ(Y |X) over the targets, which should be close to the Dirac on the true label. Our approach is designed for ensembles with M members θi, i ∈ {1, . . . ,M} extracting Zi. In branch-based setup, members share low-level weights to reduce computation cost. We average the M predictions in inference. We initially consider an ensemble of M = 2 members.\nQuick overview First, we train each member separately for classification with information bottleneck. Second, we train members together to remove spurious redundant correlations while training adversarially a discriminator. 
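To make the two steps concrete, here is a hedged PyTorch-style skeleton of one training iteration for M = 2 (our condensation of Algorithm 1 in Appendix B.4; `vceb_loss`, `cr_loss` and `discriminator_loss` are placeholder helpers sketched alongside Section 2.B below, and each member is assumed to return both its loss and its sampled features):

```python
# Hedged skeleton of one DICE iteration for M = 2; see Algorithm 1 (App. B.4)
# for the full procedure with schedulings, class memory and multiple sampling.
def dice_iteration(x, y, member1, member2, w, opt_members, opt_w,
                   beta_ceb, delta_cr):
    # Step 1: each member is trained for classification with its CEB loss.
    loss1, z1 = member1(x, y, beta_ceb)
    loss2, z2 = member2(x, y, beta_ceb)
    # Step 2: both members jointly minimize the estimated conditional
    # redundancy of their features, adversarially against the discriminator w.
    (loss1 + loss2 + delta_cr * cr_loss(w, z1, z2, y)).backward()
    opt_members.step(); opt_members.zero_grad()
    # The discriminator keeps learning to tell joint pairs (one image) from
    # class-conditional product pairs (two images of the same class).
    discriminator_loss(w, z1.detach(), z2.detach(), y).backward()
    opt_w.step(); opt_w.zero_grad()
```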
In conclusion, members learn to classify with conditionally uncorrelated features for increased diversity. Our procedure is driven by the following theoretical findings.\n2.A DERIVING TRAINING OBJECTIVE\n2.A.1 BASELINE: NON-CONDITIONAL OBJECTIVE\nThe Minimum Necessary Information (MNI) criterion from (Fischer, 2020) aims at finding minimal statistics. In deep ensembles, Z1 and Z2 should capture only minimal information from X , while preserving the necessary information about the task Y . First, we consider separately the two Markov chains Z1 ← X ↔ Y and Z2 ← X ↔ Y . As entropy measures information, entropy of Z1 and Z2 not related to Y should be minimized. We recover IB (Alemi et al., 2017) in deep ensembles: IBβib(Z1, Z2) = 1 βib\n[I(X;Z1) + I(X;Z2)] − [I(Y ;Z1) + I(Y ;Z2)] = IBβib(Z1) + IBβib(Z2). Second, let’s consider I(Z1;Z2): we minimize it following the minimality constraint of the MNI.\nIBRβib,δr (Z1, Z2) = 1 βib\nCompression︷ ︸︸ ︷ [I(X;Z1) + I(X;Z2)]− Relevancy︷ ︸︸ ︷ [I(Y ;Z1) + I(Y ;Z2)] +δr Redundancy︷ ︸︸ ︷ I(Z1;Z2)\n= IBβib(Z1) + IBβib(Z2) + δrI(Z1;Z2).\n(green vertical stripes ) with no overlap with relevancy (red stripes).\nAnalysis In this baseline criterion, relevancy encouragesZ1 and Z2 to capture information about Y . Compression & redundancy (R) split the information from X into two compressed & independent views. The relevancy-compressionredundancy trade-off depends on the values of βib & δr.\n2.A.2 DICE: CONDITIONAL OBJECTIVE\nThe problem is that the compression and redundancy terms in IBR also reduce necessary information related to Y : it is detrimental to have Z1 and Z2 fully disentangled while training them to predict the same Y . As shown on Figure 3, redundancy regions (blue horizontal stripes ) overlap with relevancy regions (red stripes). Indeed, the true constraints that the MNI criterion really entails are the following conditional equalities given Y :\nI(X;Z1|Y ) = I(X;Z2|Y ) = I(Z1;Z2|Y ) = 0. Mutual information being non-negative, we transform them into our main DICE objective:\nDICEβceb,δcr (Z1, Z2) = 1βceb [I(X;Z1|Y ) + I(X;Z2|Y )]︸ ︷︷ ︸\nConditional Compression\n− [I(Y ;Z1) + I(Y ;Z2)]︸ ︷︷ ︸ Relevancy +δcr I(Z1;Z2|Y )︸ ︷︷ ︸ Conditional Redundancy\n= CEBβceb(Z1) + CEBβceb(Z2) + δcrI(Z1;Z2|Y ),\n(1)\nwhere we recover two conditional entropy bottleneck (CEB) (Fischer, 2020) components, CEBβceb(Zi) = 1 βceb I(X;Zi|Y )− I(Y ;Zi), with βceb > 0 and δcr > 0.\nAnalysis The relevancy terms force features to be informative about the task Y . But contrary to IBR, DICE bottleneck constraints only minimize irrelevant information to Y . First, the conditional compression removes in Z1 (or Z2) information from X not relevant to Y . Second, the conditional redundancy (CR) reduces spurious correlations between members and only forces them to have independent bias, but definitely not independent features. It encourages diversity without affecting members’ individual precision as it protects information related to the target class in Z1 and Z2. Useless information from X to predict Y should certainly not be in Z1 or Z2, but it is even worse if they are in Z1 and Z2 simultaneously as it would cause simultaneous errors. Even if for i ∈ {1, 2}, reducing I(Zi, X|Y ) indirectly controls I(Z1, Z2|Y ) (as I(Z1;Z2|Y ) ≤ I(X;Zi|Y ) by chain rule), it is more efficient to directly target this intersection region through the CR term. 
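The chain-rule step invoked above can be spelled out; using only the Markov structure Z1 ← X → Z2 (so that Z1 and Z2 are conditionally independent given X and Y):

```latex
I(Z_1; Z_2 \mid Y) \;\le\; I(Z_1; Z_2, X \mid Y)
\;=\; I(Z_1; X \mid Y) + \underbrace{I(Z_1; Z_2 \mid X, Y)}_{=\,0\ \text{(Markov)}}
\;=\; I(X; Z_1 \mid Y).
```

Hence the conditional compression terms upper-bound the CR term, but penalizing I(Z1;Z2|Y) directly concentrates the pressure exactly on the overlap region of Figure 3.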
In a final word, DICE is to IBR for deep ensembles as CEB is to IB for a single network.\nWe now approximate the two CEB and the CR components in DICE objective from equation 1.\n2.B APPROXIMATING DICE INTO A TRACTABLE LOSS\n2.B.1 VARIATIONAL APPROXIMATION OF CONDITIONAL ENTROPY BOTTLENECK\nWe leverage Markov assumptions in Zi ← X ↔ Y, i ∈ {1, 2} and empirically estimate on the classification training dataset of N i.i.d. points D = {xn, yn}Nn=1, yn ∈ {1, . . . ,K}. Following Fischer (2020), CEBβceb(Zi) = 1 βceb I(X;Zi|Y )− I(Y ;Zi) is variationally upper bounded by:\nVCEBβceb({ei, bi, ci}) = 1\nN N∑ n=1 1 βceb DKL (ei(z|xn)‖bi(z|yn))− E [log ci(yn|ei(xn, ))] . (2)\nSee explanation in Appendix E.4. ei(z|x) is the true features distribution generated by the encoder, ci(y|z) is a variational approximation of true distribution p(y|z) by the classifier, and bi(z|y) is a variational approximation of true distribution p(z|y) by the backward encoder. This loss is applied separately on each member θi = {ei, ci, bi}, i ∈ {1, 2}.\nPractically, we parameterize all distributions with Gaussians. The encoder ei is a traditional neural network features extractor (e.g. ResNet-32) that learns distributions (means and covariances) rather than deterministic points in the features space. That’s why ei transforms an image into 2 tensors; a features-mean eµi (x) and a diagonal features-covariance e σ i (x) each of size d (e.g. 64). The classifier ci is a dense layer that transforms a features-sample z into logits to be aligned with the target y through conditional cross entropy. z is obtained via reparameterization trick: z = ei(x, ) = eµi (x)+ e σ i (x) with ∼ N(0, 1). Finally, the backward encoder bi is implemented as an embedding layer of size (K, d) that maps the K classes to class-features-means bµi (z|y) of size d, as we set the class-features-covariance to 1. The Gaussian parametrization also enables the exact computation of the DKL (see Appendix E.3), that forces (1) features-mean e µ i (x) to converge to the class-featuresmean bµi (z|y) and (2) the predicted features-covariance eσi (x) to be close to 1. The advantage of VCEB versus VIB (Alemi et al., 2017) is the class conditional bµi (z|y) versus non-conditional bµi (z) which protects class information.\n2.B.2 ADVERSARIAL ESTIMATION OF CONDITIONAL REDUNDANCY\nTheoretical Problem We now focus on estimating I(Z1;Z2|Y ), with no such Markov properties. Despite being a pivotal measure, mutual information estimation historically relied on nearest neighbors (Singh et al., 2003; Kraskov et al., 2004; Gao et al., 2018) or density kernels (Kandasamy et al., 2015) that do not scale well in high dimensions. We benefit from recent advances in neural estimation of mutual information (Belghazi et al., 2018), built on optimizing Donsker & Varadhan (1975) dual representations of the KL divergence. Mukherjee et al. (2020) extended this formulation for conditional mutual information estimation.\nCR = I(Z1;Z2|Y ) = DKL(P (Z1, Z2, Y )‖P (Z1, Y )p(Z2|Y )) = sup\nf Ex∼p(z1,z2,y)[f(x)]− log\n( Ex∼p(z1,y)p(z2|y)[exp(f(x))] ) = Ex∼p(z1,z2,y)[f∗(x)]− log ( Ex∼p(z1,y)p(z2|y)[exp(f∗(x))] ) ,\nwhere f∗ computes the pointwise likelihood ratio, i.e., f∗(z1, z2, y) = p(z1,z2,y)\np(z1,y)p(z2|y) .\nEmpirical Neural Estimation We estimate CR (1) using the empirical data distribution and (2) replacing f∗ = w ∗\n1−w∗ by the output of a discriminator w, trained to imitate the optimal w ∗. 
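Before turning to how the batches for this discriminator are built, the Gaussian parametrization of VCEB just described can be sketched as follows (ours; the Softplus covariance head, single dense classifier and class-embedding backward encoder follow Appendix B.2, but names and sizes are illustrative, and the DKL uses the closed form of Appendix E.3):

```python
# Hedged PyTorch sketch of one member's VCEB loss (eq. 2). `backbone` is any
# trunk mapping images to d-dimensional features-means, e.g. a ResNet-32.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CEBMember(nn.Module):
    def __init__(self, backbone, d=64, num_classes=100):
        super().__init__()
        self.backbone = backbone
        self.sigma_head = nn.Linear(d, d)                 # predicts e_sigma(x)
        self.classifier = nn.Linear(d, num_classes)       # c(y|z)
        self.class_means = nn.Embedding(num_classes, d)   # b(z|y), unit cov.

    def forward(self, x, y, beta_ceb):
        mu = self.backbone(x)                             # e_mu(x)
        sigma = F.softplus(self.sigma_head(mu))           # e_sigma(x) > 0
        z = mu + sigma * torch.randn_like(mu)             # reparameterization
        # Closed-form KL(N(mu, sigma^2) || N(b_mu(y), 1)), summed over d dims:
        kl = 0.5 * (sigma ** 2 - 2.0 * torch.log(sigma) - 1.0
                    + (mu - self.class_means(y)) ** 2).sum(-1)
        ce = F.cross_entropy(self.classifier(z), y, reduction="none")
        return (kl / beta_ceb + ce).mean(), z
```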
Let\nB be a batch sampled from the observed joint distribution p(z1, z2, y) = p(e1(z|x), e2(z|x), y); we select the features extracted by the two members from one input. Let Bp be sampled from the product distribution p(z1, y)p(z2|y) = p(e1(z|x), y)p(z2|y); we select the features extracted by the two members from two different inputs that share the same class. We train a multi-layer network w on the binary task of distinguishing these two distributions with the standard cross-entropy loss:\nLce(w) = − 1\n|B|+ |Bp| ∑ (z1,z2,y)∈B logw(z1, z2, y) + ∑ (z1,z′2,y)∈Bp log(1− w(z1, z′2, y)) . (3)\nIf w is calibrated (see Appendix B.3), a consistent (Mukherjee et al., 2020) estimate of CR is:\nÎCRDV = 1 |B| ∑\n(z1,z2,y)∈B log f(z1, z2, y)︸ ︷︷ ︸ Diversity − log\n 1 |Bp| ∑ (z1,z′2,y)∈Bp f(z1, z ′ 2, y)︸ ︷︷ ︸\nFake correlations\n ,with f = w 1− w .\nIntuition By training our members to minimize ÎCRDV , we force triples from the joint distribution to be indistinguishable from triples from the product distribution. Let’s imagine that two features are conditionally correlated, some spurious information is shared between features only when they are from the same input and not from two inputs (from the same class). This correlation can be informative about a detail in the background, an unexpected shape in the image, that is rarely found in samples from this input’s class. In that case, the product and joint distributions are easily distinguishable by the discriminator. The first adversarial component will force the extracted features to reduce the correlation, and ideally one of the two features loses this information: it reduces redundancy and increases diversity. The second term would create fake correlations between features from different inputs. As we are not interested in a precise estimation of the CR, we get rid of this second term that, empirically, did not increase diversity, as detailed in Appendix G.\nL̂CRDV (e1, e2) = 1 |B| ∑\n(z1,z2,y)∈B∼p(e1(z|x),e2(z|x),y)\nlog f(z1, z2, y). (4)\nSummary First, we train each member for classification with VCEB from equation 2, as shown in Step 1 from Figure 4. Second, as shown in Step 2 from Figure 4, the discriminator, conditioned on the class Y , learns to distinguish features sampled from one image versus features sampled from two images belonging to Y . Simultaneously, both members adversarially (Goodfellow et al., 2014) delete spurious correlations to reduce CR estimation from equation 4 with differentiable signals: it conditionally aligns features. We provide a pseudo-code in B.4. While we derive similar losses for IBR and CEBR in Appendix E.5, the full DICE loss is finally:\nLDICE(θ1, θ2) = VCEBβceb(θ1) + VCEBβceb(θ2) + δcrL̂CRDV (e1, e2). (5)\n2.C FULL PROCEDURE WITH M MEMBERS\nWe expand our objective for an ensemble with M > 2 members. We only consider pairwise interactions for simplicity to keep quadratic rather than exponential growth in number of components and truncate higher order interactions, e.g. I(Zi;Zj , Zk|Y ) (see Appendix F.1). Driven by previous variational and neural estimations, we train θi = {ei, bi, ci}, i ∈ {1, . . . ,M} on:\nLDICE(θ1:M ) = M∑ i=1 VCEBβceb(θi) + δcr (M − 1) M∑ i=1 M∑ j=i+1 L̂CRDV (ei, ej), (6)\nwhile training adversariallyw on Lce. Batch B is sampled from the concatenation of joint distribution p(zi, zj , y) where i, j ∈ {1, . . . ,M}, i 6= j, while Bp is sampled from the product distribution, p(zi, y)p(zj |y). We use the same discriminator w for ( M 2 ) estimates. 
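A matching sketch of the adversarial machinery of equations 3 and 4 (ours; `w` maps a triple (z1, z2, y) to a logit, the class-conditional product pairs are built by permuting features within a class, which simplifies the class memory of Appendix B.3, and the tanh-clipping of the log-ratio also follows Appendix B.3):

```python
# Hedged sketch of eqs. 3-4; z1, z2 should be detached when updating w.
import torch
import torch.nn.functional as F

def product_pairs(z2, y):
    # Re-pair z2 with other samples of the same class (assumes every class
    # present in the batch/memory appears at least twice).
    idx = torch.arange(len(y))
    for c in y.unique():
        m = (y == c).nonzero(as_tuple=True)[0]
        idx[m] = m[torch.randperm(len(m))]
    return z2[idx]

def discriminator_loss(w, z1, z2, y):          # eq. 3: updates w only
    joint = w(z1, z2, y)                       # from p(z1, z2, y)
    prod = w(z1, product_pairs(z2, y), y)      # from p(z1, y) p(z2|y)
    return (F.binary_cross_entropy_with_logits(joint, torch.ones_like(joint))
            + F.binary_cross_entropy_with_logits(prod, torch.zeros_like(prod)))

def cr_loss(w, z1, z2, y, tau=10.0):           # eq. 4: updates the members
    log_f = w(z1, z2, y)   # with a sigmoid head, log[w/(1-w)] is the raw logit
    return (tau * torch.tanh(log_f / tau)).mean()  # clipped log density ratio
```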
It improves scalability by reducing the number of parameters to be learned. Indeed, an additional member in the ensemble only adds 256 ∗ d trainable weights in w, where d is the features dimension. See Appendix B.3 for additional information related to the discriminator w." }, { "heading": "3 RELATED WORK", "text": "To reduce the training cost of deep ensembles (Hansen & Salamon, 1990; Lakshminarayanan et al., 2017), Huang et al. (2017) collect snapshots on training trajectories. One stage end-to-end codistillation (Song & Chai, 2018; Lan et al., 2018; Chen et al., 2020b) share low-level features among members in branch-based ensemble while forcing each member to mimic a dynamic weighted combination of the predictions to increase individual accuracy. However both methods correlate errors among members, homogenize predictions and fail to fit the different modes of the data which overall reduce diversity.\nBeyond random initializations (Kolen & Pollack, 1991), authors implicitly introduced stochasticity into the training, by providing subsets of data to learners with bagging (Breiman, 1996) or by backpropagating subsets of gradients (Lee et al., 2016); however, the reduction of training samples hurts performance for sufficiently complex models that overfit their training dataset (Nakkiran et al., 2019). Boosting with sequential training is not suitable for deep members (Lakshminarayanan et al., 2017). Some approaches applied different data augmentations (Dvornik et al., 2019; Stickland & Murray, 2020), used different networks or hyperparameters (Singh et al., 2016; Ruiz & Verbeek, 2020; Yang & Soatto, 2020), but are not general-purpose and depend on specific engineering choices.\nOthers explicitly encourage orthogonality of the gradients (Ross et al., 2020; Kariyappa & Qureshi, 2019; Dabouei et al., 2020) or of the predictions, by boosting (Freund & Schapire, 1999; Margineantu & Dietterich) or with a negative correlation regularization (Shui et al., 2018), but they reduce members accuracy. Second-order PAC-Bayes bounds motivated the diversity loss in Masegosa (2020). As far as we know, adaptive diversity promoting (ADP) (Pang et al., 2019) is the unique approach more accurate than the independent baseline: they decorrelate the non-maximal predictions. The limited success of these logits approaches suggests that we seek diversity in features. Empirically we found that the increase of (L1, L2, − cos) distances between features (Kim et al., 2018) reduce performance: they are not invariant to variables’ symmetry. Simultaneously to our findings, Sinha et al. (2020) is somehow equivalent to our IBR objective (see Appendix C.2) but without information bottleneck motivations for the diversity loss.\nThe uniqueness of mutual information (see Appendix E.2) as a distance measure between variables has been applied in countless machine learning projects, such as reinforcement learning (Kim et al., 2019a), metric learning (Kemertas et al., 2020), or evolutionary algorithms (Aguirre & Coello, 2004). Objectives are often a trade-off between (1) informativeness and (2) compression. 
In computer vision, unsupervised deep representation learning (Hjelm et al., 2019; van den Oord et al., 2018; Tian et al., 2020a; Bachman et al., 2019) maximizes correlation between features and inputs following Infomax (Linsker, 1988; Bell & Sejnowski, 1995), while discarding information not shared among different views (Bhardwaj et al., 2020), or penalizing predictability of one latent dimension given the others for disentanglement (Schmidhuber, 1992; Comon, 1994; Kingma & Welling, 2014; Kim & Mnih, 2018; Blot et al., 2018).\nThe ideal level of compression is task dependent (Soatto & Chiuso, 2014). As a selection criterion, features should not be redundant (Battiti, 1994; Peng et al., 2005) but relevant and complementary given the task (Novovičová et al., 2007; Brown, 2009). As a learning criteria, correlations between features and inputs are minimized according to Information Bottleneck (Tishby, 1999; Alemi et al., 2017; Kirsch et al., 2020; Saporta et al., 2019), while those between features and targets are maximized (LeCun et al., 2006; Qin & Kim, 2019). It forces the features to ignore task-irrelevant factors (Zhao et al., 2020), to reduce overfitting (Alemi et al., 2018) while protecting needed information (Tian et al., 2020b). Fischer & Alemi (2020) concludes in the superiority of conditional alignment to reach the MNI point." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present our experimental results on the CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) datasets. We detail our implementation in Appendix B. We took most hyperparameter values from Chen et al. (2020b). Hyperparameters for adversarial training and information bottleneck were fine-tuned on a validation dataset made of 5% of the training dataset, see Appendix D.1. Bold highlights best score. First, we show gain in accuracy. Then, we further analyze our strategy’s impacts on calibration, uncertainty estimation, out-of-distribution detection and co-distillation.\n4.A COMPARISON OF CLASSIFICATION ACCURACY\nTable 1: CIFAR-100 ensemble classification accuracy (Top-1, %).\nName Components ResNet-32 ResNet-110 WRN-28-2Div. I.B. 3-branch 4-branch 5-branch 4-net 3-branch 4-branch 3-branch 4-branch 3-net\nInd. 76.28±0.12 76.78± 0.19 77.24± 0.25 77.38± 0.12 80.54± 0.09 80.89± 0.31 78.83± 0.12 79.10± 0.08 80.01± 0.15\nONE (Lan et al., 2018) 75.17±0.35 75.13±0.25 75.25±0.22 76.25±0.32 78.97±0.24 79.86±0.25 78.38±0.45 78.47±0.32 77.53±0.36 OKDDip (Chen et al., 2020b) 75.37±0.32 76.85±0.25 76.95±0.18 77.27±0.31 79.07±0.27 80.46±0.35 79.01±0.19 79.32±0.17 80.02±0.14\nADP (Pang et al., 2019) Pred. 76.37±0.11 77.21±0.21 77.67±0.25 77.51±0.25 80.73±0.38 81.40± 0.27 79.21±0.19 79.71±0.18 80.01±0.17\nIB (equation 8) VIB 76.01±0.12 76.93± 0.24 77.22±0.19 77.72±0.12 80.43±0.34 81.12±0.19 79.19±0.35 79.15±0.12 80.15±0.13 CEB (equation 2) VCEB 76.36±0.06 76.98± 0.18 77.35±0.14 77.64± 0.15 81.08± 0.12 81.17± 0.16 78.92±0.08 79.20±0.13 80.38±0.18\nIBR (equation 9) R VIB 76.68±0.13 77.25± 0.13 77.77±0.21 77.84±0.12 81.34±0.21 81.38± 0.08 79.33±0.15 79.90±0.10 80.22±0.10 CEBR (equation 10) R VCEB 76.72±0.08 77.30± 0.12 77.81± 0.10 77.82± 0.11 81.52±0.11 81.55±0.33 79.25±0.15 79.98±0.07 80.35±0.15\nDICE (equation 6) CR VCEB 76.89± 0.09 77.51± 0.17 78.08± 0.18 77.92± 0.08 81.67±0.14 81.93± 0.13 79.59±0.13 80.05±0.11 80.55± 0.12\nTable 1 reports the Top-1 classification accuracy averaged over 3 runs with standard deviation for CIFAR-100, while Table 2 focuses on CIFAR-10. 
{3,4,5}-{branch,net} refers to the training of {3,4,5}members {with,without} low-level weights sharing. Ind. refers to independent deterministic deep ensembles without interactions between members (except optionally the low-level weights sharing). DICE surpasses concurrent approaches (summarized in Appendix C) for ResNet and WideResNet architectures, in network and even more in branch setup. We bring significant and systematic improvements to the current state-of-the-art ADP (Pang et al., 2019): e.g., {+0.52,+0.30,+0.41} for {3,4,5}-branches ResNet-32, {+0.94,+0.53} for {3,4}-branches ResNet-110 and finally +0.34 for 3-networks WRN-28-2. Diversity approaches better leverage size, as shown on the main Figure 1, which is detailed in Table 8: on CIFAR-100, DICE outperforms Ind. by {+0.60,+0.73,+0.84} for {3,4,5}-branches ResNet-32. Finally, learning only the redundancy loss without compression yields unstable results: CEB learns a distribution (at almost no extra cost) that stabilizes adversarial training (see Appendix F.1) through sampling, with lower standard deviation in results than IB (βib can hinder the learnability (Wu et al., 2019b)).\n4.B ABLATION STUDY\nBranch-based is attractive: it reduces bias by gradient diffusion among shared layers, at only a slight cost in diversity which makes our approach even more valuable. We therefore study the 4-branches ResNet-32 on CIFAR-100 in following experiments. We ablate the two components of DICE: (1) deterministic, with VIB or VCEB, and (2) no adversarial loss, or with redundancy, conditionally or not. We measure diversity by the ratio-error (Aksela, 2003), r = NsingleNshared , which computes the ratio between the number of single errors Nsingle and of shared errors Nshared. A higher average over the( M 2 ) pairs means higher diversity as members are less likely to err on the same inputs. Our analysis remains valid for non-pairwise diversity measures, analyzed in Appendix A.5.\nIn Figure 5, CEB has slightly higher diversity than Ind.: it benefits from compression. ADP reaches higher diversity but sacrifices individual accuracies. On the contrary, co-distillation OKDDip sacri-\nfices diversity for individual accuracies. DICE curve is above all others, and notably δcr = 0.2 induces an optimal trade-off between ensemble diversity and individual accuracies on validation. CEBR reaches same diversity with lower individual accuracies: information about Y is removed.\nFigure 6 shows that starting from random initializations, diversity begins small: DICE minimizes the estimated CR in features and increases diversity in predictions compared to CEB (δcr = 0.0). The effect is correlated with δcr: a high value (0.6) creates too much diversity. On the contrary, a negative value (−0.025) can decrease diversity. Figure 8 highlights opposing dynamics in accuracies.\nFigure 5: Ensemble diversity/individual accuracy trade-off for different strategies. DICE (r. CEBR) is learned with different δcr (r. δr). Figure 6: Impact of the diversity coefficient δcr in DICE on the training dynamics on validation: CR is negatively correlated with diversity.\n4.C FURTHER ANALYSIS: UNCERTAINTY ESTIMATION AND CALIBRATION\nProcedure We follow the procedure from (Ashukha et al., 2019). To evaluate the quality of the uncertainty estimates, we reported two complementary proper scoring rules (Gneiting & Raftery, 2007); the Negative Log-Likelihood (NLL) and the Brier Score (BS) (Brier, 1950). 
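For reference, the diversity and scoring metrics used in this section can be computed as follows (a hedged numpy sketch; `pred_i`, `pred_j` are members' predicted labels, `probs` the ensemble-averaged probabilities of shape [N, K], and the Brier normalization shown is one common choice):

```python
# Hedged sketch of the ratio-error (Aksela, 2003) and the two proper scoring
# rules reported above.
import numpy as np

def ratio_error(pred_i, pred_j, y):       # higher = more diverse
    err_i, err_j = pred_i != y, pred_j != y
    n_single = np.sum(err_i ^ err_j)      # exactly one member errs
    n_shared = np.sum(err_i & err_j)      # both err simultaneously
    return n_single / max(n_shared, 1)

def nll(probs, y):                        # Negative Log-Likelihood
    return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))

def brier(probs, y):                      # Brier Score
    onehot = np.eye(probs.shape[1])[y]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))
```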
To measure the calibration, i.e., how classification confidences match the observed prediction accuracy, we report the Expected Calibration Error (ECE) (Naeini et al., 2015) and the Thresholded Adaptive Calibration Error (TACE) (Nixon et al., 2019) with 15 bins: TACE resolves some pathologies in ECE by thresholding and adaptive binning. Ashukha et al. (2019) showed that “comparison of [. . .] ensembling methods without temperature scaling (Guo et al., 2017) might not provide a fair ranking”. Therefore, we randomly divide the test set into two equal parts and compute metrics for each half using the temperature T optimized on another half: their mean is reported. Table 3 compares results after temperature scaling (TS) while those before TS are reported in Table 9 in Appendix A.6.\nResults We recover that ensembling improves performances (Ovadia et al., 2019), as one single network (1-net) performs significantly worse than ensemble approaches with 4-branches ResNet-32. Members’ disagreements decrease internal temperature and increase uncertainty estimation. DICE performs best even after TS, and reduces NLL from 8.13 to 7.98 and BS from 3.24 to 3.12 compared to independant learning. Calibration criteria benefit from diversity though they do “not provide a consistent ranking” as stated in Ashukha et al. (2019): for example, we notice that ECE highly depends on hyperparameters, especially δcr, as shown on Figure 8 in Appendix A.4.\n4.D FURTHER ANALYSIS: DISCRIMINATOR BEHAVIOUR THROUGH OOD DETECTION\nTo measure the ability of our ensemble to distinguish in- and out-of-distribution (OOD) images, we consider other datasets at test time following (Hendrycks & Gimpel, 2017) (see Appendix D.2). The confidence score is estimated with the maximum softmax value: the confidence for OOD images should ideally be lower than for CIFAR-100 test images.\nTemperature scaling (results in Table 7) refines performances (results without TS in Table 6). DICE beats Ind. and CEB in both cases. Moreover, we suspected that features were more correlated for OOD images: they may share redundant artifacts. DICE×w multiplies the classification logits by the mean over all pairs of 1 − w(zi, zj , ŷ), i 6= j, with predicted ŷ (as the true y is not available at test time). DICE×w performs even better than DICE+TS, but at the cost of additional operations. It shows that w can detect spurious correlations, adversarially deleted only when found in training.\n4.E FURTHER ANALYSIS: DIVERSE TEACHER FOR IMPROVED CO-DISTILLATION\nThe inference time in network-ensembles grows linearly with M. Sharing early-features is one solution. We experiment another one by using only the M-th branch at test time. We combine DICE with OKDDip (Chen et al., 2020b): the M-th branch (= the student) learns to mimic the soft predictions from the M-1 first branches (= the teacher), among which we enforce diversity. Our teacher has lower internal temperature (as shown in Experiment 4.c): DICE performs best when soft predictions are generated with lower T . We improve state-of-the-art by {+0.42,+0.53} for {3,4}-branches." }, { "heading": "5 CONCLUSION", "text": "In this paper, we addressed the task of improving deep ensembles’ learning strategies. Motivated by arguments from information theory, we derive a novel adversarial diversity loss, based on conditional mutual information. We tackle the trade-off between individual accuracies and ensemble diversity by deleting spurious and redundant correlations. 
We reach state-of-the-art performance on standard image classification benchmarks. In Appendix F.2, we also show how to regularize deterministic encoders with conditional redundancy without compression: this increases the applicability of our research findings. The success of many real-world systems in production depends on the robustness of deep ensembles: we hope to pave the way towards general-purpose strategies that go beyond independent learning." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was granted access to the HPC resources of IDRIS under the allocation 20XXAD011011953 made by GENCI. We acknowledge the financial support by the ANR agency in the chair VISA-DEEP (project number ANR-20-CHIA-0022-01). Finally, we would like to thank those who helped and supported us during these confinements, in particular Julie and Rouille." }, { "heading": "Appendices", "text": "Appendix A shows additional experiments. Appendix B describes our implementation to facilitate reproduction. In Appendix C, we summarize the concurrent approaches (see Table 10). In Appendix D, we describe the datasets and the metrics used in our experiments. Appendix E clarifies certain theoretical formulations. In Appendix F, we explain that DICE is a second-order approximation in terms of information interactions and then we try to apply our diversity regularization to deterministic encoders. Appendix G motivates the removal of the second term from our neural estimation of conditional redundancy. We conclude with a sociological analogy in Appendix H." }, { "heading": "A ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "A.1 COMPARISONS WITH CO-DISTILLATION AND SNAPSHOT-BASED APPROACHES", "text": "Table 5: Ensemble Accuracy on different setups. Concurrent approaches’ accuracies are those reported in recent papers. DICE outperforms co-distillation and snapshot-based ensembles collected on the training trajectory, which fail to capture the different modes of the data (Ashukha et al., 2019).\nArchitecture Concurrent Approach Baseline Ours\nDataset Backbone Structure Ens. Size Name Acc. According to Ind. Acc. DICE Acc.\nCIFAR-100\nResNet-32 Branches 3 CL-ILR (Song & Chai, 2018) 72.99 (Chen et al., 2020b) 76.28 76.89\nNets 3 DML (Zhang et al., 2018) 76.11 (Chung et al., 2020) 76.45 76.98AFD (Chung et al., 2020) 76.64 (Chung et al., 2020)\nResNet-110\nBranches 3 FFL (Kim et al., 2019b) 78.22 (Wu & Gong, 2020) 80.54 81.67PCL-E (Wu & Gong, 2020) 80.51 (Wu & Gong, 2020)\n4 CL-ILR (Song & Chai, 2018) 79.81 (Chen et al., 2020b) 80.89 81.93\nNets 5\nSWAG (Maddox et al., 2019) 77.69 (Ashukha et al., 2019)\n81.7 (Ashukha et al., 2019) 81.82 Cyclic SGLD (Zhang et al., 2019) 74.27 (Ashukha et al., 2019) Fast Geometric Ens (Garipov et al., 2018) 78.78 (Ashukha et al., 2019) Variational Inf. (FFG) (Wu et al., 2019a) 77.59 (Ashukha et al., 2019)\nKFAC-Laplace (Ritter et al., 2018) 77.13 (Ashukha et al., 2019) Snapshot Ensembles (Huang et al., 2017) 77.17 (Ashukha et al., 2019)\nWRN-28-2 Nets 3 DML (Zhang et al., 2018) 79.41 (Chung et al., 2020) 80.01 80.55AFD (Chung et al., 2020) 79.78 (Chung et al., 2020)\nCIFAR-10 ResNet-110 Branches 3 FFL (Kim et al., 2019b) 95.01 (Wu & Gong, 2020) 95.62 95.74PCL-E (Wu & Gong, 2020) 95.58 (Wu & Gong, 2020)" }, { "heading": "A.2 OUT-OF-DISTRIBUTION DETECTION", "text": "Table 6 summarizes our OOD experiments in the 4-branches ResNet-32 setup. We recover that IB improves OOD detection (Alemi et al., 2018). 
Moreover, we empirically validate our intuition: features from in-distribution images are in average less predictive from each other compared to pairs of features from OOD images. w can perform alone as a OOD-detector, but is best used in complement to DICE. In DICE×w, logits are multiplied by the sigmoid output of w averaged over all pairs. Table 7 shows that temperature scaling improves all approaches without modifying ranking. Finally, DICE×w, even without TS, is better than DICE, even with TS." }, { "heading": "A.3 ACCURACY VERSUS SIZE", "text": "We recover from Table 8 the Memory Split Advantage (MSA) from Chirkova et al. (2020): splitting the memory budget between three branches of ResNet-32 results in better performance than spending twice the budget on one ResNet-110. DICE further improves this advantage. Our framework is particularly effective in the branch-based setting, as it reduces the computational overhead (especially in terms of FLOPS) at a slight cost in diversity. A 4-branches DICE ensemble has the same accuracy in average as a classical 7-branches ensemble." }, { "heading": "A.4 TRAINING DYNAMICS IN TERMS OF ACCURACY, UNCERTAINTY ESTIMATION AND CALIBRATION", "text": "" }, { "heading": "A.5 TRAINING DYNAMICS IN TERMS OF DIVERSITY", "text": "We measured diversity in 4.b with the ratio error (Aksela, 2003). But as stated by Kuncheva & Whitaker (2003), diversity can be measured in numerous ways. For pairwise measures, we averaged over the ( M 2 ) pairs: the Q-statistics is positive when classifiers recognize the same object, the agreement score measures the frequency that both classifiers predict the same class. Note that even if we only apply pairwise constraints, we also increase non-pairwise measures: for example, the Kohavi-Wolpert variance (Kohavi et al., 1996) which measures the variability of the predicted class, and the entropy diversity which measures overall disagreement." }, { "heading": "A.6 UNCERTAINTY ESTIMATION AND CALIBRATION BEFORE TEMPERATURE SCALING", "text": "" }, { "heading": "B TRAINING DETAILS", "text": "" }, { "heading": "B.1 GENERAL OPTIMIZATION", "text": "Experiments Classical hyperparameters were taken from (Chen et al., 2020b) for conducting fair comparisons. Newly added hyperparameters were fine-tuned on a validation dataset made of 5% of the training dataset.\nArchitecture We implemented the proposed method with ResNet (He et al., 2016) and WideResNet (Zagoruyko & Komodakis, 2016) architectures. Following standard practices, we average the logits of our predictions uniformly. For branch-based ensemble, we separate the last block and the classifier of each member from the weights sharing while the other low-level layers were shared.\nLearning Following (Chen et al., 2020b), we used SGD with Nesterov with momentum of 0.9, mini-batch size of 128, weight decay of 5e-4, 300 epochs, a standard learning rate scheduler that sets values {0.1, 0.001, 0.0001} at steps {0, 150, 225} for CIFAR-10/100. In CIFAR-100, we additionally set the learning rate at 0.00001 at step 250. We used traditional basic data augmentation that consists of horizontal flips and a random crop of 32 pixels with a padding of 4 pixels. The learning curve is shown on Figure 8.\nB.2 INFORMATION BOTTLENECK IMPLEMENTATION\nArchitecture Features are extracted just before the dense layer since deeper layers are more semantics, of size d = {64, 128, 256} for {ResNet-32, WRN-28-2, ResNet-110}. 
Our encoder does not provide a deterministic point in the features space but a feature distribution encoded by mean and diagonal covariance matrix. The covariance is predicted after a Softplus activation function with one additional dense layer, taking as input the features mean, with d(d+ 1) trainable weights. In training we sample once from this features distribution with the reparameterization trick. In inference, we predict from the distribution’s mean (and therefore only once). We parameterized b(z|y) ∼ N(bµ(y),1) with trainable mean and unit diagonal covariance, with d additional trainable weights per class. As noticed in (Fischer & Alemi, 2020), this can be represented as a single embedding layer mapping one-hot classes to d-dimensional tensors. Therefore in total we only add d(d+1+K) trainable weights, that all can be discarded during inference. For VIB, the embedding bµ is shared among classes: in total it adds d(d + 2) trainable weights. Contrary to recent IB approaches (Wu et al., 2019b; Wu & Fischer, 2020; Fischer & Alemi, 2020), we only have one dense layer to predict logits after the features bottleneck, and we did not change the batch normalization, for fair comparisons with traditional ensemble methods.\nScheduling We employ the jump-start method that facilitates the learning of bottleneck-inspired models (Wu et al., 2019b; Wu & Fischer, 2020; Fischer & Alemi, 2020): we progressively anneal the value of βceb. For CIFAR-10, we took the scheduling from (Fischer & Alemi, 2020), except that we widened the intervals to make the training loss decrease more smoothly: log(βceb) reaches values {100, 10, 2} at steps {0, 5, 100}. No standard scheduling was available for CIFAR-100. As it is more difficult than CIFAR-10, we added additional jump-epochs with lower values: log(βceb) reaches values {100, 10, 2, 1.5, 1} at steps {0, 8, 175, 250, 300}. This slow scheduling increases progressively the covariance predictions eσ(x) and facilitates learning. For VIB, we scheduled similarly using the equivalence from (Fischer, 2020): βib = βceb + 1. We found VCEB to have lower standard deviation in performances than VCEB: βib can hinder the learnability (Wu et al., 2019b). These schedulings have been used in all our setups, without and with redundancy losses, for ResNet-32, ResNet-110 and WRN-28-10, for from 1 to 10 members." }, { "heading": "B.3 ADVERSARIAL TRAINING IMPLEMENTATION", "text": "Redundancy Following standard adversarial learning practices, our discriminator for redundancy estimation is a MLP with 4 layers of size {256, 256, 100, 1}, with leaky-ReLus of slope 0.2, optimized by RMSProp with learning rate {0.003, 0.005} for CIFAR-{10, 100}. We empirically found that four steps for the discriminator for one step of the classifier increase stability. Specifically, it takes as input the concatenation of the two hidden representations of size d, sampled with a repa-\nrameterization trick. Gradients are not backpropagated in the layer that predicts the covariance, as it would artificially increase the covariance to reduce the mutual information among branches. The output, followed by a sigmoid activation function, should be close to 1 (resp. 0) if the sample comes from the joint (resp. product) distribution.\nConditional Redundancy The discriminator for CR estimation needs to take into account the target class Y . It first embeds Y in an embedding layer of size 64, which is concatenated at the inputs of the first and second layers. 
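This conditional architecture can be sketched as below (ours, following the sizes given in the text: a 4-layer MLP with leaky-ReLU slope 0.2, a class embedding of size 64 concatenated at the inputs of the first two layers, and a K-dimensional output from which the logit indexed by y is selected; details may differ from the authors' implementation):

```python
# Hedged sketch of the conditional discriminator of Appendix B.3.
import torch
import torch.nn as nn

class CondDiscriminator(nn.Module):
    def __init__(self, d=64, num_classes=100, emb=64):
        super().__init__()
        self.y_emb = nn.Embedding(num_classes, emb)
        self.fc1 = nn.Linear(2 * d + emb, 256)
        self.fc2 = nn.Linear(256 + emb, 256)
        self.fc3 = nn.Linear(256, 100)
        self.fc4 = nn.Linear(100, num_classes)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, z1, z2, y):
        e = self.y_emb(y)
        h = self.act(self.fc1(torch.cat([z1, z2, e], dim=-1)))
        h = self.act(self.fc2(torch.cat([h, e], dim=-1)))
        h = self.act(self.fc3(h))
        logits = self.fc4(h)                                 # one logit per class
        return logits.gather(1, y.unsqueeze(1)).squeeze(1)   # select class y
```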
Improved features merging method could be applied, such as Ben-Younes et al. (2019). The output has size K, and we select the index associated with the Y . We note in Figure 11 that our discriminator remains calibrated.\nEnsemble with M Models In the general case, we only consider pairwise interactions, therefore we need to estimate ( M 2 ) values. To reduce the number of parameters, we use only one discriminator w. Features associated with zk are filled with zeros when we sample from p(zi, zj , y) or from p(zi, y)p(zj |y), where i, j, k ∈ {1, . . . ,M}, k 6= i and k 6= j. Therefore, the input tensor\nfor the discriminator is of size (M ∗ d + 64): its first layer has (M ∗ d + 64) ∗ 256 dense weights: the number of weights in w scales linearly with M and d as w’s input grows linearly, but w’s hidden size remains fixed.\nδcr value For branch-based and network-based CIFAR-100, we found δcr at {0.1, 0.15, 0.2, 0.22, 0.25} for {2, 3, 4, 5, 6} members to perform best on the validation dataset when training on 95% on the classical training dataset. For CIFAR-10, {0.1} for 4 members. We found that lower values of δr were necessary for our baselines IBR and CEBR.\nScheduling For fair comparison, we apply the traditional ramp-up scheduling up to step 80 from the co-distillation literature (Lan et al., 2018; Kim et al., 2019b; Chen et al., 2020b) to all concurrent approaches and to our redundancy training.\nSampling To sample from p(z1, z2, y), we select features extracted from one image. To sample from p(z1, y)p(z2|y), we select features extracted from two different inputs, that share the same class y. In practise, we keep a memory from previous batches as the batch size is 128 whereas we have 100 classes in CIFAR-100. This memory, of size M ∗ d ∗ K ∗ 4, is updated at the end of each training step. Our sampling is a special case of k-NN sampling (Molavipour et al., 2020): as we sample from a discrete categorical variable, the closest neighbour has exactly the same discrete value. The training can be unstable as it minimises the divergence between two distributions. To make them overlap over the features space, we sample numsample = {4} times from the gaussian distribution of Z1 and Z2 with the reparameterization trick. This procedure is similar to instance noise (Sønderby et al., 2016) and it allows us to safely optimise w at each iteration. It gives better robustness than just giving the gaussian mean. Moreover, we progressively ease the discriminator task by scheduling the covariance through time with a linear ramp-up. First the covariance is set to 1 until epoch 100, then it linearly reduces to the predicted covariance eσi (x) until step 250. We sample a ratio rationegpos of one positive pair for {2, 4} negative pairs on CIFAR-{10, 100}.\nClipping Following Bachman et al. (2019), we clip the density ratios (tanhclip) by computing the non linearity exp[τ tanh log[f(z1,z2,y)]τ ]. A lower τ reduces the variance of the estimation and stabilizes the training even with a strong discriminator, at the cost of additional bias. The clipping threshold τ was set to 10 as in Song & Ermon (2020)." 
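In explicit notation, the clipped density ratio reads

```latex
f_{\mathrm{clip}}(z_1, z_2, y) \;=\; \exp\!\left(\tau\,\tanh\!\left(\frac{\log f(z_1, z_2, y)}{\tau}\right)\right), \qquad \tau = 10,
```

so that |log f_clip| ≤ τ: large ratios are smoothly saturated, trading a little bias for a bounded-variance estimate.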
}, { "heading": "B.4 PSEUDO-CODE", "text": "Algorithm 1: Full DICE Procedure for M = 2 members /* Setup */ Parameters: θ1 = {e1, b1, c1}, θ2 = {e2, b2, c2} and discriminator w, randomly initialized Input: Observations {xn, yn}Nn=1, coefficients βceb and δcr, schedulings scheceb and\nrampupendstepstartstep, clipping threshold τ , batch size b, optimisers gθ1,2 and gw, number of discriminators step nstepd, number of samples nums, ratio of positive/negative sample rationegpos\n/* Training Procedure */ 1 for s← 1 to 300 do 2 βsceb ← scheceb(startvalue=0, endvalue=βceb, step=s) 3 δscr ← rampup800 (startvalue=0, endvalue=δcr, step=s) 4 Randomly select batch {(xn, yn)}n∈B of size b // Batch Sampling /* Step 1: Classification Loss with CEB */ 5 for m← 1 to 2 do 6 zni ← e µ i (z|xn) + eσi (z|xn), ∀n ∈ B with ∼ N(0, 1)\n7 VCEBi ← 1b ∑ n∈B { 1βscebDKL(ei(z|x n)‖bi(z|yn))− log ci(yn|zni }\n/* Step 2: Diversity Loss with Conditional Redundancy */ 8 for m← 1 to 2 do 9 eσ,si (z|xn) = rampup250100(startvalue=1, endvalue=eσi (z|xn), step=s)\n10 for k ← 1 to nums do 11 zni,k ← e µ i (z|xn) + e σ,s i (z|xn), ∀n ∈ B with ∼ N(0, 1)\n12 B ← {(zn1,k, zn2,k, yn)}, ∀n ∈ B, k ∈ {1, . . . , nums} // Joint Distrib. 13 L̂CRDV ← 1|B| ∑ t∈B log f(t) with f(t)← tanhclip( w(t)1−w(t) , τ) 14 θ1,2 ← gθ1,2(∇θ1VCEB1 +∇θ2VCEB2 + δscr∇θ1,2L̂CRDV ) // Backprop Ensemble /* Step 3: Adversarial Training */ 15 for ← 1 to nstepd do 16 B ← {(zn1,k, zn2,k, yn)}, ∀n ∈ B,∀k ∈ {1, . . . , nums} // Joint Distrib. 17 Bp ← {(zn1,k, zn ′ 2,k′ , y n)}, ∀n ∈ B,∀k ∈ {1, . . . , nums}, k′ ∈ {1, . . . , rationegpos } 18 with n′ ∈ B, yn = yn′ , n 6= n′ // Product distribution 19 w ← gw(∇wLce(w)) // Backprop Discriminator 20 Sample new zni,k\n/* Test Procedure */ Data: Inputs {xn}Tn=1 // Test Data Output: argmax\nk∈{1,...,K} ( 12 [c1(e µ 1 (z|xn)) + c2(e µ 2 (z|xn))]), ∀n ∈ {1, . . . , T}" }, { "heading": "B.5 EMPIRICAL LIMITATIONS", "text": "Our approach relies on very recent works in neural network estimation of mutual information, that still suffer from loose approximations. Improvements in this area would facilitate our learning procedure. Our approach increases the number of operations because of the adversarial procedure, but only during training: the inference time remains the same." }, { "heading": "C CONCURRENT APPROACHES", "text": "Concurrent approaches can be divided in two general patterns: they promote either individual accuracy by co-distillation either ensemble diversity." }, { "heading": "C.1 CO-DISTILLATION APPROACHES", "text": "Contrary to the traditional distillation (Hinton et al., 2015) that aligns the soft prediction between a static pre-trained strong teacher towards a smaller student, online co-distillation performs teaching in an end-to-end one-stage procedure: the teacher and the student are trained simultaneously.\nDistillation in Logits The seminal ”Deep Mutual Learning” (DML) (Zhang et al., 2018) introduced the main idea: multiple networks learn to mimic each other by reducing KL-losses between pairs of predictions. ”Collaborative learning for deep neural networks” (CL-ILR) (Song & Chai, 2018) used the branch-based architecture by sharing low-level layers to reduce the training complexity, and ”Knowledge Distillation by On-the-Fly Native Ensemble” (ONE) (Lan et al., 2018) used a weighted combination of logits as teacher hence providing better information to each network. ”Online Knowledge Distillation via Collaborative Learning” (KDCL) (Guo et al., 2020) computed the optimum weight on an held-out validation dataset. 
”Feature Fusion for Online Mutual Knowledge Distillation” (FFL) (Kim et al., 2019b) introduced a feature fusion module. These approaches improve individual performance at the cost of increased homogenization. ”Online Knowledge Distillation with Diverse Peers” (OKDDip) (Chen et al., 2020b) slightly alleviates this problem with an asymmetric distillation and a self-attention mechanism. ”Peer Collaborative Learning for Online Knowledge Distillation” (PCL) (Wu & Gong, 2020) benefited from the mean-teacher paradigm with temporal ensembling and from diverse data augmentation, at the cost of multiple inferences through the shared backbone.\nDistillation in Features Whereas all previous approaches only apply distillation on the logits, the recent ”Feature-map-level Online Adversarial Knowledge Distillation” (AFD) (Chung et al., 2020) aligned features distributions by adversarial training. Note that this is not opposite to our approach, as they force distributions to be similar while we force them to be uncorrelated." }, { "heading": "C.2 DIVERSITY APPROACHES", "text": "On the other hands, some recent papers in computer vision explicitly encourage diversity among the members with regularization losses.\nDiversity in Logits ”Diversity Regularization in Deep Ensembles” (Shui et al., 2018) applied negative correlation (Liu & Yao, 1999a) to regularize the training for improved calibration, with no impact on accuracy. ”Learning under Model Misspecification: Applications to Variational and Ensemble methods” (Masegosa, 2020) theoretically motivated the minimization of second-order PAC-Bayes bounds for ensembles, empirically estimated through a generalized variational method.\n”Adaptive Diversity Promoting” (ADP) (Pang et al., 2019) decorrelates only the non-maximal predictions to maintain the individual accuracies, while promoting ensemble entropy. It forces different members to have different ranking of predictions among non maximal predictions. However, Liang et al. (2018) has shown that ranking of outputs are critical: for example, non maximal logits tend to be more separated from each other for in-domain inputs compared to out-of-domain inputs. Therefore individual accuracies are decreased. Coefficients α and β are respectively set to 2 and 0.5, as in the original paper.\nDiversity in Features One could think about increasing classical distances among features like L2 in (Kim et al., 2018), but in our experiments it reduces overall accuracy: it is not even invariant to linear transformations such as translation. ”Diversity inducing Information Bottleneck in Model Ensembles” from Sinha et al. (2020) trains a multi-branch network and applies VIB on individual branch, by encoding p(z|y) ∼ N (0, 1), which was shown to be hard to learn (Wu & Fischer, 2020). Moreover, we notice that their diversity-inducing adversarial loss is an estimation of the JS-divergence between pairs of features, built on the dual f -divergence representation (Nowozin et al., 2016): similar idea was recently used for saliency detection (Chen et al., 2020a). As the JS-divergence is a symmetrical formulation of the KL, we argue that DIBS and IBR share the same motivations and only have minor discrepancies: the adversarial terms in DIBS loss with both terms sampled from the same branch and both terms sampled from the same prior. In our experiments, these differences reduce overall performance. 
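As a reminder (a standard identity, added here for self-containedness), the symmetrization referred to is

```latex
\mathrm{JS}(P \,\|\, Q) \;=\; \tfrac{1}{2}\, D_{\mathrm{KL}}\!\left(P \,\middle\|\, \tfrac{P+Q}{2}\right) \;+\; \tfrac{1}{2}\, D_{\mathrm{KL}}\!\left(Q \,\middle\|\, \tfrac{P+Q}{2}\right),
```

to be contrasted with the asymmetric KL that underlies the DV estimation of redundancy in equation 4.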
We will include their scores when they publish measurable results on CIFAR datasets or when they release their code.\nDiversity in Gradients ”Improving adversarial robustness of ensembles with diversity training.” (GAL) (Kariyappa & Qureshi, 2019) enforced diversity in the gradients with a gradient alignment loss. ”Exploiting Joint Robustness to Adversarial Perturbations” (Dabouei et al., 2020) considered the optimal bound for the similarity of gradients. However, as stated in the latter, “promoting diversity of gradient directions slightly degrades the classification performance on natural examples . . . [because] classifiers learn to discriminate input samples based on distinct sets of representative features”. Therefore we do not consider them as concurrent work." }, { "heading": "D EXPERIMENTAL SETUP", "text": "" }, { "heading": "D.1 TRAINING DATASETS", "text": "We train our procedure on two image classification benchmarks, CIFAR-100 and CIFAR-10, (Krizhevsky et al., 2009). They consist of 60k 32*32 natural and colored images in respectively 100 classes and 10 classes, with 50k training images and 10k test images. For hyperparameter selection and ablation studies, we train on 95% of the training dataset, and analyze performances on the validation dataset made of the remaining 5%." }, { "heading": "D.2 OOD", "text": "Dataset We used the traditional out-of-distribution datasets for CIFAR-100, described in (Liang et al., 2018): TinyImageNet (Deng et al., 2009), LSUN (Yu et al., 2015), iSUN(Xu et al., 2015), and CIFAR-10. We borrowed the evaluation code from https://github.com/ uoguelph-mlrg/confidence_estimation (DeVries & Taylor, 2018).\nMetrics We reported the standard metrics for binary classification: FPR at 95 % TPR, Detection error, AUROC (Area Under the Receiver Operating Characteristic curve) and AUPR (Area under the Precision-Recall curve, -in or -out depending on which dataset is specified as positive). See Liang et al. (2018) for definitions and interpretations of these metrics." }, { "heading": "E ADDITIONAL THEORETICAL ELEMENTS", "text": "" }, { "heading": "E.1 BIAS VARIANCE COVARIANCE DECOMPOSITION", "text": "The Bias-Variance-Covariance Decomposition (Ueda & Nakano, 1996) generalizes the BiasVariance Decomposition (Kohavi et al., 1996) by treating the ensemble of M members as a single learning unit.\nE[(f − t)2] = bias2 + 1 M var + (1− 1 M )covar, (7)\nwith\nbias = 1\nM ∑ i (E[fi]− t),\nvar = 1\nM ∑ i E[(E[fi]− t)2],\ncovar = 1 M(M − 1) ∑ i ∑ j 6=i E[(fi − E[fi])(fj − E[fj ])].\nThe estimation improves when the covariance between members is zero: the reduction factor of the variance component equals to M when errors are uncorrelated. Compared to the Bias-Variance Decomposition (Kohavi et al., 1996), it leads to a variance reduction of 1M . Brown et al. (2005a;b) summarized it this way: “in addition to the bias and variance of the individual estimators, the generalisation error of an ensemble also depends on the covariance between the individuals. This raises the interesting issue of why we should ever train ensemble members separately; why shouldn’t we try to find some way to capture the effect of the covariance in the error function?”.\nE.2 MUTUAL INFORMATION\nNobody knows what entropy really is.\nJohn Van Neumann to Claude Shannon\nAt the cornerstone of Shannon’s information theory in 1948 (Shannon, 1948), mutual information is the difference between the sum of individual entropies and the entropy of the variables considered jointly. 
E.2 MUTUAL INFORMATION\nNobody knows what entropy really is. — John von Neumann to Claude Shannon\nAt the cornerstone of Shannon’s information theory in 1948 (Shannon, 1948), mutual information is the difference between the sum of individual entropies and the entropy of the variables considered jointly. Stated otherwise, it is the reduction in the uncertainty of one variable due to the knowledge of the other variable (Cover, 1999). Entropy owed its name to the thermodynamic measure of uncertainty introduced by Rudolf Clausius and developed by Ludwig Boltzmann.\nI(Z1; Z2) = H(Z1) + H(Z2) − H(Z1, Z2) = H(Z1) − H(Z1|Z2) = D_KL(P(Z1, Z2) ‖ P(Z1)P(Z2)).\nThe conditional mutual information generalizes mutual information when a third variable is given:\nI(Z1; Z2|Y) = D_KL(P(Z1, Z2|Y) ‖ P(Z1|Y)P(Z2|Y))." }, { "heading": "E.3 KL BETWEEN GAUSSIANS", "text": "The Kullback-Leibler divergence (Kullback, 1959) between two Gaussian distributions takes a particularly simple form:\nD_KL(e(z|x) ‖ b(z|y)) = log(bσ(y)/eσ(x)) + (eσ(x)² + (eµ(x) − bµ(y))²)/(2 bσ(y)²) − 1/2   (Gaussian param.)\n= (1/2) [(eσ(x)² − log(eσ(x)²) − 1) + (eµ(x) − bµ(y))²],   (bσ(y) = 1)\nwhere the first bracketed term is the variance component and the second is the mean component. The variance component forces the predicted variance eσ(x) to be close to bσ(y) = 1. The mean component forces the class-embedding bµ(y) to converge to the average of the different elements in its class. These class-embeddings are similar to class-prototypes, highlighting a theoretical link between CEB (Fischer, 2020; Fischer & Alemi, 2020) and prototype-based learning methods (Liu & Nakagawa, 2001)." }, { "heading": "E.4 DIFFERENCE BETWEEN VCEB AND VIB", "text": "In Fischer (2020), CEB is variationally upper bounded by VCEB. We detail the computations:\nCEB_βceb(Z) = (1/βceb) I(X; Z|Y) − I(Y; Z)   (Definition)\n= (1/βceb) [I(X, Y; Z) − I(Y; Z)] − I(Y; Z)   (Chain rule)\n= (1/βceb) [I(X; Z) − I(Y; Z)] − I(Y; Z)   (Markov assumptions)\n= (1/βceb) [−H(Z|X) + H(Z|Y)] − [H(Y) − H(Y|Z)]   (MI as diff. of 2 ent.)\n≤ (1/βceb) [−H(Z|X) + H(Z|Y)] − [−H(Y|Z)]   (Non-negativity of ent.)\n= ∫ {(1/βceb) log(e(z|x)/p(z|y)) − log p(y|z)} p(x, y, z) ∂x ∂y ∂z   (Definition of ent.)\n≤ ∫ {(1/βceb) log(e(z|x)/b(z|y)) − log c(y|z)} p(x, y) e(z|x) ∂x ∂y ∂z   (Variational approx.)\n≈ (1/N) Σ_{n=1}^{N} ∫ {(1/βceb) log(e(z|xⁿ)/b(z|yⁿ)) − log c(yⁿ|z)} e(z|xⁿ) ∂z   (Empirical data distrib.)\n≈ VCEB_βceb(θ = {e, b, c}),   (Reparameterization trick)\nwhere\nVCEB_βceb(θ = {e, b, c}) = (1/N) Σ_{n=1}^{N} {(1/βceb) D_KL(e(z|xⁿ) ‖ b(z|yⁿ)) − E_ε log c(yⁿ|e(xⁿ, ε))}.\nAs a reminder, Alemi et al. (2017) upper bounded IB_βib(Z) = (1/βib) I(X; Z) − I(Y; Z) by:\nVIB_βib(θ = {e, b, c}) = (1/N) Σ_{n=1}^{N} {(1/βib) D_KL(e(z|xⁿ) ‖ b(z)) − E_ε log c(yⁿ|e(xⁿ, ε))}. (8)\nIn VIB, all feature distributions e(z|x) are moved towards the same class-agnostic distribution b(z) ∼ N(µ, σ), independently of y. In VCEB, e(z|x) are moved towards the class-conditional marginal b(z|y) ∼ N(bµ(y), bσ(y)). This is the unique difference between VIB and VCEB. VIB leads to a looser approximation with more bias than VCEB.
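As a quick numerical illustration of the closed form from E.3 and of this VIB/VCEB difference (our own sketch; the values are made up):

```python
import numpy as np

def kl_gauss(mu1, var1, mu2, var2):
    """Closed-form KL(N(mu1, diag(var1)) || N(mu2, diag(var2))), summed over dims."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1)

e_mu, e_var = np.array([0.4, -1.2]), np.array([0.8, 1.1])  # encoder output e(z|x)
b_mu_y = np.array([0.5, -1.0])                             # class embedding bµ(y), bσ(y) = 1
b_mu_0 = np.zeros(2)                                       # class-agnostic prior b(z) (VIB)

print(kl_gauss(e_mu, e_var, b_mu_y, np.ones(2)))  # VCEB target: small, pulls e(z|x) to its class
print(kl_gauss(e_mu, e_var, b_mu_0, np.ones(2)))  # VIB target: larger, pulls all toward one prior
```

The VCEB penalty is much smaller than the VIB one here because the class embedding already sits near the encoded mean, which is exactly the "tighter approximation" argument above.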
" }, { "heading": "E.5 TRANSFORMING IBR AND CEBR INTO TRACTABLE LOSSES", "text": "In this section, we derive the variational approximation of the IBR criterion, defined by:\nIBR_βib,δr(Z1, Z2) = IB_βib(Z1) + IB_βib(Z2) + δr I(Z1; Z2).\nRedundancy Estimation To estimate the redundancy component, we apply the same procedure as for conditional redundancy but without the categorical constraint, as in the seminal work of Belghazi et al. (2018) for mutual information estimation. Let B and Bp be two random batches sampled respectively from the observed joint distribution p(z1, z2) = p(e1(z|x), e2(z|x)) and the product distribution p(z1)p(z2) = p(e1(z|x))p(e2(z|x′)), where x, x′ are two inputs that may not belong to the same class. We similarly train a network w that tries to discriminate these two distributions. With f = w/(1 − w), the redundancy estimation is:\nÎ_R^DV = (1/|B|) Σ_{(z1,z2)∈B} log f(z1, z2) − log((1/|Bp|) Σ_{(z1,z2′)∈Bp} f(z1, z2′)),\nwhere the first sum is the diversity term, and the final loss is:\nL̂_R^DV(e1, e2) = (1/|B|) Σ_{(z1,z2)∈B} log f(z1, z2).\nIBR Finally, we train θ1 = {e1, b1, c1} and θ2 = {e2, b2, c2} jointly by minimizing:\nL_IBR(θ1, θ2) = VIB_βib(θ1) + VIB_βib(θ2) + δr L̂_R^DV(e1, e2). (9)\nCEBR For the ablation study, we also consider a criterion that benefits from CEB’s tight approximation but with non-conditional redundancy regularization:\nL_CEBR(θ1, θ2) = VCEB_βceb(θ1) + VCEB_βceb(θ2) + δr L̂_R^DV(e1, e2). (10)" }, { "heading": "F FIRST, SECOND AND HIGHER-ORDER INFORMATION INTERACTIONS", "text": "" }, { "heading": "F.1 DICE REDUCES FIRST AND SECOND ORDER INTERACTIONS", "text": "Applying information-theoretic principles to deep ensembles leads to tackling interactions among features through conditional mutual information minimization. We define the order of an information interaction as the number of different extracted features involved.\nFirst Order Tackling the first-order interaction I(X; Zi|Y) with VCEB empirically increased overall performance compared to ensembles of deterministic feature extractors learned with categorical cross-entropy, at no cost in inference and almost no additional cost in training. In the Markov chain Zi ← X → Zj, the chain rule provides: I(Zi; Zj|Y) ≤ I(X; Zi|Y). More generally, I(X; Zi|Y) upper bounds higher-order interactions such as the third-order I(Zi; Zj, Zk|Y). In conclusion, VCEB reduces an upper bound of higher-order interactions with quite a simple variational approximation.\nSecond Order In this paper, we directly target the second-order interaction I(Zi; Zj|Y) through a more complex adversarial training. We increase diversity and performance by removing spurious correlations shared by Zi and Zj that would otherwise cause simultaneous errors.\nHigher Order interactions include the third order I(Zi; Zj, Zk|Y), the fourth order I(Zi; Zj, Zk, Zl|Y), etc., up to the M-th order. They capture more complex correlations among features. For example, Zj alone (and Zk alone) could be unable to predict Zi, while together [Zj, Zk] could. However, we only consider first- and second-order interactions in the current submission. This is common practice, for example in the feature selection literature (Battiti, 1994; Fleuret, 2004; Brown, 2009; Peng et al., 2005). The main reason to truncate higher-order interactions is computational, as the number of components would grow exponentially and add significant additional cost in training. Another reason is empirical: the additional hyper-parameters may be hard to calibrate. But these higher-order interactions could be approximated through neural estimations like the second order. For example, for the third order, features Zi, Zj and Zk could be given simultaneously to the discriminator w. The complete analysis of these higher-order interactions has huge potential and could lead to a future research project." }, { "heading": "F.2 LEARNING FEATURES INDEPENDENCE WITHOUT COMPRESSION", "text": "The question is whether we could learn deterministic encoders with second-order I(Zi; Zj|Y) regularization without tackling the first order I(X; Zi|Y). We summarize several approaches in Table 11.\nFirst Approach Without Sampling Deterministic encoders predict deterministic points in the feature space.
Feeding the discriminator w with deterministic triples without sampling increases diversity and reaches 77.09, compared to 76.78 for independent deterministic encoders. Compared to DICE, w’s task has been simplified: indeed, w tries to separate the joint and the product deterministic distributions, which may not overlap anymore. This violates convergence conditions and destabilizes the overall adversarial training and the equilibrium between the encoders and the discriminator.\nSampling and Reparameterization Trick To make the joint and product distributions overlap over the feature space, we apply the reparameterization trick on features with variance 1. This second approach is similar to instance noise (Sønderby et al., 2016), which tackled the instability of adversarial training. We reached 77.33 by protecting individual accuracies.\nSynergy between CEB and CR In comparison, we obtain 77.51 with DICE. In addition to the theoretical motivations, VCEB and CR work empirically in synergy. First, the adversarial learning is simplified and only focuses on spurious correlations that VCEB has not already deleted. This may explain the improved stability with respect to the value of δcr and the reduction in the standard deviations of performances. Second, VCEB learns a Gaussian distribution: a mean but also an input-dependent covariance eσi(x). This covariance fits the uncertainty of a given sample: in a similar context, Yu et al. (2019) showed that large covariances were given to difficult samples. Sampling from this input-dependent covariance performs better than using an arbitrary fixed variance shared by all dimensions of all extracted features from all samples, from 77.29 to 77.51.\nConclusion DICE benefits from both components: learning conditional redundancy along with VCEB improves results, at almost no extra cost. We think CR can definitely be applied with deterministic encoders as long as the inputs of the discriminator are sampled from overlapping distributions in the feature space. Future work could study new methods to select the variance in sampling. As compression losses yield additional hyper-parameters and may underperform for some architectures/datasets, learning only from the conditional redundancy (without compression) could increase the applicability of our contributions.\nG IMPACT OF THE SECOND TERM IN THE NEURAL ESTIMATION OF CONDITIONAL REDUNDANCY" }, { "heading": "G.1 CONDITIONAL REDUNDANCY IN TWO COMPONENTS", "text": "The conditional redundancy can be estimated by the difference between two components:\nÎ_CR^DV = (1/|B|) Σ_{(z1,z2,y)∈B} log f(z1, z2, y) − log((1/|Bp|) Σ_{(z1,z2′,y)∈Bp} f(z1, z2′, y)), (11)\nwith f = w/(1 − w); the first sum is the diversity term and the second the fake-correlations term. In this paper, we focused only on the left-hand side (LHS) component of equation 11, which leads to L̂_CR^DV in equation 4. We showed empirically that it improves ensemble diversity and overall performance. The LHS forces features extracted from the same input to be unpredictable from each other, i.e., to simulate that they have been extracted from two different images.\nNow we investigate the impact of the right-hand side (RHS) component of equation 11. We conjecture that the RHS forces features extracted from two different inputs from the same class to create fake correlations, to simulate that they have been extracted from the same image. Overall, the RHS would correlate members and decrease diversity in our ensemble.
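To make the LHS term concrete, the following is a minimal sketch (our own, not the released implementation) of how the discriminator w and the diversity loss of equation 4 could be written; the architecture, optimizer settings, and the within-class permutation used to build product pairs are illustrative assumptions.

```python
import torch
import torch.nn as nn

d = 64  # per-member feature dimension (illustrative)
w = nn.Sequential(nn.Linear(2 * d, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_w = torch.optim.Adam(w.parameters(), lr=1e-4)
bce = nn.functional.binary_cross_entropy

def discriminator_step(z1, z2):
    """Train w to separate joint pairs (features of the same image) from product
    pairs; here the batch is assumed to contain a single class, so permuting z2
    mimics sampling z2' from another image of the same class (conditioning on y)."""
    z1, z2 = z1.detach(), z2.detach()
    joint = torch.cat([z1, z2], dim=1)
    prod = torch.cat([z1, z2[torch.randperm(z2.size(0))]], dim=1)
    loss = bce(w(joint), torch.ones(len(z1), 1)) + bce(w(prod), torch.zeros(len(z1), 1))
    opt_w.zero_grad(); loss.backward(); opt_w.step()

def cr_diversity_loss(z1, z2):
    """LHS of equation 11: the encoders are penalized when w can tell that
    z1 and z2 came from the same image (f = w / (1 - w))."""
    p = w(torch.cat([z1, z2], dim=1)).clamp(1e-6, 1 - 1e-6)
    return torch.log(p / (1 - p)).mean()
```

Minimizing `cr_diversity_loss` with respect to the encoders (while `discriminator_step` keeps w up to date) drives the joint pairs toward the product distribution, which is the diversity effect discussed above; adding the RHS term would correspond to also maximizing the second term of equation 11.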
}, { "heading": "G.2 EXPERIMENTS", "text": "These intuitions are confirmed by experiments with a 4-branches ResNet-32 on CIFAR-100, which are illustrated in Figure 12. Training only with the RHS and removing the LHS (the opposite of what is done in DICE) reduces diversity compared to CEB. Moreover, keeping both the LHS and the RHS leads to slightly reduced diversity and ensemble accuracy compared to DICE. We obtained 77.40± 0.19 with LHS+RHS instead of 77.51± 0.17 with only the LHS. In conclusion, dropping the RHS performs better while reducing the training cost." }, { "heading": "H SOCIOLOGICAL ANALOGY", "text": "We showed that increasing diversity in features while encouraging the different learners to agree improves performance for neural networks: the optimal diversity-accuracy trade-off was obtained with a large diversity. To finish, we make a short analogy with the importance of diversity in our society. Decision-making in group is better than individual decision as long as the members do not belong to the same cluster. Homogenization of the decision makers increases vulnerability to failures, whereas diversity of backgrounds sparks new discoveries (Muldoon, 2016): ideas should be shared and debated among members reflecting the diversity of the society’s various components. Academia especially needs this diversity to promote trust in research (Sierra-Mercado & LázaroMuñoz, 2018), to improve quality of the findings (Swartz et al., 2019), productivity of the teams (Vasilescu et al., 2015) and even schooling’s impact (Bowman, 2013)." }, { "heading": "I LEARNING STRATEGY OVERVIEW", "text": "We provide in Figure 13 a zoomed version of our learning strategy." }, { "heading": "J MAIN TABLE", "text": "Table 12 unifies our main results on CIFAR-100 from Table 1 and CIFAR-10 from Table 2.\nFi gu\nre 13\n: L\nea rn\nin g\nst ra\nte gy\nov er\nvi ew\n. B\nlu e\nar ro\nw s\nre pr\nes en\nt tr\nai ni\nng cr\nite ri\na: (1\n) cl\nas si\nfic at\nio n\nw ith\nco nd\niti on\nal en\ntr op\ny bo\nttl en\nec k\nap pl\nie d\nse pa\nra te\nly on\nm em\nbe rs\n1 an\nd 2,\nan d\n(2 )\nad ve\nrs ar\nia l\ntr ai\nni ng\nto de\nle te\nsp ur\nio us\nco rr\nel at\nio ns\nbe tw\nee n\nm em\nbe rs\nan d\nin cr\nea se\ndi ve\nrs ity\n. X\nan d X ′\nbe lo\nng to\nth e\nsa m\ne Y\nfo r\nco nd\niti on\nal re\ndu nd\nan cy\nm in\nim iz\nat io\nn.\nTa bl\ne 12\n:E ns\nem bl\ne cl\nas si\nfic at\nio n\nac cu\nra cy\n(T op\n-1 ,%\n).\nM et\nho d\nC IF\nA R\n-1 00\nC IF\nA R\n-1 0\nN am\ne C\nom po\nne nt\ns R\nes N\net -3\n2 R\nes N\net -1\n10 W\nR N\n-2 8-\n2 R\nes N\net -3\n2 R\nes N\net -1 10 D iv . I.B . 3- br an ch 4- br an ch 5- br an ch 4- ne t 3- br an ch 4- br an ch 3- br an ch 4- br an ch 3- ne t 4- br an ch 3- br an ch\nIn d.\n76 .2\n8± 0.\n12 76\n.7 8±\n0. 19\n77 .2\n4± 0.\n25 77\n.3 8±\n0. 12\n80 .5\n4± 0.\n09 80\n.8 9±\n0. 31\n78 .8\n3± 0.\n12 79\n.1 0±\n0. 08\n80 .0\n1± 0.\n15 94\n.7 5±\n0. 08\n95 .6\n2± 0.\n06\nO N\nE (L\nan et\nal .,\n20 18\n) 75\n.1 7±\n0. 35\n75 .1\n3± 0.\n25 75\n.2 5±\n0. 22\n76 .2\n5± 0.\n32 78\n.9 7±\n0. 24\n79 .8\n6± 0.\n25 78\n.3 8±\n0. 45\n78 .4\n7± 0.\n32 77\n.5 3±\n0. 36\n94 .4\n1± 0.\n05 95\n.2 5±\n0. 08\nO K\nD D\nip (C\nhe n\net al\n., 20\n20 b)\n75 .3\n7± 0.\n32 76\n.8 5±\n0. 25\n76 .9\n5± 0.\n18 77\n.2 7±\n0. 31\n79 .0\n7± 0.\n27 80\n.4 6±\n0. 35\n79 .0\n1± 0.\n19 79\n.3 2±\n0. 17\n80 .0\n2± 0.\n14 94\n.8 6±\n0. 08\n95 .2\n1± 0.\n09\nA D\nP (P\nan g\net al\n., 20\n19 )\nPr ed\n. 76\n.3 7±\n0. 11\n77 .2\n1± 0.\n21 77\n.6 7±\n0. 25\n77 .5\n1± 0.\n25 80\n.7 3±\n0. 
38\n81 .4\n0± 0.\n27 79\n.2 1±\n0. 19\n79 .7\n1± 0.\n18 80\n.0 1±\n0. 17\n94 .9\n2± 0.\n04 95\n.4 3±\n0. 12\nIB (e\nqu at\nio n\n8) V\nIB 76\n.0 1±\n0. 12\n76 .9\n3± 0.\n24 77\n.2 2±\n0. 19\n77 .7\n2± 0.\n12 80\n.4 3±\n0. 34\n81 .1\n2± 0.\n19 79\n.1 9±\n0. 35\n79 .1\n5± 0.\n12 80\n.1 5±\n0. 13\n94 .7\n6± 0.\n12 94\n.5 4±\n0. 07\nC E\nB (e\nqu at\nio n\n2) V\nC E\nB 76\n.3 6±\n0. 06\n76 .9\n8± 0.\n18 77\n.3 5±\n0. 14\n77 .6\n4± 0.\n15 81\n.0 8±\n0. 12\n81 .1\n7± 0.\n16 78\n.9 2±\n0. 08\n79 .2\n0± 0.\n13 80\n.3 8±\n0. 18\n94 .9\n3± 0.\n11 94\n.6 5±\n0. 05\nIB R\n(e qu\nat io\nn 9)\nR V\nIB 76\n.6 8±\n0. 13\n77 .2\n5± 0.\n13 77\n.7 7±\n0. 21\n77 .8\n4± 0.\n12 81\n.3 4±\n0. 21\n81 .3\n8± 0.\n08 79\n.3 3±\n0. 15\n79 .9\n0± 0.\n10 80\n.2 2±\n0. 10\n94 .9\n1± 0.\n14 95\n.6 8±\n0. 05\nC E\nB R\n(e qu\nat io\nn 10\n) R\nV C\nE B\n76 .7\n2± 0.\n08 77\n.3 0±\n0. 12\n77 .8\n1± 0.\n10 77\n.8 2±\n0. 11\n81 .5\n2± 0.\n11 81\n.5 5±\n0. 33\n79 .2\n5± 0.\n15 79\n.9 8±\n0. 07\n80 .3\n5± 0.\n15 94\n.9 4±\n0. 12\n95 .6\n7± 0.\n06\nD IC\nE (e\nqu at\nio n\n6) C\nR V\nC E\nB 76\n.8 9±\n0. 09\n77 .5\n1± 0.\n17 78\n.0 8±\n0. 18\n77 .9\n2± 0.\n08 81\n.6 7±\n0. 14\n81 .9\n3± 0.\n13 79\n.5 9±\n0. 13\n80 .0\n5± 0.\n11 80\n.5 5±\n0. 12\n95 .0\n1± 0.\n09 95\n.7 4±\n0. 08" } ]
2021
null
SP:5561773ab024b083be4e362db079e371abf79653
[ "The paper proposed a new training framework, namely GSL, for novel content synthesis. And GSL enables learning of disentangled representations of tangible attributes and achieve novel image synthesis by recombining those swappable components under a zero-shot setting. The framework leverages the underlying semantic links across samples which could be instantiated as a multigraph. Cycle-consistent reconstruction loss as well as reconstruction loss are computed on synthetic samples from swapped latent representations." ]
Visual cognition of primates is superior to that of artificial neural networks in its ability to “envision” a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc. To aid neural networks to envision objects with different attributes, we propose a family of objective functions, expressed on groups of examples, as a novel learning framework that we term Group-Supervised Learning (GSL). GSL allows us to decompose inputs into a disentangled representation with swappable components, that can be recombined to synthesize new samples. For instance, images of red boats & blue cars can be decomposed and recombined to synthesize novel images of red cars. We propose an implementation based on an auto-encoder, termed group-supervised zero-shot synthesis network (GZS-Net), trained with our learning framework, that can produce a high-quality red car even if no such example is witnessed during training. We test our model and learning framework on existing benchmarks, in addition to a new dataset that we open-source. We qualitatively and quantitatively demonstrate that GZS-Net trained with GSL outperforms state-of-the-art methods.
[ { "affiliations": [], "name": "Yunhao Ge" }, { "affiliations": [], "name": "Sami Abu-El-Haija" }, { "affiliations": [], "name": "Gan Xin" }, { "affiliations": [], "name": "Laurent Itti" } ]
[ { "authors": [ "Yuval Atzmon", "Gal Chechik" ], "title": "Probabilistic and-or attribute grouping for zero-shot learning", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "A. Borji", "S. Izadi", "L. Itti" ], "title": "ilab-20m: A large-scale controlled object dataset to investigate deep learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in beta-vae", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "Ricky T.Q. Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yunhao Ge", "Jiaping Zhao", "Laurent Itti" ], "title": "Pose augmentation: Class-agnostic object pose transformation for object recognition", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": null, "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial networks", "venue": "In Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "I. Higgins", "L. Matthey", "A. Pal", "C. Burgess", "X. Glorot", "M. Botvinick", "S. Mohamed", "A. Lerchner" ], "title": "β-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Seunghoon Hong", "Dingdong Yang", "Jongwook Choi", "Honglak Lee" ], "title": "Inferring semantic layout for hierarchical text-to-image synthesis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "arXiv preprint arXiv:1802.05983,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Kevin Lai", "Liefeng Bo", "Xiaofeng Ren", "Dieter Fox" ], "title": "A large-scale hierarchical multi-view rgb-d object dataset", "venue": "In 2011 IEEE international conference on robotics and automation,", "year": 2011 }, { "authors": [ "C.H. 
Lampert" ], "title": "Learning to detect unseen object classes by between-class attribute transfer", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Oliver Langner", "Ron Dotsch", "Gijsbert Bijlstra", "Daniel HJ Wigboldus", "Skyler T Hawk", "AD Van Knippenberg" ], "title": "Presentation and validation of the radboud faces database", "venue": "Cognition and emotion,", "year": 2010 }, { "authors": [ "Nikos K. Logothetis", "Jon Pauls", "Tomaso Poggiot" ], "title": "Shape representation in the inferior temporal cortex of monkeys", "venue": "In Current Biology,", "year": 1995 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dsprites: Disentanglement testing sprites dataset", "venue": null, "year": 2017 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Luan Tran", "Xi Yin", "Xiaoming Liu" ], "title": "Disentangled representation learning gan for pose-invariant face recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Taihong Xiao", "Jiapeng Hong", "Jinwen Ma" ], "title": "Elegant: Exchanging latent encodings with gan for transferring multiple face attributes", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zhuoqian Yang", "Wentao Zhu", "Wayne Wu", "Chen Qian", "Qiang Zhou", "Bolei Zhou", "Chen Change Loy" ], "title": "Transmomo: Invariance-driven unsupervised video motion retargeting", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Han Zhang", "Tao Xu", "Hongsheng Li", "Shaoting Zhang", "Xiaogang Wang", "Xiaolei Huang", "Dimitris N Metaxas" ], "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In International Conference on Computer Vision", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Primates perform well at generalization tasks. If presented with a single visual instance of an object, they often immediately can generalize and envision the object in different attributes, e.g., in different 3D pose (Logothetis et al., 1995). Primates can readily do so, as their previous knowledge allows them to be cognizant of attributes. Machines, by contrast, are most-commonly trained on sample features (e.g., pixels), not taking into consideration attributes that gave rise to those features.\nTo aid machine cognition of visual object attributes, a class of algorithms focuses on learning disentangled representations (Kingma & Welling, 2014; Higgins et al., 2017; Burgess et al., 2018; Kim & Mnih, 2018; Chen et al., 2018), which map visual samples onto a latent space that separates the information belonging to different attributes. These methods show disentanglement by interpolating between attribute values (e.g., interpolate pose, etc). However, these methods usually process one sample at a time, rather than contrasting or reasoning about a group of samples. We posit that semantic links across samples could lead to better learning.\nWe are motivated by the visual generalization of primates. We seek a method that can synthesize realistic images for arbitrary queries (e.g., a particular car, in a given pose, on a given background), which we refer to as controlled synthesis. We design a method that enforces semantic consistency of attributes, facilitating controlled synthesis by leveraging semantic links between samples. Our method maps samples onto a disentangled latent representation space that (i) consists of subspaces, each encoding one attribute (e.g., identity, pose, ...), and, (ii) is such that two visual samples that share an attribute value (e.g., both have identity “car”) have identical latent values in the shared attribute subspace (identity), even if other attribute values (e.g., pose) differ. To achieve this, we propose a general learning framework: Group Supervised Learning (GSL, Sec. 3), which provides a learner (e.g., neural network) with groups of semantically-related training examples, represented as multigraph. Given a query of attributes, GSL proposes groups of training examples with attribute combinations that are useful for synthesizing a test example satisfying the query (Fig. 1). This endows the network with an envisioning capability. In addition to applications in graphics, controlled synthesis can also augment training sets for better generalization on machine learning tasks (Sec. 6.3).\nAs an instantiation of GSL, we propose an encoder-decoder network for zero-shot synthesis: GroupSupervised Zero-Shot Synthesis Network (GZS-Net, Sec. 4). While learning (Sec. 4.2 & 4.3), we repeatedly draw a group of semantically-related examples, as informed by a multigraph created by GSL. GZS-Net encodes group examples, to obtain latent vectors, then swaps entries for one or more attributes in the latent space across examples, through multigraph edges, then decodes into an example within the group (Sec. 
4.2).\nOur contributions are: (i) We propose Group-Supervised Learning (GSL), explain how it casts its admissible datasets into a multigraph, and show how it can be used to express learning from semantically-related groups and to synthesize samples with controllable attributes; (ii) We show one instantiation of GSL: Group-Supervised Zero-Shot Synthesis Network (GZS-Net), trained on groups of examples and reconstruction objectives; (iii) We demonstrate that GZS-Net trained with GSL outperforms state-of-the-art alternatives for controllable image synthesis on existing datasets; (iv) We provide a new dataset, Fonts¹, with its generating code. It contains 1.56 million images and their attributes. Its simplicity allows rapid idea prototyping for learning disentangled representations." }, { "heading": "2 RELATED WORK", "text": "We review research areas that share similarities with our work, to position our contribution.\nSelf-Supervised Learning (e.g., Gidaris et al. (2018)) admits a dataset containing features of training samples (e.g., upright images) and maps it onto an auxiliary task (e.g., rotated images): dataset examples are drawn and a random transformation (e.g., rotate 90◦) is applied to each. The task could be to predict the transformation (e.g., =90◦) from the transformed features (e.g., the rotated image). Our approach is similar in that it also creates auxiliary tasks; however, the tasks we create involve semantically-related groups of examples, rather than one example at a time.\nDisentangled Representation Learning refers to methods that infer latent factors given example visible features, under a generative assumption that each latent factor is responsible for generating one semantic attribute (e.g., color). Following Variational Autoencoders (VAEs, Kingma & Welling, 2014), a class of models (including Higgins et al., 2017; Chen et al., 2018) achieves disentanglement implicitly, by incorporating into the objective a distance measure, e.g., KL-divergence, encouraging the latent factors to be statistically independent. While these methods can disentangle the factors without knowing them beforehand, unfortunately, they are unable to generate novel combinations not witnessed during training (e.g., generating images of a red car, without any in training). On the other hand, our method requires knowing the semantic relationships between samples (e.g., which objects are of the same identity and/or color), but can then synthesize novel combinations (e.g., by stitching latent features of “any car” plus “any red object”).\nConditional synthesis methods can synthesize a sample (e.g., an image), and some use information external to the synthesized modalities, e.g., a natural language sentence (Zhang et al., 2017; Hong et al., 2018) or a class label (Mirza & Osindero, 2014; Tran et al., 2017). Ours differs in that our “external information” takes the form of semantic relationships between samples. There are methods based on GANs (Goodfellow et al., 2014) that also utilize semantic relationships, including Motion Re-targeting (Yang et al., 2020), which unfortunately requires domain-specific hand-engineering (detect and track human body parts). On the other hand, we design and apply our method on different tasks (including people's faces, vehicles, fonts; see Fig. 1). Further, we compare against two recent GAN methods, starGAN (Choi et al., 2018) and ELEGANT (Xiao et al., 2018), as they are state-of-the-art GAN methods for amending visual attributes onto images.
While they are powerful in carrying out local image transformations (within a small patch, e.g., changing skin tone or hair texture), our method better maintains global information: when rotating the main object, the scene also rotates with it, in a semantically coherent manner. Importantly, our learning framework allows expressing simpler model network architectures, such as feed-forward auto-encoders trained with only reconstruction objectives, as opposed to GANs, with potential difficulties such as lack of convergence guarantees.\nZero-shot learning also consumes side-information. For instance, the models of Lampert (2009); Atzmon & Chechik (2018) learn from object attributes, like our method. However, (i) these models are supervised to accurately predict attributes, (ii) they train and infer one example at a time, and (iii) they are concerned with classifying unseen objects. We differ in that (i) no learning gradients (supervision signal) are derived from the attributes, as (ii) these attributes are used to group the examples (based on shared attribute values), and (iii) we are concerned with generation rather than classification: we want to synthesize an object in previously-unseen attribute combinations.\nGraph Neural Networks (GNNs) (Scarselli et al., 2009) are a class of models defined on graph-structured data. This is applicable to our method, as we propose to create a multigraph connecting training samples. In fact, our method can be described as a GNN with message passing functions (Gilmer et al., 2017) that are aware of the latent space partitioning per attribute (explained in Sec. 4). Nonetheless, for self-containment, we introduce our method in the absence of the GNN framework." }, { "heading": "3 GROUP-SUPERVISED LEARNING", "text": "" }, { "heading": "3.1 DATASETS ADMISSIBLE BY GSL", "text": "Formally, a dataset admissible by GSL contains n samples D = {x^(i)}_{i=1}^{n}, where each example is accompanied by m attributes D_a = {(a^(i)_1, a^(i)_2, ..., a^(i)_m)}_{i=1}^{n}. Each attribute value is a member of a countable set: a_j ∈ A_j. For instance, pertaining to visual scenes, A_1 can denote foreground colors A_1 = {red, yellow, ...}, A_2 could denote background colors, A_3 could correspond to foreground identity, and A_4 to (quantized) orientation. Such datasets have appeared in the literature, e.g. in Borji et al. (2016); Matthey et al. (2017); Langner et al. (2010); Lai et al. (2011)." }, { "heading": "3.2 AUXILIARY TASKS VIA MULTIGRAPHS", "text": "Given a dataset of n samples and their attributes, we define a multigraph M with node set [1..n]. Two nodes i, k ∈ [1..n] with i ≠ k are connected with edge labels M(i, k) ⊆ [1..m] as:\nM(i, k) = { j | a^(i)_j = a^(k)_j ; j ∈ [1..m] }.\nIn particular, M defines a multigraph, with |M(i, k)| denoting the number of edges connecting nodes i and k, which equals the number of their shared attributes. Fig. 2 depicts a (sub-)multigraph for the Fonts dataset (Sec. 5.1).
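A minimal sketch of this construction (our own illustration; the dictionary-of-sets representation and 0-based node indices are implementation choices, not from the paper):

```python
from itertools import combinations

def build_multigraph(attrs):
    """attrs[i] = (a_1, ..., a_m) for sample i. Returns M with M[(i, k)] being the
    set of attribute indices j (1-based, as in the text) on which i and k agree."""
    M = {}
    for i, k in combinations(range(len(attrs)), 2):
        shared = {j + 1 for j, (a, b) in enumerate(zip(attrs[i], attrs[k])) if a == b}
        if shared:
            M[(i, k)] = shared
    return M

def covers(M, S, i, m):
    """COVER(S, i): the union of edge labels between i and S spans all m attributes."""
    edges = set()
    for k in S:
        if k == i:
            return True                      # a node trivially covers itself
        edges |= M.get(tuple(sorted((i, k))), set())
    return edges == set(range(1, m + 1))

# e.g., two Fonts-style samples sharing only the letter attribute:
print(build_multigraph([("A", "red", "white"), ("A", "blue", "gray")]))  # {(0, 1): {1}}
```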
Definition 1 COVER(S, i): Given a node set S ⊆ [1..|Dg|] and a node i ∈ [1..|Dg|], we say set S covers node i if every attribute value of i is in at least one member of S. Formally:\nCOVER(S, i) ⟺ [1..m] = ⋃_{k∈S} M(i, k). (1)\nWhen COVER(S, i) holds, there are two mutually-exclusive cases: either i ∈ S, or i ∉ S, respectively shaded green and blue in Fig. 2 (b). The first case trivially holds even for small S, e.g. COVER({i}, i) holds for all i. However, we are interested in non-trivial sets where |S| > 1, as sets with |S| = 1 would cast our proposed network (Sec. 4) to a standard Auto-Encoder. The second case is crucial for zero-shot synthesis. Suppose the (image) features of node i (in Fig. 2 (b)) are not given; we can search for S1, under the assumption that if COVER(S1, i) holds, then S1 contains sufficient information to synthesize i's features even though they are not given (i ∉ S1). Until this point, we made no assumptions about how the pairs (S, i) are extracted (mined) from the multigraph s.t. COVER(S, i) holds. In the sequel, we train with |S| = 2 and i ∈ S. We find that this particular specialization of GSL is easy to program, and we leave analyzing the impact of mining different kinds of cover sets for future work." }, { "heading": "4 GROUP-SUPERVISED ZERO-SHOT SYNTHESIS NETWORK (GZS-NET)", "text": "We now describe our ingredients towards our goal: synthesize holistically-semantic novel images." }, { "heading": "4.1 AUTO-ENCODING ALONG RELATIONS IN M", "text": "Auto-encoders (D ∘ E) : X → X are composed of an encoder network E : X → R^d and a decoder network D : R^d → X. Our networks further utilize M emitted by GSL. GZS-Net consists of\nan encoder E : X × M → R^d × M ; and a decoder D : R^d × M → X. (2)\nM denotes the space of sample pairwise relationships. GSL realizes such (X, M) ⊂ X × M, where X contains (a batch of) training samples and M the (sub)graph of their pairwise relations. Rather than passing the output of E as-is into D, one can modify it using an algorithm A by chaining: D ∘ A ∘ E. For notational brevity, we fold A into the encoder E by designing a swap operation, next.\n4.2 DISENTANGLEMENT BY SWAP OPERATION\nWhile training our auto-encoder D(E(X, M)), we wish to disentangle the latents output by E, to make D useful for decoding samples not given to E. D (/ E) outputs (/ inputs) one or more images, onto (/ from) the image space. Both networks can access feature and relationship information.\nAt a high level, GZS-Net aims to swap attributes across images by swapping corresponding entries across their latent representations. Before any training, we fix a partitioning of the latent space Z = E(X, M). Let the row vector z^(1) = [g^(1)_1, g^(1)_2, ..., g^(1)_m] be the concatenation of m row vectors {g^(1)_j ∈ R^{d_j}}_{j=1}^{m}, where d = Σ_{j=1}^{m} d_j and the values of {d_j}_{j=1}^{m} are hyperparameters.\nTo simplify the notation to follow, we define an operation swap : R^d × R^d × [1..m] → R^d × R^d, which accepts two latent vectors (e.g., z^(1) and z^(2)) and an attribute (e.g., 2) and returns the input vectors, except that the latent features corresponding to the attribute are swapped. E.g.,\nswap(z^(1), z^(2), 2) = swap([g^(1)_1, g^(1)_2, g^(1)_3, ..., g^(1)_m], [g^(2)_1, g^(2)_2, g^(2)_3, ..., g^(2)_m], 2)\n= ([g^(1)_1, g^(2)_2, g^(1)_3, ..., g^(1)_m], [g^(2)_1, g^(1)_2, g^(2)_3, ..., g^(2)_m]).
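The same operation in code (our own sketch; attribute indices are 0-based here and `dims` plays the role of {d_j}):

```python
def swap(z1, z2, j, dims):
    """Exchange the latent block of attribute j between two latent vectors."""
    start = sum(dims[:j])
    end = start + dims[j]
    z1s = z1[:start] + z2[start:end] + z1[end:]
    z2s = z2[:start] + z1[start:end] + z2[end:]
    return z1s, z2s

z1 = list(range(100))                        # stand-in latent of example 1
z2 = [-v for v in z1]                        # stand-in latent of example 2
z1s, z2s = swap(z1, z2, j=1, dims=[20] * 5)  # swap only the 2nd attribute block
```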
One-Overlap Attribute Swap. To encourage disentanglement in the latent representation of attributes, we consider a group S and example x s.t. COVER(S, x) holds, and for all x_o ∈ S, x ≠ x_o, the pair (x_o, x) shares exactly one attribute value (|M(x_o, x)| = 1). Encoding those pairs, swapping the latent representation of the attribute, and decoding should then be a no-op if the swap did not affect other attributes (Fig. 3b). Specifically, we would like for a pair of examples, x (red border in Fig. 3b) and x_o (blue border) sharing only attribute j (e.g., identity)², with z = E(x) and z_o = E(x_o), that\nD(z_s) ≈ x and D(z_s^(o)) ≈ x^(o); with z_s, z_s^(o) = swap(z, z_o, j). (3)\nIf, for each attribute, sufficient sample pairs share only that attribute, and Eq. 3 holds for all with zero residual loss, then disentanglement is achieved for that attribute (on the training set).\nCycle Attribute Swap. This operates on all example pairs, regardless of whether they share an attribute or not. Given two examples and their corresponding latent vectors, if we swap latent information corresponding to any attribute, we should end up with a sensible decoding. However, we may not have ground-truth supervision samples for swapping all attributes of all pairs. For instance, when swapping the color attribute between the pair orange truck and white airplane, we would like to learn from this pair, even without any orange airplanes in the dataset. To train from any pair, we are motivated to follow a recipe similar to CycleGAN (Zhu et al., 2017). As shown in Fig. 3c, given two examples x and x̄: (i) sample an attribute j ∼ U[1..m]; (ii) encode both examples, z = E(x) and z̄ = E(x̄); (iii) swap features corresponding to attribute j with z_s, z̄_s = swap(z, z̄, j); (iv) decode, x̂ = D(z_s) and x̄̂ = D(z̄_s); (v) on a second round (hence, cycle), encode again as ẑ = E(x̂) and z̄̂ = E(x̄̂); (vi) another swap, which should reverse the first swap, ẑ_s, z̄̂_s = swap(ẑ, z̄̂, j); (vii) finally, one last decoding which should approximately recover the original input pair, such that:\nD(ẑ_s) ≈ x and D(z̄̂_s) ≈ x̄. (4)\nIf, after the two encode-swap-decode rounds, we are able to recover the input images regardless of which attribute we sample, this implies that swapping one attribute does not destroy latent information for other attributes. As shown in Sec. 5, this can be seen as a data augmentation, growing the effective training set size by adding all possible new attribute combinations not already in the training set.\n² It holds that COVER({x, x_o}, x) and COVER({x, x_o}, x_o).\nAlgorithm 1: Training Regime; for sampling data and calculating loss terms\nInput: Dataset D and Multigraph M\nOutput: L_r, L_sr, L_csr\n1 Sample x ∈ D and S ⊂ D such that COVER(S, x), |S| = m, and ∀k ∈ S, |M(x, k)| = 1\n2 for x^(o) ∈ S do\n3   z ← E(x); z^(o) ← E(x^(o)); (z_s, z_s^(o)) ← swap(z, z^(o), j), where {j} = M(x, x^(o))\n4   L_sr ← L_sr + ||D(z_s) − x||_l1 + ||D(z_s^(o)) − x^(o)||_l1   # Swap reconstruction loss\n5 x̄ ∼ D and j ∼ U[1..m]   # Sample for cycle swap\n6 z ← E(x); z̄ ← E(x̄); (z_s, z̄_s) ← swap(z, z̄, j); x̂ ← D(z_s); x̄̂ ← D(z̄_s)\n7 ẑ ← E(x̂); z̄̂ ← E(x̄̂); (ẑ_s, z̄̂_s) ← swap(ẑ, z̄̂, j)\n8 L_csr ← ||D(ẑ_s) − x||_l1 + ||D(z̄̂_s) − x̄||_l1   # Cycle reconstruction loss\n9 L_r ← ||D(E(x)) − x||_l1   # Standard reconstruction loss" }, { "heading": "4.3 TRAINING AND OPTIMIZATION", "text": "Algorithm 1 lists our sampling strategy and calculates the loss terms, which we combine into a total loss\nL(E, D; D, M) = L_r + λ_sr L_sr + λ_csr L_csr, (5)\nwhere L_r, L_sr and L_csr are, respectively, the reconstruction, swap-reconstruction, and cycle-reconstruction losses. Scalar coefficients λ_sr, λ_csr > 0 control the relative importance of the loss terms. The total loss L can be minimized w.r.t. the parameters of the encoder (E) and decoder (D) via gradient descent.
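A sketch of how Algorithm 1 and equation 5 could be combined into one training objective (our own illustration; `E`, `D`, the tensor-valued version of `swap`, and the group sampler are placeholders for the actual networks and the multigraph-based sampling):

```python
import torch

def gzs_loss(E, D, swap, x, group, x_bar, j, lam_sr=1.0, lam_csr=1.0):
    """One evaluation of equation 5. `group` yields pairs (x_o, j_shared) with
    |M(x, x_o)| = 1; `x_bar` is an arbitrary second sample and `j` the attribute
    drawn uniformly for the cycle swap."""
    l1 = lambda a, b: (a - b).abs().mean()

    L_r = l1(D(E(x)), x)                                  # standard reconstruction
    L_sr = torch.zeros(())                                # swap reconstruction
    for x_o, j_shared in group:
        z_s, z_so = swap(E(x), E(x_o), j_shared)
        L_sr = L_sr + l1(D(z_s), x) + l1(D(z_so), x_o)

    z_s, zb_s = swap(E(x), E(x_bar), j)                   # first swap of the cycle
    x_hat, xb_hat = D(z_s), D(zb_s)
    zh_s, zbh_s = swap(E(x_hat), E(xb_hat), j)            # second swap reverses it
    L_csr = l1(D(zh_s), x) + l1(D(zbh_s), x_bar)

    return L_r + lam_sr * L_sr + lam_csr * L_csr
```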
" }, { "heading": "5 QUALITATIVE EXPERIMENTS", "text": "We qualitatively evaluate our method on zero-shot synthesis tasks, and on its ability to learn disentangled representations, on existing datasets (Sec. 5.2) and on a dataset we contribute (Sec. 5.1).\nGZS-Net architecture. For all experiments, the encoder E is composed of two convolutional layers with stride 2, followed by 3 residual blocks, followed by a convolutional layer with stride 2, followed by reshaping the response map to a vector, and finally two fully-connected layers to output a 100-dim vector as the latent feature. The decoder D mirrors the encoder, and is composed of two fully-connected layers, followed by a reshape into a cuboid, followed by a de-conv layer with stride 2, followed by 3 residual blocks, and finally two de-conv layers with stride 2, to output a synthesized image." }, { "heading": "5.1 FONTS DATASET & ZERO-SHOT SYNTHESIS PERFORMANCE", "text": "Design Choices. Fonts is a computer-generated image dataset. Each image is of an alphabet letter and is accompanied by its generating attributes: letters (52 choices, of lower- and upper-case English alphabet); size (3 choices); font colors (10); background colors (10); fonts (100); giving a total of 1.56 million images, each of size (128 × 128) pixels. We propose this dataset to allow fast testing and idea iteration on zero-shot synthesis and disentangled representation learning. Samples from the dataset are shown in Fig. 2. Details and source code are in the Appendix. We partition the 100-d latents equally among the 5 attributes. We use a train:test split of 75:25. Baselines. We train four baselines: • The first three are a standard Autoencoder, a β-VAE (Higgins et al., 2017), and β-TCVAE (Chen et al., 2018). β-VAE and β-TCVAE show reasonable disentanglement on the dSprites dataset (Matthey et al., 2017). Yet, they do not make explicit the assignment between latent variables and attributes, which would have been useful for precisely controlling the attributes (e.g. color, orientation) of synthesized images. Therefore, for these methods, we designed a best-effort approach by exhaustively searching for the assignments. Once assignments are known, swapping attributes between images might become possible with these VAEs, hopefully enabling controllable synthesis. We denote these three baselines with this Exhaustive Search using the suffix +ES. Details on Exhaustive Search are in the Appendix.\n• The fourth baseline, AE+DS, is an auto-encoder where the latent space is partitioned and each partition receives direct supervision from one attribute. Further details are in the Appendix.\nAs shown in Fig. 4, our method outperforms the baselines, the second-runner being AE+DS: with discriminative supervision, the model focuses on the most discriminative information, e.g., it can distinguish across size, identity, etc., but can hardly synthesize photo-realistic letters." }, { "heading": "5.2 ZERO-SHOT SYNTHESIS ON ILAB-20M AND RAFD", "text": "iLab-20M (Borji et al., 2016): is an attributed dataset containing images of toy vehicles placed on a turntable using 11 cameras at different viewing points. There are 3 attribute classes: vehicle identity (15 categories, each having 25-160 instances), pose, and backgrounds (over 14 for each identity, projecting vehicles in relevant contexts). Further details are in the Appendix. We partition the 100-d latent space among the attributes as: 60 for identity, 20 for pose, and 20 for background. Although iLab-20M has limited attribute combinations (an identity shows only in relevant backgrounds; e.g., cars on roads but not in deserts), GZS-Net can disentangle these three attributes and reconstruct novel combinations (e.g., cars on desert backgrounds). Fig.
5 shows qualitative generation results.\nWe compare against (AE+DS), confirming that it maintains discriminative information, and against two state-of-the-art GAN baselines: starGAN (Choi et al., 2018) and ELEGANT (Xiao et al., 2018). The GAN baselines are strong in knowing what to change but not necessarily how to change it: where change is required, pixels are locally perturbed (within a patch), but the perturbations often lack global correctness (over the image). See the Appendix for further details and experiments on these GAN methods.\nRaFD (Radboud Faces Database, Langner et al., 2010): contains pictures of 67 models displaying 8 emotional expressions taken from 5 different camera angles simultaneously. There are 3 attributes: identity, camera position (pose), and expression. We partition the 100-d latent space among the attributes as 60 for identity, 20 for pose, and 20 for expression. We use an 80:20 split for train:test, and use GZS-Net to synthesize images with novel combinations of attributes (Fig. 6). The synthesized images capture the corresponding attributes well, especially for pose and identity." }, { "heading": "6 QUANTITATIVE EXPERIMENTS", "text": "" }, { "heading": "6.1 QUANTIFYING DISENTANGLEMENT THROUGH ATTRIBUTE CO-PREDICTION", "text": "Can latent features of one attribute predict the attribute value? Can they also predict values for other attributes? Under perfect disentanglement, we should answer “always” for the first and “never” for the second. Our network did not receive attribute information through supervision, but rather through swapping. We quantitatively assess disentanglement by calculating a model-based confusion matrix between attributes. We analyze models trained on the Fonts dataset. We take the test examples from Fonts, and split them 80:20 into train_DR:test_DR. For each attribute pair j, r ∈ [1..m] × [1..m], we train a classifier (3-layer MLP) from g_j of train_DR to the attribute values of r, then obtain the accuracy of each attribute by testing with g_j of test_DR. Table 1 compares how well features of each attribute (row) can predict an attribute value (column): a perfect result should be as close as possible to the identity matrix, with off-diagonal entries close to random (i.e., 1/|A_r|). GZS-Net outperforms the other methods, except for (AE+DS), as its latent space was directly supervised for this particular task, though it shows inferior synthesis performance." }, { "heading": "6.2 DISTANCE OF SYNTHESIZED IMAGE TO GROUND TRUTH", "text": "The construction of the Fonts dataset allows programmatically calculating ground-truth images corresponding to synthesized images (recall Fig. 4). We measure how well our generated images compare to the ground-truth test images. Table 2 shows image similarity metrics, averaged over the test set, comparing our method against the baselines. Our method significantly outperforms the baselines." }, { "heading": "6.3 GZS-NET BOOSTS OBJECT RECOGNITION", "text": "We showcase that the zero-shot images synthesized by GZS-Net can augment and boost the training of a visual object recognition classifier (Ge et al., 2020). Two different training datasets (Fig.
7a) are tailored from iLab-20M: a pose-and-background-unbalanced dataset (DUB) (half of the classes with 6 poses per object instance, the other half with only 2 poses; as we cut poses, some backgrounds are also eliminated), and a pose-and-background-balanced dataset (DB) (all classes with all 6 poses per object instance).\nWe use GZS-Net to synthesize the missing images of DUB and synthesize a new (augmented) balanced dataset, DB-s. We alternatively use common data augmentation methods (random crop, horizontal flip, scale resize, etc.) to augment the DUB dataset to the same number of images as DB-s, called DUB-a. We report object recognition performance on the test set using these four datasets respectively. Comparing DB-s with DUB shows ≈7 percentage points improvement in classification performance, due to augmentation with synthesized images for missing poses in the training set, reaching the level attained when all real poses are available (DB). Our synthesized poses outperform traditional data augmentation (DUB-a)." }, { "heading": "7 CONCLUSION", "text": "We propose a new learning framework, Group-Supervised Learning (GSL), which admits datasets of examples and their semantic relationships. It provides a learner with groups of semantically-related samples, which we show is powerful for zero-shot synthesis. In particular, our Group-Supervised Zero-Shot Synthesis Network (GZS-Net) is capable of training on groups of examples, and can learn disentangled representations by explicitly swapping latent features across training examples, along edges suggested by GSL. We show that, to synthesize samples given a query with custom attributes, it is sufficient to find one example per requested attribute and to combine them in the latent space. We hope that researchers find our learning framework useful and extend it for their applications." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), the Army Research Office (W911NF2020053), and the Intel and CISCO Corporations. The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof." }, { "heading": "A FONTS DATASET", "text": "Fonts is a computer-generated RGB image dataset. Each image, with 128 × 128 pixels, contains an alphabet letter rendered using 5 independent generating attributes: letter identity, size, font color, background color and font. Fig. 1 shows some samples: in each row, we keep all attribute values the same but vary one attribute value. Attribute details are shown in Table 1. The dataset contains all possible combinations of these attributes, totaling 1,560,000 images. Generating attributes for all images are contained within the dataset. Our primary motive for creating the Fonts dataset is that it allows fast testing and idea iteration on disentangled representation learning and zero-shot synthesis.\nYou can download the dataset and its generating code from http://ilab.usc.edu/datasets/fonts, which we plan to keep up-to-date with contributions from ourselves and the community.
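The released generating code lives at the URL above; purely as an illustration of how one such sample could be rendered from its five attributes, a minimal sketch follows (our own, not the released code; the font path is a placeholder and the `anchor` argument requires Pillow ≥ 8.0):

```python
from PIL import Image, ImageDraw, ImageFont

def render_letter(letter="A", size=60, fg=(255, 0, 0), bg=(0, 0, 255),
                  font_path="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"):
    """Render one 128x128 Fonts-style image from the 5 generating attributes."""
    img = Image.new("RGB", (128, 128), bg)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, size)
    draw.text((64, 64), letter, fill=fg, font=font, anchor="mm")  # centered letter
    return img

render_letter().save("sample.png")
```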
" }, { "heading": "B BASELINES", "text": "" }, { "heading": "B.1 EXHAUSTIVE SEARCH (ES) AFTER TRAINING AUTO-ENCODER BASED METHODS", "text": "After training the baselines (a standard Autoencoder, a β-VAE (Higgins et al., 2017), and TC-VAE (Chen et al., 2018)), we want to search for the assignment between latent variables and attributes, as these VAEs do not make explicit the assignment. Knowing the assignment should hypothetically allow us to trade attributes between two images by swapping the feature values belonging to the attribute we desire to swap.\nTo discover the assignment from latent dimensions to attributes, we map all n training images through the encoder, giving a 100-D vector per training sample (∈ R^{n×100}). We make an 80:20 split on the vectors, obtaining X_ES^train ∈ R^{0.8n×100} and X_ES^test ∈ R^{0.2n×100}. Then, we randomly sample K different partitionings P of the 100-D space evenly among the 5 attributes. For each partitioning p ∈ P, we create 5 classification tasks, one task per attribute, according to p: {(X_ES^train[:, p_j] ∈ R^{0.8n×20}, X_ES^test[:, p_j] ∈ R^{0.2n×20})}_{j=1}^{5}. For each task j, we train a 3-layer MLP to map X_ES^train[:, p_j] to the known attribute values and measure its performance on X_ES^test[:, p_j]. Finally, we commit to the partitioning p ∈ P with the highest average performance on the 5 attribute tasks. This p represents our best effort to determine which latent feature dimensions correspond to which attributes. For zero-shot synthesis with the baselines, we swap the latent dimensions indicated by partitioning p. We denote the three baselines with this Exhaustive Search using the suffix +ES (Fig. 4).
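A sketch of this search loop (our own illustration, with sklearn's MLPClassifier standing in for the 3-layer MLP; the value of K, the hidden sizes, and the iteration budget are illustrative assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def score_partition(Z_tr, y_tr, Z_te, y_te, partition):
    """Mean test accuracy over the 5 per-attribute tasks for one partitioning."""
    accs = []
    for j, dims in enumerate(partition):
        clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
        clf.fit(Z_tr[:, dims], y_tr[:, j])
        accs.append(clf.score(Z_te[:, dims], y_te[:, j]))
    return float(np.mean(accs))

def exhaustive_search(Z_tr, y_tr, Z_te, y_te, K=50, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(K):                       # K random even partitionings of 100 dims
        perm = rng.permutation(100)
        partition = [perm[20 * j: 20 * (j + 1)] for j in range(5)]
        score = score_partition(Z_tr, y_tr, Z_te, y_te, partition)
        if best is None or score > best[0]:
            best = (score, partition)
    return best                              # committed partitioning p and its score
```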
B.2 DIRECT SUPERVISION (DS) ON AUTO-ENCODER LATENT SPACE\nThe last baseline (AE+DS) directly uses attribute labels to supervise the latent disentangled representation of the auto-encoder by adding auxiliary classification modules. Specifically, the encoder maps an image sample x^(i) to a 100-d latent vector z^(i) = E(x^(i)), equally divided into 5 partitions corresponding to the 5 attributes: z^(i) = [g^(i)_1, g^(i)_2, ..., g^(i)_5]. Each attribute partition has an attribute label, [y^(i)_1, y^(i)_2, ..., y^(i)_5], which represents the attribute value (e.g., for the font color attribute, the label represents different colors: red, green, blue, etc.). We use 5 auxiliary classification modules to predict the corresponding class label given each latent attribute partition as input. We use the cross-entropy loss as the classification loss, and the training goal is to minimize both the reconstruction loss and the classification loss.\nAfter training, we have the assignment between latent variables and attributes, so we can achieve attribute swapping and controlled synthesis (Fig. 4 (AE+DS)). The inferior synthesis performance demonstrates that the supervision (classification task) preserves discriminative information that is insufficient for photo-realistic generation, whereas our GZS-Net uses the one-overlap attribute swap and the cycle swap, which enforce the disentangled information to be sufficient for photo-realistic synthesis.\nB.3 ELEGANT (XIAO ET AL., 2018)\nWe utilize the authors' open-sourced code: https://github.com/Prinsphield/ELEGANT. For ELEGANT and starGAN (Section B.4), we want to synthesize a target image that has the same identity as the id provider image, the same background as the background provider image, and the same pose as the pose provider image. To achieve this, we want to change the background and pose attributes of the id image.\nAlthough ELEGANT is strong in making image transformations that are local to relatively-small neighborhoods, it does not work well for our datasets, where image-wide transformations are required for meaningful synthesis. This can be confirmed by their model design: their final output is a pixel-wise addition of a residual map, plus the input image.\nFurther, ELEGANT treats all attribute values as binary: they represent each attribute value in a different part of the latent space, whereas our method devotes part of the latent space to represent all values of an attribute. For investigation, we train dozens of ELEGANT models with different hyperparameters, detailed as follows:\n• For iLab-20M, the pose and background attributes contain a total of 117 attribute values (6 for pose, 111 for background). As such, we tried training it on all attribute values (dividing their latent space among 117 attribute values). We note that this training regime was too slow and the loss values did not seem to change much during training, even with various learning rate choices (listed below).\n• To reduce the difficulty of the task for ELEGANT, we ran further experiments restricting attribute variation to only 17 attribute values (6 for pose, 11 for background), and this shows more qualitative promise than 117 attributes. This is what we report.\n• Fig. 10 shows that ELEGANT finds more challenge in changing the pose than in changing the background. We now explain how we generated Columns 3 and 4 of Fig. 10 for modifying the background. We modify the latent features of the identity image before decoding. Since the Identity input image and the Background input image have known but different background values, their background latent features are represented in two different latent spaces. One can swap on one or on both of these latent spaces. Columns 3 and 4 of Fig. 10 swap only on one latent space. However, in Fig. 5 of the main paper, we swap on both positions. We also show swapping only the pose attribute (across 2 latent spaces) in Column 1 of Fig. 10 and swapping both pose and background in Column 2.\n• To investigate if the model's performance is due to poor convergence of the generator, we qualitatively assess its performance on the training set. Fig. 11 shows the output of ELEGANT on training samples. We see that the reconstruction (right) of input images (left) shows decent quality, suggesting that the generator network has converged to decently good parameters. Nonetheless, we see artefacts in its outputs when amending attributes, particularly located at pixel locations where a change is required. This shows that the model setup of ELEGANT is aware that these pixel values need to be updated, but the actual change is not coherent across the image.\n• For the above, we applied a generous sweep of training hyperparameters, including:\n– Learning rate: the authors' original is 2e-4; we tried several values between 1e-5 and 1e-3, including different rates for the generator and discriminator.\n– Objective term coefficients: there are multiple loss terms for the generator, an adversarial loss and a reconstruction loss. We used a grid search method by multiplying the original parameters by a number from [0.2, 0.5, 2, 5] for each of the loss terms and tried several combinations.\n– The update frequency of weights on the generator (G) and discriminator (D). Since D is easier to learn, we perform k update steps on G for every update step on D. We tried k = 5, 10, 15, 20, 30, 40, 50.\nWe report the ELEGANT results showing the best qualitative performance.\nOverall, ELEGANT does not work well for holistic image manipulation (though it works well for local image edits, per the experiments by the authors (Xiao et al., 2018)).\nB.4 STARGAN (CHOI ET AL., 2018)\nWe utilize the authors' open-sourced code: https://github.com/yunjey/stargan.
Unlike ELEGANT (Xiao et al., 2018) and our method, starGAN accepts only one input image plus edit information; the edit information is not extracted from another image, following their method and published code." }, { "heading": "C ZERO-SHOT SYNTHESIS PERFORMANCE ON DSPRITES DATASET", "text": "We qualitatively evaluate our method, Group-Supervised Zero-Shot Synthesis Network (GZS-Net), against three baseline methods, on zero-shot synthesis tasks on the dSprites dataset.\nC.1 DSPRITES\ndSprites (Matthey et al., 2017) is a dataset of 2D shapes procedurally generated from 6 ground-truth independent latent factors. These factors are color, shape, scale, rotation, and the x- and y-positions of a sprite. All possible combinations of these latents are present exactly once, generating 737280 total images. The latent factor values are: Color: white; Shape: square, ellipse, heart; Scale: 6 values linearly spaced in [0.5, 1]; Orientation: 40 values in [0, 2π]; Position X: 32 values in [0, 1]; Position Y: 32 values in [0, 1].\nC.2 EXPERIMENTS OF BASELINES AND GZS-NET\nWe train a 10-dimensional latent space and partition it equally among the 5 attributes: 2 for shape, 2 for scale, 2 for orientation, 2 for position X, and 2 for position Y. We use a train:test split of 75:25.\nWe train 3 baselines: a standard auto-encoder, a β-VAE (Higgins et al., 2017), and TC-VAE (Chen et al., 2018). To recover the latent-to-attribute assignment for these baselines, we utilize the Exhaustive Search best-effort strategy described in the main paper; the only difference is that we change the dimension of the Z space from 100 to 10. Once assignments are known, we utilize these baseline VAEs for controlled synthesis by attribute swapping, as sketched in the code below. We denote these baselines using the suffix +ES.\nAs shown in Figure 2, GZS-Net can precisely synthesize zero-shot images with new combinations of attributes, producing images similar to the ground truth. The baselines β-VAE and TC-VAE produce realistic images of good visual quality but do not satisfy the requested query; therefore, they cannot perform controllable synthesis even when equipped with our best-effort Exhaustive Search to discover the disentanglement. Standard auto-encoders cannot synthesize meaningful images when combining latents from different examples, producing images outside the distribution of training samples (e.g., showing multiple sprites per image)." } ]
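The attribute-swapping operation used for controlled synthesis above admits a compact sketch. A hypothetical PyTorch version, assuming trained `encoder`/`decoder` modules and the recovered partitioning `p` (for dSprites, 2 latent dims per attribute, ordered as shape, scale, orientation, position X, position Y); all names are illustrative, not the released implementation:

```python
import torch

def swap_attribute(encoder, decoder, x_src, x_provider, p, attr_idx):
    """Overwrite one attribute of x_src with the value carried by x_provider.

    p[attr_idx] lists the latent dims assigned to that attribute.
    """
    with torch.no_grad():
        z_src = encoder(x_src)            # latent code of the image to edit
        z_prov = encoder(x_provider)      # latent code providing the new value
        dims = p[attr_idx]
        z_src[:, dims] = z_prov[:, dims]  # swap only the assigned dimensions
        return decoder(z_src)             # decode the recombined latent code
```

For example, swapping `p[3]` (the two position-X dims) should move the source sprite to the provider's horizontal position while leaving the remaining factors untouched.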
2021
ZERO-SHOT SYNTHESIS WITH GROUP-SUPERVISED LEARNING
SP:9f70871f0111b58783f731748d8750c635998f32
[ "This paper presents an approach to learn goal conditioned policies by relying on self-play which sets the goals and discovers a curriculum of tasks for learning. Alice and Bob are the agents. Alice's task is to set a goal by following a number of steps in the environment and she is rewarded when the goal is too challenging for Bob to solve. Bob's task is to solve the task by trying to reproduce the end state of Alice's demonstration. As a result, the learned policy performs various tasks and can work in zero-shot settings." ]
We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. We rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. We show that this method can discover highly diverse and complex goals without any human priors. Bob can be trained with only sparse rewards, because the interaction between Alice and Bob results in a natural curriculum and Bob can learn from Alice’s trajectory when relabeled as a goal-conditioned demonstration. Finally, our method scales, resulting in a single policy that can generalize to many unseen tasks such as setting a table, stacking blocks, and solving simple puzzles. Videos of the learned policy are available at https://robotics-self-play.github.io.
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Trapit Bansal", "Jakub Pachocki", "Szymon Sidor", "Ilya Sutskever", "Igor Mordatch" ], "title": "Emergent complexity via multi-agent competition", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Adrien Baranes", "Pierre-Yves Oudeyer" ], "title": "Active learning of inverse models with intrinsically motivated goal exploration in robots", "venue": "Robotics and Autonomous Systems,", "year": 2013 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Angel X Chang", "Thomas Funkhouser", "Leonidas Guibas", "Pat Hanrahan", "Qixing Huang", "Zimo Li", "Silvio Savarese", "Manolis Savva", "Shuran Song", "Hao Su" ], "title": "Shapenet: An information-rich 3d model repository", "venue": "arXiv preprint arXiv:1512.03012,", "year": 2015 }, { "authors": [ "Marc Peter Deisenroth", "Carl Edward Rasmussen", "Dieter Fox" ], "title": "Learning to control a low-cost manipulator using data-efficient reinforcement learning. Robotics: Science and Systems VII, pp", "venue": "arXiv preprint arXiv:1611.02779,", "year": 2011 }, { "authors": [ "Yan Duan", "Marcin Andrychowicz", "Bradly Stadie", "OpenAI Jonathan Ho", "Jonas Schneider", "Ilya Sutskever", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "One-shot imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Go-explore: a new approach for hard-exploration problems", "venue": null, "year": 1901 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "First return then explore", "venue": "arXiv preprint arXiv:2004.12919,", "year": 2020 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Vlad Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Carlos Florensa", "David Held", "Markus Wulfmeier", "Michael Zhang", "Pieter Abbeel" ], "title": "Reverse curriculum generation for reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy Lillicrap", "Sergey Levine" ], "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "venue": "In IEEE international conference on robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "Abhishek Gupta", "Vikash Kumar", "Corey Lynch", "Sergey Levine", "Karol Hausman" ], "title": "Relay policy learning: Solving long-horizon tasks via imitation and 
reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Karol Hausman", "Jost Tobias Springenberg", "Ziyu Wang", "Nicolas Heess", "Martin Riedmiller" ], "title": "Learning an embedding space for transferable robot skills", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jemin Hwangbo", "Joonho Lee", "Alexey Dosovitskiy", "Dario Bellicoso", "Vassilios Tsounis", "Vladlen Koltun", "Marco Hutter" ], "title": "Learning agile and dynamic motor skills for legged robots", "venue": "Science Robotics,", "year": 2019 }, { "authors": [ "Stephen James", "Paul Wohlhart", "Mrinal Kalakrishnan", "Dmitry Kalashnikov", "Alex Irpan", "Julian Ibarz", "Sergey Levine", "Raia Hadsell", "Konstantinos Bousmalis" ], "title": "Sim-to-real via sim-to-sim: Dataefficient robotic grasping via randomized-to-canonical adaptation networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Leslie Pack Kaelbling" ], "title": "Learning to achieve goals", "venue": "In Proceedings of the 13th International Joint Conference on Artificial Intelligence,", "year": 1993 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Andrew Levy", "George Konidaris", "Robert Platt", "Kate Saenko" ], "title": "Learning multi-level hierarchies with hindsight", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Richard Li", "Allan Jabri", "Trevor Darrell", "Pulkit Agrawal" ], "title": "Towards practical multi-object manipulation using relational reinforcement learning", "venue": "arXiv preprint arXiv:1912.11032,", "year": 2019 }, { "authors": [ "Hao Liu", "Alexander Trott", "Richard Socher", "Caiming Xiong" ], "title": "Competitive experience replay", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Corey Lynch", "Mohi Khansari", "Ted Xiao", "Vikash Kumar", "Jonathan Tompson", "Sergey Levine", "Pierre Sermanet" ], "title": "Learning latent plans from play", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Tambet Matiisen", "Avital Oliver", "Taco Cohen", "John Schulman" ], "title": "Teacher-student curriculum learning", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ashvin Nair", "Bob McGrew", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Overcoming exploration in reinforcement learning with demonstrations", "venue": "In IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Ashvin V Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine" ], "title": "Visual reinforcement learning with imagined goals", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "OpenAI", "Ilge Akkaya", "Marcin 
Andrychowicz", "Maciek Chociej", "Mateusz Litwin", "Bob McGrew", "Arthur Petron", "Alex Paino", "Matthias Plappert", "Glenn Powell", "Raphael Ribas", "Jonas Schneider", "Nikolas Tezak", "Jerry Tworek", "Peter Welinder", "Lilian Weng", "Qiming Yuan", "Wojciech Zaremba", "Lei Zhang" ], "title": "Solving rubik’s cube with a robot hand", "venue": "arXiv preprint arXiv:1910.07113,", "year": 2019 }, { "authors": [ "OpenAI", "Marcin Andrychowicz", "Bowen Baker", "Maciek Chociej", "Rafal Jozefowicz", "Bob McGrew", "Jakub Pachocki", "Arthur Petron", "Matthias Plappert", "Glenn Powell", "Alex Ray", "Jonas Schneider", "Szymon Sidor", "Josh Tobin", "Peter Welinder", "Lilian Weng", "Wojciech Zaremba" ], "title": "Learning dexterous in-hand manipulation", "venue": "The International Journal of Robotics Research,", "year": 2020 }, { "authors": [ "Pierre-Yves Oudeyer", "Frdric Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Ivaylo Popov", "Nicolas Heess", "Timothy Lillicrap", "Roland Hafner", "Gabriel Barth-Maron", "Matej Vecerik", "Thomas Lampe", "Yuval Tassa", "Tom Erez", "Martin Riedmiller" ], "title": "Data-efficient deep reinforcement learning for dexterous manipulation", "venue": "arXiv preprint arXiv:1704.03073,", "year": 2017 }, { "authors": [ "Sebastien Racaniere", "Andrew Lampinen", "Adam Santoro", "David Reichert", "Vlad Firoiu", "Timothy Lillicrap" ], "title": "Automated curriculum generation through setter-solver interactions", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Martin Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom Van de Wiele", "Volodymyr Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by playing solving sparse reward tasks from scratch, 2018", "venue": null, "year": 2018 }, { "authors": [ "Andrei A Rusu", "Matej Večerík", "Thomas Rothörl", "Nicolas Heess", "Razvan Pascanu", "Raia Hadsell" ], "title": "Sim-to-real robot learning from pixels with progressive nets", "venue": "In Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Fereshteh Sadeghi", "Sergey Levine" ], "title": "CAD2RL: real single-image flight without a single real image", "venue": "In Robotics: Science and Systems XIII, Massachusetts Institute of Technology,", "year": 2017 }, { "authors": [ "Fereshteh Sadeghi", "Sergey Levine" ], "title": "Cad2rl: Real single-image flight without a single real image", "venue": "In Proceedings of Robotics: Science and Systems,", "year": 2017 }, { "authors": [ "Tim Salimans", "Richard Chen" ], "title": "Learning montezuma’s revenge from a single demonstration", "venue": "arXiv preprint arXiv:1812.03381,", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": 
"Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Rupesh Kumar Srivastava", "Bas R Steunebrink", "Jürgen Schmidhuber" ], "title": "First experiments with powerplay", "venue": "Neural Networks,", "year": 2013 }, { "authors": [ "Sainbayar Sukhbaatar", "Emily Denton", "Arthur Szlam", "Rob Fergus" ], "title": "Learning goal embeddings via self-play for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1811.09083,", "year": 2018 }, { "authors": [ "Sainbayar Sukhbaatar", "Zeming Lin", "Ilya Kostrikov", "Gabriel Synnaeve", "Arthur Szlam", "Rob Fergus" ], "title": "Intrinsic motivation and automatic curricula via asymmetric self-play", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "MuJoCo: A physics engine for model-based control", "venue": "In Intelligent Robots and Systems (IROS),", "year": 2012 }, { "authors": [ "Mel Vecerik", "Todd Hester", "Jonathan Scholz", "Fumin Wang", "Olivier Pietquin", "Bilal Piot", "Nicolas Heess", "Thomas Rothörl", "Thomas Lampe", "Martin Riedmiller" ], "title": "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards, 2018", "venue": null, "year": 2018 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Jane X Wang", "Zeb Kurth-Nelson", "Dhruva Tirumala", "Hubert Soyer", "Joel Z Leibo", "Remi Munos", "Charles Blundell", "Dharshan Kumaran", "Matt Botvinick" ], "title": "Learning to reinforcement learn", "venue": "arXiv preprint arXiv:1611.05763,", "year": 2016 }, { "authors": [ "Rui Wang", "Joel Lehman", "Jeff Clune", "Kenneth O Stanley" ], "title": "Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions", "venue": null, "year": 1901 }, { "authors": [ "Rui Wang", "Joel Lehman", "Aditya Rawal", "Jiale Zhi", "Yulun Li", "Jeff Clune", "Kenneth O Stanley" ], "title": "Enhanced poet: Open-ended reinforcement learning through unbounded invention of learning challenges and their 
solutions", "venue": "arXiv preprint arXiv:2003.08536,", "year": 2020 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Yunzhi Zhang", "Pieter Abbeel", "Lerrel Pinto" ], "title": "Automatic curriculum learning through value disagreement", "venue": "arXiv preprint arXiv:2006.09641,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "We are motivated to train a single goal-conditioned policy (Kaelbling, 1993) that can solve any robotic manipulation task that a human may request in a given environment. In this work, we make progress towards this goal by solving a robotic manipulation problem in a table-top setting where the robot’s task is to change the initial configuration of a variable number of objects on a table to match a given goal configuration. This problem is simple in its formulation but likely to challenge a wide variety of cognitive abilities of a robot as objects become diverse and goals become complex.\nMotivated by the recent success of deep reinforcement learning for robotics (Levine et al., 2016; Gu et al., 2017; Hwangbo et al., 2019; OpenAI et al., 2019a), we tackle this problem using deep reinforcement learning on a very large training distribution. An open question in this approach is how we can build a training distribution rich enough to achieve generalization to many unseen manipulation tasks. This involves defining both an environment’s initial state distribution and a goal distribution. The initial state distribution determines how we sample a set of objects and their configuration at the beginning of an episode, and the goal distribution defines how we sample target states given an initial state. In this work, we focus on a scalable way to define a rich goal distribution.\nThe research community has started to explore automated ways of defining goal distributions. For example, previous works have explored learning a generative model of goal distributions (Florensa et al., 2018; Nair et al., 2018b; Racaniere et al., 2020) and collecting teleoperated robot trajectories\nto identify goals (Lynch et al., 2020; Gupta et al., 2020). In this paper, we extend an alternative approach called asymmetric self-play (Sukhbaatar et al., 2018b;a) for automated goal generation. Asymmetric self-play trains two RL agents named Alice and Bob. Alice learns to propose goals that Bob is likely to fail at, and Bob, a goal-conditioned policy, learns to solve the proposed goals. Alice proposes a goal by manipulating objects and Bob has to solve the goal starting from the same initial state as Alice’s. By embodying these two agents into the same robotic hardware, this setup ensures that all proposed goals are provided with at least one solution: Alice’s trajectory.\nThere are two main reasons why we consider asymmetric self-play to be a promising goal generation and learning method. First, any proposed goal is achievable, meaning that there exists at least one solution trajectory that Bob can follow to achieve the goal. Because of this property, we can exploit Alice’s trajectory to provide additional learning signal to Bob via behavioral cloning. This additional learning signal alleviates the overhead of heuristically designing a curriculum or reward shaping for learning. Second, this approach does not require labor intensive data collection.\nIn this paper, we show that asymmetric self-play can be used to train a goal-conditioned policy for complex object manipulation tasks, and the learned policy can zero-shot generalize to many manually designed holdout tasks, which consist of either previously unseen goals, previously unseen objects, or both. 
To the best of our knowledge, this is the first work that presents zero-shot generalization to many previously unseen tasks by training purely with asymmetric self-play.1" }, { "heading": "2 PROBLEM FORMULATION", "text": "Our training environment for robotic manipulation consists of a robot arm with a gripper attached and a wide range of objects placed on a table surface (Figure 1a, 1b). The goal-conditioned policy learns to control the robot to rearrange randomly placed objects (the initial state) into a specified goal configuration (Figure 1c). We aim to train a policy on a single training distribution and to evaluate its performance over a suite of holdout tasks which are independently designed and not explicitly present during training (Figure 2a). In this work, we construct the training distribution via asymmetric self-play (Figure 2b) to achieve generalization to many unseen holdout tasks (Figure 1c).\nMathematical formulation Formally, we model the interaction between an environment and a goal-conditioned policy as a goal-augmented Markov decision process $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \mathcal{G} \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto \mathbb{R}$ denotes the transition probability, $\mathcal{G} \subseteq \mathcal{S}$ specifies the goal space, and $\mathcal{R}: \mathcal{S} \times \mathcal{G} \mapsto \mathbb{R}$ is a goal-specific reward function. A goal-augmented trajectory sequence is $\{(s_0, g, a_0, r_0), \ldots, (s_t, g, a_t, r_t)\}$, where the goal is provided to the policy as part of the observation at every step. We say a goal is achieved if $s_t$ is sufficiently close to $g$ (Appendix A.2). With slightly overloaded notation, we define the goal distribution $G(g|s_0)$ as the probability of a goal state $g \in \mathcal{G}$ conditioned on an initial state $s_0 \in \mathcal{S}$.\n1Asymmetric self-play is proposed in Sukhbaatar et al. (2018b;a), but only to supplement training, while the majority of training is conducted on target tasks. Zero-shot generalization to unseen tasks was not evaluated.\nTraining goal distribution A naive design of the goal distribution $G(g|s_0)$ is to place objects uniformly at random on the table, but it is unlikely to generate interesting goals, such as an object picked up and held above the table surface by a robot gripper. Another possible approach, collecting tasks and goals manually, is expensive and hard to scale. We instead sidestep these issues and automatically generate goals via training based on asymmetric self-play (Sukhbaatar et al., 2018b;a). Asymmetric self-play involves using a policy named Alice, $\pi_A(a|s)$, to set goals and a goal-conditioned policy Bob, $\pi_B(a|s, g)$, to solve the goals proposed by Alice, as illustrated in Figure 2b. We run $\pi_A$ to generate a trajectory $\tau_A = \{(s_0, a_0, r_0), \ldots, (s_T, a_T, r_T)\}$, and the last state is labelled as a goal $g$ for $\pi_B$ to solve. The goal distribution $G(s_T = g \mid s_0)$ is fully determined by $\pi_A$, and we train Bob only on this goal distribution. We therefore speak of zero-shot generalization when Bob generalizes to a holdout task which is not explicitly encoded into the training distribution.\nEvaluation on holdout tasks To assess the zero-shot generalization of $\pi_B(a|s, g)$ from our training setup, we hand-designed a suite of holdout tasks with goals that are never directly incorporated into the training distribution. Some holdout tasks also feature previously unseen objects. The holdout tasks are designed to either test whether a specific skill has been learned, such as the ability to pick up objects (Figure 3), or represent a semantically interesting task, such as setting a table (Figure 1c).
Appendix B.6 describes the list of holdout tasks that we use in our experiments. Note that none of the holdout tasks are used for training $\pi_B(a|s, g)$." }, { "heading": "3 ASYMMETRIC SELF-PLAY", "text": "To train the Alice policy $\pi_A(a|s)$ and the Bob policy $\pi_B(a|s, g)$, we run the following multi-goal game within one episode, as illustrated in Figure 2b:\n1. An initial state $s_0$ is sampled from an initial state distribution. Alice and Bob are instantiated into their own copies of the environment. Alice and Bob alternate turns as follows.\n2. Alice's turn. Alice interacts with its environment for a fixed number of $T$ steps and may rearrange the objects. The state at the end of Alice's turn, $s_T$, will be used as a goal $g$ for Bob. If the proposed goal is invalid (e.g., if Alice has not moved any objects, or if an object has fallen off the table), the episode terminates.\n3. Bob's turn. Bob receives reward if it successfully achieves the goal $g$ in its environment. Bob's turn ends when it succeeds at achieving the goal or reaches a timeout. If Bob's turn ends in a failure, its remaining turns are skipped and treated as failures, while we let Alice keep generating goals.\n4. Alice receives reward if Bob fails to solve the goal that Alice proposed. Steps 2–3 are repeated until 5 goals are set by Alice or Alice proposes an invalid goal, and then the episode terminates.\nThe competition created by this game encourages Alice to propose goals that are increasingly challenging to Bob, while Bob is forced to solve increasingly complex goals. The multi-goal setup was chosen to allow Bob to take advantage of environmental information discovered earlier in the episode to solve its remaining goals, which OpenAI et al. (2019a) found to be important for transfer to physical systems. Note, however, that in this work we focus on solving goals in simulation only. To improve stability and avoid forgetting, we have Alice and Bob play against past versions of their respective opponent in 20% of games. More details about the game structure and pseudocode for training with asymmetric self-play are available in Appendix A." }, { "heading": "3.1 REWARD STRUCTURE", "text": "For Bob, we assign sparse goal-conditioned rewards. We measure the positional and rotational distance between an object and its goal state as the Euclidean distance and the Euler-angle rotational distance, respectively. Whenever both distance metrics are below a small error (the success threshold), the object is deemed to be placed close enough to the goal state and Bob immediately receives a reward of 1. But if this object is later moved away from the goal state that it reached in past steps, Bob obtains a reward of -1, such that the sum of per-object rewards is at most 1 during a given turn. When all of the objects are in their goal state, Bob receives an additional reward of 5 and its turn is over.\nFor Alice, we assign a reward after Bob has attempted to solve the goal: a reward of 5 if Bob failed to solve the goal, and 0 if Bob succeeded. We shape Alice's reward slightly by adding a reward of 1 if it has set a valid goal, defined to be when no object has fallen off the table and any object has been moved by more than the success threshold. An additional penalty of 3 is introduced when Alice sets a goal with objects outside of the placement area, defined to be a fixed 3D volume within the view of the robot's camera. More details are discussed in Appendix A.2."
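To make the game and reward structure above concrete, the following is a minimal, hypothetical sketch of one multi-goal episode; the snapshot/restore environment interface and all helper functions (`rollout_alice`, `rollout_bob`, `store_demonstration`, `give_reward`) are illustrative assumptions, not the released implementation:

```python
def object_placed(pos_dist, rot_dist, threshold):
    """Bob's per-object success check (Section 3.1): both the Euclidean
    positional distance and the Euler-angle rotational distance must be
    below the success threshold."""
    return pos_dist < threshold and rot_dist < threshold

def self_play_episode(env, alice, bob, num_goals=5, T=100):
    """One multi-goal asymmetric self-play episode (steps 1-4 in Section 3)."""
    env.reset()                            # sample an initial state s_0
    bob_failed = False
    for _ in range(num_goals):
        start = env.snapshot()             # shared starting state for this turn
        goal, valid, alice_traj = rollout_alice(env, alice, steps=T)
        if not valid:                      # no object moved, or one fell off
            break                          # an invalid goal ends the episode
        alice_reward = 1.0                 # shaping: +1 for setting a valid goal
        if not bob_failed:
            env.restore(start)             # Bob starts from Alice's initial state
            bob_failed = not rollout_bob(env, bob, goal)
        if bob_failed:                     # skipped turns also count as failures
            alice_reward += 5.0            # Alice is rewarded when Bob fails
            store_demonstration(alice_traj, goal)  # relabeled demo for ABC (Sec. 3.2)
        give_reward(alice, alice_reward)   # the -3 out-of-placement-area penalty
                                           # would also be applied here
```

Bob's per-object +1/-1 rewards and the +5 completion bonus would be emitted inside `rollout_bob` as `object_placed` changes value; the sketch omits the RL optimization itself.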
}, { "heading": "3.2 ALICE BEHAVIORAL CLONING (ABC)", "text": "One of the main benefits of using asymmetric self-play is that the generated goals come with at least one solution to achieve it: Alice’s trajectory. Similarly to Sukhbaatar et al. (2018a), we exploit this property by training Bob with Behavioral Cloning (BC) from Alice’s trajectory, in addition to the reinforcement learning (RL) objective. We call this learning mechanism Alice Behavioral Cloning (ABC). We propose several improvements over the original formulation in Sukhbaatar et al. (2018a).\nDemonstration trajectory filtering Compared to BC from expert demonstrations, using Alice’s trajectory needs extra care. Alice’s trajectory is likely to be suboptimal for solving the goal, as Alice might arrive at the final state merely by accident. Therefore, we only consider trajectories with goals that Bob failed to solve as demonstrations, to avoid distracting Bob with suboptimal examples. Whenever Bob fails, we relabel Alice’s trajectory ⌧A to be a goal-augmented version ⌧BC = {(s0, sT , a0, r0), . . . , (sT , sT , aT , rT )} as a demonstration for BC, where sT is the goal.\nPPO-style BC loss clipping The objective for training Bob is L = LRL + Labc, where LRL is an RL objective and Labc is the ABC loss. is a hyperparameter controlling the relative importance of the BC loss. We set = 0.5 throughout the whole experiment. A naive BC loss is to minimize the negative log-likelihood of demonstrated actions, E(st,gt,at)2DBC ⇥ log ⇡B(at|st, gt; ✓) ⇤ where DBC is a mini-batch of demonstration data and ⇡B is parameterized by ✓. We found that overly-aggressive policy changes triggered by BC sometimes led to learning instabilities. To prevent the policy from changing too drastically, we introduce PPO-style loss clipping (Schulman et al., 2017) on the BC loss by setting the advantage  = 1 in the clipped surrogate objective:\nLabc = E(st,gt,at)2DBC clip ⇣ ⇡B(at|st, gt; ✓) ⇡B(at|st, gt; ✓old) , 1 ✏, 1 + ✏ ⌘\nwhere ⇡B(at|st, gt; ✓) is Bob’s likelihood on a demonstration based on the parameters that we are optimizing, and ⇡B(at|st, gt; ✓old) is the likelihood based on Bob’s behavior policy (at the time of demonstration collection) evaluated on a demonstration. This behavior policy is identical to the policy that we use to collect RL trajectories. By setting  = 1, this objective optimizes the naive BC loss, but clips the loss whenever ⇡B(at|st,gt;✓)⇡B(at|st,gt;✓old) is bigger than 1 + ✏, to prevent the policy from changing too much. ✏ is a clipping threshold and we use ✏ = 0.2 in all the experiments." }, { "heading": "4 RELATED WORK", "text": "Training distribution for RL In the context of multi-task RL (Beattie et al., 2016; Hausman et al., 2018; Yu et al., 2020), multi-goal RL (Kaelbling, 1993; Andrychowicz et al., 2017), and meta RL (Wang et al., 2016; Duan et al., 2016), previous works manually designed a distribution of tasks or goals to see better generalization of a policy to a new task or goal. Domain randomization (Sadeghi & Levine, 2017b; Tobin et al., 2017; OpenAI et al., 2020) manually defines a distribution of simulated environments, but in service of generalizing to the same task in the real world.\nThere are approaches to grow the training distribution automatically (Srivastava et al., 2013). 
Self-play (Tesauro, 1995; Silver et al., 2016; 2017; Bansal et al., 2018; OpenAI et al., 2019b; Vinyals et al., 2019) constructs an ever-growing training distribution where multiple agents learn by competing with each other, so that the resulting agent shows strong performance on a single game. OpenAI et al. (2019a) automatically grew a distribution of domain randomization parameters to accomplish better generalization in the task of solving a Rubik's cube on the physical robot. Wang et al. (2019; 2020) studied an automated way to keep discovering challenging 2D terrains and locomotion policies that can solve them in a 2D bipedal walking environment.\nWe employ asymmetric self-play to construct a training distribution for learning a goal-conditioned policy and to achieve generalization to unseen tasks. Florensa et al. (2018); Nair et al. (2018b); Racaniere et al. (2020) had the same motivation as ours, but trained a generative model instead of a goal setting policy. Thus, the difficulties of training a generative model were inherited by these methods: the difficulty of modeling a high-dimensional space and the generation of unrealistic samples. Lynch et al. (2020); Gupta et al. (2020) used teleoperation to collect arbitrary robot trajectories, and defined a goal distribution from the states in the collected trajectories. This approach likely requires a large number of robot trajectories for each environment configuration (e.g., various types of objects on a table), and randomization of objects was not studied in this context.\nAsymmetric self-play Asymmetric self-play was proposed by Sukhbaatar et al. (2018b) as a way to supplement RL training. Sukhbaatar et al. (2018b) mixed asymmetric self-play training with standard RL training on the target task and measured the performance on the target task. Sukhbaatar et al. (2018a) used asymmetric self-play to pre-train a hierarchical policy and evaluated the policy after fine-tuning it on a target task. Liu et al. (2019) adopted self-play to encourage efficient learning with sparse reward in the context of an exploration competition between a pair of agents. As far as we know, no previous work has trained a goal-conditioned policy purely based on asymmetric self-play and evaluated generalization to unseen holdout tasks.\nCurriculum learning Many previous works showed the difficulty of RL and proposed an automated curriculum (Andrychowicz et al., 2017; Florensa et al., 2017; Salimans & Chen, 2018; Matiisen et al., 2019; Zhang et al., 2020) or auxiliary exploration objectives (Oudeyer et al., 2007; Baranes & Oudeyer, 2013; Pathak et al., 2017; Burda et al., 2019; Ecoffet et al., 2019; 2020) to learn predefined tasks. When training goal-conditioned policies, relabeling or reversing trajectories (Andrychowicz et al., 2017; Florensa et al., 2017; Salimans & Chen, 2018) or imitating successful demonstrations (Oh et al., 2018; Ecoffet et al., 2019; 2020) naturally reduces the task complexity. Our work shares a similarity in that asymmetric self-play alleviates the difficulty of learning a goal-conditioned policy via an intrinsic curriculum and imitation from the goal setter's trajectory, but our work does not assume any predefined task or goal distribution.\nHierarchical reinforcement learning (HRL) Some HRL methods jointly trained a goal setting policy (high-level or manager policy) and a goal solving policy (low-level or worker policy) (Vezhnevets et al., 2017; Levy et al., 2019; Nachum et al., 2018).
However, the motivation for learning a goal setting policy in HRL is not to challenge the goal solving policy, but to cooperate to tackle a task that can be decomposed into a sequence of sub-goals. Hence, this goal setting policy is trained to optimize task reward for the target task, unlike asymmetric self-play where the goal setter is rewarded upon the other agent's failure.\nRobot learning for object manipulation. It has been reported that training a policy for multi-object manipulation is very challenging with sparse rewards (Riedmiller et al., 2018; Vecerik et al., 2018). One example is block stacking, which has been studied for a long time in robotics as it involves complex contact reasoning and long-horizon motion planning (Deisenroth et al., 2011). Learning block stacking often requires a hand-designed curriculum (Li et al., 2019), meticulous reward shaping (Popov et al., 2017), fine-tuning (Rusu et al., 2017), or human demonstrations (Nair et al., 2018a; Duan et al., 2017). In this work, we use block stacking as one of the holdout tasks to test zero-shot generalization, but without training on it." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we first show that asymmetric self-play generates an effective training curriculum that enables generalization to unseen holdout tasks. Then, the experiment is scaled up to train in an environment containing multiple random complex objects, and we evaluate it with a set of holdout tasks containing unseen objects and unseen goal configurations. Finally, we demonstrate how critical ABC is for Bob to make progress in a set of ablation studies.\nFigure 3: Holdout tasks in the environment using 1 or 2 blocks. The transparent blocks denote the desired goal state, while opaque blocks are the current state. (a) push: The blocks must be moved to their goal locations and orientations. There is no differentiation between the six block faces. (b) flip: Each side of the block is labelled with a unique letter. The blocks must be moved so that every face is positioned exactly as the goal specifies. (c) pick-and-place: One goal block is in the air. (d) stack: Two blocks must be stacked in the right order at the right location.\n[Figure 4 panels: success rate (%) vs. training steps (x100) for Push, Flip, Pick-and-place, and Stack; legend: No curriculum, Curriculum: distance, Curriculum: distribution, Curriculum: full, Self-play.]\nFigure 4: Generalization to unseen holdout tasks for blocks. Baselines are trained over a mixture of all holdout tasks. The solid lines represent 2 blocks, while the dashed lines are for 1 block. The x-axis denotes the number of training steps via asymmetric self-play. The y-axis is the zero-shot generalization performance of the Bob policy at the corresponding training checkpoints. Note that the success rate curves of completely failed baselines are occluded by others." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "We implement the training environment2 described in Sec. 2 with randomly placed ShapeNet objects (Chang et al., 2015) as an initial state distribution. In addition, we set up another simpler environment using one or two blocks of fixed size, used for small-scale comparisons and ablation studies. Figure 3 visualizes four holdout tasks for this environment. Each task is designed to evaluate whether the robot has acquired certain manipulation skills: pushing, flipping, picking up and stacking blocks. Experiments in Sec.
5.2, 5.3 and 5.5 focus on blocks, and experimental results based on ShapeNet objects are presented in Sec. 5.4. More details on our training setups are in Appendix B.\nWe implement Alice and Bob as two independent policies of the same network architecture with memory (Appendix B.4), except that Alice has no observation of the goal state. The policies take state observations ("state policy") for experiments with blocks (Sec. 5.2, 5.3, and 5.5), and take both vision and state observations ("hybrid policy") for experiments with ShapeNet objects (Sec. 5.4). Both policies are trained with Proximal Policy Optimization (PPO) (Schulman et al., 2017)." }, { "heading": "5.2 GENERALIZATION TO UNSEEN GOALS WITHOUT MANUAL CURRICULA", "text": "One way to train a single policy to acquire all the skills in Figure 3 is to train a goal-conditioned policy directly over a mixture of these tasks. However, training directly over these tasks without a curriculum turns out to be very challenging, as the policy completely fails to make any progress.3 In contrast, Bob is able to solve all these holdout tasks quickly when learning via asymmetric self-play, without explicitly encoding any prior knowledge of the holdout tasks into the training distribution.\nTo gauge the effect of the intrinsic curriculum introduced by self-play, we carefully designed a set of non-self-play baselines using explicit curricula controlled by Automatic Domain Randomization (OpenAI et al., 2019a). All baselines are trained over a mixture of block holdout tasks as the goal distribution. We measure the effectiveness of a training setup by tracking the success rate for each holdout task, as shown in Figure 4. The no curriculum baseline fails drastically. The curriculum:distance baseline expands the distance between the initial and goal states gradually as training progresses, but only learns to push and flip a single block. The curriculum:distribution baseline, which slowly increases the proportion of pick-and-place and stacking goals in the training distribution, fails to acquire any skill. The curriculum:full baseline incorporates all hand-designed curricula yet still cannot learn how to pick up or stack blocks. We spent a decent amount of time iterating on and improving these baselines but found it especially difficult to develop a scheme good enough to compete with asymmetric self-play. See Appendix C.1 for more details of our baselines.\n2Our training and evaluation environments are publicly available at hide-for-anonymous-purpose. 3The task was easier when we ignored object rotation as part of the goal and used a smaller table." }, { "heading": "5.3 DISCOVERY OF NOVEL GOALS AND SOLUTIONS", "text": "Asymmetric self-play discovers novel goals and solutions that are not covered by our holdout tasks. As illustrated in Figure 5, Alice can lift multiple blocks at the same time, build a tower and then keep it balanced using an arm joint. Although this is a tricky strategy for Bob to learn on its own, with ABC, Bob eventually acquires the skills for solving such complex tasks proposed by Alice. Videos are available at https://robotics-self-play.github.io.\nFigure 6 summarizes Alice and Bob's learning progress against each other. For every pair of Alice and Bob, we ran multiple self-play episodes and measured the success rate. We observe an interesting trend with 2 blocks. As training proceeds, Alice tends to generate more challenging goals, on which Bob shows a lower success rate.
With past sampling, Bob continues to make progress against versions of Alice from earlier optimization steps. This visualization suggests a desired dynamic of asymmetric self-play that could potentially lead to unbounded complexity: Alice continuously generates goals to challenge Bob, and Bob keeps making progress on learning to solve new goals." }, { "heading": "5.4 GENERALIZATION TO UNSEEN OBJECTS AND GOALS", "text": "The experiments above show strong evidence that efficient curricula and novel goals can autonomously emerge in asymmetric self-play. To further challenge our approach, we scale it up to work with many more complex objects using more computational resources for training. We train a hybrid policy in an environment containing up to 10 random ShapeNet (Chang et al., 2015) objects. During training, we randomize the number of objects and the object sizes via Automatic Domain Randomization (OpenAI et al., 2019a). The hybrid policy uses vision observations to extract information about object geometry and size. We evaluate the Bob policy on a more diverse set of manipulation tasks, including semantically interesting ones. Many tasks contain unseen objects and complex goals, as illustrated in Figure 7.\nThe learned Bob policy achieves decent zero-shot generalization performance for many tasks. Success rates are reported in Figure 8.\n[Figure 8 bar chart: success rate (%) and number of objects per holdout task, grouped into Blocks (Push, Pick&place, Stacking), YCB objects (Push, Pick&place, Ball capture, Mini chess, Dominos), and Customized objects (Table setting, Tangram, Rainbow).]\nFigure 8: Success rates of a single goal-conditioned policy solving a variety of holdout tasks, averaged over 100 trials. The error bars indicate the 99% confidence intervals. Yellow, orange and blue bars correspond to success rates of manipulation tasks with blocks, YCB4 objects and other uniquely built objects, respectively. Videos are available at https://robotics-self-play.github.io.\nSeveral tasks are still challenging. For example, ball-capture
The no demonstration filter baseline shows noticeable instability on flip, suggesting the importance of excluding suboptimal demonstrations from behavioral cloning. Finally, the single-goal baseline uses a single goal instead of 5 goals per episode during training. The evaluation tasks are also updated to require a single success per episode. Generalization of this baseline to holdout tasks turns out to be much slower and less stable. It signifies some advantages of using multiple goals per episode, perhaps due to the policy memory internalizing environmental information during multiple trials of goal solving.\nThe results of the ablation studies suggest that ABC with proper configuration and multi-goal gameplay are critical components of asymmetric self-play, alleviating the importance of manual curricula and facilitating efficient learning." }, { "heading": "6 CONCLUSION", "text": "One limitation of our asymmetric self-play approach is that it depends on a resettable simulation environment as Bob needs to start from the same initial state as Alice’s. Therefore asymmetric self-play training has to happen in a simulator which can be easily updated to a desired state. In order to run the goal-solving policy on physical robots, we plan to adopt sim-to-real techniques in future work. Sim-to-real has been shown to achieve great performance on many robotic tasks in the real world (Sadeghi & Levine, 2017a; Tobin et al., 2017; James et al., 2019; OpenAI et al., 2020). One potential approach is to pre-train two agents via asymmetric self-play in simulation, and then fine-tune the Bob policy with domain randomization or data collected on physical robots.\nIn conclusion, we studied asymmetric self-play as a framework for defining a single training distribution to learn many arbitrary object manipulation tasks. Even without any prior knowledge about the target tasks, asymmetric self-play is able to train a strong goal-conditioned policy that can generalize to many unseen holdout tasks. We found that asymmetric self-play not only generates a wide range of interesting goals but also alleviates the necessity of designing manual curricula for learning such goals. We provided evidence that using the goal setting trajectory as a demonstration for training a goal solving policy is essential to enable efficient learning. We further scaled up our approach to work with various complex objects using more computation, and achieved zero-shot generalization to a collection of challenging manipulation tasks involving unseen objects and unseen goals.\n4https://www.ycbbenchmarks.com/object-models/" } ]
2020
null
SP:038a1d3066f8273977337262e975d7a7aab5002f
[ "The paper introduces a theoretical framework for analyzing GNN transferability. The main idea is to view a graph as subgraph samples with the information of both the connections and the features. Based on this view, the authors define EGI score of a graph as a learnable function that needs to be optimized by maximizing the mutual information between the subgraph and the GNN output embedding of the center node. Then, the authors give an upper bound for the difference of EGI scores of two graphs based on the difference of eigenvalues of the graph Laplacian of the subgraph samples from the two graphs. The implication is that if the difference of the eigenvalues is small, then the EGI scores are similar, which means the GNN has a similar ability to encode the structure of the two graphs. ", "This paper develops a novel measure for assessing the transferability of graph neural network models to new data sets. The measure is based on a decomposition of graphs into 'ego networks' (essentially, a distribution of $k$-hop subgraph, extracted from a given larger graph). Transferability is then assessed by means of a spectral criterion using the graph Laplacian. Experiments demonstrate the utility in assessing transferability in such a manner, as the new measure appears to be aligned with improvements in predictive performance.", "This work aims to provide fundamental understanding towards the mechanism and transferability of GNNs, and develops an unsupervised GNN training objective based on their understanding. Novel theoretical analysis has been done to support the design of EGI and establish its transferability bound, while the effectiveness of EGI and the utility of the transferability bound are verified by extensive experiments. The whole story looks new, comprehensive and convincing to me." ]
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work started to study the pre-training of GNNs. However, none of them provide theoretical insights into the design of their frameworks, or clear requirements and guarantees towards their transferability. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. Firstly, we propose a novel view towards the essential graph information and advocate the capturing of it as the goal of transferable GNN training, which motivates the design of EGI (Ego-Graph Information maximization) to analytically achieve this goal. Secondly, when node features are structure-relevant, we conduct an analysis of EGI transferability regarding the difference between the local graph Laplacians of the source and target graphs. We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct-transferring, while those on large-scale knowledge graphs show promising results in the more practical setting of transferring with fine-tuning.1
[ { "affiliations": [], "name": "Qi Zhu" }, { "affiliations": [], "name": "Carl Yang" }, { "affiliations": [], "name": "Yidan Xu" }, { "affiliations": [], "name": "Haonan Wang" }, { "affiliations": [], "name": "Chao Zhang" }, { "affiliations": [], "name": "Jiawei Han" } ]
[ { "authors": [ "Réka Albert", "Albert-László Barabási" ], "title": "Statistical mechanics of complex networks", "venue": "Reviews of modern physics,", "year": 2002 }, { "authors": [ "Sanjeev Arora", "Elad Hazan", "Satyen Kale" ], "title": "Fast algorithms for approximate semidefinite programming using the multiplicative weights update method", "venue": "In FOCS,", "year": 2005 }, { "authors": [ "Jinheon Baek", "Dong Bok Lee", "Sung Ju Hwang" ], "title": "Learning to extrapolate knowledge: Transductive few-shot out-of-graph link prediction", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Lu Bai", "Edwin R Hancock" ], "title": "Fast depth-based subgraph kernels for unattributed graphs", "venue": "Pattern Recognition,", "year": 2016 }, { "authors": [ "Albert-László Barabási", "Réka Albert" ], "title": "Emergence of scaling in random networks. science", "venue": null, "year": 1999 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "venue": "In NIPS,", "year": 2002 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In NIPS,", "year": 2007 }, { "authors": [ "Karsten Borgwardt", "Elisabetta Ghisu", "Felipe Llinares-López", "Leslie O’Bray", "Bastian Rieck" ], "title": "Graph kernels: State-of-the-art and future challenges", "venue": "arXiv preprint arXiv:2011.03854,", "year": 2020 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Fan RK Chung", "Fan Chung Graham" ], "title": "Spectral graph theory", "venue": "Number 92. 
American Mathematical Soc.,", "year": 1997 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In KDD,", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "David K Hammond", "Pierre Vandergheynst", "Rémi Gribonval" ], "title": "Wavelets on graphs via spectral graph theory", "venue": "ACHA, 30(2):129–150,", "year": 2011 }, { "authors": [ "Kaveh Hassani", "Amir Hosein Khasahmadi" ], "title": "Contrastive multi-view representation learning on graphs", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Keith Henderson", "Brian Gallagher", "Tina Eliassi-Rad", "Hanghang Tong", "Sugato Basu", "Leman Akoglu", "Danai Koutra", "Christos Faloutsos", "Lei Li" ], "title": "Rolx: structural role extraction & mining in large graphs", "venue": "In KDD,", "year": 2012 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Weihua Hu", "Bowen Liu", "Joseph Gomes", "Marinka Zitnik", "Percy Liang", "Vijay Pande", "Jure Leskovec" ], "title": "Strategies for pre-training graph neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ziniu Hu", "Yuxiao Dong", "Kuansan Wang", "Kai-Wei Chang", "Yizhou Sun" ], "title": "Gpt-gnn: Generative pre-training of graph neural networks", "venue": "In KDD,", "year": 2020 }, { "authors": [ "Ziniu Hu", "Changjun Fan", "Ting Chen", "Kai-Wei Chang", "Yizhou Sun" ], "title": "Pre-training graph neural networks for generic structural feature extraction", "venue": "arXiv preprint arXiv:1905.13728,", "year": 1905 }, { "authors": [ "Suk-Geun Hwang" ], "title": "Cauchy’s interlace theorem for eigenvalues of hermitian matrices", "venue": "The American Mathematical Monthly,", "year": 2004 }, { "authors": [ "Xuan Kan", "Hejie Cui", "Carl Yang" ], "title": "Zero-shot scene graph relation prediction through commonsense knowledge integration", "venue": "In ECML-PKDD,", "year": 2021 }, { "authors": [ "Nicolas Keriven", "Gabriel Peyré" ], "title": "Universal invariant and equivariant graph neural networks", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "arXiv preprint arXiv:1611.07308,", "year": 2016 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", 
"year": 2017 }, { "authors": [ "Nils M Kriege", "Fredrik D Johansson", "Christopher Morris" ], "title": "A survey on graph kernels", "venue": "Applied Network Science,", "year": 2020 }, { "authors": [ "Lin Lan", "Pinghui Wang", "Xuefeng Du", "Kaikai Song", "Jing Tao", "Xiaohong Guan" ], "title": "Node classification on graphs with few-shot novel labels via meta transformed network embedding", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Jure Leskovec", "Jon Kleinberg", "Christos Faloutsos" ], "title": "Graphs over time: densification laws, shrinking diameters and possible explanations", "venue": "In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining,", "year": 2005 }, { "authors": [ "Ron Levie", "Wei Huang", "Lorenzo Bucci", "Michael M Bronstein", "Gitta Kutyniok" ], "title": "Transferability of spectral graph convolutional neural networks", "venue": null, "year": 1907 }, { "authors": [ "Ron Levie", "Elvin Isufi", "Gitta Kutyniok" ], "title": "On the transferability of spectral graph filters", "venue": "13th International conference on Sampling Theory and Applications (SampTA),", "year": 2019 }, { "authors": [ "Jenny Liu", "Aviral Kumar", "Jimmy Ba", "Jamie Kiros", "Kevin Swersky" ], "title": "Graph normalizing flows", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Miller McPherson", "Lynn Smith-Lovin", "James M Cook" ], "title": "Birds of a feather: Homophily in social networks", "venue": "Annual review of sociology,", "year": 2001 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "arXiv preprint arXiv:1310.4546,", "year": 2013 }, { "authors": [ "Giannis Nikolentzos", "Giannis Siglidis", "Michalis Vazirgiannis" ], "title": "Graph kernels: A survey", "venue": "arXiv preprint arXiv:1904.12218,", "year": 2019 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Lawrence Page", "Sergey Brin", "Rajeev Motwani", "Terry Winograd" ], "title": "The pagerank citation ranking: Bringing order to the web", "venue": "Technical report, Stanford InfoLab,", "year": 1999 }, { "authors": [ "Zhen Peng", "Wenbing Huang", "Minnan Luo", "Qinghua Zheng", "Yu Rong", "Tingyang Xu", "Junzhou Huang" ], "title": "Graph representation learning via graphical mutual information maximization", "venue": "In WWW,", "year": 2020 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In KDD,", "year": 2014 }, { "authors": [ "Jiezhong Qiu", "Qibin Chen", "Yuxiao Dong", "Jing Zhang", "Hongxia Yang", "Ming Ding", "Kuansan Wang", "Jie Tang" ], "title": "Gcc: Graph contrastive coding for graph neural network pre-training", "venue": "In KDD,", "year": 2020 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Leonardo FR Ribeiro", "Pedro HP Saverese", "Daniel R Figueiredo" ], "title": "struc2vec: Learning node representations from structural identity", "venue": "In KDD,", "year": 2017 }, { "authors": [ "Sam T Roweis", "Lawrence K Saul" ], "title": "Nonlinear dimensionality reduction by locally linear 
embedding", "venue": null, "year": 2000 }, { "authors": [ "Luana Ruiz", "Luiz Chamon", "Alejandro Ribeiro" ], "title": "Graphon neural networks and the transferability of graph neural networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Yu Shi", "Qi Zhu", "Fang Guo", "Chao Zhang", "Jiawei Han" ], "title": "Easing embedding learning by comprehensive transcription of heterogeneous information networks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Fabian M Suchanek", "Gjergji Kasneci", "Gerhard Weikum" ], "title": "Yago: a core of semantic knowledge", "venue": "In WWW,", "year": 2007 }, { "authors": [ "Fan-Yun Sun", "Jordan Hoffman", "Vikas Verma", "Jian Tang" ], "title": "Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Jian Tang", "Meng Qu", "Mingzhe Wang", "Ming Zhang", "Jun Yan", "Qiaozhu Mei" ], "title": "Line: Largescale information network embedding", "venue": "In WWW,", "year": 2015 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Petar Velickovic", "William Fedus", "William L Hamilton", "Pietro Lio", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Saurabh Verma", "Zhi-Li Zhang" ], "title": "Stability and generalization of graph convolutional neural networks", "venue": "In KDD,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Tim Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Boris Weisfeiler", "Andrei A Lehman" ], "title": "A reduction of a graph to a canonical form and an algebra arising during this reduction", "venue": "Nauchno-Technicheskaya Informatsia,", "year": 1968 }, { "authors": [ "Man Wu", "Shirui Pan", "Chuan Zhou", "Xiaojun Chang", "Xingquan Zhu" ], "title": "Unsupervised domain adaptive graph convolutional networks", "venue": "In WWW,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Bishan Yang", "Wen-tau Yih", "Xiaodong He", "Jianfeng Gao", "Li Deng" ], "title": "Embedding entities and relations for learning and inference in knowledge bases", "venue": "arXiv preprint arXiv:1412.6575,", "year": 2014 }, { "authors": [ "Carl Yang", "Yichen Feng", "Pan Li", "Yu Shi", "Jiawei Han" ], "title": "Meta-graph based hin spectral embedding: Methods, analyses, and insights", "venue": null, "year": 2018 }, { "authors": [ "Carl Yang", "Aditya Pal", "Andrew Zhai", "Nikil Pancha", "Jiawei Han", "Chuck Rosenberg", "Jure Leskovec" ], "title": "Multisage: Empowering graphsage with contextualized multi-embedding on webscale multipartite networks", "venue": "In KDD,", "year": 2020 }, { "authors": [ "Carl Yang", "Yuxin Xiao", "Yu Zhang", "Yizhou Sun", "Jiawei Han" ], "title": "Heterogeneous network representation learning: A unified framework 
with survey and benchmark", "venue": "In TKDE,", "year": 2020 }, { "authors": [ "Carl Yang", "Chao Zhang", "Xuewen Chen", "Jieping Ye", "Jiawei Han" ], "title": "Did you enjoy the ride? understanding passenger experience via heterogeneous network embedding", "venue": null, "year": 2018 }, { "authors": [ "Carl Yang", "Jieyu Zhang", "Jiawei Han" ], "title": "Co-embedding network nodes and hierarchical labels with taxonomy based generative adversarial nets", "venue": "In ICDM,", "year": 2020 }, { "authors": [ "Carl Yang", "Jieyu Zhang", "Haonan Wang", "Sha Li", "Myungwan Kim", "Matt Walker", "Yiou Xiao", "Jiawei Han" ], "title": "Relation learning on social networks with multi-modal graph edge variational autoencoders", "venue": "In WSDM,", "year": 2020 }, { "authors": [ "Carl Yang", "Peiye Zhuang", "Wenhan Shi", "Alan Luu", "Pan Li" ], "title": "Conditional structure generation through graph variational generative adversarial nets", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "Will Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": null, "year": 2018 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William Hamilton", "Jure Leskovec" ], "title": "GraphRNN: Generating realistic graphs with deep auto-regressive models", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 } ]
[ { "heading": "1 Introduction", "text": "Graph neural networks (GNNs) have been intensively studied recently [29, 26, 39, 68], due to their established performance towards various real-world tasks [15, 69, 53], as well as close connections to spectral graph theory [12, 9, 16]. While most GNN architectures are not very complicated, the training of GNNs can still be costly regarding both memory and computation resources on real-world large-scale graphs [10, 63]. Moreover, it is intriguing to transfer learned structural information across different graphs and even domains in settings like few-shot learning [56, 44, 25]. Therefore, several very recent studies have been conducted on the transferability of GNNs [21, 23, 22, 59, 31, 3, 47]. However, it is unclear in what situations the models will excel or fail especially when the pre-training and fine-tuning tasks are different. To provide rigorous analysis and guarantee on the transferability of GNNs, we focus on the setting of direct-transfering between the source and target graphs, under an analogous setting of “domain adaptation” [7, 59].\nIn this work, we establish a theoretically grounded framework for the transfer learning of GNNs, and leverage it to design a practically transferable GNN model. Figure 1 gives an overview of our framework. It is based on a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features, which allows us to define graph information and similarity,\n∗These two authors contribute equally. 1Code and processed data are available at https://github.com/GentleZhu/EGI.\n35th Conference on Neural Information Processing Systems (NeurIPS 2021), Online.\nso as to analyze GNN transferability (§3). This view motivates us to design EGI, a novel GNN training objective based on ego-graph information maximization, which is effective in capturing the graph information as we define (§3.1). Then we further specify the requirement on transferable node features and analyze the transferability of EGI that is dependent on the local graph Laplacians of source and target graphs (§3.2).\nAll of our theoretical conclusions have been directly validated through controlled synthetic experiments (Table 1), where we use structural-equivalent role identification in an direct-transfering setting to analyze the impacts of different model designs, node features and source-target structure similarities on GNN transferability. In §4, we conduct real-world experiments on multiple publicly available network datasets. On the Airport and Gene graphs (§4.1), we closely follow the settings of our synthetic experiments and observe consistent but more detailed results supporting the design of EGI and the utility of our theoretical analysis. On the YAGO graphs (§4.2), we further evaluate EGI on the more generalized and practical setting of transfer learning with task-specific fine-tuning. We find our theoretical insights still indicative in such scenarios, where EGI consistently outperforms state-of-the-art GNN representation and transfer learning frameworks with significant margins." }, { "heading": "2 Related Work", "text": "Representation learning on graphs has been studied for decades, with earlier spectral-based methods [6, 46, 52] theoretically grounded but hardly scaling up to graphs with over a thousand of nodes. With the emergence of neural networks, unsupervised network embedding methods based on the Skip-gram objective [37] have replenished the field [51, 14, 42, 45, 66, 62, 65]. 
Equipped with efficient structural sampling (random walk, neighborhood, etc.) and negative sampling schemes, these methods are easily parallelizable and scalable to graphs with thousands to millions of nodes. However, these models are essentially transductive, as they compute fully parameterized embeddings only for nodes seen during training, which cannot be transferred to unseen graphs.

More recently, researchers have introduced the family of graph neural networks (GNNs) that are capable of inductive learning and generalizing to unseen nodes given meaningful node features [29, 12, 15, 67]. Yet, most existing GNNs require task-specific labels for training in a semi-supervised fashion to achieve satisfactory performance [29, 15, 53, 64], and their usage is limited to single graphs where the downstream task is fixed. To this end, several unsupervised GNNs have been presented, such as the auto-encoder-based ones like VGAE [28] and GNFs [35], as well as the deep-infomax-based ones like DGI [54] and InfoGraph [50]. Their potential in the transfer learning of GNNs remains unclear when the node features and link structures vary across different graphs.

Although the architectures of popular GNNs such as GCN [29] may not be very complicated compared with heavy vision and language models, training a dedicated GNN for each graph can still be cumbersome [10, 63]. Moreover, as pre-training neural networks has proven successful in other domains [13, 18], it is intriguing to transfer well-trained GNNs from relevant source graphs to improve the modeling of target graphs or enable few-shot learning [59, 31, 3] when labeled data are scarce. In light of this, pioneering works have studied both generative [22] and discriminative [21, 23] GNN pre-training schemes. Though Graph Contrastive Coding [43] shares the most similar view of graph structures to ours, it utilizes contrastive learning across all graphs instead of focusing on the transfer learning between any specific pairs. On the other hand, unsupervised domain adaptive GCNs [59] study the domain adaptation problem only when the source and target tasks are homogeneous.

Most previous pre-training and self-supervised GNNs lack a rigorous analysis of their transferability and thus have unpredictable effectiveness. The only existing theoretical work on GNN transferability studies the performance of GNNs across different permutations of a single original graph [33, 34] and the tradeoff between discriminability and transferability of GNNs [47]. We, instead, are the first to rigorously study the more practical setting of transferring GNNs across pairs of different source and target graphs." }, { "heading": "3 Transferable Graph Neural Networks", "text": "In this paper, we design a more transferable training objective for GNNs (EGI) based on our novel view of essential graph information (§3.1). We then analyze its transferability as the gap between its abilities to model the source and target graphs, based on their local graph Laplacians (§3.2).

Based on the connection between GNNs and spectral graph theory [29], we describe the output of a GNN as a combination of its input node features X, a fixed graph Laplacian L and learnable graph filters Ψ.
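To make this spectral view concrete, here is a minimal NumPy sketch (our illustration, not the authors' implementation) of a single GNN layer whose output combines the three components: node features X, a fixed normalized graph Laplacian L, and a learnable 1-hop polynomial filter (echoing the filter φ assumed later in Theorem 3.1); the coefficients theta0, theta1 and the weight matrix W stand in for the learnable graph filters Ψ.

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def gnn_layer(A, X, W, theta0=1.0, theta1=-1.0):
    # One layer: Z = ReLU(phi(L) @ X @ W), with phi(L) = theta0*I + theta1*L,
    # a 1-hop polynomial filter; theta0, theta1 and W are learned in practice.
    L = normalized_laplacian(A)
    phi_L = theta0 * np.eye(len(A)) + theta1 * L
    return np.maximum(phi_L @ X @ W, 0.0)

# Toy 4-node graph with random features; W would be learned by a real model.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
rng = np.random.default_rng(0)
Z = gnn_layer(A, rng.normal(size=(4, 8)), rng.normal(size=(8, 16)))
print(Z.shape)  # (4, 16)
```

With theta0 = 1 and theta1 = -1, the filter reduces to the familiar symmetric propagation matrix D^{-1/2} A D^{-1/2}, so this sketch covers the standard GCN layer as a special case.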
The goal of training a GNN is then to improve its utility by learning graph filters that are compatible with the other two components towards specific tasks.

In the graph transfer learning setting, where downstream tasks are often unknown during pre-training, we argue that the general utility of a GNN should be optimized and quantified w.r.t. its ability to capture the essential graph information in terms of the joint distribution of its topology structures and node features, which motivates us to design a novel ego-graph information maximization model (EGI) (§3.1). The general transferability of a GNN is then quantified by the gap between its abilities to model the source and target graphs. Under reasonable requirements such as using structure-respecting node features as the GNN input, we analyze this gap for EGI based on the structural difference between two graphs w.r.t. their local graph Laplacians (§3.2)." }, { "heading": "3.1 Transferable GNN via Ego-graph Information Maximization", "text": "In this work, we focus on the direct-transfering setting where a GNN is pre-trained on a source graph Ga in an unsupervised fashion and applied on a target graph Gb without fine-tuning.2 Consider a graph G = {V, E}, where the set of nodes V are associated with certain features X and the set of edges E form the graph structure. Intuitively, the transfer learning will be successful only if both the features and structures of Ga and Gb are similar in some ways, so that the graph filters of a GNN learned on Ga are compatible with the features and structures of Gb.

Graph kernels [57, 8, 30, 38] are well known for their capability of measuring similarity between pairs of graphs. Motivated by k-hop subgraph kernels [4], we introduce a novel view of a graph as samples from the joint distribution of its k-hop ego-graph structures and node features. Since a GNN essentially encodes such k-hop ego-graph samples, this view allows us to give concrete definitions of the structural information of graphs in the transfer learning setting, which facilitates the measuring of similarity (difference) among graphs. Yet, none of the existing GNN training objectives are capable of recovering such distributional signals of ego-graphs. To this end, we design Ego-Graph Information maximization (EGI), which instead reconstructs the k-hop ego-graph of each center node via mutual information maximization [20].

Definition 3.1 (K-hop ego-graph). We call a graph gi = {V(gi), E(gi)} a k-hop ego-graph centered at node vi if it has a k-layer centroid expansion [4] such that the greatest distance between vi and any other node in the ego-graph is k, i.e., ∀vj ∈ V(gi), |d(vi, vj)| ≤ k, where d(vi, vj) is the graph distance between vi and vj.

2In the experiments, we show our model to be generalizable to the more practical settings with task-specific pre-training and fine-tuning, while the study of a rigorous bound in such scenarios is left as future work.

In this paper, we use directed k-hop ego-graphs, whose direction is decided by whether they are composed of incoming or outgoing edges to the center node, i.e., gi and g̃i. The results apply trivially to undirected graphs with gi = g̃i.
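Definition 3.1 can be instantiated directly by breadth-first search. Below is a minimal Python sketch (ours, not the released code) that extracts the k-hop ego-graph of a center node from a hypothetical adjacency-list input; for a directed graph, passing in-neighbors or out-neighbors yields gi or g̃i respectively.

```python
from collections import deque

def k_hop_ego_graph(adj, center, k):
    """Return (nodes, edges) of the k-hop ego-graph centered at `center`."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == k:      # do not expand beyond the k-th hop
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    nodes = set(dist)  # all nodes within graph distance k of the center
    edges = {(u, v) for u in nodes for v in adj[u] if v in nodes}
    return nodes, edges

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(k_hop_ego_graph(adj, 0, 2))  # nodes {0,1,2,3}; node 4 is 3 hops away
```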
Definition 3.2 (Structural information). Let G be a topological space of sub-graphs; we view a graph G as samples of k-hop ego-graphs {gi}_{i=1}^{n} drawn i.i.d. from G with probability µ, i.e., gi ∼ µ i.i.d. for all i = 1, · · · , n. The structural information of G is then defined to be the set of k-hop ego-graphs {gi}_{i=1}^{n} and their empirical distribution.

As shown in Figure 1, three graphs G0, G1 and G2 are characterized by a set of 1-hop ego-graphs and their empirical distributions, which allows us to quantify the structural similarity among graphs as shown in §3.2 (i.e., G0 is more similar to G1 than to G2 under such characterization). In practice, the nodes in a graph G are characterized not only by their k-hop ego-graph structures but also by their associated node features. Therefore, G should be regarded as samples {(gi, xi)} drawn from the joint distribution P on the product space of G and a node feature space X.

Ego-Graph Information Maximization. Given a set of ego-graphs {(gi, xi)}i drawn from an empirical joint distribution (gi, xi) ∼ P, we aim to train a GNN encoder Ψ to maximize the mutual information MI(gi, Ψ(gi, xi)) between the defined structural information gi (i.e., the k-hop ego-graph; see footnote 3) and the node embedding zi = Ψ(gi, xi). To maximize the MI, another discriminator D(gi, zi) : E(gi) × zi → R+ is introduced to compute the probability that an edge e belongs to the given ego-graph gi. We use the Jensen-Shannon MI estimator [20] in the EGI objective,

LEGI = −MI^(JSD)(G, Ψ) = (1/N) ∑_{i=1}^{N} [sp(D(gi, z′i)) + sp(−D(gi, zi))], (1)

where sp(x) = log(1 + e^x) is the softplus function and (gi, z′i) is randomly drawn from the product of marginal distributions, i.e., z′i = Ψ(gi′, xi′), (gi′, xi′) ∼ P, i′ ≠ i. In general, we can also randomly draw negative gi′ in the topological space, but enumerating all possible graphs gi′ leads to high computation cost.

In Eq. 1, the computation of D on E(gi) depends on the node orders. Following the common practice in graph generation [70], we characterize the decision process of D with a fixed graph ordering, i.e., the BFS-ordering π over edges E(gi). D = f ◦ Φ is composed of another GNN encoder Φ and a scoring function f over an edge sequence Eπ : {e1, e2, ..., en}, which makes predictions on the BFS-ordered edges.

3Later in Section 3.2, we will discuss the equivalence between MI(gi, zi) and MI((gi, xi), zi) when node features are structure-respecting.

Recalling our previous definition of the direction of a k-hop ego-graph, the center node encoder Ψ receives pairs (gi, xi), while the neighbor node encoder Φ in the discriminator D receives (g̃i, xi). Both encoders are parameterized as GNNs,

Ψ(gi, xi) = GNNΨ(Ai, Xi), Φ(g̃i, xi) = GNNΦ(A′i, Xi),

where Ai and A′i are the adjacency matrices with self-loops of gi and g̃i, respectively, and Ai = (A′i)ᵀ. The self-loops are added following the common design of GNNs, which allows the convolutional node embeddings to always incorporate the influence of the center node. The output of Ψ, i.e., zi ∈ Rn, is the center node embedding, while Φ outputs the representation H ∈ R^{|gi|×n} for the neighbor nodes in the ego-graph.

Once the node representation H is computed, we now describe the scoring function f. For each node pair (p, q) ∈ Eπ, hp is the source node representation from Φ and xq is the destination node feature. The scoring function is

f(hp, xq, zi) = σ(Uᵀ · τ(Wᵀ[hp‖xq‖zi])), (2)

where σ and τ are the Sigmoid and ReLU activation functions. Thus, the discriminator D is asked to distinguish positive pairs ((p, q), zi) from negative pairs ((p, q), z′i) for each edge in gi:

D(gi, zi) = ∑_{(p,q)∈Eπ} log f(hp, xq, zi), D(gi, z′i) = ∑_{(p,q)∈Eπ} log f(hp, xq, z′i). (3)
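As a concrete reading of Eqs. 2-3, here is a minimal PyTorch sketch (our illustration; only the names U, W, f and D follow the paper, while the class name, dimensions and the epsilon for numerical stability are our own assumptions) of the scoring function and the edge-sum discriminator over a BFS-ordered edge list.

```python
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):  # hypothetical name; implements f and D above
    def __init__(self, h_dim, x_dim, z_dim, hidden=64):
        super().__init__()
        self.W = nn.Linear(h_dim + x_dim + z_dim, hidden)  # W^T [h_p || x_q || z]
        self.U = nn.Linear(hidden, 1)                      # U^T tau(...)

    def f(self, h_p, x_q, z):
        # Eq. 2: sigma(U^T tau(W^T [h_p || x_q || z]))
        return torch.sigmoid(self.U(torch.relu(self.W(torch.cat([h_p, x_q, z], -1)))))

    def D(self, H, X, z, bfs_edges):
        # Eq. 3: sum of log f over the BFS-ordered edge list of one ego-graph
        total = 0.0
        for (p, q) in bfs_edges:
            total = total + torch.log(self.f(H[p], X[q], z) + 1e-12)
        return total

scorer = EdgeScorer(h_dim=16, x_dim=8, z_dim=16)
H, X, z = torch.randn(5, 16), torch.randn(5, 8), torch.randn(16)
print(scorer.D(H, X, z, [(0, 1), (0, 2), (1, 3), (1, 4)]))
```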
There are two types of edges (p, q) in our consideration of node orders: type-a, the edges across different hops (from the center node), and type-b, the edges within the same hop (from the center node). The aforementioned BFS-based node ordering guarantees that Eq. 3 is sensitive to the ordering of type-a edges and invariant to the ordering of type-b edges, which is consistent with the requirement of our theoretical analysis on ∆D. Due to the fact that the output of a k-layer GNN only depends on a k-hop ego-graph for both encoders Ψ and Φ, EGI can be trained in parallel by sampling batches of gi's. Besides, the training objective of EGI is transferable as long as (gi, xi) across the source graph Ga and target graph Gb satisfy the conditions given in §3.2. More model details are given in Appendix §B, and source code is provided in the Supplementary Materials.

Connection with existing work. To provide more insights into the EGI objective, we also present it as a dual problem of ego-graph reconstruction. Recall our definition of ego-graph mutual information MI(gi, Ψ(gi, xi)). It can be related to an ego-graph reconstruction loss R(gi|Ψ(gi, xi)) as

max MI(gi, Ψ(gi, xi)) = H(gi) − H(gi|Ψ(gi, xi)) ≤ H(gi) − R(gi|Ψ(gi, xi)). (4)

When EGI is maximizing the mutual information, it simultaneously minimizes the upper error bound of reconstructing an ego-graph gi. In this view, the key difference between EGI and VGAE [28] is that VGAE assumes each edge in a graph is observed independently during reconstruction, while in EGI, edges in an ego-graph are observed jointly during GNN decoding. Moreover, existing mutual-information-based GNNs such as DGI [54] and GMI [41] explicitly measure the mutual information between node features x and the GNN output Ψ. In this way, they tend to capture node features instead of graph structures, whereas we deem the latter more essential in graph transfer learning, as discussed in §3.2.

Use cases of the EGI framework. In this paper, we focus on the classical domain adaptation (direct-transfering) setting [7], where no target domain labels are available and transferability is measured by the performance discrepancy without fine-tuning. In this setting, the transferability of EGI is theoretically guaranteed by Theorem 3.1. In §4.1, we validate this with the airport datasets. Beyond direct-transfering, EGI is also useful in the more generalized and practical setting of transfer learning with fine-tuning, which we introduce in §4.2 and validate with the YAGO datasets. In this setting, the transferability of EGI is not rigorously studied yet, but is empirically shown to be promising.

Supportive observations. In the first three columns of our synthetic experimental results (Table 1), in both cases of transfering GNNs between similar graphs (F-F) and dissimilar graphs (B-F), EGI significantly outperforms all competitors when using node degree one-hot encoding as transferable node features. In particular, the performance gains over the untrained GIN show the effectiveness of training and transfering, and our gains are always larger than those of the two state-of-the-art unsupervised GNNs. Such results clearly indicate the advantageous structure-preserving capability and transferability of EGI.
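Putting Eqs. 1-3 together, the following is a minimal sketch (our reading of Eq. 1, not the released code) of the Jensen-Shannon EGI loss computed from positive and negative discriminator scores; obtaining z′i by shuffling center embeddings within a batch is our own assumption about the negative sampling.

```python
import torch
import torch.nn.functional as F

def egi_loss(pos_scores, neg_scores):
    # L_EGI = (1/N) * sum_i [ sp(D(g_i, z'_i)) + sp(-D(g_i, z_i)) ]  (Eq. 1)
    return (F.softplus(neg_scores) + F.softplus(-pos_scores)).mean()

# e.g., scores produced by a discriminator D over a batch of N ego-graphs:
pos = torch.randn(32)   # D(g_i, z_i): each ego-graph with its own center embedding
neg = torch.randn(32)   # D(g_i, z'_i): z' taken from a different ego-graph
print(egi_loss(pos, neg))  # minimized during unsupervised pre-training
```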
" }, { "heading": "3.2 Transferability analysis based on local graph Laplacians", "text": "We now study the transferability of a GNN (in particular, with the training objective of LEGI) between the source graph Ga and target graph Gb based on their graph similarity. We first establish the requirement on node features, under which we then focus on analyzing the transferability of EGI w.r.t. the structural information of Ga and Gb.

Recall our view of the GNN output as a combination of its input node features, fixed graph Laplacian and learnable graph filters. The utility of a GNN is determined by the compatibility among the three. In order to fulfill such compatibility, we require the node features to be structure-respecting:

Definition 3.3 (Structure-respecting node features). Let gi be an ordered ego-graph centered on node vi with a set of node features {x^i_{p,q}} for p = 0, . . . , k and q = 1, . . . , |Vp(gi)|, where Vp(gi) is the set of nodes in the p-th hop of gi. Then we say the node features on gi are structure-respecting if x^i_{p,q} = [f(gi)]_{p,q} ∈ R^d for any node vq ∈ Vp(gi), where f : G → R^{d×|V(gi)|} is a function. In the strict case, f should be injective.

In essence, Def 3.3 requires the node features to be a function of the graph structures, which is sensitive to changes in the graph structures and, in an ideal case, injective to the graph structures (i.e., mapping different graphs to different features). In this way, when the learned graph filters of a transferred GNN are compatible with the structure of G, they are also compatible with the node features of G. As we will explain in Remark 2 of Theorem 3.1, this requirement is also essential for the analysis of EGI transferability, which eventually only depends on the structural difference between two graphs.

In practice, commonly used node features like node degrees, PageRank scores [40], spectral embeddings [11], and many pre-computed unsupervised network embeddings [42, 51, 14] are all structure-respecting in nature. However, other commonly used node features like random vectors [68] or uniform vectors [60] are not, and thus are non-transferable. When raw node attributes are available, they are transferable as long as the concept of homophily [36] applies, which also implies Def 3.3, but we do not have a rigorous analysis of this yet.

Supportive observations. In the fifth and sixth columns of Table 1, where we use the same fixed vectors as non-transferable node features to contrast with the first three columns, there is almost no transferability (see δ(acc.)) for all compared methods when non-transferable features are used, as the performance of the trained GNNs is similar to or worse than that of their untrained baselines. More detailed experiments on different transferable and non-transferable features can be found in Appendix §C.1.

With our view of graphs and requirement on node features both established, we now derive the following theorem by characterizing the performance difference of EGI on two graphs based on Eq. 1.

Theorem 3.1 (GNN transferability). Let Ga = {(gi, xi)}_{i=1}^{n} and Gb = {(gi′, xi′)}_{i′=1}^{m} be two graphs, and assume the node features are structure-respecting. Consider a GCN Ψθ with k layers and a 1-hop polynomial filter φ. With reasonable assumptions on the local spectra of Ga and Gb, the empirical performance difference of Ψθ evaluated on LEGI satisfies

|LEGI(Ga) − LEGI(Gb)| ≤ O(∆D(Ga, Gb) + C). (5)

On the RHS, C is only dependent on the graph encoders and node features, while ∆D(Ga, Gb) measures the structural difference between the source and target graphs as follows,

∆D(Ga, Gb) = C̃ · (1/(nm)) ∑_{i=1}^{n} ∑_{i′=1}^{m} λmax(L̃gi − L̃gi′), (6)

where λmax(A) := λmax(AᵀA)^{1/2}, and L̃gi denotes the graph Laplacian of g̃i normalized by its in-degrees. C̃ is a constant dependent on λmax(L̃gi) and D.
Proof. The full proof is detailed in Appendix §A.

The analysis in Theorem 3.1 naturally instantiates our insight about the correspondence between structural similarity and GNN transferability. It allows us to tell how well an EGI model trained on Ga can work on Gb by only checking the local graph Laplacians of Ga and Gb, without actually training any model. In particular, we define the EGI gap as ∆D in Eq. 6, as the other term C is the same for different methods using the same GNN encoder. It can be computed to bound the transferability of EGI regarding its loss difference on the source and target graphs.

Remark 1. Our view of a graph G as samples of k-hop ego-graphs is important, as it allows us to obtain a node-wise characterization of the GNN similarly to [55]. It also allows us to set the depth of ego-graphs in the analysis to be the same as the number of GNN layers (k), since the GNN embedding of each node mostly depends on its k-hop ego-graph instead of the whole graph.

Remark 2. For Eq. 1, Def 3.3 ensures that sampling the GNN embedding at a node always corresponds to sampling an ego-graph from G, which reduces to uniformly sampling from G = {gi}_{i=1}^{n} under the setting of Theorem 3.1. Therefore, the requirement of Def 3.3 in the context of Theorem 3.1 guarantees that the analysis depends only on the structural information of the graph.

Supportive observations. In Table 1, in the d̄ columns, we compute the average structural difference between two Forest-fire graphs (∆D(F,F)) and between Barabasi and Forest-fire graphs (∆D(B,F)), based on the RHS of Eq. 5. The results validate the topological difference between graphs generated by different random-graph models, while also verifying our view of a graph as k-hop ego-graph samples and the way we propose to characterize the structural information of graphs based on it. We further highlight in the δ(acc.) columns the accuracy difference between the GNNs transferred from Forest-fire graphs and from Barabasi graphs to Forest-fire graphs. Since Forest-fire graphs are more similar to Forest-fire graphs than Barabasi graphs are (as verified in the ∆D columns), we expect δ(acc.) to be positive and large, indicating more positive transfer between the more similar graphs. Indeed, the behaviors of EGI align well with this expectation, which indicates its well-understood transferability and the utility of our theoretical analysis.

Use cases of Theorem 3.1. Our Theorem 3.1 naturally allows for two practical use cases among many others: point-wise pre-judging and pair-wise pre-selection for EGI pre-training. Suppose we have a target graph Gb which does not have sufficient training labels. In the first setting, we have a single source graph Ga which might be useful for pre-training a GNN to be used on Gb. The EGI gap ∆D(Ga, Gb) in Eq. 6 can then be computed between Ga and Gb to pre-judge whether such transfer learning would be successful before any actual GNN training (i.e., yes if ∆D(Ga, Gb) is empirically much smaller than 1.0; no otherwise). In the second setting, we have two or more source graphs {Ga^1, Ga^2, . . .} which might be useful for pre-training the GNN. The EGI gap can then be computed between every pair of Ga^i and Gb to pre-select the best source graph (i.e., select the one with the smallest EGI gap).

In practice, the computation of eigenvalues on the small ego-graphs can be rather efficient [2], and we do not need to enumerate all pairs of ego-graphs on the two compared graphs, especially if the graphs are really large (e.g., with more than a thousand nodes).
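To make the pre-judging and pre-selection procedures concrete, below is a minimal NumPy sketch (ours) of the per-pair term λmax(L̃gi − L̃gi′) in Eq. 6; the exact form of the in-degree normalization and the zero-padding used to align ego-graphs of different sizes are our own assumptions, since the paper does not spell them out here.

```python
import numpy as np

def in_degree_laplacian(A):
    # One plausible in-degree normalization: L = I - A D_in^{-1},
    # where d_v (a column sum) is the in-degree of node v.
    d = A.sum(axis=0)
    d_inv = np.zeros_like(d)
    nz = d > 0
    d_inv[nz] = 1.0 / d[nz]
    return np.eye(len(A)) - A * d_inv[None, :]

def pair_term(A1, A2):
    # Zero-pad both Laplacians to a common dimension (our assumption).
    n = max(len(A1), len(A2))
    L1, L2 = np.zeros((n, n)), np.zeros((n, n))
    L1[:len(A1), :len(A1)] = in_degree_laplacian(A1)
    L2[:len(A2), :len(A2)] = in_degree_laplacian(A2)
    # lambda_max(M) := lambda_max(M^T M)^(1/2) is the largest singular value.
    return np.linalg.norm(L1 - L2, ord=2)

A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path ego-graph
A2 = np.array([[0, 1], [1, 0]], float)                   # 2-node ego-graph
print(pair_term(A1, A2))  # average such terms over sampled ego-graph pairs
```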
Instead, we can randomly sample pairs of ego-graphs from the two graphs, update the average difference on-the-fly, and stop when it converges. Suppose we need to sample M pairs of k-hop ego-graphs to compare two large graphs, and the average size of the ego-graphs is L; then the overall complexity of computing Eq. 5 is O(ML²), where M is often less than 1K and L less than 50. In Appendix §C.4, we report the approximated ∆D's w.r.t. different sampling frequencies, and they are indeed quite close to the actual value even with smaller sampling frequencies, showing the feasible efficiency of computing ∆D through sampling.

Limitations. EGI is designed to account for the structural difference captured by GNNs (i.e., k-hop ego-graphs). The effectiveness of EGI could be limited if the tasks on target graphs depend on different structural signals. For example, as Eq. 6 computes the average pairwise distance between the graph Laplacians of local ego-graphs, ∆D is possibly less effective in explicitly capturing global graph properties such as the number of connected components (CCs). In some specific tasks (such as counting CCs or community detection) where such properties become the key factors, ∆D may fail to predict the transferability of GNNs." }, { "heading": "4 Real Data Experiments", "text": "Baselines. We compare the proposed model against existing self-supervised GNNs and pre-training GNN algorithms. To exclude the impact of different GNN encoders Ψ on transferability, we always use the same encoder architecture for all compared methods (i.e., GIN [60] for direct-transfering experiments, GCN [29] for transfering with fine-tuning).

The self-supervised GNN baselines are GVAE [28], DGI [54] and two recent mutual-information estimation methods, GMI [41] and MVC [17]. As for pre-training GNN algorithms, MaskGNN and ContextPredGNN are two node-level pre-training models proposed in [21]. Besides, Structural Pre-train [23] also conducts unsupervised node-level pre-training with structural features like node degrees and clustering coefficients.

Experimental Settings. The main hyperparameter k is set to 2 in EGI, as a common practice. We use Adam [27] as the optimizer with a learning rate of 0.01. We provide the experimental results with varying k in Appendix §C.4. All baselines are set with their default parameters. Our experiments were run on an AWS g4dn.2xlarge machine with 1 Nvidia T4 GPU. By default, we use node degree one-hot encoding as the transferable feature across all different graphs. As stated before, other transferable features like spectral and other pre-computed node embeddings are also applicable. We focus on the setting where the downstream tasks on target graphs are unspecified but assumed to be structure-relevant, and thus pre-train the GNNs on source graphs in an unsupervised fashion.4 In terms of evaluation, we design two realistic experimental settings: (1) Direct-transfering on the more structure-relevant task of role identification without given node features, to directly evaluate the utility and transferability of EGI. (2) Few-shot learning on relation prediction with task-specific node features, to evaluate the generalization ability of EGI." }, { "heading": "4.1 Direct-transfering on role identification", "text": "First, we use role identification without node features in a direct-transfering setting as a reliable proxy to evaluate transfer learning performance regarding different pre-training objectives.
A role in a network is defined as a set of nodes with similar structural behaviors, such as clique members, hubs and bridges [19]. Across graphs in the same domain, we assume the definition of roles to be consistent, and the task of role identification is highly structure-relevant, which can directly reflect the transferability of different methods and allows us to conduct the analysis according to Theorem 3.1. Upon convergence of pre-training each model on the source graphs, we directly apply them to the target graphs and further train a multi-layer perceptron (MLP) upon their outputs. The GNN parameters are frozen during the MLP training. We refer to this strategy as direct-transfering since there is no fine-tuning of the models after transfering to the target graphs.

We use two real-world network datasets with role-based node labels: (1) Airport [45] contains three networks from different regions: Brazil, USA and Europe. Each node is an airport and each link is a flight between two airports. The airports are assigned external labels based on their level of popularity. (2) Gene [68] contains the gene interactions regarding 50 different cancers. Each gene has a binary label indicating whether it is a transcription factor. More details about the results and datasets can be found in Appendix C.2.

The experimental setup on the Airport dataset closely resembles that of our synthetic experiments in Table 1, but with real data and more detailed comparisons. We train all models (except for the untrained ones) on the Europe network, and test them on all three networks. The results are presented in Table 2. We notice that the node degree features themselves (with MLP) show reasonable performance in all three networks, which is not surprising since the popularity-based airport role labels are highly relevant to node degrees. The untrained GIN encoder yields a significant margin over the node features alone, as the GNN encoder incorporates structural information into the node representations.

4The downstream tasks are unspecified because we aim to study the general transferability of GNNs that is not bound to specific tasks. Nevertheless, we assume the tasks to be relevant to graph structures.

While training DGI can further improve the performance on the source graph, EGI shows the best performance there with the structure-relevant node degree features, corroborating the claimed effectiveness of EGI in capturing the essential graph information (i.e., recovering the k-hop ego-graph distributions) as we stress in §3.

When transfering the models to the USA and Brazil networks, EGI further achieves the best performance compared with all baselines when structure-relevant features are used (64.55 and 73.15), which reflects the most significant positive transfer. Interestingly, direct application of GVAE, DGI and MVC, which do not capture the input k-hop graph jointly, leads to rather limited and even negative transferability (through comparison against the untrained GIN encoders). The recently proposed transfer learning frameworks for GNNs like MaskGNN and Structural Pre-train are able to mitigate negative transfer to some extent, but their performance is still inferior to EGI. We believe this is because their models are prone to learning graph-specific information that is less transferable across different graphs.
GMI is also known to capture both graph structure and node features, so it achieves the second-best results compared with EGI.

Similarly to Table 1, we also compute the structural differences among the three networks w.r.t. the EGI gap in Eq. 6. The structural difference is 0.869 between the Europe and USA networks, and 0.851 between the Europe and Brazil networks, which are quite close. Consequently, the transferability of EGI regarding its performance gain over the untrained GIN baseline is 4.8% on the USA network and 4.4% on the Brazil network, which are also close. Such observations again align well with our conclusion in Theorem 3.1 that the transferability of EGI is closely related to the structural differences between source and target graphs.

On the Gene dataset, with more graphs available, we focus on EGI to further validate the utility of Eq. 5 in Theorem 3.1, regarding the connection between the EGI gap (Eq. 6) and the performance gap (micro-F1) of EGI. Due to severe label imbalance that removes the performance gaps, we only use the seven brain cancer networks that have a more consistent balance of labels. As shown in Figure 3, we train EGI on one graph and test it on the other graphs. The x-axis shows the EGI gap, and the y-axis shows the improvement in micro-F1 compared with an untrained GIN. The negative correlation between the two quantities is obvious. Specifically, when the structural difference is smaller than 1, positive transfer is observed (upper left area), as the performance of the transferred EGI is better than that of the untrained GIN, and when the structural difference becomes large (> 1), negative transfer is observed. We also notice a similar graph pattern, i.e., a single dense cluster, between the source graph and the positively transferred target graph G2." }, { "heading": "4.2 Few-shot learning on relation prediction", "text": "Here we evaluate EGI in the more generalized and practical setting of few-shot learning on the less structure-relevant task of relation prediction, with task-specific node features and fine-tuning. The source graph contains a cleaned full dump of 579K entities from YAGO [49], and we investigate 20-shot relation prediction on a target graph with 24 relation types, which is a sub-graph of 115K entities sampled from the same dump. In post-fine-tuning, the models are pre-trained with an unsupervised loss on the source graph and fine-tuned with the task-specific loss on the target graph. In joint-fine-tuning, the same pre-trained models are jointly optimized w.r.t. the unsupervised pre-training loss
}, { "heading": "5 Conclusion", "text": "To the best of our knowledge, this is the first research effort towards establishing a theoretically grounded framework to analyze GNN transferability, which we also demonstrate to be practically useful for guiding the design and conduct of transfer learning with GNNs. For future work, it is intriguing to further strengthen the bound with relaxed assumptions, rigorously extend it to the more complicated and less restricted settings regarding node features and downstream tasks, as well as analyze and improve the proposed framework over more transfer learning scenarios and datasets. It is also important to protect the privacy of pre-training data to avoid potential negative societal impacts.\nAcknowledgments and Disclosure of Funding\nResearch was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004, SocialSim Program No. W911NF-17-C-0099, and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897. Chao Zhang is supported NSF IIS-2008334, IIS-2106961, and ONR MURI N00014-17-1-2656. We would like to thank AWS Machine Learning Research Awards program for providing computational resources for the experiments in this paper. This work is also partially supported by the internal funding and GPU servers provided by the Computer Science Department of Emory University. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government." } ]
2022
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
SP:40cba7b6c04d7e44709baed351382c27fa89a129
[ "The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions on the data which admit a compact definition. Compact definitions are those that are formed by composition of a small number of predefined (prior) set of mathematical operations. Projection and lifting operations are defined to relate descriptions of partition cells to one another through rules. The quality of a description is measured by the divergence between the data and the (special) lifting of the rule set, under the constraint that rules satisfy an upper bound on their entropy." ]
Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize the signal, we need several disentangled rules arranged in a hierarchy, formalized by a lattice structure. ILL focuses on explainability and generalizability from “small data”, and aims for rules akin to those humans distill from experience (rather than a representation optimized for a specific task like classification). This paper focuses on a mathematical and algorithmic presentation of ILL, then demonstrates how ILL addresses the core question “what makes X an X” or “what makes X different from Y” to create effective, rule-based explanations designed to help human learners understand. The key part here is what rather than tasks like generating X or predicting labels X,Y. Typical applications of ILL are presented for artistic and scientific knowledge discovery. These use ILL to learn music theory from scores and chemical laws from molecule data, revealing relationships between domains. We include initial benchmarks and assessments for ILL to demonstrate efficacy.
[]
[ { "authors": [ "Amina Adadi", "Mohammed Berrada" ], "title": "Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "Karell Bertet", "Michel Morvan" ], "title": "Computing the sublattice of a lattice generated by a set of elements", "venue": "In Proc. 3rd Int. Conf. Orders, Algorithms Appl.,", "year": 1999 }, { "authors": [ "Karell Bertet", "Michel Morvan", "Lhouari Nourine" ], "title": "Lazy completion of a partial order to the smallest lattice", "venue": "In Proc. 2nd Int. Symp. Knowl. Retr., Use and Storage for Effic. (KRUSE", "year": 1997 }, { "authors": [ "Christina Bodurow" ], "title": "Music and chemistry—what’s the connection", "venue": "Chem. Eng. News,", "year": 2018 }, { "authors": [ "Nathalie Caspard", "Bruno Leclerc", "Bernard Monjardet" ], "title": "Finite Ordered Sets: Concepts, Results and Uses. Number 144 in Encyclopedia of Mathematics and its Applications", "venue": null, "year": 2012 }, { "authors": [ "Gregory J Chaitin" ], "title": "Algorithmic Information Theory", "venue": null, "year": 1987 }, { "authors": [ "Nick Chater", "Paul Vitányi" ], "title": "Simplicity: A unifying principle in cognitive science", "venue": "Trends Cogn. Sci.,", "year": 2003 }, { "authors": [ "François Chollet" ], "title": "On the measure of intelligence", "venue": "arXiv:1911.01547v2 [cs.AI],", "year": 2019 }, { "authors": [ "Erhan Çınlar" ], "title": "Probability and Stochastics, volume 261", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Thomas M Cover", "Joy A Thomas" ], "title": "Elements of Information Theory", "venue": null, "year": 2012 }, { "authors": [ "Constantinos Daskalakis", "Richard M Karp", "Elchanan Mossel", "Samantha J Riesenfeld", "Elad Verbin" ], "title": "Sorting and selection in posets", "venue": "SIAM J. Comput.,", "year": 2011 }, { "authors": [ "Brian A Davey", "Hilary A Priestley" ], "title": "Introduction to Lattices and Order", "venue": null, "year": 2002 }, { "authors": [ "Benjamin Eva" ], "title": "Principles of indifference", "venue": "J. Philos.,", "year": 2019 }, { "authors": [ "Ruma Falk", "Clifford Konold" ], "title": "Making sense of randomness: Implicit encoding as a basis for judgment", "venue": "Psychol. Rev.,", "year": 1997 }, { "authors": [ "Bernhard Ganter", "Rudolf Wille" ], "title": "Formal Concept Analysis: Mathematical Foundations", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Vijay K Garg" ], "title": "Introduction to Lattice Theory with Computer Science Applications", "venue": "Wiley Online Library,", "year": 2015 }, { "authors": [ "Lejaren Hiller", "Leonard Maxwell Isaacson" ], "title": "Illiac Suite, for String Quartet", "venue": "New Music Edition,", "year": 1957 }, { "authors": [ "Steven Holtzen", "Todd Millstein", "Guy Van den Broeck" ], "title": "Generating and sampling orbits for lifted probabilistic inference", "venue": "arXiv:1903.04672v3 [cs.AI],", "year": 2019 }, { "authors": [ "Anubhav Jain", "Shyue Ping Ong", "Geoffroy Hautier", "Wei Chen", "William Davidson Richards", "Stephen Dacek", "Shreyas Cholia", "Dan Gunter", "David Skinner", "Gerbrand Ceder", "Kristin A. 
Persson" ], "title": "The Materials Project: A materials genome approach to accelerating materials innovation", "venue": "APL Materials,", "year": 2013 }, { "authors": [ "Michael I Jordan" ], "title": "Artificial intelligence—the revolution hasn’t happened yet", "venue": "Harvard Data Science Review,", "year": 2019 }, { "authors": [ "David Kaiser", "Jonathan Moreno" ], "title": "Self-censorship is not", "venue": "enough. Nature,", "year": 2012 }, { "authors": [ "Martin Kauer", "Michal Krupka" ], "title": "Subset-generated complete sublattices as concept lattices", "venue": "In Proc. 12th Int. Conf. Concept Lattices Appl., pp", "year": 2015 }, { "authors": [ "Kristian Kersting" ], "title": "Lifted probabilistic inference", "venue": "In Proc. 20th European Conf. Artif. Intell. (ECAI", "year": 2012 }, { "authors": [ "Risi Kondor", "Shubhendu Trivedi" ], "title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups", "venue": "[stat.ML],", "year": 2018 }, { "authors": [ "Holbrook Mann MacNeille" ], "title": "Partially ordered sets", "venue": "Trans. Am. Math. Soc.,", "year": 1937 }, { "authors": [ "Gary Marcus" ], "title": "Innateness, AlphaZero, and artificial intelligence", "venue": "[cs.AI],", "year": 2018 }, { "authors": [ "Christoph Molnar" ], "title": "Interpretable Machine Learning", "venue": "Lulu.com,", "year": 2019 }, { "authors": [ "Andreas D Pape", "Kenneth J Kurtz", "Hiroki Sayama" ], "title": "Complexity measures and concept learning", "venue": "J. Math. Psychol.,", "year": 2015 }, { "authors": [ "Uta Priss" ], "title": "Formal concept analysis in information science", "venue": "Ann. Rev. Inform. Sci. Tech.,", "year": 2006 }, { "authors": [ "Anna Rogers", "Olga Kovaleva", "Anna Rumshisky" ], "title": "A primer in BERTology: What we know about how BERT works", "venue": "[cs.CL],", "year": 2020 }, { "authors": [ "Andrew D Selbst", "Danah Boyd", "Sorelle A Friedler", "Suresh Venkatasubramanian", "Janet Vertesi" ], "title": "Fairness and abstraction in sociotechnical systems", "venue": "In Proc. Conf. Fairness, Account., and Transpar.,", "year": 2019 }, { "authors": [ "Claude Shannon" ], "title": "The lattice theory of information", "venue": "Trans. IRE Prof. Group Inf. Theory,", "year": 1953 }, { "authors": [ "Charles Percy Snow" ], "title": "The Two Cultures", "venue": null, "year": 1959 }, { "authors": [ "Harini Suresh", "John V Guttag" ], "title": "A framework for understanding unintended consequences of machine learning", "venue": "[cs.LG],", "year": 2019 }, { "authors": [ "Dmitri Tymoczko" ], "title": "A Geometry of Music: Harmony and Counterpoint in the Extended Common Practice", "venue": null, "year": 2010 }, { "authors": [ "Haizi Yu", "Lav R. Varshney" ], "title": "Towards deep interpretability (MUS-ROVER II): Learning hierarchical representations of tonal music", "venue": "In Proc. 5th Int. Conf. Learn. Represent", "year": 2017 }, { "authors": [ "Haizi Yu", "Lav R Varshney", "Guy E Garnett", "Ranjitha Kumar" ], "title": "MUS-ROVER: A self-learning system for musical compositional rules", "venue": "In Proc. 4th Int. Workshop Music. Metacreation (MUME", "year": 2016 }, { "authors": [ "Haizi Yu", "Tianxi Li", "Lav R Varshney" ], "title": "Probabilistic rule realization and selection", "venue": "In Proc. 31th Annu. Conf. Neural Inf. Process. Syst. (NeurIPS 2017),", "year": 2017 }, { "authors": [], "title": "PX}. 
That is, the subset lattice is also the lattice comprising all concepts from all partitions of X , which can be then called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many", "venue": null, "year": 2016 }, { "authors": [ "Sokol (Sokol" ], "title": "2016), a music professor at York University, which we quote below: “The idea of Figured Soprano is simply a way of taking this thinking from the top-down and bringing it into greater prominence as a creative gesture. So these exercises are not anything new in their ideation, but they can bring many new ideas, chord progressions and much else. It’s a somewhat neglected area of harmonic study and it’s a lot of fun to play with.", "venue": null, "year": 2016 }, { "authors": [ "transparency", "explainability" ], "title": "Extensions to ILL could enable it to better cooperate with other models, e.g., as a pre-processing or a post-interpretation tool to achieve superior task performance as well as controllability and interpretability. One such possibility could leverage ILL to analyze the attention matrices (as signals) learned from a Transformer-based NLP model like BERT or GPT (Rogers et al., 2020)", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "With rapid progress in AI, there is an increasing desire for general AI (Goertzel & Pennachin, 2007; Chollet, 2019) and explainable AI (Adadi & Berrada, 2018; Molnar, 2019), which exhibit broad, human-like cognitive capacities. One common pursuit is to move away from “black boxes” designed for specific tasks to achieve broad generalization through strong abstractions made from only a few examples, with neither unlimited priors nor unlimited data (“primitive priors” & “small data” instead). In this pursuit, we present a new, task-nonspecific framework—Information Lattice Learning (ILL)— to learn representations akin to human-distilled rules, e.g., producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students (shown at the end).\nThe term information lattice was first defined by Shannon (1953), but remains largely conceptual and unexplored. In the context of abstraction and representation learning, we independently develop representation lattices that coincide with Shannon’s information lattice when restricted to his context. Instead of inventing a new name, we adopt Shannon’s. However, we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but we also bring learning into the lattice, yielding the name ILL.\nILL explains a signal (e.g., a probability distribution) by disentangled representations, called rules. A rule explains some but not all aspects of the signal, but together the collection of rules aims to capture a large part of the signal. ILL is specially designed to address the core question “what makes X an X” or “what makes X different from Y”, emphasizing the what rather than generating X or predicting labels X,Y in order to facilitate effective, rule-based explanations designed to help human learners understand. A music AI classifying concertos, or generating one that mimics the masters, does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one. Our focus represents a shift from much representation-learning work (Bengio et al., 2013) that aim to find the best representation for solving a specific task (e.g., classification) rather than strong concern for explainability. Instead of optimizing a task-specific objective function (e.g., classification error), ILL balances more general objectives that favor fewer, simpler rules for interpretability, and more essential rules for effectiveness—all formalized later.\nOne intuition behind ILL is to break the whole into simple pieces, similar to breaking a signal into a Fourier series. Yet, rather than decomposition via projection to orthonormal basis and synthesis\nvia weighted sum, we decompose a signal in a hierarchical space called a lattice. Another intuition behind ILL is feature selection. Yet, rather than features, we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning. The goal is to restore human-like, hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice (called projection-and-lifting, Figure 1: left), resulting in more than a sum of parts.\nILL comprises two phases: (a) lattice construction; (b) learning (i.e., searching) in the lattice. 
This is similar to many machine learning (ML) models comprising (a) function class specification then (b) learning in the function class, e.g., constructing a neural network then learning—finding optimal parameters via back-propagation—in the network. ILL’s construction phase is prior-efficient: it builds in universal priors that resemble human innate cognition (cf. the Core Knowledge priors (Spelke & Kinzler, 2007)), then grows a lattice of abstractions. The priors can be customized, however, to cater to a particular human learner, or to facilitate more exotic knowledge discovery. ILL’s learning phase is data-efficient: it learns from “small data” encoded by a signal, but searches for rich explanations of the signal via rule learning, wherein abstraction is key to “making small data large”. Notably, the construction phase is prior-driven, not data-driven—data comes in only at the learning phase. Hence, the same construction may be reused in different learning phases for different data sets or even data on different topics (Figure 1: right). Featuring these two phases, ILL is thus a hybrid model that threads the needle between a fully data-driven model and a fully prior-driven model, echoing the notion of “starting like a baby; learning like a child” (Hutson, 2018).
ILL is related to many research areas. It draws ideas and approaches from lattice theory, information theory, group theory, and optimization. It shares algorithmic similarity with a range of techniques including MaxEnt, data compression, autoencoders, and compressed sensing, but with a much greater focus on achieving human-like explainability and generalizability. Below, we broadly compare ILL to prominent related models, leaving more detailed comparisons with the most similar models to the Appendix.
Compared to deep learning, ILL is a “white-box” model balancing human-explainability and task performance.
Compared to Bayesian inference, ILL models human reasoning with widely shared, common priors and few, simple rules, rather than using probabilistic inference as the driving force.
Compared to tree-like models, ILL is structurally more general: a tree (e.g., a decision tree or hierarchical clustering) is essentially a linear lattice (formally, a chain) depicting a unidirectional refinement or coarsening process.
Compared to a concept lattice in FCA (Ganter & Wille, 2012), ILL is conceptually more general and may include both known and unknown concepts; ILL does not require but discovers domain knowledge (more details in Appendix A).
We illustrate ILL applications by learning music theory from scores and chemical laws from compounds, and show how ILL’s common priors facilitate mutual interpretation between the two subjects. To begin, imagine Tom and Jerry are playing two 12-key pianos simultaneously, one note at a time (Figure 1: right). The frequency of the played two-note chords gives a 2D signal plotted as a 12 × 12 grayscale heatmap. Inspecting this heatmap, what might be the underlying rules that govern their co-play? (Check: all grey pixels have a larger “Jerry-coordinate” and project to a black key along the “Tom-axis”.) We now elaborate on ILL and use it to distill rules for complex, realistic cases." }, { "heading": "2 INFORMATION LATTICE: ABSTRACTIONS AND RULES OF A SIGNAL", "text": "Signal. A signal is a function ξ : X → R. For notational brevity and computational reasons, assume ξ is non-negative and X ⊆ Rn is finite (not a limitation: see Appendix B). For example, a signal ξ : {1, . . . , 6} → R can be the probability mass function (pmf) of a die roll, and a signal ξ : {0, . . . , 27}² → R can be a 28 × 28 grayscale image.
We denote the set of all signals on X by SX.
Partition / abstraction. We use a partition P of a set X to denote an abstraction of X; we call a cell C ∈ P an (abstracted) concept. The intuition is simple: a partition of a set renders a “coarse-grained view” of the set, or more precisely, an equivalence relation on the set. In this view, we identify equivalence classes of elements (concepts) instead of individual elements. For example, the partition P = {{1, 3, 5}, {2, 4, 6}} of the six outcomes of the roll of a die identifies two concepts (odd, even).
Rule / representation. A rule of a signal ξ : X → R is a “coarsened” signal rξ : P → R defined on a partition P of X with rξ(C) := ∑_{x∈C} ξ(x) for any C ∈ P. In this paper, a rule of a signal is what we mean by a representation of a signal. If the signal is a grayscale image, a rule can be a special type of blurring or downsampling of the image; if the signal is a probability distribution, a rule can be a pmf of the “orbits” of the distribution for lifted inference algorithms (Holtzen et al., 2019; Kersting, 2012). More generally, we define a rule (regardless of any signal) over a set X by any signal on any partition of X; accordingly, we denote the set of all rules over X by RX := ∪_{P∈{all partitions of X}} SP.
Partition lattice. Abstractions are hierarchical: one coarse-grained view can be coarser than another. Let the partition lattice (PX, ⪯) of a set X be the partially ordered set (poset) containing all partitions of X, equipped with the partial order coarser than (⪯), or finer than (⪰), defined in the standard way. The finest partition is {{x} | x ∈ X} and the coarsest is {X}. Per general lattice theory (Davey & Priestley, 2002), PX is a complete lattice: every subset P ⊆ PX has a unique supremum ∨P and a unique infimum ∧P, where ∨P is called the join of P, denoting its coarsest common refinement, and ∧P is called the meet of P, denoting its finest common coarsening.
Information lattice. The information lattice (Rξ, ⇐) of a signal ξ : X → R is the poset of all rules of ξ equipped with the partial order more general than: for any two rules r, r′ ∈ Rξ, we say r is more general than r′ (or r′ is more specific), denoted r ⇐ r′, if domain(r) ⪯ domain(r′). Notably, Rξ ⊆ RX and Rξ is isomorphic to the underlying partition lattice via the projection defined below.
Projection and lifting. For any signal ξ ∈ SX, we define the projection operator ↓ξ : PX → Rξ by letting ↓ξ(P) be the rule of ξ on P. One can check that ↓ξ : (PX, ⪯) → (Rξ, ⇐) is an isomorphism. Conversely, we define the general lifting operator ⇑X : RX → 2^{SX} by letting ⇑X(r) denote the set of all signals that satisfy the rule r, i.e., ⇑X(r) := {ξ ∈ SX | ↓ξ(domain(r)) = r} ⊆ SX. To make lifting unique, and per Principles of Indifference (Eva, 2019), we introduce a special lifting ↑X(r) to pick the most “uniform” signal in ⇑X(r). Formally, define ‖·‖q : SX → R by ‖ξ‖q := (∑_{x∈X} ξ(x)^q)^{1/q}. For any ξ, ξ′ ∈ SX satisfying ‖ξ‖1 = ‖ξ′‖1, we say that ξ is more uniform than ξ′ (or ξ′ is more deterministic) if ‖ξ‖2 ≤ ‖ξ′‖2. We define the (special) lifting operator ↑X : RX → SX by ↑X(r) := argmin_{ξ∈⇑X(r)} ‖ξ‖2 (which can be computed by simply averaging). Notation here follows the convention for function projections onto quotient spaces (Kondor & Trivedi, 2018). Lifting a single rule to the signal domain can be extended in two ways: (a) lift to a finer rule domain P instead of X, i.e., ⇑P(r) or ↑P(r); (b) lift more than one rule.
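To make these definitions concrete, below is a minimal Python sketch of projection, single-rule (special) lifting, and rule entropy for a finite signal. This is our own illustration, not code from the paper; the dict/frozenset encoding and the function names are assumptions.

```python
import math

def project(signal, partition):
    """Rule of a signal on a partition: sum the signal over each cell."""
    return {cell: sum(signal[x] for x in cell) for cell in partition}

def lift_single(rule):
    """Special lifting of one rule: spread each cell's mass uniformly."""
    return {x: value / len(cell) for cell, value in rule.items() for x in cell}

def rule_entropy(rule):
    """Shannon entropy of a rule, treating its normalized values as a pmf."""
    total = sum(rule.values())
    return -sum((v / total) * math.log2(v / total) for v in rule.values() if v > 0)

# The die-roll example from the text: the odd/even abstraction.
xi = {x: 1.0 for x in range(1, 7)}              # uniform counts on {1,...,6}
P = [frozenset({1, 3, 5}), frozenset({2, 4, 6})]
r = project(xi, P)                              # one concept each for odd/even
assert lift_single(r) == xi                     # averaging recovers xi exactly
print(rule_entropy(r))                          # 1.0 bit: a maximally random rule
```

Here the lifting recovers the signal exactly only because the original signal happens to be uniform within each cell; in general the special lifting loses information, as discussed in Section 3.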
Accordingly, we write ⇑ := ⇑X and ↑ := ↑X as defaults, write R = ↓ξ(P) := {↓ξ(P) | P ∈ P} ⊆ Rξ to denote a rule set, and write ⇑(R) := ∩_{r∈R} ⇑(r) = {η ∈ SX | ↓η(P) = R} and ↑(R) := argmin_{η∈⇑(R)} ‖η‖2 to denote the signals that satisfy all rules in R (general lifting) and the most uniform one among them (special lifting), respectively. More computational details on lifting and its intimate relation to join are in Appendix C." }, { "heading": "3 INFORMATION LATTICE LEARNING (ILL)", "text": "We first formalize ILL as a single optimization problem and then solve it practically in two phases. Let ξ : X → R be a signal we want to explain. By explaining, we mean to search for a rule set R = ↓ξ(P) ⊆ Rξ such that: (a) R recovers ξ well, i.e., R is essential; (b) R is simple. The main idea agrees with Algorithmic Information Theory (Chaitin, 1987; Chater & Vitányi, 2003), but we use an information-lattice-based formulation focusing on explainability. We start our formulation below.
We say a rule set R recovers the signal ξ exactly if ↑(R) = ξ. Yet, exact recovery may not always be achieved. The information loss occurs for two reasons: (a) insufficient abstractions, i.e., the join ∨P is strictly coarser than the finest partition of X; (b) the choice made in favor of uniformity is inappropriate. Instead of pursuing exact recovery, we introduce ∆(↑(R), ξ)—a distance (e.g., ℓp distance) or a divergence (e.g., KL divergence) function—to measure the loss, with a smaller ∆ indicating a more essential R. We say a rule set R is simpler if it contains fewer and simpler rules. Formally, we want R minimal, i.e., each rule r ∈ R is indispensable so as to achieve the same ↑(R). Also, we want each rule r ∈ R informationally simple, measured by a smaller Shannon entropy Ent(r), so that r is more deterministic (Falk & Konold, 1997), easier to remember (Pape et al., 2015), and closer to our common-sense definition of a “rule”. Notably, the partial order renders a tradeoff between the two criteria: r ⇐ r′ implies r is dispensable in any R ⊇ {r, r′}, but on the other hand Ent(r) ≤ Ent(r′); so including more-specific rules makes the rule set small, yet each individual rule (informationally) hard.
The main problem. The formal definition of an ILL problem is: given a signal ξ : X → R,

minimize_{R⊆Rξ} ∆(↑(R), ξ)  subject to  R is minimal; Ent(r) ≤ ε for any r ∈ R.  (1)

The search space involves the full information lattice (Rξ, ⇐), or isomorphically, the full partition lattice (PX, ⪯). Yet, the size of this lattice, i.e., the Bell number B_{|X|}, scales faster than exponentially in |X|. It is unrealistic to compute all partitions of X (unless X is tiny), let alone the partial order. Besides computational concerns, there are two reasons to avoid the full lattice (but to leave it implicitly in the background): (a) the full lattice has unnecessarily high resolution, comprising many nearly-identical partitions, particularly when X is large; (b) considering explainability, not every partition has an easy-to-interpret criterion by which the abstraction is made. As such, Formulation (1) is only conceptual and impractical. Next, we relax it and make it practical via two ILL phases." }, { "heading": "3.1 PRACTICAL LATTICE CONSTRUCTION: TO START LIKE A BABY (PHASE I)", "text": "Information lattice construction plays a role similar to building a function class in ML, sometimes called meta-learning. 
While its importance is commonly understood, the construction phase in many data-driven models is often treated cursorily—using basic templates and/or ad-hoc priors—leaving most of the computation to the learning phase. In contrast, we put substantial effort into our prior-driven construction phase. Pursuing generality and interpretability, we want universal, simple priors that are domain-agnostic and close to the innate cognition of a human baby (Marcus, 2018). Here we draw those from Core Knowledge (Spelke & Kinzler, 2007; Chollet, 2019), which include “the (small) natural numbers and elementary arithmetic prior” and “the elementary geometry and topology prior”. We then give algorithms to construct abstractions from these priors, and consider such a construction prior-efficient if it is interpretable, expressive, and systematic. In the following flowchart, we summarize information lattice construction as generating a partition sublattice.

[Flowchart (Steps 1–5): seeds (priors) F, S → features/symmetries Φ〈F〉, G〈S〉 → partition multiset P〈F,S〉 = PΦ〈F〉 ∪ PG〈S〉 → partition poset (P〈F,S〉, ⪯) → partition semilattice 〈P〈F,S〉〉∨ → partition sublattice 〈P〈F,S〉〉∨∧···. Steps 1–2 form the prior-driven stage, Step 3 the hierarchy stage, and Steps 4–5 the completion stage.]

Steps 1–2: Feature/symmetry-induced partitions. Unlike data clustering, our prior-driven partitions are induced from two data-independent sources—features and symmetries. We draw priors—in the form of seed features F and seed transformations S—from Core Knowledge as a basis, and then generate a set of partitions P〈F,S〉 as follows (e.g., for X ⊆ R²):

F = {w[1], w[2], w[1,2], sort, argsort, sum, diff, div2, . . . , div19, mod2, . . . , mod19}  (2)
S = {horizontal, vertical, diagonal translations} ∪ {rotations} ∪ {reflections}  (3)

Φ〈F〉: set of features generated by F via function composition
G〈S〉: set of subgroups generated by subsets of S via subgroup generation
PΦ〈F〉: set of partitions generated by features in Φ〈F〉 via preimages
PG〈S〉: set of partitions generated by subgroups in G〈S〉 via orbits

In (2), wI denotes coordinate selection (like indexing/slicing in Python) and the other functions are defined as in Python (div and mod are like Python's divmod). Then, P〈F,S〉 = PΦ〈F〉 ∪ PG〈S〉. A minimal code sketch of the feature-induced half follows.
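The sketch below is our illustration of how a composed feature defines a partition of X by its preimages; the closure under composition, the subgroup generation, and the symmetry-induced (orbit) partitions are omitted for brevity.

```python
from itertools import product

X = list(product(range(28), range(28)))        # e.g., 28 x 28 pixel coordinates

def partition_from_feature(feature, X):
    """Cells are the preimages feature^{-1}(v): points sharing a feature value."""
    cells = {}
    for x in X:
        cells.setdefault(feature(x), set()).add(x)
    return [frozenset(cell) for cell in cells.values()]

def w(i):            # coordinate selection w[i] (0-indexed here)
    return lambda x: x[i]

def div(k, f):       # divk o f, as in Python's floor division
    return lambda x: f(x) // k

def mod(k, f):       # modk o f
    return lambda x: f(x) % k

P1 = partition_from_feature(div(7, w(0)), X)   # 4 bands along the 1st coordinate
P2 = partition_from_feature(mod(2, w(1)), X)   # parity stripes along the 2nd
print(len(P1), len(P2))                        # 4 2
```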
Step 3: Partition poset. We next sort P〈F,S〉, computationally a multiset, into the poset (P〈F,S〉, ⪯). We import the algorithmic skeleton from generic poset-sorting algorithms (Caspard et al., 2012; Daskalakis et al., 2011), with an outer routine incrementally adding elements and querying an inner subroutine (an oracle) for pairwise comparison. Yet, our poset is special: its elements are tagged partitions, where a tag records the generating source(s) of its tagged partition, e.g., features and/or symmetries. So, we have specially designed both the outer routine ADD_PARTITION and the oracle COMPARE by leveraging (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to pre-determine or filter relations. We relegate details to Appendix E. The data structures for posets include po_matrix and hasse_diagram, encoding the partial order ≺ (ancestors/descendants) and the cover relation ≺c (parents/children), respectively (Garg, 2015).

Steps 4–5: Partition semi/sublattice. To complete (P〈F,S〉, ⪯) into a lattice, we compute the sublattice (of PX) generated by P〈F,S〉. We follow the idea of alternating-join-and-meet completions borrowed from one of the two generic sublattice-completion methods (Bertet & Morvan, 1999). A discussion on our choice and other related methods is in Appendix D. However, we implement join-semilattice completion (meet-semilattice completion is dual) in our special context of tagged partitions, which echoes what we did in Step 3 and reuses ADD_PARTITION. The adjustments are (a) changing tags from features and symmetries to join formulae and (b) changing the inner subroutine from pairwise comparison to computing joins. We then run a sequence of alternating joins and meets to complete the lattice. For interpretability, one may want to stop early in the completion sequence. While a single join or meet remains simple for human interpretation—often understood as the intersection or union of concepts (e.g., the join of colored items and sized items gives items indexed by color and size)—having alternating joins and meets may hinder comprehension. More details on a single-step join-semilattice completion, the completion sequence, and tips on early stopping are relegated to Appendix E." }, { "heading": "3.2 PRACTICAL LATTICE LEARNING: TO LEARN LIKE A CHILD (PHASE II)", "text": "Learning in an information lattice means solving the optimization Problem (1), i.e., searching for a minimal subset of simple rules from the information lattice of a signal so as to best explain that signal. Let P• be the sublattice (or semilattice, or poset, if early stopped) from the construction phase. Projecting a signal ξ : X → R to P• yields the information sublattice R• := ↓ξ(P•) ⊆ Rξ. It is worth reiterating that (a) P• is constructed first and is data-independent; (b) ξ (data) comes after P•; (c) (R•, ⇐) is isomorphic to (P•, ⪯): R• retains the partial order (po_matrix and hasse_diagram) and interpretability from P•. As such, R• is what is given at the beginning of the learning phase.
The main problem (relaxed). For practicality, we relax Problem (1): instead of the full lattice Rξ, we restrict the search space to R•; instead of minimal rule sets, we consider only antichains (whose elements are mutually incomparable), a necessary condition for minimality. This yields:

minimize_{R⊆R•} ∆(↑(R), ξ)  subject to  R is an antichain; Ent(r) ≤ ε for any r ∈ R.  (4)

To solve Problem (4), we adopt a greedy idea similar to principal component analysis (PCA): we first search for the most essential rule—the one that decreases ∆ most—in explaining the signal, then the second most essential rule in explaining the rest of the signal, and so on. Specifically, we start with an empty rule set R(0) := ∅, and add rules iteratively. Let R(k) be the rule set formed by Iteration (Iter) k and R(k)⇐ := {r ∈ R• | r ⇐ r′ for some r′ ∈ R(k)}. Let R≤ε := {r ∈ R• | Ent(r) ≤ ε}. Then,

(in Iter k + 1)  minimize ∆(↑(R(k) ∪ {r}), ξ)  subject to  r ∈ R(k)feasible := R≤ε − R(k)⇐.  (5)

We pre-compute R≤ε (instead of the whole R•) before the iterations, which can be done by a breadth-first search (BFS) on P•'s hasse_diagram, from the bottom (the coarsest) up. By the monotonicity of Ent w.r.t. the partial order (cf. the grouping axiom of entropy (Cover & Thomas, 2012)), any BFS branch ends once the entropy exceeds ε. (For later use, we save the set R>ε of ending rules in the BFS, i.e., the lower frontier of R>ε.) In contrast, R(k)⇐ is computed per iteration (by querying P•'s po_matrix). A sketch of this BFS pre-computation follows.
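This sketch is our illustration; `finer_covers` and `ent` are assumed accessors for the Hasse diagram's edges in the coarse-to-fine direction and for rule entropy.

```python
from collections import deque

def sublevel_bfs(seeds, finer_covers, ent, eps):
    """Climb the Hasse diagram from coarse to fine; prune once Ent exceeds eps.
    Valid because Ent is monotone under refinement (grouping axiom).
    Returns (R_le, frontier): rules with Ent <= eps, and the ending rules that
    seed the next BFS when eps is later increased."""
    R_le, frontier = set(), set()
    seen = set(seeds)
    queue = deque(seeds)
    while queue:
        p = queue.popleft()
        if ent(p) > eps:
            frontier.add(p)          # the lower frontier of R_{>eps}
            continue
        R_le.add(p)
        for q in finer_covers(p):
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return R_le, frontier

# Along the eps-path, the (i+1)-th call resumes from the i-th frontier:
# R1, F1 = sublevel_bfs({coarsest}, finer_covers, ent, eps1)
# R2, F2 = sublevel_bfs(F1, finer_covers, ent, eps2)   # cumulative set: R1 | R2
```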
Nested vs. alternating optimization. Computing ↑(R(k) ∪ {r}) requires solving a minimization, so Problem (5) is a nested optimization: argmin_{r∈R(k)feasible} ∆(argmin_{η∈⇑(R(k)∪{r})} ‖η‖2, ξ). One may de-nest the two: instead of comparing rules by lifting them up to the signal domain, we compare them “downstairs” on their own rule domains. So, instead of minimizing (5)'s objective, we

maximize_{r ∈ R≤ε − R(k)⇐}  ∆(↓↑(R(k))(domain(r)), ↓ξ(domain(r))) = ∆(↓↑(R(k))(domain(r)), r).  (6)

The idea is to find the rule domain on which the recovered ↑(R(k)) and the target signal ξ exhibit the largest gap. Adding this rule to the rule set maximally closes the gap in (6), and tends to minimize the original objective in (5). Nicely, in (6) the lifting does not involve r, so (5) is de-nested, which further iterates into an alternating min-max (or lift-project) optimization. Let r(k)⋆ be the solution and ∆(k)⋆ be the optimal value in Iter k. We update R(k+1) := R(k) ∪ {r(k+1)⋆} − {r(k+1)⋆'s descendants} (so R(k+1) is always an antichain), and proceed to the next iteration. Iterations end whenever the feasible set is empty, or may end early if the rule becomes less essential, measured by |∆(k+1)⋆ − ∆(k)⋆| ≤ γ in the nested setting, and ∆(k)⋆ ≤ γ in the alternating setting (for some γ).
The full learning path & complexity. We denote a solve process for Problem (6) by SOLVE(ε, γ), or SOLVE(ε) if γ is fixed ahead. To avoid tuning ε manually, we solve an ε-path. For ε1 < ε2 < ···, assuming SOLVE(εi) takes Ki iterations, we run the following to solve the main relaxed Problem (4):

∅ = R(0) → SOLVE(ε1) → R(K1) → SOLVE(ε2) → R(K1+K2) → ···  (7)

So, lattice learning boils down to solving a sequence of combinatorial optimizations on the Hasse diagram of a lattice. We walk through the full process (7) via a toy example, starting with a signal ξ : {0, . . . , 27}² → [0, 1] denoting an image of “7” and a toy-sized information lattice of the signal (Figure 3A). The sequence of optimizations (7) proceeds at two paces concurrently: the slower pace is indexed by i; the faster pace is indexed by the iteration number k. As mentioned earlier, the sets R≤εi are pre-computed at the slower pace, with the (i + 1)th BFS initialized from R>εi (the ending rules in the ith BFS). The monotonicity of Ent w.r.t. the partial order assures that these BFSs add up to a single (global) BFS on the entire Hasse diagram, climbing up the lattice from the bottom. This is shown in Figure 3B as the monotonic expansion of the blue region (R≤ε) explored by BFS. Locally at each iteration along the slower pace, solving Problem (6) is quadratic in the worst case, when the feasible set is an antichain (i.e., no order), and linear in the best case, when the feasible set is a chain (i.e., totally ordered). Since local BFSs add up to a single BFS with a standard linear complexity, the entire learning phase has a total complexity between linear and quadratic in the number of vertices and edges in the whole Hasse diagram. In general, the denser the diagram is, the lower the complexity is. This is because R(k)⇐ tends to be large in this case, with more descendants activated (i.e., red in Figure 3B), which in turn effectively shrinks the feasible set (i.e., the blue region minus the red). For example, unlike the first three iterations in Figure 3B, the 4th iteration (ε = 3) activates more than one rule, including the one being extracted as well as all its unexplored descendants. Further, the upper bound is rarely reached. Unlike in this toy example, BFS in practice is often stopped early when ε becomes large, i.e., when later rules become more random. Hence, since we target only the more deterministic and disentangled rules, not all vertices and edges are traversed by BFS. A compact sketch of this outer loop follows.
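The sketch below is our illustration of process (7) in the alternating setting; `lat` is a hypothetical helper object bundling the operators and Hasse-diagram queries described above, and `gap` stands for ∆.

```python
def solve_eps_path(xi, lat, eps_path, gamma, gap):
    """Greedy rule extraction along an eps-path; returns the final antichain R.
    Assumed helper methods:
      lat.sublevel(eps):    rules with Ent <= eps (via the BFS above)
      lat.downset(R):       rules more general than some rule in R (po_matrix)
      lat.descendants(r):   rules strictly more general than r
      lat.lift(R):          special lifting of a rule set (uniform fill)
      lat.project(eta, d):  rule of signal eta on rule domain d"""
    R = set()
    for eps in eps_path:
        while True:
            feasible = lat.sublevel(eps) - lat.downset(R)
            if not feasible:
                break
            eta = lat.lift(R)                        # current recovered signal
            score = lambda r: gap(lat.project(eta, r.domain), r)
            r_star = max(feasible, key=score)
            if score(r_star) <= gamma:               # remaining rules inessential
                break
            R = (R | {r_star}) - lat.descendants(r_star)   # keep R an antichain
    return R
```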
At the end of the learning process, for explanatory purposes, we store the entire ε-path and the (R(k))_{k≥0} sequence instead of just the very last one. This yields a rule trace as the standard ILL output, which we present below.
How to read ILL output. ILL outputs a rule trace comprising an evolving sequence of rules, rule sets, and recovered signals (Figure 3C). The three sequences are indexed by iteration and by the ε-path, so the rule set at the last iteration under any ε (starred) is the returned solution to the main Problem (4). We depict a rule by its lifting, since it sketches both the partition and the rule values. Figure 3C gives a full presentation of a rule trace. We also introduce a two-line shorthand (Figure 3D), keeping only the sequence of the recovered signals and that of the rules. A rule trace answers what makes ξ an ξ, or what are the best ε-simple rules explaining ξ. ILL rules are more interpretable than just eyeballing patterns. (a) The interpretability of the trace is manifest in its controllability via ε, γ: a smaller ε for simpler rules and a larger γ for more essential rules. (b) The interpretability of each rule is gained from its partition tag—the criteria by which the abstraction is made. A tag may contain several generating sources as different interpretations of the same rule abstraction. Like different proofs of a theorem, a partition tag with multiple sources reveals equivalent characterizations of a structure and thus more insights into the signal. So, tags are not only computationally beneficial in constructing lattices, but also key to interpretation. We present in-depth analyses of tags in the applications below." }, { "heading": "4 ILL EXAMPLES", "text": "We show typical ILL examples on knowledge discovery in art and science: learning music theory from scores and chemical laws from compounds (while relegating more analyses on handwritten digits to Appendix F). For both, we fix the same priors—F, S in (2)(3)—and thus the same lattice. We fix the same parameters: the ε-path is 0.2 < 3.2 < 6.2 < ··· (tip: a small offset at the beginning, e.g., 0.2, is used to get nearly-deterministic rules) and γ is 20% of the initial signal gap. This fixed setting is used to show generality and for comparison. Yet, the parameters can be fine-tuned in practice.
Music illustration. Signals are probability distributions of chords encoded as vectors of MIDI keys. Figure 4a) shows such a signal—the frequency distribution of two-note chords extracted from the soprano and bass parts of Bach's C-score chorales (Illiac Software, Inc., 2020)—with the learned rule trace listed below. The first rule is tagged by argsort ◦ w[1,2] and has its probability all concentrated in one cell whose elements have a larger y-coordinate (the black region above the diagonal). So, this is a deterministic rule, echoing the law of “no voice crossing (N.V.C.)”, i.e., soprano higher than bass. Checking later rule tags finds laws of voice range (V.R.), diatonic scale (D.S.), and consonant interval (C.I.)—almost all of the main static rules on two-voice counterpoint. Notably, the third rule is tagged by both mod12 ◦ w[1] and vertical translation invariance. From both the feature and symmetry views, this tag identifies the concept of all Cs, all Ds, etc., which is the music concept of pitch class. The feature view explicitly reveals a period of 12 in pitches—the notion of an octave (in defining pitch class); the symmetry view reveals the topology—the manifold where the concepts lie—in this case a 2D torus. A toy recomputation of the first rule is sketched below.
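The sketch below is our illustration with made-up chords, not the paper's data: projecting a (bass, soprano) chord distribution onto the partition tagged by argsort ◦ w[1,2] should put all mass in the "bass below soprano" cell for Bach-style input.

```python
from collections import Counter

chords = [(48, 60), (50, 65), (55, 67), (48, 64), (50, 65)]   # MIDI (bass, soprano)
signal = Counter(chords)                                      # empirical 2D signal

def order_pattern(chord):
    a, b = chord
    return (a < b) - (a > b)      # +1: bass below, 0: unison, -1: crossing

rule = Counter()
for chord, count in signal.items():
    rule[order_pattern(chord)] += count

print(rule)    # Counter({1: 5}) -> a deterministic rule: the N.V.C. law
```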
Chemistry illustration. Signals are boolean-valued functions indicating the presence of compound formulae, encoded as vectors of atomic numbers, in a molecule database. Figure 4b) shows a signal attained by collecting two-element compounds from the Materials Project database (Jain et al., 2013) of common compounds. The first rule, tagged by div18 ◦ w[2], is deterministic: Element 2 can never be Ar, K, Ca. It nicely captures the visual pattern in Figure 4b) (the last three vacant columns) and hints suggestively at some chemistry rules. The second rule, tagged by mod8 ◦ w[2], has peaks at cells tagged by feature values 1, 7, 0, 6. These cells, for Element 2, are halogens (+H), pnictogens, chalcogens, and crystallogens. The third rule shows that alkali metals, alkaline earth metals, crystallogens, and icosagens are the cells common for Element 1. The next rule shows the common combinations, e.g., alkali metals with halogens. Note that the 2nd, 3rd, 4th rules for chemistry and the 5th, 3rd, 4th rules for music share the same tags, except that mod12 becomes mod8—the period changes from 12 (a music octave) to 8 (the number of main groups). So, when two chemical elements form a compound, they are like two music notes forming a chord! The music concepts of pitch classes and intervals parallel the chemical concepts of groups and their distances. Although the abstractions are shared, the rules differ. Instead of a diatonic scale as in Bach's chorales, chemistry uses a “cation scale” and an “anion scale”. It is interesting that our intention to show ILL's generality (same lattice and parameters for different subjects) also suggests links between art and science, by interpreting phenomena (signals) in one subject from the perspective of the other (Bodurow, 2018). Applications that extend the experiment here beyond a clustering model to restore the periodic table (Zhou et al., 2018), and to render complex molecules in high dimensions, are ongoing—aiming to discover new laws, new interpretations of existing laws, and new materials.
Real-world deployment & evaluation. We generalized the music illustration to a real app of an automatic music theorist (Yu et al., 2016; Yu & Varshney, 2017). It specially implements the alternating min-max setting as a “student-teacher” model: the student is a (music) generator and the teacher is a discriminator. The two form a loop where the teacher guides the student towards a target style through iterative feedback (extracting rules) and exercise (applying rules). This app extends the above music illustration considerably. It considers more music voices, so now signals are in higher dimensions and rules concern more complex chord structures. It considers temporal structure, so now signals include many (un)conditional chord distributions (multi-n-grams), yielding both context-free and context-dependent rules, but also new challenges, namely rare contexts and contradictory rules. ILL's core idea of abstraction makes “small data” large and thus rare contexts common (Yu & Varshney, 2017), and a redesigned lifting operator resolves contradictions (Yu et al., 2017). Further, parameters like ε, γ are made into self-explanatory knobs for users to personalize their learning pace.
We conducted two studies to assess rule-learning capability and interpretability. We present the main results here and detail the procedures in Appendix G. In the first study, we compared ILL-discovered rules with human-codified domain knowledge to see how much of the known can be reproduced and how much new can be discovered. 
Trained on just 370 of Bach's chorales, our model reproduced in explicit
forms 66% of a standard music theory curriculum (Figure 5A). In the rest, about 26% (e.g., harmonic functions and music forms) was implicitly hinted at by the current n-gram based model, which models only transitions of abstractions but not explicitly abstractions of transitions—a future direction. In the second study, we ran a human-subject experiment in the form of homework for a music class. The homework asked 23 students to write verbal interpretations of ILL-generated rules rendered as histograms over tagged partitions. Grading was based on a rubric of keywords generated via majority vote in a later discussion among students and teachers. Figure 5B shows that the majority (2/3) of the students who did the homework succeeded (w.r.t. the 30/50 passing grade) in the interpretation task, which in turn shows the interpretability of the AI-produced knowledge itself.

[Figure 5: ILL assessments on knowledge discovery tasks. (a) How much known? covered 66%, hinted 26%, missed 7%. (b) How interpretable? (c) How much new? figured soprano (entropy 4.76), figured alto (4.78), figured tenor (4.80), figured bass (4.34).]

In the first study, our model also discovered new rules that interested our colleagues in the music school. (a) Tritone resolution is crucial in tonal music, yet in Bach's chorales, tritones sometimes do not resolve in typical ways, but consistently transition to other dissonances like a minor seventh. (b) A new notion of “the interval of intervals” was consistently extracted in several rule traces. This “second derivative”, like acceleration in mechanics, might suggest a new microscopic chord structure heretofore unconsidered. (c) New symmetry patterns reveal new harmonic foundations. As a parallel to the concept of harmony traditionally built on figured bass (dominant in Bach's chorales, as confirmed by ILL), ILL reveals “figured soprano” as the next alternative in explaining Bach's music (Figure 5C). Although it is not the best view for explaining Bach according to ILL and is not included in any standard music theory class, it may be a valuable perspective for music that starts deviating from the classical style. 
This was confirmed by domain experts (Sokol, 2016), with more details at the end of Appendix G.1." }, { "heading": "5 DISCUSSION: LIMITATIONS AND CHALLENGES", "text": "As a first step, we devise a new representation-learning model intended to be both theoretically sound and intrinsically interpretable. This paper shows typical setups and applications, but ILL is a general framework that admits new designs of its components, e.g., projection-and-lifting or priors. Notably, designing a lattice not only sets the rule-learning capacity but also the “vocabulary” for interpretation which, like the Sapir-Whorf hypothesis for human language, limits how a lattice explains signals. Likewise, priors have pros and cons based on what we seek to explain and to whom (e.g., not all signals are best explained by symmetry, nor can everyone read symmetry equally well). One solution is to explore multiple lattices while balancing expressiveness and computation—a common practice in picking ML models too. Further, whether a signal is indeed governed by simple rules requires rethinking. Sometimes, no rules exist; ILL will then indicate this, and a case-by-case study will be needed. Sometimes, rules are insufficient: is music in fact governed by music theory? Theory is better viewed as necessary but not sufficient for good music: great composers need not be great theorists.
Following these studies comparing against human-codified knowledge and using human-subject experiments for interpretability, more systematic ILL benchmarking and assessment remain challenging and need long-term effort. Benchmarking is not as easy as for task-specific settings (Chollet, 2019), requiring better comparison schemes or a downstream task. Effective ILL assessments must focus on new discoveries and the ability to assist people. Instead of a Turing test for machine-generated music, one may (at a meta-level) consider tests between independent and machine-aided compositions, where both are done by humans. Further, ILL may be incorporated with other models, yielding an ILL version of deep learning or vice versa. For example, ILL could be used as a pre-processing or post-interpretation module in other models to achieve superior task performance as well as controllability and interpretability. Another possibility is to use ILL to analyze attention matrices (as signals) learned from BERT or GPT (Rogers et al., 2020). More future visions are in Appendix H." }, { "heading": "A CONNECTION TO CONCEPT LATTICE", "text": "Per our definition, a concept refers to a component of an abstraction, or more precisely, is a cell in a partition or an equivalence class under an equivalence relation. This definition is consistent with a formal concept defined in formal concept analysis (FCA) (Ganter & Wille, 2012; Ganter et al., 2016; Priss, 2006) as a set of objects (extent) sharing a set of attributes (intent), which can also be treated as objects that are equivalent under the attributes. However, our definition of a concept generalizes that of a formal concept in two ways. First, in our case, a partition or an equivalence relation is not induced from domain-specific attributes through formal logic and formal ontology, but from universal priors drawn from Core Knowledge (detailed in Section 3.1 in the main paper). Second, specifying a partition considers all of its concepts, whereas specifying a set of formal concepts only considers those with respect to a given formal context. 
As a result, partition lattices in our case generalize concept lattices in FCA, and are not generated, hence not constrained, by domain knowledge such as that encoded in formal ontologies.
Mathematically, let (PX, ⪯) be the partition lattice comprising all partitions of X and (2^X, ⊆) be the subset lattice comprising all subsets of X. Clearly, the power set 2^X is the same as {C ∈ P | P ∈ PX}. That is, the subset lattice is also the lattice comprising all concepts from all partitions of X, which can then be called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many cases, it tends to miss many concepts if they are not known in the existing ontology. We give two examples below to further illustrate the connection between a partition lattice and a concept lattice.
First, consider biological taxonomy. Dogs and cats are two concepts in species, which is an abstraction containing other concepts such as eagles. Likewise, mammals and birds are two concepts in class, which is an abstraction containing other concepts such as reptiles and insects; further, animals and plants are two concepts in kingdom. In light of hierarchy, as abstractions, species is finer than class, which is finer than kingdom (in the partition lattice); as concepts, dogs ⊆ mammals ⊆ animals (in the concept lattice). Note that when forming a concept lattice, one may not need to include, say, all species. Yet when having species as an abstraction in a partition lattice, this abstraction must contain all species, known and unknown, where the latter are usually of more interest for knowledge discovery.
Second, consider music theory. C major triads, C minor triads, and B diminished triads are concepts in an abstraction induced by music octave-shift and permutation invariance. Further, major triads, minor triads, and diminished triads are concepts in another abstraction induced by music octave-shift, permutation, and further transposition invariance. Clearly, for abstractions, the former abstraction is finer than the latter; for concepts, the set of C major triads is a subset (or a special case) of the set of major triads. However, chords that are not defined in traditional music theory but appear as new concepts in a known abstraction (e.g., the two above) may be more interesting, since they may suggest new composition possibilities while still obeying the same music abstraction, in this case the same music symmetry. New concepts from new abstractions may push the composition boundary even further, suggesting new types of chords discovered from, e.g., new symmetry (but possibly within a known symmetry family). See the end of Appendix G.1 for more examples of new discoveries." }, { "heading": "B MORE GENERALIZED FORMALISM FOR INFORMATION LATTICE", "text": "The mathematical setting in the main paper is for a non-negative signal on a finite domain. However, this is not a limitation, but purely for notational brevity and computational reasons. First, regarding non-negativity: in many real scenarios, the signal is bounded and its value is only relative. In these cases, one can simply add an offset to the signal to make it non-negative. More generally, we can consider a signal to be any measurable function ξ : X → Rn. Then the notions of an abstraction, a concept, a rule, as well as the partial order, can be generalized as in Table 1. 
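Table 1 itself is a float not reproduced in this text; the LaTeX sketch below is our hedged reconstruction of the intended correspondence from the surrounding discussion (the measure-theoretic column is an assumption consistent with the σ-algebra remarks that follow, not a verbatim copy of the table).

```latex
\begin{tabular}{lll}
              & finite setting                        & measurable setting \\
abstraction   & partition $\mathcal{P}$ of $X$        & sub-$\sigma$-algebra $\mathcal{F}$ on $X$ \\
concept       & cell $C \in \mathcal{P}$              & measurable set $A \in \mathcal{F}$ \\
rule          & $r_\xi(C) := \sum_{x \in C} \xi(x)$   & $r_\xi(A) := \int_A \xi \, d\mu$ \\
partial order & $\mathcal{P} \preceq \mathcal{P}'$    & $\mathcal{F} \subseteq \mathcal{F}'$ \\
\end{tabular}
```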
Hence, the notion of an information lattice is still well-defined in the generalized setting. The essence of the two settings lies in how we formalize an abstraction—whether using a partition or a σ-algebra. However, the two are not very different from each other: any partition of X generates a σ-algebra on X, and any σ-algebra on a countable X is uniquely generated by a partition of X (Çınlar, 2011).
Further, the main paper uses the summation functional in defining a rule of a signal, i.e., the projection operator. However, other options are possible, e.g., mean, max, min, or a specially designed functional. The lifting operator can then be redesigned accordingly. In particular, besides always favoring the most uniform signal, the design of the special lifting has extra freedom to consider other criteria for picking a signal from the general lifting." }, { "heading": "C MORE INSIGHTS ON THE SPECIAL LIFTING", "text": "Consider the special lifting ↑(R) for any rule set R = ↓ξ(P) of a given signal ξ. Computing ↑(R) is simple if R = {r} contains only a single rule. In this case, ↑(R)(x) = ↑(r)(x) := r(C)/|C| for any x ∈ C ∈ domain(r), which requires simply averaging within each cell. However, computing ↑(R) becomes much less trivial when |R| > 1. By definition, we need to solve the minimization problem:

↑(R) := argmin_{η∈⇑(R)} ‖η‖2.  (8)

Instead of directly throwing the above problem (8) into a generic optimization solver, there is a more efficient approach, which also reveals more insights into the special lifting. More specifically, one can check that any multi-rule lifting ↑(R) can be computed as a single-rule lifting ↑(r⋆), where the single rule r⋆ is defined on the join ∨P and is computed as follows:

r⋆ := argmin_{r∈⇑(∨P)(R)} ‖r̃‖2, with the weighted norm ‖r̃‖2 := (∑_C r(C)²/|C|)^{1/2}.  (9)

So, instead of lifting R directly to the signal domain X, we lift R to the join ∨P first and then to X. Since |∨P| ≤ |X|, the minimization problem (9) is in a smaller dimension compared to the original problem (8), and thus can be solved more efficiently. In the minimization problem (9), by definition, ⇑(∨P)(R) := {r : ∨P → R | ↓r(P) = R}. Hence, every rule r ∈ ⇑(∨P)(R) can be treated as a single-rule summary of the rule set R, and r⋆ is one of them—the one that yields the most uniform signal. Realizing the special lifting R → ↑(R) as the two-step lifting R → r⋆ → ↑(r⋆) = ↑(R) reveals the following insight: given rules abstracting ξ at different levels (coarser or finer), the best one can hope to faithfully explain ξ is at the level of the join. Determining ξ at any level finer than the join would then require additional assumptions beyond the rule set itself, such as the preference for uniformity used here. This further explains the two sources of information loss (join and uniformity) discussed in the recovery process of a signal (cf. Section 3 in the main paper). Notably, determining a signal even at the level of the join may be ambiguous, since the general lifting ⇑(∨P)(R) to the join is not necessarily a singleton. This particularly implies that r⋆, as one of the single-rule summaries of R of ξ, is not necessarily a rule of ξ, i.e., there is no guarantee that r⋆ = ↓ξ(∨P). To make it so, we need more rules.
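As a cross-check of definition (8), below is our direct numerical sketch: the rules impose linear cell-sum constraints A η = b, and the ℓ2-minimal signal satisfying them is the minimum-norm least-squares solution. The two-step route through the join in (9) is the more efficient approach described above; this version is for illustration only, and it ignores non-negativity, as does the ℓ2 definition of the special lifting.

```python
import numpy as np

def lift_rules(rules, X):
    """Special lifting of a rule set: the min-l2-norm eta with A @ eta = b,
    where each row of A sums eta over one cell of one rule's partition.
    rules: list of dicts mapping frozenset(cell of points in X) -> value."""
    index = {x: i for i, x in enumerate(X)}
    rows, b = [], []
    for rule in rules:
        for cell, value in rule.items():
            row = np.zeros(len(X))
            row[[index[x] for x in cell]] = 1.0
            rows.append(row)
            b.append(value)
    eta, *_ = np.linalg.lstsq(np.vstack(rows), np.array(b), rcond=None)
    return {x: eta[index[x]] for x in X}    # min-norm solution = special lifting
```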
" }, { "heading": "D EXISTING WORK ON SUBLATTICE GENERATION", "text": "General methods for computing the sublattice LB of a full lattice L generated by a subset B ⊆ L fall into two basic families, depending on whether the full lattice needs to be computed. The first uses alternating join- and meet-completions, with worst-case complexity O(2^|B|); the second characterizes the elements of L that belong to the sublattice, with complexity O(min(|J(L)|, |M(L)|)² · |L|), where J(L) and M(L) denote the sets of join-irreducibles and meet-irreducibles, respectively (Bertet & Morvan, 1999). The latter requires computing the full lattice, which is intractable in our case of partition lattices, since |L| = |PX| grows faster than exponentially in |X|, whereas |P〈F,S〉| is usually smaller than |X|. So, we use the first approach and compute alternating join- and meet-completions. The same principle of avoiding computing the full lattice has been applied to the special context of concept lattices (Kauer & Krupka, 2015), yet the technique there still requires the full formal context corresponding to the full concept lattice. Note that sublattice completion is, by definition, computing the smallest sublattice LB (in a full lattice L) containing the input subset B ⊆ L, where LB must inherit the meet and join operations from L. It generalizes, but is not the same as, Dedekind-MacNeille completion (Bertet & Morvan, 1999; MacNeille, 1937; Bertet et al., 1997)." }, { "heading": "E MORE DETAILS ON THE CONSTRUCTION PHASE", "text": "This section elaborates on the second half of Section 3.1 in the main paper, presenting more algorithmic details on poset construction and sublattice completion. The core data structures for posets are the so-called adjacency matrix and Hasse diagram, encoding the partial order ≺ and the cover relation ≺c, respectively (Garg, 2015). The former is best for querying the ancestors and descendants of a partition within the lattice; the latter is best for querying the parents and children of a partition. (A more advanced technique involves chain decomposition, but the two here are sufficient for this paper.) More specifically,

P′ is an ancestor of P ⟺ P ≺ P′
P′ is a parent of P ⟺ P ≺c P′ (i.e., P ≺ P′ but no P″ satisfies P ≺ P″ ≺ P′).

We introduce a few algorithmic notations. Given a partition poset (P, ⪯), we use P.po_matrix and P.hasse_diagram to denote the adjacency matrix and Hasse diagram of P, respectively. For any partition P ∈ P, we use P.ancestors, P.descendants, P.parents, and P.children to denote the sets of ancestors, descendants, parents, and children of P, respectively. Notably, the two data structures are important not only for the construction phase but for the subsequent learning phase as well. The core subroutine in the construction phase is ADD_PARTITION, sketched as Algorithm 1. It is the key unit step in both poset construction and (join-)semilattice completion.
Poset construction. This corresponds to Step 3 in the flowchart in Section 3.1 of the main paper. Recall that poset construction refers to the process of sorting a multiset P〈F,S〉 of tagged partitions into a poset (P〈F,S〉, ⪯), where the partition tags are features and symmetries. Naively, if we write an inner subroutine COMPARE(P, P′)—called an oracle in the related literature—to compare two partitions, sorting a multiset into a poset amounts to (N choose 2) calls of this pairwise comparison, where N is the size of the input multiset. So, the common idea shared in almost all poset-sorting algorithms is to reduce the number of oracle calls as much as possible. 
As mentioned in the main paper, considering the additional properties in our case, we leverage (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to pre-determine or pre-filter relations. In other words, we want to infer from the context as many pairwise relations as possible, so that the number of actual pairwise comparisons can be minimized.
More specifically, we start from an empty poset, and call ADD_PARTITION to incrementally add partitions from the input multiset to the poset. As the outer subroutine, ADD_PARTITION leverages transitivity and partition size by maintaining three live data structures, namely size2partns, po_matrix, and hasse_diagram, so as to avoid calling COMPARE whenever possible. Consequently, COMPARE is called at only two places (underlined in Algorithm 1): one for = and one for ≺. When called as the inner subroutine, COMPARE(P, P′) does not always perform an actual computation for the pairwise comparison. Instead, it first checks whether the tags are informative (e.g., compositions/supergroups imply coarser partitions) and, only if not, makes an actual comparison. With the additional information from partition size, an actual comparison can be done in O(|X|) time via a mapping process. More specifically, given two partitions P, P′, without loss of generality assume |P| ≤ |P′|. An actual comparison is made by tentatively creating a mapping ν : P′ → P. One can check that such a ν exists if and only if P ⪯ P′. Hence, if |P| = |P′| (resp. |P| < |P′|), one can determine = (resp. ≺) if ν is created successfully, or incomparability otherwise. The mapping complexity is linear in |X|, with linear coefficient 1 if the mapping succeeds and linear coefficient < 1 if it fails. In the worst case (e.g., if all partitions are incomparable), all (N choose 2) pairwise comparisons are required.

Algorithm 1: ADD_PARTITION(Pτ, P): adds a tagged partition Pτ to a partition poset (P, ⪯)
Input: a tagged partition Pτ, where the tag τ can be a feature/symmetry or a join/meet formula; a partition poset (P, ⪯), with the following members and hash tables:
· every P ∈ P is a unique partition (indexed by a unique identifier)
· P.partn2tags[P] := {τ | Pτ = P} denotes the set of all tags inducing P
· P.size2partns[k] := {P | |P| = k} denotes the set of all P ∈ P with size k
· P.po_matrix encodes the partial order ≺, best for getting P.ancestors/descendants
· P.hasse_diagram encodes the cover relation ≺c, best for getting P.parents/children

Step 1: determine whether Pτ is new by calling COMPARE(P, Pτ) (for =) for every P ∈ P.size2partns[|Pτ|]
if Pτ ∈ P.size2partns[|Pτ|]: update P.partn2tags[Pτ] by adding τ; return
else: create a new hash entry P.partn2tags[Pτ] = {τ}; proceed to Step 2

Step 2: add the new partition Pτ to P
(2a) update P.size2partns[|Pτ|] by adding Pτ
(2b) update P.po_matrix and P.hasse_diagram
– for every existing size k < |Pτ|, sorted in descending order:
    for every P ∈ P.size2partns[k]:
        if P.parents ∩ Pτ.descendants ≠ ∅: update P.po_matrix by adding P ≺ Pτ
        else: COMPARE(P, Pτ); update P.po_matrix and P.hasse_diagram if P ≺ Pτ
        (here one can check: it is necessarily the case that P ≺c Pτ)
– do the above symmetrically for every existing size k > |Pτ|, sorted in ascending order
– (note: every P ∈ P.size2partns[k] for k = |Pτ| is incomparable with Pτ)
– clean the cover relation: remove any P⋆ ≺c P⋆⋆ from P.hasse_diagram if P⋆ ≺c Pτ ≺c P⋆⋆
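Below is our minimal sketch of the actual comparison via the mapping ν, with partitions encoded as lists of frozensets as in the earlier snippets; the function names are assumptions.

```python
def coarser_or_equal(P, P2):
    """True iff P is coarser than or equal to P2, i.e., the tentative mapping
    nu : P2 -> P (send each P2-cell to the P-cell containing it) exists."""
    cell_of = {x: i for i, cell in enumerate(P) for x in cell}
    for cell in P2:
        targets = {cell_of[x] for x in cell}
        if len(targets) > 1:      # a P2-cell straddles two P-cells: nu fails
            return False
    return True

def compare(P, P2):
    """Returns '=', '<' (P coarser), '>' (P finer), or None (incomparable)."""
    if len(P) == len(P2):
        return "=" if coarser_or_equal(P, P2) else None
    if len(P) < len(P2):
        return "<" if coarser_or_equal(P, P2) else None
    return ">" if coarser_or_equal(P2, P) else None
```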
Our algorithm works best when partitions are richly related (i.e., the Hasse diagram is dense), which is indeed the case for our tagged partitions induced from systematically formed features and symmetries.
Semilattice completion. This corresponds to Step 4 in the flowchart in Section 3.1 of the main paper. Recall that join-semilattice completion refers to the process of completing a partition poset into a semilattice. We only detail join-semilattice completion, since meet-semilattice completion can be done symmetrically. Formally, we want to compute the join-semilattice of PX generated by the input poset (P〈F,S〉, ⪯). We denote the resulting join-semilattice by 〈P〈F,S〉〉∨. By definition,

〈P〈F,S〉〉∨ := {∨P | P ⊆ P〈F,S〉}.

Naively, if computing 〈P〈F,S〉〉∨ literally from the above definition, one has to iterate over all subsets of P〈F,S〉 and compute their joins. This amounts to 2^N join computations, where N = |P〈F,S〉| is the size of the input poset, and moreover, many of the joins are not pairwise. Yet, similar to our earlier poset construction, we may reduce the computation of joins by an incremental method, which also embeds ADD_PARTITION as a subroutine and utilizes partition sizes and tags, but now the tags are join formulae instead of features or symmetries.
More specifically, we start with an empty semilattice P, and add partitions in P〈F,S〉 to P one by one from smaller-sized to larger-sized (note: the size information is maintained in P〈F,S〉.size2partns). When a partition P ∈ P〈F,S〉 is to be added, we make a tag named by itself, i.e., let Pτ := P with τ := {P}, and then call ADD_PARTITION(Pτ, P). There are two possibilities here: Pτ already exists in P (the call ends at Step 1) or Pτ is new (the call ends at Step 2). In the former case, we are done with Pτ. In the latter, for every P′ ∈ P\{Pτ}, compute the pairwise join J(P′) := ∨{Pτ, P′} and its tags T(P′) := {τ ∪ τ′ | τ′ ∈ P.partn2tags[P′]}, and call ADD_PARTITION(J(P′)T(P′), P). Like COMPARE, computing a join can be optimized by leveraging previously computed tags and the partial order in the input poset P〈F,S〉, so as to avoid an actual join computation whenever possible. When inferring from the context is not possible, one can perform an actual join computation ∨(P, P′) in O(|X|) time. This is done by collecting the unique pairs of cell IDs (C(x), C′(x)) for every x ∈ X, where C(x) and C′(x) denote the cell IDs of x in P and P′, respectively. In the worst case (e.g., if all partitions are incomparable and join-irreducible), the complexity is inevitably O(2^N). However, as in poset construction, our algorithm works best when the partial order structure is rich. A minimal sketch of the pairwise join follows.
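This sketch (our illustration) is exactly the cell-ID-pair grouping described above.

```python
def join(P, P2, X):
    """Coarsest common refinement of P and P2: group each x in X by its pair
    of cell IDs (C(x), C'(x)); runs in roughly O(|X|) time."""
    cell_of  = {x: i for i, cell in enumerate(P)  for x in cell}
    cell_of2 = {x: j for j, cell in enumerate(P2) for x in cell}
    cells = {}
    for x in X:
        cells.setdefault((cell_of[x], cell_of2[x]), set()).add(x)
    return [frozenset(cell) for cell in cells.values()]
```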
Practical tips for sublattice completion. This corresponds to Step 5 in the flowchart in Section 3.1 of the main paper. Recall that constructing the sublattice of PX generated by P〈F,S〉 follows the alternating process: L0 := P〈F,S〉, L1 := 〈L0〉∨, L2 := 〈L1〉∧, L3 := 〈L2〉∨, and so forth, which terminates as soon as Lk−1 = Lk. We denote the end result by 〈P〈F,S〉〉∨∧···, which is the desired sublattice. However, we may want to stop early in the completion sequence, due to concerns about computation, interpretability, and expressiveness, as well as their tradeoffs. We suggest a practical tip on deciding where to stop. If the input poset P〈F,S〉 is small, run alternating joins and meets, or even complete it to the sublattice if affordable. If P〈F,S〉 is moderate, complete the joins only (as join is closely related to rule lifting; see Appendix C for more details). If P〈F,S〉 is large, just use it." }, { "heading": "F MORE ANALYSES IN THE LEARNING PHASE", "text": "This section elaborates on the last paragraph of Section 3.2 in the main paper, presenting more analyses and interpretations of the rule traces elicited from the toy handwritten-digit examples. Yet, as mentioned in the main paper, computer vision is currently not among the typical use cases of ILL. Learning rules of handwritten digits may not be of much independent interest except for calligraphy. So, the analyses and interpretations here are for illustration purposes only. We refer readers to the Broader Impact section in the main paper for possible future directions on how ILL may be used, together with other ML models, to solve computer vision tasks.
Recall that the main use case of ILL is to explain a signal ξ, answering what makes ξ an ξ. The same toy example illustrating an ILL process is replayed here in Figure 3. The signal ξ : {0, . . . , 27}² → [0, 1] is a grayscale image of a handwritten “7”. In this case, a rule of ξ, i.e., the projection of ξ to a partition of {0, . . . , 27}², can be viewed as gathering “ink” within each partition cell. Accordingly, the (special) lifting can be viewed as redistributing the gathered “ink” (evenly) in each cell. Hence, we term this view the ink model. For visual convenience, we depict a rule of a 2D signal by its lifting (i.e., another grayscale image), since with pixels in the same cell colored the same, we can use the lifting to sketch both the partition and the rule values. More precisely, when a lifting represents a rule, it must be viewed in terms of blocks or superpixels, whereas a real lifting (i.e., a signal or a real image) is viewed normally, by regular pixels. To better clarify, all rules in Figure 3 are displayed in red boxes, whereas all liftings are in green ones.
For a simple illustration, we draw a small number of features and symmetries to generate a poset (P•) of 21 partitions. The corresponding part of the information lattice (R•) is shown by its Hasse diagram in Figure 3. Further, on top of the Hasse diagram, we demarcate the frontiers of the sublevel sets (R≤ε) by six blue dashed curves. Note that in this tiny diagram, we have sketched a full range of sublevel sets, yet for large diagrams, sublevel sets are constructed for small ε-values only, in a single-pass BFS. The right part of Figure 3 illustrates a complete ILL process in the alternating setting, with lift and project signified by the green up-arrows and red down-arrows, respectively. During the learning process, ILL tries to minimize the gap in the signal domain (upstairs) through iterative eliminations of the largest gap in the rule domain (downstairs). Eliminating a larger rule gap tends to imply a larger drop in the signal gap, but not necessarily in every iteration, since the special lifting may accidentally recover a better signal if the assumed uniformity is, by chance, present in the signal. The rule set R(k) formed per iteration is presented in the middle of the right part of Figure 3, which jointly shows the complete rule trace continuously progressing along the ε-path.
The rule set in the last iteration under any ε (marked by ⋆ in Figure 3) is the returned solution to the main relaxed Problem (4) in the main paper. This rule set is used to answer what makes ξ an ξ. For example, let rj denote the rule with ID j (here a rule ID is the same as the partition ID, the unique identifier used in Algorithm 1 during the construction phase). 
For a simple illustration, we draw a small number of features and symmetries to generate a poset (P•) of 21 partitions. The corresponding part of the information lattice (R•) is shown by its Hasse diagram in Figure 3. Further, on top of the Hasse diagram, we demarcate the frontiers of the sublevel sets (R≤ε) by six blue dashed curves. Note that in this tiny diagram, we have sketched a full range of sublevel sets, yet for large diagrams, sublevel sets are constructed for small ε-values only in a single-pass BFS. The right part of Figure 3 illustrates a complete ILL process in the alternating setting, with lift and project signified by the green up-arrows and red down-arrows, respectively. During the learning process, ILL tries to minimize the gap in the signal domain (upstairs) through iterative eliminations of the largest gap in the rule domain (downstairs). Eliminating a larger rule gap tends to imply a larger drop in the signal gap, but not necessarily in every iteration, since the special lifting may accidentally recover a better signal if the assumed uniformity is, by chance, present in the signal. The rule set R(k) formed per iteration is presented in the middle of the right part of Figure 3, which jointly shows the complete rule trace continuously progressing along the ε-path.\nThe rule set in the last iteration under any ε (marked by ? in Figure 3) is the returned solution to the main relaxed Problem (4) in the main paper. This rule set is used to answer what makes ξ an ξ. For example, let rj denote the rule with ID j (here a rule ID is the same as the partition ID, the unique identifier used in Algorithm 1 during the construction phase). Then, among all rules whose entropies are no larger than ε = 2, the third rule set in the trace R(3) = {r9, r1, r18} best explains what makes ξ an ξ. However, if more complex rules are allowed, say if all rule entropies are now capped by ε = 6, R(7) = {r13, r15, r19} is the best. Recall that we do not just eyeball the rules to get intuitive understandings. Every rule is the projection of the signal to a tagged partition, where the tag, generated in a prior-driven way, explicitly explains the underlying abstraction criteria. For example, r19 in Figure 3 comes from a symmetry tag representing a permutation invariance, which visually renders as a reflection invariance. Rules r8 and r9 come from two feature tags div7 ◦ w[1] and div7 ◦ w[2], respectively. These two feature tags represent the continuous and even collapsing in the first and the second coordinate, respectively, which visually render as horizontal and vertical strips. Both rules are later absorbed into r13 tagged by div7 ◦ w[1,2], since its rule domain is strictly finer. These rules (r8, r9, r13) apparently summarize the horizontal and vertical parts of the handwritten "7". Further, the vertical part of the "7" is longer and slants more, so we see more vertically-patterned rules in the rule trace (r9, r11, r15). These rules are obtained from finer and finer abstractions along the horizontal direction, so as to capture more details on the vertical part of that "7" such as its slope. Notably, among these vertically-patterned rules, r11 is induced from the symmetry representing a horizontal translation invariance, but it is quickly absorbed into r15 whose entropy is not much higher. This transient appearance of r11 implies that it plays a less important role in explaining this handwritten "7". In fact, from more experiments, symmetries in general play a less important role in explaining many "7"s. This is, however, not the case in explaining many "8"s, where symmetries occur much more often. For example, consider a symmetry fused from translation and permutation invariances whose fundamental domain is homeomorphic to a Möbius strip. We hypothesize that this topological property might be related to the twisted nature of an "8". For a visual comparison, we present the rule traces learned from a "7" and an "8" below in Figure 6, as well as the visual similarity between a Möbius strip and an "8".
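To make the feature tags above concrete, here is an illustrative Python sketch (names and encodings are ours, not the paper's code) of how a tag such as div7 ◦ w[1] induces a tagged partition of the domain X = {0, . . . , 27}^2: points with equal feature value share a cell.

import itertools

def w(coords):                      # coordinate-selection window, e.g., w[1]
    return lambda x: tuple(x[i - 1] for i in coords)

def div(n):                         # quotient feature, e.g., div7
    return lambda v: tuple(c // n for c in v)

def partition_from_feature(feature, X):
    cells, partition = {}, {}
    for x in X:
        key = feature(x)
        cells.setdefault(key, len(cells))   # fresh cell ID per feature value
        partition[x] = cells[key]
    return partition

X = list(itertools.product(range(28), range(28)))
div7_w1 = lambda x: div(7)(w([1])(x))       # the tag div7 . w[1]
P = partition_from_feature(div7_w1, X)      # 4 cells: strips along one coordinate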
" }, { "heading": "G STUDIES ON ILL-BASED MUSIC APPLICATION", "text": "We introduce two tests associated with a real-world application. The first is to assess rule-learning efficacy, where we compare machine-discovered rules to human-codified domain knowledge. The second is to assess human-interpretability, where we use human subject experiments on interpreting machine-generated rules.\nThe application here is our first step towards building an automatic music theorist and pedagogue, which is to be deployed as an assistant in music research and education. The two tests are our initial effort towards a systematic benchmarking and assessment platform. In the continuing effort of bridging human and machine intelligence, new standards are to be set and commonly agreed upon, so as to reasonably compare machine-codified discoveries with human-codified knowledge, as well as to use human-subject experiments for assessing interpretability. Fully developing assessment protocols is a challenging, long-term endeavor. Here, we use the two tests as starting points, and present results from each. Respectively, the first experiment tests music rule discovery, a basic requirement to be a theorist; the second tests interpretability, a basic requirement to be a pedagogue.\nTo conduct the two tests, we first build a user-friendly web application, which is used to better see and control the ILL learning process and results. Figure 7 illustrates the web interface. Users learn music rules—each as a histogram over a tagged partition (i.e., machine-codified music concepts)—and control their learning pace via self-explanatory knobs whose set values are automatically converted to internal parameters (e.g., ε, γ). One critical music-specific extension to the vanilla ILL presented in the main paper is adding a temporal component, since music is highly contextual. This amounts to considering more than one signal simultaneously, including various (un)conditional chord distributions (multiple n-grams with varying n's and varying conditionals) encoding information of individual chords as well as melodic and harmonic progressions. Accordingly, ILL produces both context-free and context-dependent rules, each of which is indexed by a partition and a conditional under that partition. For example, given the partition that is equivalent to classifying music chords into roman numerals and conditioned on the previous two chords being a I64 followed by a V, a rule specifies the probability distribution of the next roman numeral, and in this case reproduces the music rule on Cadential-64. Note that in a context-dependent rule, not only is the query chord abstracted, but also the conditional. This is in contrast with many classical n-gram models where no abstraction is present, which may therefore suffer from the problem of rare contexts, where a conditional occurs only a few times, or even never, in the training set. However here, the core idea of abstraction makes "small data" large and thus rare contexts common. More examples of context-free and context-dependent rules are illustrated as histograms in Figure 8. These rule histograms are generated from ILL based on 370 of Bach's four-part chorales (in the format of digital sheet music), and are used in the two experiments detailed below.
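As a sketch of how such context-dependent rules can be estimated, the following illustrative Python snippet (our own, not the paper's implementation) abstracts both the conditional and the query through a tagged partition, e.g., a map from chords to roman numerals, before counting:

from collections import Counter, defaultdict

def ngram_rule(chords, abstract, n=3):
    # P(abstract(next) | abstract(prev_1), ..., abstract(prev_{n-1}))
    counts = defaultdict(Counter)
    for i in range(len(chords) - n + 1):
        *context, query = [abstract(c) for c in chords[i:i + n]]
        counts[tuple(context)][query] += 1
    return {ctx: {q: c / sum(ctr.values()) for q, c in ctr.items()}
            for ctx, ctr in counts.items()}

# Because contexts are abstracted, many distinct surface chords fall into the
# same context cell, which is how "small data" becomes large and rare contexts
# become common.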
\nG.1 COMPARISON TO HUMAN-CODIFIED KNOWLEDGE\nWe compare rules learned from ILL to a standard undergraduate music theory curriculum. We want to use known laws from music theory as a benchmark to see how ILL-generated rules correspond to human-codified music knowledge. In particular, we want to see what is covered, what is new, and what is different. Yet, the ultimate goal is not just to use known music theory as a ground truth for the purpose of driving ILL to fully reconstruct what we know, but eventually to discover new rules, to gain new understandings of existing rules, to suggest new composition possibilities, as well as to teach rules in a personalized way.\nA priori we are aware of three major differences between human-codified music theory and ILL-generated rules. (a) In light of music raw representations (input), laws of music theory are derived from all aspects in sheet music whereas ILL-generated rules are currently derived from only MIDI pitches and their durations. This is because we currently study ILL as a general framework. When a music-specific application is to be developed later, one can include more music raw representations such as letter pitches, meter, measure, beaming, and articulations. (b) In light of rule format (output), laws of music theory and ILL-generated rules have two different styles: the former is more descriptive and absolute (hard), whereas the latter is more numerical and probabilistic (soft). For instance, a music rule that completely forbids consecutive fifths is reproduced by an ILL-generated rule that assigns a small non-zero probability to the event. Therefore, while it is possible to "translate", with information loss, a (precise) ILL-generated rule to a (verbal) rule in known theory, it may not make sense to "translate" in the opposite direction. Also, it is not a good idea to hardcode known rules as categorical labels in a supervised setting, since music rules are inherently flexible and hardcoding may lead to a rule-based AI that generates somewhat "mechanical" music such as the Illiac Suite (Hiller & Isaacson, 1957). (c) In light of purposes, laws of music theory are intended more for general pedagogical purposes than to reflect the style of a particular data set. For instance, while consecutive fifths are banned in homework and exams, they may be widely used in many pop songs. Even in our data set of Bach's chorales (which are supposed to follow the known rules quite well), we see Bach himself wrote a handful of consecutive perfect intervals. On the contrary, ILL-generated rules are specific to the input data set. We may certainly find some data sets that follow the known rules quite well (e.g., Bach's chorales), but also others that break many known rules and even set their own rules.\nKeeping these three differences in mind and further isolating them from the comparison results, we can reveal the remaining differences that are due to the rule-learning process itself. To come up with the benchmark, we compiled a comprehensive syllabus of laws from music theory taught in our music school's theory review course, which runs through the full series of theory classes at a fast pace. This human-codified music knowledge is organized as a running list of 75 topics and subtopics indexed by lecture number. On the other hand, ILL-generated rules are indexed by partition (ID) and n-gram (n). The results are summarized below in Table 2, where the colored crosses in the last column indicate topics that are missed by ILL due to different reasons.\nAmong the total 75 topics in Table 2, we first ignore 7 of them (red crosses) which require music raw representations beyond MIDI pitches and durations (e.g., accents and enharmonic respellings of some augmented sixth chords). ILL covered 45 out of the remaining 68 topics, yielding a coverage of 66%. Among the 23 missed topics, 18 (blue crosses) are related to deeper-level temporal abstractions such as harmonic functions, key areas, and forms. These temporal abstractions may be better modeled as abstractions of transitions, which are implicitly captured but not explicitly recovered from our current multi-abstraction multi-n-gram language model, modeling only transitions of abstractions. The other 5 missed topics (black crosses) are tricky and require ad-hoc encodings, which are not explicitly learnable (but may be implicitly captured to some extent) from our current ILL implementation.
Accordingly, the composition of the 30 = 7 + 18 + 5 uncovered topics suggests three future directions to possibly raise the rule-learning capacity of the current implementation: (a) include more music raw representations; (b) model abstractions of transitions; (c) either make music-specific adjustments when developing music apps or figure out a more expressive and more general framework in the long run. However, remember that the goal here is not to reproduce what we know but to augment it. So, we may certainly stop after enabling abstractions of transitions, which in the best case can yield an improved coverage of 84% (i.e., (45 + 18)/75, or 93% of the 68 topics learnable from MIDI notes only), which is good enough.\nTable 2 (✓ = covered by ILL; ✗ = missed):\nLecture | Music Theory | Partition IDs | n-gram | Covered\n1 | music accents | — | — | ✗\n2 | pitch | 1-4 | 1 | ✓\n2 | pitch class | 16-19 | 1 | ✓\n2 | interval | 31-36 | 1 | ✓\n2 | interval class | 97-102 | 1 | ✓\n3 | stepwise melodic motion (counterpoint) | 1-4 | 2 | ✓\n3 | consonant harmonic intervals (counterpoint) | 97-102 | 1 | ✓\n3 | beginning scale degree (counterpoint) | 16-19 | 2 | ✓\n3 | ending scale degree (counterpoint) | 16-19 | 2 | ✓\n3 | beginning interval class (counterpoint) | 97-102 | 2 | ✓\n3 | ending interval class (counterpoint) | 97-102 | 2 | ✓\n3 | parallel perfect intervals (counterpoint) | 97-102 | 2 | ✓\n3 | directed perfect intervals (counterpoint) | — | — | ✗\n3 | law of recovery (counterpoint) | 1-4 | ≥3 | ✓\n3 | contrapuntal cadence (counterpoint) | 1-4, 97-102 | 2,3 | ✓\n3 | melodic minor ascending line (counterpoint) | — | — | ✗\n4 | triads and seventh chords | 26-30 | 1 | ✓\n4 | triads and seventh chords: quality | 140-144 | 1 | ✓\n4 | triads and seventh chords: inversion | 113-117 | 1 | ✓\n5 | figured bass | 113-117 | 1,2 | ✓\n5 | roman numerals | 81-85,129-133 | 1 | ✓\n6 | melodic reduction (Schenkerian analysis) | — | — | ✗\n7 | passing tone (tones of figuration) | 1-4, 134-144 | 3 | ✓\n7 | neighbor tone (tones of figuration) | 1-4, 134-144 | 3 | ✓\n7 | changing tone (tones of figuration) | 1-4, 134-144 | 4 | ✓\n7 | appoggiatura (tones of figuration) | 1-4, 134-144 | 3 | ✓\n7 | escape tone (tones of figuration) | 1-4, 134-144 | 3 | ✓\n7 | suspension (tones of figuration) | 1-4, 134-144 | 3 | ✓\n7 | anticipation (tones of figuration) | 1-4, 134-144 | 3 | ✓\n7 | pedal point (tones of figuration) | 1-4 | ≥3 | ✓\n7 | (un)accented (tones of figuration) | — | — | ✗\n7 | chromaticism (tones of figuration) | — | — | ✗\n8 | tonic (function) | — | — | ✗\n8 | dominant (function) | — | — | ✗\n8 | authentic cadence | 1,4,81-85,129-133 | 2,3 | ✓\n8 | half cadence | 81-85,129-133 | 2,3 | ✓\n9 | voice range (four-part texture) | 1-4 | 1 | ✓\n9 | voice spacing (four-part texture) | 31-41 | 1 | ✓\n9 | voice exchange (four-part texture) | 20-25 | 2 | ✓\n9 | voice crossing (four-part texture) | 53-63 | 1 | ✓\n9 | voice overlapping (four-part texture) | — | — | ✗\n9 | tendency tone (four-part texture) | 16-19 | 1,2 | ✓\n9 | doubling (four-part texture) | 86-91 | 1 | ✓\n10 | harmonic reduction (second-level analysis) | — | — | ✗\n11 | expansion chord | — | — | ✗\n12 | predominant (function) | — | — | ✗\n13 | phrase model | — | — | ✗\n14 | pedal or neighbor (six-four chord) | 4,113-117 | 3 | ✓\n14 | passing (six-four chord) | 4,113-117 | 3 | ✓\n14 | arpeggiated (six-four chord) | — | — | ✗\n14 | cadential (six-four chord) | 85,113-117,133 | 3,4 | ✓\n15 | embedded phrase model | — | — | ✗\n16 | non-dominant seventh chord (function) | — | — | ✗\n17 | tonic substitute (submediant chord) | — | — | ✗\n17 | deceptive cadence (submediant chord) | 81-85,129-133 | 2,3 | ✓\n18 | functional substitute (mediant chord) | — | — | ✗\n19 | back-relating dominant | 81-85,129-133 | 2,3 | ✓\n20 | period (I) | — | — | ✗\n21 | period (II) | — | — | ✗\n22 | period (III) | — | — | ✗\nTable 2 (cont.)\nFrom another source of music theory considering music symmetries (Tymoczko, 2010), we compare ILL-generated rules with a set of commonly used music operations, known as the OPTIC operations, namely octave shifts (O), permutations (P), transpositions (T), inversions (I), and cardinality changes (C).
The results are summarized in Table 3, which shows that ILL covers the four major types of operations (OPTI). The music C operation is not recovered since it is not a transformation in the mathematical sense. Notations: t_v denotes a translation by the translation vector v, i.e., t_v(x) := x + v; r_A denotes a rotation (which can be proper or improper) by the rotation matrix A, i.e., r_A(x) := Ax. As a special type of rotation matrix, P^(···) denotes a permutation matrix, where the superscript is the cycle notation of a permutation. Note that ILL, as a general framework, considers a much larger universe of generic symmetries (from Core Knowledge) beyond those already considered in music. Therefore, ILL can not only study existing music symmetries, but also suggest new symmetries to be exploited in new music styles as well as possible music interpretations of symmetries discovered in other fields like chemistry as described in the main paper.
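To ground the notation, the following is an illustrative Python sketch (our own code, with an assumed MIDI pitch encoding) of the O, P, T, I operations written as the maps t_v and r_A above:

import numpy as np

chord = np.array([60, 64, 67])            # C major triad in MIDI pitches

t = lambda v: (lambda x: x + v)           # translation t_v(x) := x + v
r = lambda A: (lambda x: A @ x)           # rotation r_A(x) := Ax

octave_shift = t(np.array([12, 0, 0]))    # O: shift one voice by an octave
transposition = t(np.full(3, 5))          # T: translate all voices equally
P_cycle = np.eye(3)[[1, 2, 0]]            # permutation matrix for a 3-cycle
permutation = r(P_cycle)                  # P: reorder the voices
inversion = r(-np.eye(3))                 # I: an improper rotation (reflection;
                                          # composed with t_v to fix a center)

for op in (octave_shift, transposition, permutation, inversion):
    print(op(chord))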
Lastly, we mention a few new rules discovered by ILL that are interesting to our colleagues in the School of Music. First, tritone resolution plays an important role in tonal music and appears as an epitome in many more general harmonic resolutions. Yet, in Bach's chorales, tritones are sometimes not resolved in a typical way but consistently transition to another dissonance like the minor seventh, which behaves like a harmonic version of an escape tone or changing tone. Second, a new notion of "the interval of intervals" has been consistently extracted in several ILL-generated rule traces. Such a "second derivative", like acceleration in mechanics, might suggest a new microscopic chord structure to consider. Third, new symmetry patterns reveal new possible foundations for building chords, and thus new composition possibilities. For example, as a parallel concept of harmony traditionally built on figured bass (which is indeed the dominant pattern in Bach's chorales confirmed by ILL), ILL reveals "figured soprano" as the next alternative in explaining Bach's music (Figure 9). Clearly, figured soprano is not the best view for explaining Bach according to ILL and is indeed not included in any standard music theory class, yet it may be a more efficient perspective to view other types of music (e.g., in some Jazz improvisations). This vision coincides with comments made by Casey Sokol (Sokol, 2016), a music professor at York University, which we quote below: "The idea of Figured Soprano is simply a way of taking this thinking from the top-down and bringing it into greater prominence as a creative gesture. So these exercises are not anything new in their ideation, but they can bring many new ideas, chord progressions and much else. It's a somewhat neglected area of harmonic study and it's a lot of fun to play with."\nG.2 HUMAN SUBJECT EXPERIMENT FOR ASSESSING INTERPRETABILITY\nOur second experiment is a human subject experiment, where we collect and assess human-generated verbal interpretations of ILL-generated music rules rendered as sophisticated symbolic and numeric objects. Our goal is to use the results here to reveal both the possibilities and challenges in such a process of decoding expressive messages from AI sources. We treat this as a first step towards (a) a better design of AI representations that are human-interpretable and (b) a general methodology to evaluate interpretability of AI-discovered knowledge representations. In this experiment, we want to test to what degree our ILL-generated rules are interpretable. Our subject pool includes people who have entry-level math and music theory knowledge. So, by interpretability, we mean interpretable to them. The whole experimental procedure divides into two stages. In the first stage, we collect human interpretations of ILL-generated rules. In the second stage, we assess the collected interpretations to further evaluate the interpretability of AI-produced knowledge.\nCollect Human Interpretations. The experiment was conducted in the form of a two-week written homework assignment for 23 students. Students came from the CS+Music degree program recently launched in our university. Entry-level knowledge of computer science, related math, and music theory is assumed from every student. However, all students are new to our AI system, and none have read any ILL-generated rules before. The homework contained three parts. Part I provided detailed instructions on the format of the rules as exemplified in Figure 8, including both feature-related and probability-related instructions (symmetries were excluded from the tags since group theory is an unfamiliar subject to these students). More specifically, we provided verbal definition, mathematical representation, and typical examples for each of the following terms: chord, window (for coordinate selection), seed feature, feature, rule, n-gram, histogram, data set. A faithful understanding of these eight terms was the only prerequisite to complete the homework. The estimated reading time of the instructions was about an hour. Once this self-training part was completed, the students were ready to go to the second and third parts—the main body of the homework. Part II contained eleven 1-gram rules—a histogram specified by window and seed feature(s); Part III contained fourteen 2-gram rules—a histogram now specified by window, seed feature(s), and a conditional. The students were asked to freely write what they saw in each of the histograms guided by the following two questions. (a) Does the histogram agree or disagree with any of the music concepts/rules you know (write down the music concepts/rules in music-theoretic terms)? (b) Does the histogram suggest something new (i.e., neither an agreement nor a disagreement, with no clear connection to any known knowledge)? Answers to each of the 25 rules came in the form of text, containing word descriptions that "decode" the histogram—a symbolic and pictorial encoding. Students were explicitly instructed that writing out a description that was basically a literal repetition of the histogram (e.g., taking a modulo 12 of a chord results in a 91.2% chance of being 0, 0, 4, 7) is not acceptable: they must reveal the music behind the math. In fact, we made it clear to the students that we only want qualitative descriptions. Students were specifically told (in the instructions) to only pay attention to the relative values of the probabilities whose exact numbers are unimportant (e.g., what are most likely, what are more likely, what are almost impossible). This homework was due in two weeks. During the two-week period, we asked the students to complete it independently, with no group work or office hours.
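For readers unfamiliar with the histogram format, here is an illustrative sketch (our own code; the four-voice MIDI chord encoding with bass first is an assumption) of the literal "modulo 12" reading mentioned above, which students were asked to go beyond:

from collections import Counter

def mod12_pattern(chord):
    # Reduce a chord to pitch classes relative to its bass note.
    bass = chord[0]
    return tuple(sorted((p - bass) % 12 for p in chord))

chords = [(48, 60, 64, 67), (50, 62, 65, 69), (43, 55, 62, 67)]
histogram = Counter(mod12_pattern(c) for c in chords)
total = sum(histogram.values())
for pattern, count in histogram.most_common():
    print(pattern, count / total)   # e.g., (0, 0, 4, 7) -> a root-doubled major triad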
Assess Human Interpretations. The homework was designed in a way such that every rule histogram encoded at least one music concept/rule consistent with standard music theory. In addition, every histogram contained either one additional known music rule or something strange that either conflicted with a known rule or represented something new. We assigned two points per rule. Further, we made an initial rubric containing the (authoritative) music keywords used to describe every rule histogram. Because students' answers arrived in the form of qualitative text, to ensure credibility and fairness of the initial rubric, we held a discussion session at a regular lecture time (80 minutes) with all students as well as the teaching staff. During the discussion session, we went over all 25 rules one by one. For each, we first announced the keywords in the initial rubric and explained to the students that these keywords would later be used to grade their homework. However, in the discussion session, every student was encouraged to object to any of our announced keywords and/or to propose new keywords accompanied with a convincing explanation. New/modified keywords that were commonly agreed upon were added/updated to the initial rubric. By the end of the discussion session, we compiled a more inclusive rubric containing broadly accepted keywords. This rubric-generating process was transparent to all the students. In the final step, we manually graded every student's answer sheet against keywords in the rubric and computed their scores. A summary of the students' performances is presented in Table 4. Except for cases where the student did not do the homework, a major source of score deduction came from misunderstanding the n-gram (e.g., the probability of the current chord conditioned on the previous chord was mistakenly interpreted as the probability of the previous chord conditioned on the current one). This may be largely due to unfamiliarity with n-gram models for new CS+Music students. Nevertheless, the majority of the students who did the homework (2/3) succeeded (with respect to the 30/50 passing grade) in interpreting the rules generated from ILL, which in turn provides evidence on the interpretability of the AI-produced knowledge itself." }, { "heading": "H CONCLUSION AND BROADER IMPACTS", "text": "Model transparency and interpretability are important for trustworthy AI, especially when interacting directly with people such as scientists, artists, and even multidisciplinary researchers bridging the Two Cultures (Snow, 1959) (e.g., like music and chemistry). The core philosophy underlying ILL arises from a human-centered standpoint and our long-term pursuit of "getting humanity back into artificial intelligence". We strive to develop human-like artificial intelligence, which in turn may help advance human intelligence—a goal at the intersection of AGI (artificial general intelligence (Goertzel & Pennachin, 2007)), XAI (explainable artificial intelligence (Adadi & Berrada, 2018)), and "AI as augmented intelligence" (Jordan, 2019).\nAs such, the focus of interpretability in this line of research is not just the end result of the model, but the entire learning process. This emphasis on process is not only manifest in this paper (e.g., two-phase learning that "starts like a baby and learns like a child" with a full rule trace as output), but also in ongoing ILL-driven real-world projects aimed at beneficent societal impact. To name a few: (a) ILL-aided scientific research to accelerate new discoveries, as in biology (Yu et al., 2019); (b) ILL-aided artistic creation to enable new ways and new dimensions in one's creative and/or collaborative experience (art as a process is about more than the work itself); (c) ILL-aided personalized education.
Discovered scientific knowledge, artistic expression, and educational curricula may have a dual-use character (Kaiser & Moreno, 2012). Nevertheless, making the discovery of abstract knowledge easier may lead to abstraction traps (Selbst et al., 2019) in deploying such learned knowledge in engineering design or policy making.\nEvaluation for ILL and similar technologies should have a humanist perspective, whether comparing to human-codified knowledge or using human subject experiments to assess interpretability. Moreover, evaluations for scientific discovery, artistic creativity, and personalized education should not only focus on model performance, but also on the human-centered criteria of how effectively they aid people in achieving their goals. Illuminating rules associated with practice not only helps human students be better rule-followers, but also more creative rule-breakers and rule-makers. Instead of a Turing test for machine-generated music, one might more productively conduct artistic evaluation at a meta-level between human-written music constructed with and without assistance from ILL.\nRegarding biases in data, because ILL works in the "small data" regime, it is easier to curate data to avoid representation biases (Suresh & Guttag, 2019). Manually curating 370 music compositions is possible, but manually curating a billion is not.\nILL can be treated as a complement to many existing AI models, with a special focus on model transparency and explainability. Extensions to ILL could enable it to better cooperate with other models, e.g., as a pre-processing or a post-interpretation tool to achieve superior task performance as well as controllability and interpretability. One such possibility could leverage ILL to analyze the attention matrices (as signals) learned from a Transformer-based NLP model like BERT or GPT (Rogers et al., 2020)." } ]
2,020
null
SP:1ee00313e354c4594bbf6cf8bdbe33e3ec8df62f
[ "This paper proposes searching for an architecture generator that outputs good student architectures for a given teacher. The authors claim that by learning the parameters of the generator instead of relying directly on the search space, it is possible to explore the search space of architectures more effectively, increasing the diversity of the architectures explored. They show that this approach combined with the standard knowledge distillation loss is able to learn good student architectures requiring substantially less samples and achieving competitive performances when comparing to other knowledge distillation algorithms." ]
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models. However, widespread use is constrained by device hardware limitations, resulting in a substantial performance gap between state-of-the-art models and those that can be effectively deployed on small devices. While Knowledge Distillation (KD) theoretically enables small student models to emulate larger teacher models, in practice selecting a good student architecture requires considerable human expertise. Neural Architecture Search (NAS) appears as a natural solution to this problem, but most approaches can be inefficient, as most of the computation is spent comparing architectures sampled from the same distribution, with negligible differences in performance. In this paper, we propose to instead search for a family of student architectures sharing the property of being good at learning from a given teacher. Our approach, AutoKD, powered by Bayesian Optimization, explores a flexible graph-based search space, enabling us to automatically learn the optimal student architecture distribution and KD parameters, while being 20× more sample-efficient compared to existing state-of-the-art. We evaluate our method on 3 datasets; on large images specifically, we reach the teacher performance while using 3× less memory and 10× fewer parameters. Finally, while AutoKD uses the traditional KD loss, it outperforms more advanced KD variants using hand-designed students.
[]
[ { "authors": [ "Sungsoo Ahn", "Shell Xu Hu", "Andreas Damianou", "Neil D Lawrence", "Zhenwen Dai" ], "title": "Variational information distillation for knowledge transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Fabio Carlucci", "Pedro M. Esperança", "Marco Singh", "Antoine Yang", "Victor Gabillon", "Hang Xu", "Zewei Chen", "Jun Wang" ], "title": "MANAS: Multi-agent neural architecture", "venue": null, "year": 1909 }, { "authors": [ "Yu Cheng", "Duo Wang", "Pan Zhou", "Tao Zhang" ], "title": "A survey of model compression and acceleration for deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Tejalal Choudhary", "Vipul Mishra", "Anurag Goswami", "Jagannathan Sarangapani" ], "title": "A comprehensive survey on model compression and acceleration", "venue": "Artificial Intelligence Review,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "BOHB: Robust and efficient hyperparameter optimization at scale", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Emile Fiesler", "Amar Choudry", "H John Caulfield" ], "title": "Weight discretization paradigm for optical neural networks", "venue": "Optical interconnections and networks,", "year": 1990 }, { "authors": [ "Jindong Gu", "Volker Tresp" ], "title": "Search for better students to learn distilled knowledge", "venue": "arXiv preprint arXiv:2001.11612,", "year": 2020 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": null, "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Yanping Huang", "Youlong Cheng", "Ankur Bapna", "Orhan Firat", "Dehao Chen", "Mia Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V Le", "Yonghui Wu" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "venue": "In Advances in Neural Information Processing Systems (NeurIPS,", "year": 2019 }, { "authors": [ "Zehao Huang", "Naiyan Wang" ], "title": "Like what you like: Knowledge distill via neuron selectivity transfer", "venue": "arXiv preprint arXiv:1707.01219,", "year": 2017 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Big Transfer (BiT): General visual representation learning", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": 
"Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Changlin Li", "Jiefeng Peng", "Liuchun Yuan", "Guangrun Wang", "Xiaodan Liang", "Liang Lin", "Xiaojun Chang" ], "title": "Block-wisely supervised neural architecture search with knowledge distillation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": null, "year": 2016 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": null, "year": 2016 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Yu Liu", "Xuhui Jia", "Mingxing Tan", "Raviteja Vemulapalli", "Yukun Zhu", "Bradley Green", "Xiaogang Wang" ], "title": "Search to distill: Pearls are everywhere but not the eyes", "venue": "arXiv preprint arXiv:1911.09074,", "year": 2019 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Antonio Polino", "Razvan Pascanu", "Dan Alistarh" ], "title": "Model compression via distillation and quantization", "venue": "arXiv preprint arXiv:1802.05668,", "year": 2018 }, { "authors": [ "Ariadna Quattoni", "Antonio Torralba" ], "title": "Recognizing indoor scenes", "venue": "In Computer Vision and Pattern Recognition (CVPR), pp", "year": 2009 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Esteban Real", "Sherry Moore", "Andrew Selle", "Saurabh Saxena", "Yutaka Leon Suematsu", "Jie Tan", "Quoc V Le", "Alexey Kurakin" ], "title": "Large-scale evolution of image classifiers", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Binxin Ru", "Pedro M Esperanca", "Fabio M Carlucci" ], "title": "Neural Architecture Generator Optimization", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Daniel Soudry", "Itay Hubara", "Ron Meir" ], "title": "Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alex Alemi" ], "title": "Inception-v4, inceptionresnet and the impact of residual connections on learning", "venue": "arXiv preprint arXiv:1602.07261,", "year": 2016 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive representation distillation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ilya Trofimov", 
"Nikita Klyuchnikov", "Mikhail Salnikov", "Alexander Filippov", "Evgeny Burnaev" ], "title": "Multi-fidelity neural architecture search with knowledge distillation", "venue": "arXiv preprint arXiv:2006.08341,", "year": 2020 }, { "authors": [ "Frederick Tung", "Greg Mori" ], "title": "Similarity-preserving knowledge distillation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Saining Xie", "Alexander Kirillov", "Ross Girshick", "Kaiming He" ], "title": "Exploring randomly wired neural networks for image recognition", "venue": null, "year": 1904 }, { "authors": [ "Antoine Yang", "Pedro M Esperança", "Fabio M Carlucci" ], "title": "NAS evaluation is frustratingly hard", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Li Yuan", "Francis EH Tay", "Guilin Li", "Tao Wang", "Jiashi Feng" ], "title": "Revisiting knowledge distillation via label smoothing regularization", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Sukmin Yun", "Jongjin Park", "Kimin Lee", "Jinwoo Shin" ], "title": "Regularizing class-wise predictions via self-knowledge distillation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "arXiv preprint arXiv:1612.03928,", "year": 2016 }, { "authors": [ "Arber Zela", "Aaron Klein", "Stefan Falkner", "Frank Hutter" ], "title": "Towards automated deep learning: Efficient joint neural architecture and hyperparameter search", "venue": null, "year": 2018 }, { "authors": [ "Chenzhuo Zhu", "Song Han", "Huizi Mao", "William J Dally" ], "title": "Trained ternary quantization", "venue": "arXiv preprint arXiv:1612.01064,", "year": 2016 }, { "authors": [ "Xiatian Zhu", "Shaogang Gong" ], "title": "Knowledge distillation by on-the-fly native ensemble", "venue": "In Advances in neural information processing systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently-developed deep learning models have achieved remarkable performance in a variety of tasks. However, breakthroughs leading to state-of-the-art (SOTA) results often rely on very large models: GPipe, Big Transfer and GPT-3 use 556 million, 928 million and 175 billion parameters, respectively (Huang et al., 2019; Kolesnikov et al., 2020; Brown et al., 2020).\nDeploying these models on user devices (e.g. smartphones) is currently impractical as they require large amounts of memory and computation; and even when large devices are an option (e.g. GPU clusters), the cost of large-scale deployment (e.g. continual inference) can be very high (Cheng et al., 2017). Additionally, target hardware does not always natively or efficiently support all operations used by SOTA architectures. The applicability of these architectures is, therefore, severely limited, and workarounds using smaller or simplified models lead to a performance gap between the technology available at the frontier of deep learning research and that usable in industry applications.\nIn order to bridge this gap, Knowledge Distillation (KD) emerges as a potential solution, allowing small student models to learn from, and emulate the performance of, large teacher models (Hinton et al., 2015a). The student model can be constrained in its size and type of operations used, so that it will satisfy the requirements of the target computational environment. Unfortunately, successfully achieving this in practice is extremely challenging, requiring extensive human expertise. For example, while we know that the architecture of the student is important for distillation (Liu et al., 2019b), it remains unclear how to design the optimal network given some hardware constraints.\nWith Neural Architecture Search (NAS) it is possible to discover an optimal student architecture. NAS automates the choice of neural network architecture for a specific task and dataset, given a search space of architectures and a search strategy to navigate that space (Pham et al., 2018; Real et al., 2017; Liu et al., 2019a; Carlucci et al., 2019; Zela et al., 2018; Ru et al., 2020). One im-\nportant limitation of most NAS approaches is that the search space is very restricted, with a high proportion of resources spent on evaluating very similar architectures, thus rendering the approach limited in its effectiveness (Yang et al., 2020). This is because traditional NAS approaches have no tools for distinguishing between architectures that are similar and architectures that are very different; as a consequence, computational resources are needed to compare even insignificant changes in the model. Conversely, properly exploring a large space requires huge computational resources: for example, recent work by Liu et al. (2019b) investigating how to find the optimal student requires evaluating 10, 000 models. By focusing on the comparison between distributions we ensure to use computational resources only on meaningful differences, thus performing significantly more efficiently: we evaluate 33× less architectures than the most related work to ours (Liu et al., 2019b). To overcome these limitations, we propose an automated approach to knowledge distillation, in which we look for a family of good students rather than a specific model. We find that even though our method, AutoKD, does not output one specific architecture, all architectures sampled from the optimal family of students perform well when trained with KD. 
This reformulation of the NAS problem provides a more expressive search space containing very diverse architectures, thus increasing the effectiveness of the search procedure in finding good student networks.\nOur contributions are as follows: (A) a framework for combining KD with NAS that effectively emulates large models while using a fraction of the memory and of the parameters; (B) by searching for an optimal student family, rather than for specific architectures, our algorithm is up to 20× more sample-efficient than alternative NAS-based KD solutions; (C) we significantly outperform advanced KD methods on a benchmark of vision datasets, despite using the traditional KD loss, showcasing the efficacy of our found students." }, { "heading": "2 RELATED WORK", "text": "Model compression has been studied since the beginning of the machine learning era, with multiple solutions being proposed (Choudhary et al., 2020; Cheng et al., 2017). Pruning-based methods allow the removal of non-essential parameters from the model, with little-to-no drop in final performance. The primary motive of these approaches was to reduce the storage requirement, but they can also be used to speed up the model (LeCun et al., 1990; Han et al., 2015; Li et al., 2016a). The idea behind quantization methods is to reduce the number of bits used to represent the weights and the activations in a model; depending on the specific implementation, this can lead to reduced storage, reduced memory consumption and a general speed-up of the network (Fiesler et al., 1990; Soudry et al., 2014; Rastegari et al., 2016; Zhu et al., 2016). In low-rank factorization approaches, a given weight matrix is decomposed into the product of smaller ones, for example using singular value decomposition. When applied to fully connected layers this leads to reduced storage, while when applied to convolutional filters, it leads to faster inference (Choudhary et al., 2020).\nAll the above-mentioned techniques can successfully reduce the complexity of a given model, but are not designed to substitute specific operations. For example, specialized hardware devices might only support a small subset of all the operations offered by modern deep learning frameworks. In Knowledge Distillation approaches, a large model (the teacher) distills its knowledge into a smaller student architecture (Hinton et al., 2015b). This knowledge is assumed to be represented in the neural network's output distribution, hence in the standard KD framework the output distribution of the student network is optimized to match the teacher's output distribution for all the training data (Yun et al., 2020; Ahn et al., 2019; Yuan et al., 2020; Tian et al., 2020; Tung & Mori, 2019).\nThe work of Liu et al. (2019b) shows that the architecture of a student network is a contributing factor in its ability to learn from a given teacher. The authors propose combining KD with a traditional NAS pipeline, based on Reinforcement Learning, to find the optimal student. While this setup leads to good results, it does so at a huge computational cost, requiring over 5 days on 200 TPUs. Similarly, Gu & Tresp (2020) also look for the optimal student architecture, but do so by searching for a subgraph of the original teacher; therefore, it cannot be used to substitute unsupported operations.\nOrthogonal approaches, looking at how KD can improve NAS, are explored by Trofimov et al. (2020) and Li et al. (2020).
The first establishes that KD improves the correlation between different budgets in multi-fidelity methods, while the second uses the teacher supervision to search the architecture in a blockwise fashion." }, { "heading": "3 SEARCHING FOR THE OPTIMAL STUDENT NETWORK GENERATOR", "text": "The AutoKD framework (Fig. 1) combines Bayesian Optimization (BO), Neural Architecture Search (NAS) and Knowledge Distillation (KD). AutoKD defines a family of random network generators G(θ) parameterized by a hyperparameter θ, from which student networks are sampled. BO uses a surrogate model to propose generator hyperparameters, while students from these generators are trained with KD using a state-of-the-art teacher network. The student performances are evaluated and provided as feedback to update the BO surrogate model. To improve the BO surrogate model, the search procedure is iterated, until the best family of student networks G(θ∗) is selected. In this section we specify all components of AutoKD. See also Fig. 1 and Algorithm 1 for an overview." }, { "heading": "3.1 KNOWLEDGE DISTILLATION", "text": "Knowledge Distillation (KD; Hinton et al., 2015b) is a method to transfer, or distill, knowledge from one model to another—usually from a large model to a small one—such that the small student model learns to emulate the performance of the large teacher model. KD can be formalized as minimizing the objective function:\nL_KD = ∑_{x_i ∈ X} l(f_T(x_i), f_S(x_i)) (1)\nwhere l is the loss function that measures the difference in performance between the teacher f_T and the student f_S, x_i is the ith input, and y_i is the ith target (used by the cross-entropy term below). The conventional loss function l used in practice is a linear combination of the traditional cross-entropy loss L_CE and the Kullback–Leibler divergence L_KL between the temperature-scaled softmax outputs of f_T and f_S:\nl = (1 − α) L_CE + α L_KL(σ(f_T(x_i)/τ), σ(f_S(x_i)/τ)) (2)\nwhere σ is the softmax function, σ(z)_j = exp(z_j) / ∑_k exp(z_k), and τ is the softmax temperature. Hinton et al. (2015b) propose "softening" the probabilities using temperature scaling with τ ≥ 1. The parameter α represents the weight trade-off between the KL loss and the cross-entropy loss L_CE. The L_KD loss is characterized by the hyper-parameters α and τ; popular choices are τ ∈ {3, 4, 5} and α = 0.9 (Huang & Wang, 2017; Zagoruyko & Komodakis, 2016; Zhu et al., 2018). Numerous other methods (Polino et al., 2018; Huang & Wang, 2017; Tung & Mori, 2019) can be formulated as a form of Equation (2), but in this paper we use the conventional loss function l.
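For concreteness, a minimal PyTorch-style sketch of the conventional loss l in Equation (2) might look as follows (the reduction convention is our assumption; many implementations additionally rescale the KL term by τ², which Equation (2) omits):

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, alpha=0.9, tau=4.0):
    ce = F.cross_entropy(student_logits, targets)               # L_CE
    kl = F.kl_div(F.log_softmax(student_logits / tau, dim=1),   # L_KL between
                  F.softmax(teacher_logits / tau, dim=1),       # softened outputs
                  reduction='batchmean')
    return (1 - alpha) * ce + alpha * kl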
\nTraditionally in KD, both the teacher and the student network have predefined architectures. In contrast, AutoKD defines a search space of student network architectures and finds the optimal student by leveraging neural architecture search, as detailed below." }, { "heading": "3.2 STUDENT SEARCH VIA GENERATOR OPTIMIZATION", "text": "Most NAS methods for vision tasks employ a cell-based search space, where networks are built by stacking building blocks (cells) and the operations inside the cell are searched (Pham et al., 2018; Real et al., 2017; Liu et al., 2019a). This results in a single architecture being output by the NAS procedure. In contrast, more flexible search spaces have recently been proposed that are based on neural network generators (Xie et al., 2019; Ru et al., 2020). The generator hyperparameters define the characteristics of the family of networks being generated.\nAlgorithm 1: AutoKD\n1: Input: network generator G; BOHB hyperparameters (η, training budgets b_min and b_max); evaluation function f_KD(θ, b), which assesses the validation performance of a generator hyperparameter θ by sampling an architecture from G(θ) and training it with the KD loss L_KD (Equations 1 and 2) for b epochs.\n2: s_max = ⌊log_η(b_max/b_min)⌋;\n3: for s ∈ {s_max, s_max − 1, . . . , 0} do\n4: Sample M = ⌈(s_max + 1)/(s + 1) · η^s⌉ generator hyperparameters Θ = {θ_j}, j = 1, . . . , M, which maximise the ratio of kernel density estimators; ▷ (Falkner et al., 2018, Algorithm 2)\n5: Initialise b = η^{−s} · b_max; ▷ Run Successive Halving (Li et al., 2016b)\n6: while b ≤ b_max do\n7: L = {f_KD(θ, b) : θ ∈ Θ};\n8: Θ = top_k(Θ, L, ⌊|Θ|/η⌋);\n9: b = η · b;\n10: end while\n11: end for\n12: Obtain the best performing configuration θ∗ for the student network generator.\n13: Sample k architectures from G(θ∗), train them to completion, and obtain test performance.\nNAGO optimizes an architecture generator instead of a single architecture and proposes a hierarchical graph-based space which is highly expressive yet low-dimensional (Ru et al., 2020). Specifically, the search space of NAGO comprises three levels of graphs (where the node in the higher level is a lower-level graph). The top level is a graph of cells (Gtop) and each cell is itself a graph of middle-level modules (Gmid). Each module further corresponds to a graph of bottom-level operation units (Gbottom) such as a relu-conv3×3-bn triplet. NAGO adopts three random graph generators to define the connectivity/topology of Gtop, Gmid and Gbottom respectively, and thus is able to produce a wide variety of architectures with only a few generator hyperparameters. AutoKD employs NAGO as the NAS backbone for finding the optimal student family.\nOur pipeline consists of two phases. In the first phase (search), a multi-fidelity Bayesian optimisation technique, BOHB (Falkner et al., 2018), is employed to optimise the low-dimensional search space. BOHB uses partial evaluations with smaller-than-full budget to exclude bad configurations early in the search process, thus saving resources to evaluate more promising configurations. Given the same time constraint, BOHB evaluates many more configurations than conventional BO, which evaluates all configurations with the full budget. As Ru et al. (2020) empirically observe that good generator hyperparameters lead to a tight distribution of well-performing architectures (small performance standard deviation), we similarly assess the performance of a particular generator hyperparameter value with only one architecture sample. In the second phase (retrain), AutoKD uniformly samples multiple architectures from the optimal generator found during the search phase and evaluates them with longer training budgets to obtain the best architecture performance.\nInstead of the traditionally used cross-entropy loss, AutoKD uses the KD loss in Equation (2) to allow the sampled architecture to distill knowledge from its teacher. The KD hyperparameters, the temperature τ and the loss weight α, are included in the search space and optimized simultaneously with the architecture, to ensure that the student architectures can efficiently distill knowledge both from the designated teacher and the data distribution.
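An illustrative Python sketch of the Successive Halving inner loop of Algorithm 1 (lines 4-10) is given below; sample_configs stands in for BOHB's kernel-density-based sampling and evaluate plays the role of f_KD, so this is a simplification under those assumptions, not the exact implementation.

import math

def successive_halving(sample_configs, evaluate, s, s_max, b_max, eta=3):
    M = math.ceil((s_max + 1) / (s + 1) * eta ** s)
    thetas = sample_configs(M)              # candidate generator hyperparameters
    b = b_max * eta ** (-s)                 # initial (smallest) training budget
    while b <= b_max:
        scores = {theta: evaluate(theta, b) for theta in thetas}
        k = max(1, len(thetas) // eta)      # keep the top 1/eta fraction
        thetas = sorted(thetas, key=scores.get, reverse=True)[:k]
        b *= eta                            # promote survivors to a larger budget
    return thetas[0]                        # best generator hyperparameters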
A full overview of the framework is shown in Fig. 1." }, { "heading": "4 EXPERIMENTS", "text": "The first part of this section studies how KD can improve the performance of our chosen NAS backbone (NAGO). In the second part, we show how a family of students, when trained with KD (AutoKD), can emulate much larger teachers, significantly outperforming current hand-crafted architectures.\nExperimental setup. All of our experiments were run on the two, small-image, standard object recognition datasets CIFAR10 and CIFAR100 (Krizhevsky, 2009), as well as MIT67 for large-image scene recognition (Quattoni & Torralba, 2009). We limit the number of student network parameters to 4.0M for small-image tasks and 6.0M for large-image tasks. Following Liu et al. (2019b), we picked Inception-Resnet-V2 (Szegedy et al., 2016) as a teacher for the large image dataset. As that model could not be directly applied to small images, and to explore the use of a machine-designed network as a teacher, we decided to use the best DARTS (Liu et al., 2019a) architecture to guide the search on the CIFAR datasets. For ImageNet (Deng et al., 2009), we use a Inception-Resnet-V2 teacher. All experiments are run on NVIDIA Tesla V100 GPUs.\nNAS implementation. Our approach follows the search space and BO-based search protocol proposed by NAGO (Ru et al., 2020), as such our student architectures are based on hierarchical random graphs. Likewise, we employ a multi-fidelity evaluation scheme based on BOHB (Falkner et al., 2018) where candidates are trained for different epochs (30, 60 and 120) and then evaluated on the validation set. In total, only ∼300 models are trained during the search procedure: using 8 GPUs, this amounts to∼2.5 days of compute on the considered datasets. At the end of the search, we sample 8 architectures from the best found generator, train them for 600 epochs (with KD, using the optimal temperature and loss weight found during the search), and report the average performance (top-1 test accuracy). All remaining training parameters were set following Ru et al. (2020).\nIn AutoKD, we include the knowledge distillation hyperparameters, temperature and weight, in the search space, so that they are optimized alongside the architecture. The temperature ranges from 1 to 10, while the weight ranges from 0 to 1. Fig. 8 (Appendix) illustrates the importance of these hyperparameters when training a randomly sampled model, lending support to their inclusion." }, { "heading": "4.1 IMPACT OF KNOWLEDGE DISTILLATION ON NAS", "text": "To understand the contribution from KD, we first compare vanilla NAGO with AutoKD on CIFAR100. Fig. 2 shows the validation accuracy distribution at different epochs: clearly, using KD leads to better performing models. Indeed this can be seen in more detail in Fig. 3, where we show the performance of the best found model vs the wall clock time for each budget. It is worth mentioning that while the KD version takes longer (as it needs to compute the lessons on the fly), it consistently outperforms vanilla NAGO by a significant margin on all three datasets.\nNote that accuracies in Fig. 3 refer to the best models found during the search process, while Fig. 2 shows the histograms of all models evaluated during search, which are by definition lower in accuracy, on average. At the end of search, the model is retrained for longer (as commonly done in NAS methods), thus leading to the higher accuracies also shown in Figs. 
6, 7.\nNot only does AutoKD offer better absolute performance, but it also enables better multi-fidelity correlation, as can be seen in Fig. 4. For example, the correlation between 30 and 120 epochs improves from 0.49 to 0.82 when using KD, a result that is consistent with the findings in Trofimov et al. (2020). Note that multi-fidelity methods work under the assumption that the rankings at different budgets remain consistent, so as to guarantee that the best models progress to the next stage. A high correlation between the rankings is, as such, crucial.
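The correlation check described above can be sketched as follows (illustrative code; acc_30[i] and acc_120[i] denote the validation accuracies of the same sampled architecture trained for 30 and 120 epochs, respectively):

from scipy.stats import spearmanr

def budget_correlation(acc_30, acc_120):
    rho, _ = spearmanr(acc_30, acc_120)
    return rho   # close to 1 => rankings are preserved across budgets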
This has been shown to be true for non-KD NAS (Xie et al., 2019; Ru et al., 2020) and is here experimentally confirmed for KD-NAS as well. Changing the focus of optimization in this way releases computational resources that can be used to effectively optimize the global properties of the network. Additionally, the fact that a family of architectures can be characterized by a small number of hyperparameters makes the comparison of architectures more meaningful and interpretable. In the current implementation, AutoKD finds the optimal student family, in which all sampled architectures perform well; future work should explore how to fully exploit this distribution, possibly finetuning the network distribution to obtain an even better performing model.
To summarize, AutoKD offers a strategy to efficiently emulate large, state-of-the-art models with a fraction of the model size. Indeed, our family of searched students consistently outperforms the best hand-crafted students on CIFAR10, CIFAR100 and MIT67." } ]
2,020
null
SP:eea3b3ec32cce61d6b6df8574cf7ce9376f2230a
[ "The paper proposes a defense that works by adding multiple targeted adversarial perturbations (with random classes) on the input sample before classifying it. There is little theoretical reasoning for why this is a sensible defense. More importantly though, the defense is only evaluated in an oblivious threat model where the attacker is unaware of the defense mechanism. As has been argued again and again in the literature and in community guidelines such as [1, 2], the oblivious threat model is trivial and yields absolutely no insights into the effectiveness of a defense (e.g. you can just manipulate the backpropagated gradient in random ways to prevent any gradient-based attack from finding adversarial perturbations). The problem with oblivious attacks is clearly visible in the results section where more PGD iterations are less effective than fewer iterations - a clear red flag that the evaluation is ineffective. The paper also fails to point out that Pang et al. 2020, one of the methods they combine their method with, has been shown to be ineffective [2]." ]
Studies show that neural networks are susceptible to adversarial attacks. This exposes a potential threat to neural network-based artificial intelligence systems. We observe that the probability of the neural network outputting the correct result increases when small perturbations generated for non-predicted class labels are applied to adversarial examples. Based on this observation, we propose a method of counteracting adversarial perturbations to resist adversarial examples. In our method, we randomly select a number of class labels and generate small perturbations for these selected labels. The generated perturbations are added together and then clamped onto a specified space. The obtained perturbation is finally added to the adversarial example to counteract the adversarial perturbation contained in the example. The proposed method is applied at inference time and does not require retraining or finetuning the model. We validate the proposed method on CIFAR-10 and CIFAR-100. The experimental results demonstrate that our method effectively improves the defense performance of the baseline methods, especially against strong adversarial examples generated using more iterations.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Alhussein Fawzi", "Seyed-Mohsen Moosavi-Dezfooli", "Pascal Frossard" ], "title": "Robustness of classifiers: from adversarial to random noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Van Der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shengyuan Hu", "Tao Yu", "Chuan Guo", "Wei-Lun Chao", "Kilian Q Weinberger" ], "title": "A new defense against adversarial images: Turning a weakness into a strength", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Alex Lamb", "Vikas Verma", "Juho Kannala", "Yoshua Bengio" ], "title": "Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy", "venue": "In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tianyu Pang", "Chao Du", "Jun Zhu" ], "title": "Max-mahalanobis linear discriminant analysis networks", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Jun Zhu" ], "title": "Mixup inference: Better exploiting mixup to defend adversarial attacks", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jinhwan Park", "Yoonho Boo", "Iksoo Choi", "Sungho Shin", "Wonyong Sung" ], "title": "Fully neural network based speech recognition on mobile and embedded devices", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": 
[ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Pedro Tabacof", "Eduardo Valle" ], "title": "Exploring the space of adversarial images", "venue": "In 2016 International Joint Conference on Neural Networks (IJCNN),", "year": 2016 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Chaowei Xiao", "Sven Gowal", "Robert Stanforth", "Bo Li", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Towards stable and efficient training of verifiably robust neural networks", "venue": "International Conference on Learning Representations,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have become the dominant approach for various tasks including image understanding, natural language processing and speech recognition (He et al., 2016; Devlin et al., 2018; Park et al., 2018). However, recent studies demonstrate that neural networks are vulnerable to adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015). That is, these network models make an incorrect prediction with high confidence for inputs that are only slightly different from correctly predicted examples. This reveals a potential threat to neural network-based artificial intelligence systems, many of which have been widely deployed in real-world applications.\nThe adversarial vulnerability of neural networks reveals fundamental blind spots in the learning algorithms. Even with advanced learning and regularization techniques, neural networks are not learning the true underlying distribution of the training data, although they can obtain extraordinary performance on test sets. This phenomenon is now attracting much research attention. There have been increasing studies attempting to explain neural networks’ adversarial vulnerability and develop methods to resist adversarial examples (Madry et al., 2018; Zhang et al., 2020; Pang et al., 2020). While much progress has been made, most existing studies remain preliminary. Because it is difficult to construct a theoretical model to explain the adversarial perturbation generating process, defending against adversarial attacks is still a challenging task.\nExisting methods of resisting adversarial perturbations perform defense either at training time or inference time. Training time defense methods attempt to increase model capacity to improve adversarial robustness. One of the commonly used methods is adversarial training (Szegedy et al., 2014), in which a mixture of adversarial and clean examples are used to train the neural network. The adversarial training method can be seen as minimizing the worst case loss when the training example is perturbed by an adversary (Goodfellow et al., 2015). Adversarial training requires an adversary to generate adversarial examples in the training procedure. This can significantly increase the training time. Adversarial training also results in reduced performance on clean examples. Lamb et al. (2019) recently introduced interpolated adversarial training (IAT) that incorporates interpolation-based training into the adversarial training framework. The IAT method helps to improve performance on clean examples while maintaining adversarial robustness.\nAs to inference time defense methods, the main idea is to transfer adversarial perturbations such that the obtained inputs are no longer adversarial. Tabacof & Valle (2016) studied the use of random noise such as Gaussian noise and heavy-tail noise to resist adversarial perturbations. Xie et al. (2018) introduced to apply two randomization operations, i.e., random resizing and random zero padding, to inputs to improve adversarial robustness. Guo et al. (2018) investigated the use of random cropping and rescaling to transfer adversarial perturbations. More recently, Pang et al. (2020) proposed the mixup inference method that uses the interpolation between the input and a randomly selected clean image for inference. This method can shrink adversarial perturbations somewhat by the interpolation operation. 
Inference time defense methods can be directly applied to off-the-shelf network models without retraining or finetuning them. This can be much more efficient compared to training time defense methods.
Though adversarial perturbations are not readily perceivable by a human observer, it is suggested that adversarial examples are outside the natural image manifold (Hu et al., 2019). Previous studies have suggested that adversarial vulnerability is caused by the locally unstable behavior of classifiers on data manifolds (Fawzi et al., 2016; Pang et al., 2018). Pang et al. (2020) also suggested that adversarial perturbations have the locality property and could be resisted by breaking the locality. Existing inference time defense methods mainly use stochastic transformations such as mixup and random cropping and rescaling to break the locality. In this research, we observe that applying small perturbations generated for non-predicted class labels to the adversarial example helps to counteract the adversarial effect. Motivated by this observation, we propose a method that employs the use of small perturbations to counteract adversarial perturbations. In the proposed method, we generate small perturbations using local first-order gradient information for a number of randomly selected class labels. These small perturbations are added together and projected onto a specified space before finally being applied to the adversarial example. Our method can be used as a preliminary step before applying existing inference time defense methods.
To the best of our knowledge, this is the first research on using local first-order gradient information to resist adversarial perturbations. Successful attack methods such as projected gradient descent (PGD) (Madry et al., 2018) usually use local gradients to obtain adversarial perturbations. Compared to random transformations, it would be more effective to use local gradients to resist adversarial perturbations. We show through experiments that our method is effective and complementary to random transformation-based methods to improve defense performance.
The contributions of this paper can be summarized as follows:
• We propose a method that uses small first-order perturbations to defend against adversarial attacks. We show that our method is effective in counteracting adversarial perturbations and improving adversarial robustness.
• We evaluate our method on CIFAR-10 and CIFAR-100 against PGD attacks in different settings. The experimental results demonstrate that our method significantly improves the defense performance of the baseline methods against both untargeted and targeted attacks and that it performs well in resisting strong adversarial examples generated using more iterations." }, { "heading": "2 PRELIMINARY", "text": "" }, { "heading": "2.1 ADVERSARIAL EXAMPLES", "text": "We consider a neural network f(·) with parameters θ that outputs a vector of probabilities for L = {1, 2, ..., l} categories. In supervised learning, empirical risk minimization (ERM) (Vapnik, 1998) has been commonly used as the principle to optimize the parameters on a training set. Given an input x, the neural network makes a prediction c(x) = argmaxj∈L fj(x). The prediction is correct if c(x) is the same as the actual target c∗(x).
Unfortunately, ERM trained neural networks are vulnerable to adversarial examples, inputs formed by applying small but intentionally crafted perturbations (Szegedy et al., 2014; Madry et al., 2018).
That is, an adversarial example x′ is close to a clean example x under a distance metric, e.g., ℓ∞ distance, but the neural network outputs an incorrect result for the adversarial example x′ with high confidence. In most cases, the difference between the adversarial example and the clean example is not readily recognizable to humans." }, { "heading": "2.2 ATTACK METHODS", "text": "Existing attack methods can be categorized into white-box attacks and black-box attacks. We focus on defending against white-box attacks, wherein the adversary has full access to the network model including the architecture and weights. The fast gradient sign method (FGSM) (Goodfellow et al., 2015) and PGD are two successful optimization-based attack methods.
The FGSM method is a one-step attack method. It generates adversarial perturbations that yield the highest loss increase in the gradient sign direction. Let x be the input to a network model, y the label associated with x, and L(θ, x, y) be the loss function for training the neural network. The FGSM method generates a max-norm constrained perturbation as follows:
η = ε·sign(∇x L(θ, x, y)), (1)
where ε denotes the max-norm. This method was developed based on the view that the primary cause of neural networks’ adversarial vulnerability is their linear nature. The required gradient can be computed efficiently using backpropagation.
The PGD method is a multistep attack method that iteratively applies projected gradient descent on the negative loss function (Kurakin et al., 2016) as follows:
x_{t+1} = Π_{x+S}(x_t + α·sign(∇_{x_t} L(θ, x_t, y))), (2)
where α denotes the step size and Π denotes the projection operator that projects the perturbed input onto x + S. We consider projecting the perturbed input onto a predefined ℓ∞ ball around the original input. The PGD attack method can be seen as a multistep FGSM method. It is a much stronger adversary that reliably causes a variety of neural networks to misclassify their input.
3 METHODOLOGY
While many studies have been conducted on defending against adversarial attacks at inference time, these studies have not considered using local gradient information to resist adversarial perturbations. Previous work has suggested that the primary cause of neural networks’ adversarial vulnerability is their linear nature (Goodfellow et al., 2015). It would be more effective to use first-order gradient information to counteract adversarial perturbations such that the resulting perturbations no longer cause the model to make an incorrect prediction.
Adversarial perturbations are small crafted perturbations that slightly affect the visual quality of inputs but cause the neural network to misclassify the inputs in favor of an incorrect answer with high probability. We show that this effect can be counteracted by applying small perturbations generated using local first-order gradient information for class labels other than the predicted one. An illustration of this phenomenon is shown in Figure 1. We see that by adding perturbations generated for non-predicted labels to the input, the prediction probability for the correct category increases and that for the incorrect label is suppressed.
Algorithm 1 Counteracting adversarial perturbations using local first-order gradients.
Input: Neural network f; input x; step size α used in PGD to generate perturbations to counteract the adversarial perturbation.
Output: Prediction result for x.
1: Randomly select N class labels {l1, l2, ..., lN};
2: for i = 1 to N do
3:   ηi = PGD(li, α, step=1) // generate perturbation ηi for li using the one-step PGD method
4: end for
5: x = x + Π_C(∑_{i=1}^{N} ηi(x)) // C is an ℓ∞-bounded space
6: return f(x).
Based on this phenomenon, we propose a method of counteracting adversarial perturbations to improve adversarial robustness. In the proposed method, we generate small perturbations for a number of randomly selected class labels and apply these perturbations to the input to resist the adversarial perturbation. Let x be the input to a model, which can be an adversarial or clean example. We randomly select N class labels and generate small first-order perturbations for the N selected labels. These N small perturbations are added together and then projected onto an ℓ∞-bounded space before being applied to the input. This procedure can be formulated as follows:
x̃ = x + Π_C(∑_{i=1}^{N} ηi(x)), (3)
where ηi(x) denotes the small perturbation generated for the i-th selected class label and C = {t : ∥t − x∥∞ ≤ µ} is a µ-bounded ℓ∞ space. The one-step PGD method is used to generate small perturbations. This is the same as using the FGSM method and empirically achieves better performance than using multiple steps. The perturbations can be generated in an untargeted or targeted manner. The combined perturbation is projected onto the space C. This ensures that the obtained example is visually similar to the original one. We detail the procedure for counteracting adversarial perturbations in Algorithm 1.
Discussion and Analysis Adversarial examples expose underlying flaws in the training algorithms. While much progress has been made in defending against adversarial attacks, it is difficult to theoretically understand neural networks’ vulnerability to adversarial examples. Previous work (Athalye et al., 2018) has suggested that the adversarial perturbation δ can be obtained by solving the following optimization problem:
min ∥δ∥_p  s.t.  c(x + δ) ≠ c∗(x), ∥δ∥_p ≤ ξ, (4)
where ξ is a hyperparameter constraining the size of the perturbation. This problem can be effectively solved by gradient descent-based attack methods such as PGD and FGSM that reliably cause neural networks to output an incorrect result. These attack methods typically use local first-order gradients to find the optimal solution. Because state-of-the-art neural networks usually have many parameters, perturbations obtained with these attack methods may overfit to the inputs. Therefore, perturbing and transferring these adversarial perturbations could be an effective way to resist the adversarial effect. Unlike previous random transformation-based methods, we employ the use of local first-order gradient information to counteract the adversarial effect. We show that the proposed method is effective in improving defense performance, especially against strong adversarial examples generated using more iterations.
Let x0 be a clean example and δ be the adversarial perturbation. In our method, the following input is fed to the neural network:
x0 + δ · 1z(x0) + Π_C(∑_{i=1}^{N} ηi(x0)), where 1z(x0) = 0 if x0 is not subject to adversarial attack and 1 if x0 is subject to adversarial attack. (5)
The perturbation ηi generated to counteract the adversarial perturbation should be small; otherwise, it would be a new adversarial perturbation and would essentially have no effect in counteracting the adversarial perturbation.
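To make the procedure concrete, the following is a minimal PyTorch-style sketch of Algorithm 1, assuming targeted one-step perturbations (the default in our experiments) and omitting details such as clamping the result to the valid pixel range; all names are ours:

```python
import torch
import torch.nn.functional as F

def counteract(model, x, num_classes=10, n_labels=9, alpha=4/255, mu=8/255):
    """Counteract a possible adversarial perturbation in x (cf. Algorithm 1)."""
    labels = torch.randperm(num_classes)[:n_labels].tolist()  # randomly select N labels
    total = torch.zeros_like(x)
    for l in labels:
        x_in = x.clone().detach().requires_grad_(True)
        target = torch.full((x.size(0),), l, dtype=torch.long, device=x.device)
        loss = F.cross_entropy(model(x_in), target)
        grad = torch.autograd.grad(loss, x_in)[0]
        total = total - alpha * grad.sign()  # one-step targeted move toward label l
    # project the summed perturbation onto the l-infinity ball of radius mu (the space C)
    x_tilde = x + total.clamp(-mu, mu)
    return model(x_tilde)
```

The gradients for the N labels are independent, so in practice they can also be computed in one batched backward pass, which keeps the added inference cost small.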
Adversarial training, which has been shown to be effective in improving adversarial robustness, usually employs a first-order adversary like PGD to provide adversarial examples for training. These adversarial examples help to regularize the model to be resistant to adversarial perturbations. We show through experiments that our method is complementary to adversarial training to improve overall defense performance against both untargeted and targeted attacks.
The proposed method is applied at inference time. It can be directly applied to off-the-shelf models without retraining or finetuning them. The required gradient for generating small perturbations can be computed efficiently in parallel using backpropagation. This does not add much inference time." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "We conduct experiments on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). ResNet-50 (He et al., 2016) is used as the network model. We validate the proposed method on models trained using two methods: Mixup (Zhang et al., 2018) and IAT (Lamb et al., 2019). For fair performance comparison, we follow the same experimental setup as Pang et al. (2020) to train the models. The training procedure is performed for 200 epochs with a batch size of 64. The learning rate is initialized to 0.1 and divided by a factor of 10 at epoch 100 and 150. The values used for interpolation are sampled from Beta(1, 1) for both Mixup and IAT. The ratio between clean examples and adversarial examples used in IAT is set to 1:1. The untargeted PGD10 method with a step size of 2/255 and ε set to 8/255 is used to generate adversarial examples in IAT.
We experiment against both untargeted and targeted PGD attacks with different iterations. The values of ε and step size for the PGD attacks are set to 8/255 and 2/255, respectively. The one-step PGD method is used to generate perturbations to resist adversarial perturbations. Unless stated otherwise, perturbations used for defense purposes are generated in a targeted fashion. The step size for the one-step PGD and the number of randomly selected class labels are set to 4/255 and 9, respectively. The value of µ is set to 8/255. For each experiment, we run our model three times and report the mean accuracy. Our method is implemented in PyTorch (Paszke et al., 2017) and all experiments are conducted on one GPU.
Baselines Three methods that were recently developed for inference time defense are used as baselines. These three methods are Xie et al.’s (2018), Guo et al.’s (2018) and MI-OL (mixup inference with non-predicted labels) (Pang et al., 2020). We compare the performance of our method and the baselines and present results of the joint use of our method and the baselines to resist adversarial examples." }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "We validate the proposed method against oblivious-box attacks (Carlini & Wagner, 2017). That is, the adversary does not know about the existence of the defense mechanism, and adversarial examples are generated only based on targeted network models. We evaluate the performance of defenses on the entire test set. Table 1 and Table 2 report the quantitative results on CIFAR-10 and CIFAR-100, respectively, demonstrating the effectiveness of the proposed method in improving defense performance.
We see from Table 1 that the proposed method significantly helps to improve defense performance of the baseline methods against untargeted attacks, achieving at least 12.5% and 4.1% performance gains for Mixup and IAT trained models, respectively. For defending against targeted attacks, the proposed method performs well in combination with Xie et al.’s and Guo et al.’s for Mixup trained models, and it performs well together with Xie et al.’s for IAT trained models. It can be seen from Table 2 that, as on CIFAR-10, the proposed method also helps improve defense performance against untargeted attacks on CIFAR-100, achieving at least 6.4% and 1.6% performance improvements for Mixup and IAT trained models, respectively. For defending against targeted attacks, our method consistently helps to improve defense performance when applied on Xie et al.’s and Guo et al.’s methods. We can also make the following three observations from the quantitative results.
1. In most cases, the proposed method improves defense performance of the baseline methods. Especially for resisting untargeted attacks in different settings, our method significantly helps to improve defense performance. This shows that our method is complementary to the baselines to resist adversarial perturbations. Among the three baseline methods, the joint use of our method with Xie et al.’s and Guo et al.’s methods performs well compared to the MI-OL method. This could be because the perturbation used to counteract adversarial perturbations is reduced due to the interpolation operation in MI-OL.
2. The proposed method performs well against strong PGD attacks with more iterations. Previous studies show that adversarial perturbations generated using more iterations are difficult to resist. The results of the baselines also show that PGD attacks with more iterations result in reduced performance. It is worth noting that the proposed method achieves improved performance for defending against most strong PGD attacks. And for the remaining attacks, the use of more iterations results in performance comparable to using fewer iterations. The results show that adversarial perturbations generated using more iterations can be easily counteracted by using first-order perturbations.
3. For defending against targeted PGD50 and PGD200 attacks on CIFAR-10, our method together with Guo et al.’s on Mixup trained models achieves higher performance than that obtained on IAT trained models, improving the classification accuracy by 1.4% and 3.2%, respectively. Overall, our method together with Guo et al.’s achieves performance better than or comparable to pure IAT trained models. As far as we know, we are the first to outperform pure adversarial training-obtained models using only inference time defense methods. This shows that it is promising that adversarial training could be unnecessary if proper perturbations are applied to adversarial examples.
Next, we analyse the impact of the step size used in the one-step PGD method on defense performance. We experiment on CIFAR-10 and CIFAR-100 resisting both untargeted and targeted PGD10 attacks. The experimental results are reported in Figure 2. We see that the step size affects untargeted and targeted attacks differently. The performance improves as the step size increases from 1 to 8 for untargeted tasks on the two datasets.
For targeted attacks, the performance improves as the step size increases from 1 to 4 but starts to drop or plateau as the step size further increases.
We also analyse the impact of the number of selected class labels in our method on defense performance. Figure 3 demonstrates the results of resisting untargeted and targeted PGD10 attacks on CIFAR-10 and CIFAR-100. We see that the performance improves for both untargeted and targeted attacks as the number increases from 1 to 9 on CIFAR-10. On CIFAR-100, the performance also improves as the number increases from 1 to 9 but begins to drop or remain similar as the number further increases.
Discussion on type of defense perturbations In our experiments, small perturbations used to counteract the adversarial perturbation are generated in a targeted manner, except for targeted attacks on IAT trained models on CIFAR-100, where small perturbations are generated in an untargeted manner. Overall, untargeted adversarial perturbations can be effectively counteracted using perturbations generated in a targeted manner by our method. The results also suggest that adversarial training has an unstable behavior for different data distributions.
Discussion on number of steps used to generate defense perturbations The perturbations for defense purposes are generated using the one-step PGD method. We also experiment using multiple steps to generate perturbations for defense purposes. However, we find that this results in reduced performance in defending against adversarial examples. This could be because perturbations generated using multiple steps have adversarial effects and they do not help much to counteract the original adversarial perturbation.
To demonstrate the advantage of our method, we further compare the performance of different methods used together with Guo et al.’s. The results of defending against attacks on Mixup trained models are reported in Table 3. We see that although these methods, including Xie et al.’s, MI-OL, as well as random rotation and Gaussian noise, are effective in improving performance, our method outperforms these methods by a large margin, especially when resisting adversarial examples generated using more iterations.
Finally, we evaluate our method on clean examples. Table 4 compares the performance of our method and the baseline methods. We see that our method performs differently using different types of perturbations that are generated for defense purposes. Our method mostly performs very well on clean inputs compared to the baselines when the perturbations used for defense purposes are generated in an untargeted manner." }, { "heading": "5 CONCLUSION", "text": "We proposed a method of counteracting adversarial perturbations for defending against adversarial attacks. In our method, we generate small perturbations for a number of randomly selected class labels and apply these small perturbations to the input to counteract the adversarial perturbation. Unlike previous methods, our method employs the use of local first-order gradients for defense purposes and can effectively improve adversarial robustness. Our method is applied at inference time and is complementary to the adversarial training method to improve overall defense performance. We experimentally validated our method on CIFAR-10 and CIFAR-100 against both untargeted and targeted PGD attacks. We presented extensive results demonstrating that our method significantly improves the defense performance of the baseline methods.
We showed that our method performs well in resisting strong adversarial perturbations generated using more iterations, demonstrating the advantage of using local first-order gradients to resist adversarial perturbations. Notably, our method together with Guo et al.’s (2018) achieved better performance than that obtained on IAT trained models when resisting targeted PGD50 and PGD200 attacks. This shows that it is promising that adversarial training could be unnecessary if proper perturbations are applied to inputs." }, { "heading": "A APPENDIX", "text": "A.1 TRAINING USING MIXUP
In the Mixup method (Zhang et al., 2018), neural networks are trained by minimizing the following loss:
L(f) = (1/m) ∑_{i=1}^{m} ℓ(f(x̃i), ỹi), (6)
where ℓ is a loss function that penalizes the difference between the prediction and its actual target, and
x̃i = λxi + (1 − λ)xj, ỹi = λyi + (1 − λ)yj. (7)
(xi, yi) and (xj, yj) are randomly sampled from the training data, λ ∼ Beta(α, α), α ∈ (0, +∞). Training using Mixup empirically improves the generalization performance on clean samples and slightly improves robustness against adversarial examples.
A.2 ADVERSARIAL TRAINING
Adversarial training was introduced by Szegedy et al. (2014). In the adversarial training method, a mixture of adversarial and clean examples is used to train a neural network. Madry et al. (2018) formulated adversarially robust training of neural networks as the saddle point problem:
min_θ ρ(θ), where ρ(θ) = E_{(x,y)∼D}[max_{δ∈S} L(θ, x + δ, y)], (8)
where θ denotes the parameters of the neural network and S is the allowed set for perturbations. The inner maximization problem aims to find an adversarial version of a given data point x that achieves a high loss, while the outer minimization aims to find model parameters such that the adversarial loss given by the inner attack problem is minimized. PGD as a first-order adversary can reliably solve the inner maximization problem, even though the inner maximization is non-concave.
Lamb et al. (2019) proposed the interpolated adversarial training (IAT) method that combines Mixup with adversarial training. In the IAT method, the interpolation of adversarial examples and that of clean examples are used for training neural networks. Compared to adversarial training, IAT can achieve high accuracy on clean examples while maintaining adversarial robustness.
A.3 MORE TECHNICAL DETAILS
The hyperparameter settings used in our method on CIFAR-10 and CIFAR-100 are given in Table 5 and Table 6, respectively." } ]
2,020
null
SP:8badc3f75194e9780063af5a2f26448e41e733d4
[ "The technique is described in sufficient detail and the paper is easy to read. Experimental results involving three datasets: MNIST, street view house numbers, and German traffic signs. The experimental results show that the proposed technique finds significant failures in all datasets, including critical failure scenarios. After correction, the performance of the method improves. " ]
With the greater proliferation of machine learning models, the imperative of diagnosing and correcting bugs in models has become increasingly clear. As a route to better discover and fix model bugs, we propose failure scenarios: regions on the data manifold that are incorrectly classified by a model. We propose an end-to-end debugging framework called Defuse to use these regions for fixing faulty classifier predictions. The Defuse framework works in three steps. First, Defuse identifies many unrestricted adversarial examples—naturally occurring instances that are misclassified—using a generative model. Next, the procedure distills the misclassified data using clustering into failure scenarios. Last, the method corrects model behavior on the distilled scenarios through an optimization based approach. We illustrate the utility of our framework on a variety of image data sets. We find that Defuse identifies and resolves concerning predictions while maintaining model generalization.
[]
[ { "authors": [ "Antreas Antoniou", "Amos Storkey", "Harrison Edwards" ], "title": "Data augmentation generative adversarial networks", "venue": "International Conference on Artificial Neural Networks and Machine Learning,", "year": 2017 }, { "authors": [ "Christopher M. Bishop" ], "title": "Pattern Recognition and Machine Learning (Information Science and Statistics)", "venue": null, "year": 2006 }, { "authors": [ "Serena Booth", "Yilun Zhou", "Ankit Shah", "Julie Shah" ], "title": "Bayes-trex: Model transparency by example", "venue": null, "year": 2020 }, { "authors": [ "L. Engstrom", "Andrew Ilyas", "Shibani Santurkar", "D. Tsipras", "J. Steinhardt", "A. Madry" ], "title": "Identifying statistical bias in dataset replication", "venue": null, "year": 2020 }, { "authors": [ "Michael Feldman", "Sorelle A. Friedler", "John Moeller", "Carlos Scheidegger", "Suresh Venkatasubramanian" ], "title": "Certifying and removing disparate impact", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Kaiming He", "X. Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Irina Higgins", "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew M Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NeurIPS Deep Learning and Representation Learning Workshop,", "year": 2014 }, { "authors": [ "Daniel Kang", "D. Raghavan", "Peter Bailis", "M. Zaharia" ], "title": "Model assertions for debugging machine learning", "venue": "Debugging Machine Learning Models,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Tom Fletcher" ], "title": "Semi-supervised learning with gans: Manifold invariance with improved inference", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Karolina La Fors", "Bart Custers", "Esther Keymolen" ], "title": "Reassessing values for emerging big data technologies: integrating design-based and application-based approaches", "venue": "Ethics and Information Technology,", "year": 2019 }, { "authors": [ "Himabindu Lakkaraju", "Ece Kamar", "Rich Caruana", "Jure Leskovec" ], "title": "Faithful and customizable explanations of black box models", "venue": "In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2019 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. 
Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Edo Liberty", "Zohar Karnin", "Bing Xiang", "Laurence Rouesnel", "Baris Coskun", "Ramesh Nallapati", "Julio Delgado", "Amir Sadoughi", "Yury Astashonok", "Piali Das", "Can Balioglu", "Saswata Chakravarty", "Madhav Jha", "Philip Gautier", "David Arpin", "Tim Januschowski", "Valentin Flunkert", "Yuyang Wang", "Jan Gasthaus", "Lorenzo Stella", "Syama Rangapuram", "David Salinas", "Sebastian Schelter", "Alex Smola" ], "title": "Elastic machine learning algorithms in amazon sagemaker", "venue": null, "year": 2020 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Stefan Milz", "Tobias Rudiger", "Sebastian Suss" ], "title": "Aerial ganeration: Towards realistic data augmentation using conditional gans", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV) Workshops,", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Augustus Odena", "Catherine Olsson", "David Andersen", "Ian Goodfellow" ], "title": "TensorFuzz: Debugging neural networks with coverage-guided fuzzing", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Mnist example pytorch", "venue": null, "year": 2019 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do ImageNet classifiers generalize to ImageNet? volume", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": " why should i trust you?” explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Tongshuang Wu", "Carlos Guestrin", "Sameer Singh" ], "title": "Beyond accuracy: Behavioral testing of NLP models with CheckList", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4902–4912,", "year": 2020 }, { "authors": [ "Veit Sandfort", "Ke Yan", "Perry J. Pickhardt", "Ronald M. 
Summers" ], "title": "Data augmentation using generative adversarial networks (cyclegan) to improve generalizability in ct segmentation tasks", "venue": "Scientific Reports,", "year": 2019 }, { "authors": [ "Julien Simon" ], "title": "Amazon sagemaker model monitor – fully managed automatic monitoring for your machine learning models", "venue": "AWS News Blog,", "year": 2019 }, { "authors": [ "Dylan Slack", "Sorelle A. Friedler", "Emile Givental" ], "title": "Fairness warnings and fair-maml: Learning fairly with minimal data", "venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*),", "year": 2020 }, { "authors": [ "Dylan Slack", "Sophie Hilgard", "Sameer Singh", "Himabindu Lakkaraju" ], "title": "How much should i trust you? modeling uncertainty of black box explanations. AIES, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Constructing unrestricted adversarial examples with generative models", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Johannes Stallkamp", "Marc Schlipsing", "Jan Salmen", "Christian Igel" ], "title": "The German Traffic Sign Recognition Benchmark: A multi-class classification competition", "venue": "In IEEE International Joint Conference on Neural Networks,", "year": 2011 }, { "authors": [ "Erik B. Sudderth" ], "title": "Graphical models for visual object recognition and tracking", "venue": null, "year": 2006 }, { "authors": [ "P. Varma", "Bryan He", "Dan Iter", "Peng Xu", "R. Yu", "C.D. Sa", "Christopher Ré" ], "title": "Socratic learning: Augmenting generative models to incorporate latent subsets in training data", "venue": "arXiv: Learning,", "year": 2016 }, { "authors": [ "Paroma Varma", "Dan Iter", "Christopher De Sa", "Christopher Ré" ], "title": "Flipper: A systematic approach to debugging training sets. In Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA’17, New York, NY, USA, 2017", "venue": "Association for Computing Machinery. ISBN 9781450350297", "year": 2017 }, { "authors": [ "Tongshuang Wu", "Marco Tulio Ribeiro", "Jeffrey Heer", "Daniel Weld" ], "title": "Errudite: Scalable, reproducible, and testable error analysis", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Xuezhou Zhang", "Xiaojin Zhu", "Stephen J. Wright" ], "title": "Training set debugging using trusted items", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Zhengli Zhao", "Dheeru Dua", "Sameer Singh" ], "title": "Generating natural adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Pedregosa" ], "title": "We run our experiments with the default parameters and full", "venue": null, "year": 2011 } ]
[ { "heading": "1 INTRODUCTION", "text": "Debugging machine learning (ML) models is a critical part of the ML development life cycle. Uncovering bugs helps ML developers make important decisions about both development and deployment. In practice, much of debugging uses aggregate test statistics (like those in leader board style challenges [Rajpurkar et al. (2016)]) and continuous evaluation and monitoring post deployment [Liberty et al. (2020), Simon (2019)]. However, additional issues arise with over-reliance on test statistics. For instance, aggregate statistics like held out test accuracy are known to overestimate generalization performance [Recht et al. (2019)]. Further, statistics offer little insight nor remedy for specific model failures [Ribeiro et al. (2020); Wu et al. (2019)]. Last, reactive debugging of failures as they occur in production does little to mitigate harmful user experiences [La Fors et al. (2019)]. Several techniques exist for identifying undesirable behavior in machine learning models. These methods include explanations [Ribeiro et al. (2016); Slack et al. (2020b); Lakkaraju et al. (2019); Lundberg & Lee (2017)], fairness metrics [Feldman et al. (2015), Slack et al. (2020a)], data set replication [Recht et al. (2019); Engstrom et al. (2020)], and behavioral testing tools [Ribeiro et al. (2020)]. However, these techniques do not provide methods to remedy model bugs or require a high level of human supervision. To enable model designers to discover and correct model bugs beyond aggregate test statistics, we analyze unrestricted adversarial examples: instances on the data manifold that are misclassified [Song et al. (2018)]. We identify model bugs through diagnosing common patterns in unrestricted adversarial examples.\nIn this work, we propose Defuse: a technique for debugging classifiers through distilling1 unrestricted adversarial examples. Defuse works in three steps. First, Defuse identifies unrestricted adversarial examples by making small, semantically meaningful changes to input data using a variational autoencoder (VAE). If the classifier prediction deviates from the ground truth label on the altered instance, it returns the data instance as a potential model failure. This method employs similar techniques from [Zhao et al. (2018)]. Namely, small perturbations in the latent space of generative models can produce images that are misclassified. Second, Defuse distills the changes through clustering on the unrestricted adversarial example’s latent codes. In this way, Defuse diagnoses regions in the latent space that are problematic for the classifier. This method produces a set of\n1We mean distilling in the sense of “to extract the most important aspects of” and do not intend to invoke the knowledge distillation literature [Hinton et al. (2014)].\nclusters in the latent space where it is likely to find misclassified data. We call these localities failure scenarios. An annotator reviews the failure scenarios and assigns the correct label— one label per scenario. Third, Defuse corrects the model behavior on the discovered failure scenarios through optimization. Because we use a generative clustering model to describe the failure scenarios, we sample many unrestricted adversarial examples and finetune to fix the classifier. Critically, failure scenarios are highly useful for model debugging because they reveal high level patterns in the way the model fails. 
By understanding these consistent trends in model failures, model designers can more effectively understand problematic deployment scenarios for their models.
To illustrate the usefulness of failure scenarios, we run Defuse on a classifier trained on MNIST and provide an overview in figure 1. In the identification step (first pane in figure 1), Defuse generates unrestricted adversarial examples for the model. The red number in the upper right hand corner of each image is the classifier’s prediction. Although the classifier achieves high test set performance, we find naturally occurring examples that are classified incorrectly. Next, the method performs the distillation step (second pane in figure 1). The clustering model groups together similar failures for annotator labeling. We see that similar mistakes are grouped together. For instance, Defuse groups together a similar style of incorrectly classified eights in the first row of the second pane in figure 1. Next, Defuse receives annotator labels for each of the clusters.2 Last, we run the correction step using both the annotator labeled data and the original training data. We see that the model correctly classifies the images (third pane in figure 1). Importantly, the model maintains its predictive performance, scoring 99.1% accuracy after tuning. We see that Defuse enables model designers to both discover and correct naturally occurring model failures.
2We assign label 8 to the first row in the second pane of figure 1, label 0 to the second row, and label 6 to the third row.
We provide the necessary background for Defuse (§2). Next, we detail the three steps in Defuse: identification, distillation, and correction (§3). We then demonstrate the usefulness of Defuse on three image data sets: MNIST [LeCun et al. (2010)], the German traffic signs data set [Stallkamp et al. (2011)], and the Street view house numbers data set [Netzer et al. (2011)], and find that Defuse discovers and resolves critical bugs in high performance classifiers trained on these datasets (§4)." }, { "heading": "2 NOTATION AND BACKGROUND", "text": "In this section, we establish notation and background on unrestricted adversarial examples. Though unrestricted adversarial examples can be found in many domains, we focus on Defuse applied to image classification.
Unrestricted adversarial examples Let f : R^N → [0, 1]^C denote a classifier that accepts a data point x ∈ X, where X is the set of legitimate images. The classifier f returns the probability that x belongs to class c ∈ {1, ..., C}. Next, assume f is trained on a data set D consisting of d tuples (x, y) containing data point x and ground truth label y, using loss function L. Finally, suppose there exists an oracle o : X → {1, ..., C} that outputs a label for x. We define unrestricted adversarial examples as the set AN := {x ∈ X | o(x) ≠ f(x)} [Song et al. (2018)].
Variational Autoencoders (VAEs) In order to discover unrestricted adversarial examples, it is necessary to model the set of legitimate images. We use a VAE to create such a model. A VAE is composed of encoder and decoder neural networks. These networks are used to model the relationship between data x and latent factors z ∈ R^K. Where x is generated by some ground truth latent factors v ∈ R^M, we wish to train a model such that the learned generative factors closely resemble the true factors: p(x|v) ≈ p(x|z). In order to train such a model, we employ the β-VAE [Higgins et al. (2017)]. This technique produces an encoder qφ(z|x) that maps from data to latent codes and a decoder pθ(x|z) that maps from codes to data.
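As a point of reference, here is a minimal sketch of the β-VAE training objective (a reconstruction term plus a β-weighted KL term), assuming a diagonal Gaussian posterior and a Bernoulli likelihood over pixels; the exact architectures are those we defer to appendix B:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta):
    """Negative ELBO with a beta-weighted KL term (Higgins et al., 2017)."""
    # reconstruction term: Bernoulli likelihood over pixels, summed over the batch
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```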
" }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 FAILURE SCENARIOS", "text": "We begin by formalizing our notion of failure scenarios. Let z ∈ R^K be the latent code corresponding to image x ∈ X and qφ(·) : x → z be the encoder mapping the relationship between images and latent codes.
Definition 3.1. Failure scenario. Given a constant ε > 0, vector norm || · ||, and point z0, a failure scenario is a set of images AR = {x ∈ X | ε > ||qφ(x) − z0|| ∧ o(x) ≠ f(x)}.
Previous works that investigate unrestricted adversarial examples look for specific instances where the oracle and the model disagree [Song et al. (2018); Zhao et al. (2018)]. We instead look for regions in the latent space where this is the case. Because the latent space of the VAE tends to take on Gaussian form due to the prior, we can use Euclidean distance to define these regions. If we were to define failure scenarios on the original data manifold, we might need a much more complex distance function. Because it is likely too strict to assume the oracle and model disagree on every instance in such a region, we also introduce a relaxation.
Definition 3.2. Relaxed failure scenario. Given a constant ε > 0, vector norm || · ||, point z0, and threshold ρ, a relaxed failure scenario is a set of images Af = {x ∈ X | ε > ||qφ(x) − z0||} such that |{x ∈ Af | o(x) ≠ f(x)}| / |Af| > ρ.
In this work, we adopt the latter definition of failure scenarios. To concretize failure scenarios and provide evidence for their existence, we continue our MNIST example from figure 1. We plot the t-SNE embeddings of the latent codes of 10,000 images from the training set and 516 unrestricted adversarial examples created during the identification step in figure 2 (details of how we generate unrestricted adversarial examples are in section 3.2.1). We see that the unrestricted adversarial examples come from similar regions in the latent space." }, { "heading": "3.2 DEFUSE", "text": "In this section, we introduce Defuse: our procedure for identifying and correcting classifier performance on failure scenarios. First, we explain how we identify unrestricted adversarial examples using VAEs. Next, we describe our clustering approach that distills these instances into failure scenarios. Last, we introduce our approach to correct classifier predictions on the failure scenarios." }, { "heading": "3.2.1 IDENTIFYING UNRESTRICTED ADVERSARIAL EXAMPLES", "text": "This section describes the identification step in Defuse (first pane in figure 1). The aim of the identification step is to generate many unrestricted adversarial examples. In essence, we encode all the images from the training data. We perturb the latent codes with a small amount of noise drawn from a Beta distribution. We save instances that are classified differently from ground truth by f when decoded. By perturbing the latent codes with a small amount of noise, we expect the decoded instances to have small but semantically meaningful differences from the original instances. Thus, if the classifier prediction deviates on the perturbation, the instance is likely misclassified. We collect the unrestricted adversarial examples generated for a single instance into a per-instance set, and generating unrestricted adversarial examples over each instance x ∈ X produces the full set of unrestricted adversarial examples, containing the per-instance sets produced for each instance x.
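A minimal sketch of this identification loop for a single example, assuming the encoder returns the posterior mean and that the Beta noise is shifted to be zero-centered before being added to the latent code (a detail we do not pin down here); all names are ours:

```python
import torch

def identify(classifier, encoder, decoder, x, y, num_samples=10, a=50.0, b=50.0):
    """Search around the latent code of x for unrestricted adversarial examples."""
    failures = []
    z = encoder(x)  # latent code of the input (assumed: posterior mean)
    beta = torch.distributions.Beta(torch.tensor(a), torch.tensor(b))
    for _ in range(num_samples):
        # Beta(a, b) with a = b concentrates near 0.5; shift to be zero-centered
        noise = beta.sample(z.shape) - 0.5
        x_new = decoder(z + noise)
        if classifier(x_new).argmax(dim=1).item() != y:
            failures.append(x_new.detach())  # save as a potential model failure
    return failures
```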
Pseudo code of the algorithm for generating a single unrestricted adversarial example is given in algorithm 1 in appendix A.
Our technique is related to the method for generating natural adversarial examples from [Zhao et al. (2018)], a very similar but slightly different concept from unrestricted adversarial examples. The authors use a similar stochastic search method in the latent space of a GAN. They start with a small amount of noise and increase the magnitude of the noise until they find an unrestricted adversarial example. Thus, they save only the unrestricted adversarial examples which are minimally distant from a data point. They also save images that differ in prediction from the original decoded instance. Because we iterate over the entire data set, it is simpler to keep the level of noise fixed and sample a predetermined number of times. In addition, we save images that differ in ground truth label from the original decoded instance because we seek to debug a classifier. That is, if the original instance is misclassified, we wish to save it as a model failure." }, { "heading": "3.2.2 DISTILLING FAILURE SCENARIOS", "text": "This section describes the distillation step in Defuse (second pane of figure 1). The goal of the distillation step is to cluster the latent codes of the set of unrestricted adversarial examples in order to diagnose failure scenarios. We require our clustering method to (1) infer the correct number of clusters from the data and (2) be capable of generating instances of each cluster. We need to infer the number of clusters from the data because the number of failure scenarios is unknown ahead of time. Further, we must be capable of generating many instances from each cluster so that we have enough data to finetune on in order to correct the faulty model behavior. In addition, generating many failure instances enables model designers to see numerous examples from the failure scenarios, which encourages understanding of the model failure modes. Though any clustering method that fits this description could be used for distillation, we use a Gaussian mixture model (GMM) with a Dirichlet process prior. We use the Dirichlet process because it nicely describes the clustering problem where the number of mixtures is unknown beforehand, fulfilling our first criterion [Sudderth (2006)]. Additionally, because the model is generative, we can sample new instances, which satisfies our second criterion.
In practice, we use the truncated stick-breaking construction of the Dirichlet process, where K is the upper bound on the number of mixtures. The truncated stick-breaking construction simplifies inference, making computation more efficient [Sudderth (2006)]. The method outputs a set of clusters θj = (µj, Σj, πj) where j ∈ {1, ..., K}. The parameters µ and Σ describe the mean and covariance of a multivariate normal distribution and π indicates the cluster weight. To perform inference on the model, we employ expectation maximization (EM) as described in [Bishop (2006)] and use the implementation provided in [Pedregosa et al. (2011)]. Once we run EM and determine the parameter values, we throw away cluster components that are not used by the model. We fix some small ε and define the set of failure scenarios Λ generated at the distillation step as Λ := {(µj, Σj, πj) | πj > ε}.
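A sketch of this distillation step using the scikit-learn implementation referenced above (BayesianGaussianMixture, which fits the truncated stick-breaking model by variational inference); here Z denotes the matrix of latent codes of the unrestricted adversarial examples, and the function name is ours:

```python
from sklearn.mixture import BayesianGaussianMixture

def distill(Z, K=100, eps=0.01):
    """Fit a Dirichlet-process GMM to latent codes Z and keep heavy clusters."""
    dpgmm = BayesianGaussianMixture(
        n_components=K,  # truncation level: upper bound on the number of mixtures
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
    ).fit(Z)
    # failure scenarios Lambda: components whose mixture weight pi_j exceeds eps
    return [
        (mu, sigma, pi)
        for mu, sigma, pi in zip(dpgmm.means_, dpgmm.covariances_, dpgmm.weights_)
        if pi > eps
    ]
```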
" }, { "heading": "3.2.3 CORRECTING FAILURE SCENARIOS", "text": "Labeling First, an annotator assigns the correct label to the failure scenarios. For each failure scenario identified in Λ, we sample Q latent codes from z ∼ N(µj, τ · Σj). Here, τ ∈ R is a hyperparameter that controls the diversity of samples from the failure scenario. Because it could be possible for multiple ground truth classes to be present in a failure scenario, we set this parameter tight enough that the sampled instances are from the same class, to make labeling easier. We reconstruct the latent codes using the decoder pθ(x|z). Next, an annotator reviews the reconstructed instances from the scenario and decides whether the scenario constitutes a model failure. If so, the annotator assigns the correct label to all of the instances. The correct label constitutes a single label for all of the instances generated from the scenario. We repeat this process for each of the scenarios identified in Λ and produce a dataset of failure instances Df. Pseudocode for the procedure is given in Algorithm 2 in Appendix A.

Finetuning We finetune on the training data with an additional regularization term to fix the classifier performance on the failure scenarios. The regularization term is the cross entropy loss between the identified failure scenarios and the annotator label. Where CE is the cross entropy loss applied to the failure instances Df and λ is the hyperparameter for the regularization term, we optimize the following objective using gradient descent: F(D, Df) = L(D) + λ · CE(Df). This objective encourages the model to maintain its predictive performance on the original training data while encouraging the model to predict the failure instances correctly. The regularization term controls the pressure applied to the model to classify the failure instances correctly.
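A minimal PyTorch sketch of one gradient step on this objective; the batch handling and the optimizer choice are placeholders of ours.

import torch
import torch.nn.functional as F

def finetune_step(model, optimizer, batch, failure_batch, lam):
    # One step on F(D, Df) = L(D) + lam * CE(Df).
    x, y = batch                  # minibatch from the original training data D
    xf, yf = failure_batch        # annotator-labeled samples from Df
    loss = F.cross_entropy(model(x), y) + lam * F.cross_entropy(model(xf), yf)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()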
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 SETUP", "text": "Datasets We evaluate Defuse on three datasets: MNIST [LeCun et al. (2010)], the German Traffic Signs dataset [Stallkamp et al. (2011)], and the Street View House Numbers (SVHN) dataset [Netzer et al. (2011)]. MNIST consists of 60,000 32×32 handwritten digits for training and 10,000 digits for testing. The images are labeled corresponding to the digits 0–9. The German traffic signs dataset includes 26,640 training and 12,630 testing images of size 128×128. We randomly split the testing data in half to produce a validation and testing set. The images are labeled from 43 different classes to indicate the type of traffic sign. The SVHN dataset consists of 73,257 training and 26,032 testing images of size 32×32. The images include digits of house numbers from Google Street View with labels 0–9. We split the testing set in half to produce a validation and testing set. Models On MNIST, we train a CNN scoring 98.3% test set accuracy following the architecture from [Paszke et al. (2019)]. On German traffic signs and SVHN, we finetune a Resnet18 model pretrained on ImageNet [He et al. (2016)]. The German signs and SVHN models score 98.7% and 93.2% test accuracy respectively. We train a β-VAE on all available data from each dataset to model the set of legitimate images in Defuse. We use an Amazon EC2 P3 instance with a single NVIDIA Tesla V100 GPU for training. We follow architectures similar to [Higgins et al. (2017)]. We set the size of the latent dimension z to 10 for MNIST/SVHN and 15 for German signs. We provide our β-VAE architectures in Appendix B.

Defuse In the identification step, we fix the parameters of the Beta distribution noise a and b to a = b = 50.0 for MNIST and a = b = 75.0 for SVHN and German signs. We found these parameters to be good choices because they produce a very small amount of perturbation noise, making the decoded instance only slightly different from the original instance. During distillation, we set the upper bound on the number of components K to 100. We generally found the actual number of clusters to be much lower than this level, so this serves as an appropriate upper bound. We also fixed the weight threshold for clusters ε to 0.01 during distillation in order to remove clusters with very low weighting. We additionally randomly downsample the number of unrestricted adversarial examples to 50,000 to make inference of the GMM more efficient. For correction, we sample finetuning and testing sets consisting of 256 images each from every failure scenario. This number of samples captures the breadth of possible images in the scenario, so it is appropriate for tuning and evaluation. We use the finetuning set as the set of failure instances Df. We use the test set as held-out data for evaluating classifier performance on the failure scenarios after correction. During sampling, we fix the sample diversity τ to 0.5 for MNIST and 0.01 for SVHN and German signs because the samples from each of the failure scenarios appear to be in the same class using these values. We finetune over a range of λ's in order to find the best balance between training and failure scenario data. We use 3 epochs for MNIST and 5 for both SVHN and German signs because training converged within both these time frames. During finetuning, we select the model for each λ according to the highest training set accuracy for MNIST or validation set accuracy for SVHN and German traffic signs at the end of each finetuning epoch. We select the best model overall as the one with the highest training or validation performance over all λ's.

Annotator Labeling Because Defuse requires human supervision, we use Amazon SageMaker Ground Truth both to determine whether clusters generated in the distillation step are failure scenarios and to generate their correct labels. In order to determine whether clusters are failure scenarios, we sample 10 instances from each cluster in the distillation step. It is usually apparent within 10 instances that the classifier disagrees with many of the ground truth labels, and thus it is appropriate to label the cluster as a failure scenario. For example, in figure 3 it is generally clear within only a few examples that the classifier incorrectly predicts the data. As such, 10 instances is a reasonable choice. To reduce noise in the annotation process, we assign the same image to 5 different workers and take the majority annotated label as ground truth. The workers label the images using an interface that includes a single image and the possible labels for that task. We additionally instruct workers to select “None of the above” if the image does not belong to any class and discard these labels. For instance, the MNIST interface includes a single image and buttons for the digits 0–9 along with a “None of the above” button. We provide a screenshot of this interface in figure 14. If more than half (i.e. setting ρ = 0.5) of the worker-labeled instances disagree with the classifier predictions on the 10 instances, we call the cluster a failure scenario. We chose ρ = 0.5 because clusters are highly dense with incorrect predictions at this level, making them useful both for understanding model failures and worthwhile for correction. We take the majority prediction over each of the 10 ground truth labels as the label for the failure scenario. As an exception, annotating the German traffic signs data requires specific knowledge of traffic signs. The German traffic signs data ranges across 43 different types of traffic signs, and it is not reasonable to assume annotators have enough familiarity with this data to label it accurately. For this dataset, we, the authors, reviewed the distilled clusters and determined which clusters constituted failure scenarios. We labeled a cluster a failure scenario if half the instances appeared to be misclassified." },
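The decision rule and scenario label just described fit in a few lines; a sketch, assuming “None of the above” votes have already been discarded:

from collections import Counter

def is_failure_scenario(worker_labels, classifier_preds, rho=0.5):
    # worker_labels: one list of (up to 5) worker votes per sampled instance.
    majority = [Counter(votes).most_common(1)[0][0] for votes in worker_labels]
    disagree = sum(m != p for m, p in zip(majority, classifier_preds))
    is_failure = disagree / len(majority) > rho          # rho = 0.5 as above
    scenario_label = Counter(majority).most_common(1)[0][0]
    return is_failure, scenario_label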
{ "heading": "4.2 ILLUSTRATIVE FAILURE SCENARIO EXAMPLES", "text": "We demonstrate the potential of Defuse for identifying critical model bugs. We review failure scenarios produced on the three datasets we consider. All together, Defuse produces 19 failure scenarios for MNIST, 6 for SVHN, and 8 for German signs. For each dataset, we provide samples from three failure scenarios in figure 3. The failure scenarios include numerous mislabeled examples, and each failure scenario is composed of mislabeled examples of a similar style. For example, in MNIST, the failure scenario in the upper left-hand corner of figure 3 includes a similar style of 4's that are generally predicted incorrectly. The same is true for the failure scenarios in the center and right columns, where a certain style of 2's and 6's is mistaken. The failure scenarios generally include images that seem difficult to classify. For instance, the misclassified 6's are quite thin, making them appear like 1's in some cases. There are similar trends in SVHN and German signs. In SVHN, particular types of 5's and 8's are misclassified. The same is true in German signs, where styles of 50km/h and 30km/h signs are predicted incorrectly. Generally, these results reveal important bugs in each of the classifiers. It is clear from the MNIST example, for instance, that very skinny 6's are challenging for the classifier to predict correctly. Further, the German signs classifier has a difficult time with 50km/h signs and tends to frequently mistake them for 80km/h signs. We provide further samples from other failure scenarios in Appendix D. These results clearly demonstrate that Defuse reveals insightful model bugs which are useful for model designers to understand." }, { "heading": "4.3 CORRECTING FAILURE SCENARIOS", "text": "We show that Defuse resolves the failure scenarios while maintaining model generalization on the test set. To perform this analysis, we assess accuracy on both the failure scenario test data and the test set after correction. It is important for classifier accuracy to improve on the failure scenario data in order to correct the bugs discovered while running Defuse. At the same time, the classifier accuracy on the test set should stay at a similar level or improve, indicating that model generalization according to the test set is still strong. We compare Defuse against finetuning only on the unrestricted adversarial examples labeled by annotators. We expect this baseline to be reasonable because related works which focus on robustness to classic adversarial attacks demonstrate that tuning directly on the adversarial examples is effective [Zhang et al. (2019)]. We finetune on the unrestricted adversarial examples sweeping over a range of different λ's in the same way as Defuse, described in section 4.1.
We use this baseline for MNIST and SVHN but not German signs because we, the authors, assigned the failure scenarios for that dataset; thus, we do not have ground truth labels for its unrestricted adversarial examples.

We provide an overview of the models before finetuning, finetuning with the unrestricted adversarial examples, and using Defuse in figure 4. Defuse scores highly on the failure scenario data after correction compared to before finetuning. There is only marginal improvement when finetuning on the unrestricted adversarial examples. These results indicate Defuse corrects the faulty model performance on the identified failure scenarios. Further, we see the clustering step in Defuse is critical to its success because of the technique's superior performance compared to finetuning on the unrestricted adversarial examples. In addition, there are minor effects on test set performance during finetuning. The test set accuracy increases slightly for MNIST and decreases marginally for SVHN and German signs, both when tuning on the unrestricted adversarial examples and when using Defuse. Though the test set performance changes marginally, the increased performance on the failure scenarios demonstrates Defuse's capacity to correct important model errors. Further, we plot the relationship between test set accuracy and failure scenario test accuracy in figure 5. We generally see there is an appropriate λ for each model where there is both high test set performance and accuracy on the failure scenarios. All in all, these results indicate Defuse serves as an effective method for correcting specific cases of faulty classifier performance while maintaining model generalization." }, { "heading": "4.4 ANNOTATOR AGREEMENT", "text": "Because we rely on annotators to provide the ground truth labels for the unrestricted adversarial examples, we investigate the agreement between the annotators during labeling. It is important for the annotators to agree on the labels for the unrestricted adversarial examples so that we can have high confidence our evaluation is based on accurately labeled data. We evaluate annotator agreement by assessing the percentage of annotators that voted for the majority label prediction of an unrestricted adversarial example, across all the annotated examples. This metric is high when the annotators are in consensus and low when only a few annotators constitute the majority vote. We provide the annotator agreement on MNIST and SVHN in figure 6, broken down into failure scenario data, non-failure scenario data, and their combination. Interestingly, the failure scenario data has slightly lower annotator agreement, indicating these tend to be more ambiguous examples. Further, there is lower agreement on SVHN than MNIST, likely because this data is more complex. All in all, there is generally high annotator agreement across all the data." }, { "heading": "5 RELATED WORK", "text": "A number of related approaches for improving classifier performance use data created from generative models, mostly generative adversarial networks (GANs) [Sandfort et al. (2019); Milz et al. (2018); Antoniou et al. (2017)]. These methods use GANs to generate instances from classes that are underrepresented in the training data to improve generalization performance. Additional methods use generative models for semi-supervised learning [Kingma et al. (2014); Varma et al. (2016); Kumar et al. (2017); Dumoulin et al. (2016)].
Though these methods are similar in nature to the correction step of our work, a key difference is that Defuse focuses on summarizing and presenting high-level model failures. Also, [Varma et al. (2017)] provide a system to debug data generated from a GAN when the training set may be inaccurate. Though similar, we ultimately use a generative model to debug a classifier and do not focus on the generative model itself. Last, similar to [Song et al. (2018); Zhao et al. (2018)], [Booth et al. (2020)] provide a method to generate highly confident misclassified instances.

Related to debugging models, [Kang et al. (2018)] focus on model assertions that flag failures during production. Also, [Zhang et al. (2018)] investigate debugging the training set for incorrectly labeled instances. We focus on preemptively identifying model bugs and do not focus on incorrectly labeled test set instances. Additionally, [Ribeiro et al. (2020)] propose a set of behavioral testing tools that help model designers find bugs in NLP models. This technique requires a high level of supervision and thus might not be appropriate in some settings. Last, [Odena et al. (2019)] provide a technique to debug neural networks through perturbing data inputs with various types of noise. By leveraging unrestricted adversarial examples, we distill high-level patterns in critical and naturally occurring model bugs. This technique requires minimal human supervision while presenting important types of model errors to designers." }, { "heading": "6 CONCLUSION", "text": "In this paper, we present Defuse: a method that generates and aggregates unrestricted adversarial examples to debug classifiers. Though unrestricted adversarial examples have been proposed in previous works, we harness such examples for the purpose of debugging classifiers. We accomplish this task by identifying failure scenarios: regions in the latent space of a VAE with many unrestricted adversarial examples. On a variety of datasets, we find that samples from failure scenarios are useful in a number of ways. First, failure scenarios are informative for understanding the ways certain models fail. Second, the generative aspect of failure scenarios is very useful for correcting them. In our experimental results, we show that these failure scenarios include critical model issues for classifiers with real-world impacts (i.e. traffic sign classification) and verify our results using ground truth annotator labels. We demonstrate that Defuse successfully resolves these issues. Although Defuse identifies important errors in classifiers, the technique requires a minimal level of human supervision: namely, the failure scenarios must be reviewed before correction. In the future, it will be crucial to investigate automatic ways of reviewing failure scenarios."
}, { "heading": "A DEFUSE PSUEDO CODE", "text": "In algorithm 2, Correct(·) and Label(·) are the steps where the annotator decides if the scenario warrants correction and the annotator label for the failure scenario.\nAlgorithm 1 Identification Step 1: procedure IDENTIFY(f, p, q, x, y, a, b) 2: := {} 3: µ, := q (x) 4: for i 2 {1, ..., Q} do 5: ✏ := [Beta(a, b)1, 6: ...,Beta(a, b)M ] 7: xdecoded := p✓(µ+ ✏) 8: if y 6= f(xdecoded) then 9: := [ xdecoded 10: end if 11: end for 12: Return 13: end procedure\nAlgorithm 2 Labeling Step 1: procedure LABEL SCENARIOS(Q,⇤, p, q, ⌧ ) 2: Df := {} 3: for (µ, ,⇡) 2 ⇤ do 4: Xd := {} 5: for i 2 {1, .., Q} do 6: Xd := Xd [ q (N (µ, ⌧ · )) 7: end for 8: if Correct(Xd) then 9: Df := Df [ {Xd,Label(Xd)} 10: end if 11: end for 12: Return S Df 13: end procedure" }, { "heading": "B TRAINING DETAILS", "text": "B.1 GMM DETAILS\nIn all experiments, we use the implementation of Gaussian mixture model with dirichlet process prior from [Pedregosa et al. (2011)]. We run our experiments with the default parameters and full component covariance.\nB.2 MNIST DETAILS\nModel details We train a CNN on the MNIST data set using the architecture in figure 7. We used the Adadelta optimizer with the learning rate set to 1. We trained for 5 epochs with a batch size of 64.\n-VAE training details We train a -VAE on MNIST using the architectures in figure 8 and 9. We set to 4. We trained for 800 epochs using the Adam optimizer with a learning rate of 0.001, a minibatch size of 2048, and set to 0.4. We also applied a linear annealing schedule on the KL-Divergence for 500 optimization steps. We set z to have 10 dimensions.\nIdentification We performed identification with Q set to 500. We set a and b both to 50. We ran identification over the entire training set. Last, we limited the max allowable size of to 100.\nDistillation We ran the distillation step setting K, the upper bound on the number of mixtures, to 100. We fixed ✏ to 0.01 and discarded clusters with mixing proportions less than this value. This left 44 possible scenarios. We set ⌧ to 0.5 during review. We used Amazon Sagemaker Ground Truth\nto determine failure scenarios and labels. The labeling procedure is described in section 4.1. This produced 19 failure scenarios.\nCorrection We sampled 256 images from each of the failure scenarios for both finetuning and testing. We finetuned with minibatch size of 256, the Adam optimizer, and learning rate set to 0.001. We swept over a range of correction regularization ’s consisting of [1e 10, 1e 9, 1e 8, 1e 7, 1e 6, 1e 5, 1e 4, 1e 3, 1e 2, 1e 1, 1, 2, 5, 10, 20, 100, 1000] and finetuned for 3 epochs on each.\nB.3 GERMAN SIGNS DATASET DETAILS\nDataset The data consists of 26640 training images and 12630 testing images consisting of 43 different types of traffic signs. We randomly split the testing data in half to produce 6315 testing and validation images. Additionally, we resize the images to 128x128 pixels.\nClassifier f We fine-tuned the ResNet18 model for 20 epochs using Adam with the cross entropy loss, learning rate of 0.001, batch size of 256 on the training data set, and assessed the validation accuracy at the end of each epoch. We saved the model with the highest validation accuracy.\n-VAE training details We trained for 800 epochs using the Adam optimizer with a learning rate of 0.001, a minibatch size of 2048, and set to 4. We also applied a linear annealing schedule on the KL-Divergence for 500 optimization steps. 
Identification We performed identification with Q set to 100. We set a and b both to 75.

Distillation We ran the distillation step setting K to 100. We fixed ε to 0.01 and discarded clusters with mixing proportions less than this value. This left 38 possible scenarios. We set τ to 0.01 during review. We determined that 8 of these scenarios were particularly concerning.

Correction We finetuned with a minibatch size of 256, the Adam optimizer, and learning rate set to 0.001. We swept over a range of correction regularization λ's consisting of [1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 2, 5, 10, 20, 100, 1000] and finetuned for 5 epochs on each.

B.4 SVHN DETAILS

Dataset The dataset consists of 73,257 training and 26,032 testing images. We also randomly split the testing data to create a validation dataset. Thus, the final validation and testing sets correspond to 13,016 images each.

Classifier f We fine-tuned for 10 epochs using the Adam optimizer, learning rate set to 0.001, and a batch size of 2048. We chose the model which scored the best validation accuracy when measured at the end of each epoch.

β-VAE training details We trained the β-VAE for 400 epochs using the Adam optimizer, learning rate 0.001, and minibatch size of 2048. We set β to 4 and applied a linear annealing schedule on the KL-divergence for 5000 optimization steps. We set z to have 10 dimensions.

Identification We set Q to 100. We also set the maximum size of Γ to 10. We set a and b to 75.

Distillation We set K to 100. We fixed ε to 0.01. The distillation step identified 32 plausible failure scenarios. The annotators deemed 6 of these to be failure scenarios. We set τ to 0.01 during review.

Correction We set a minibatch size of 2048, the Adam optimizer, and learning rate set to 0.001. We considered a range of λ's: [1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 2, 5, 10, 20, 100, 1000]. We finetuned for 5 epochs.

B.5 T-SNE EXAMPLE DETAILS

We run t-SNE on 10,000 examples from the training data and 516 unrestricted adversarial examples, setting perplexity to 30. For the sake of clarity, we do not include outliers from the unrestricted adversarial examples. Namely, we only include unrestricted adversarial examples with > 1% probability of being in one of the MNIST failure scenario clusters." }, { "heading": "C ANNOTATOR INTERFACE", "text": "We provide a screenshot of the annotator interface in figure 14." }, { "heading": "D ADDITIONAL EXPERIMENTAL RESULTS", "text": "D.1 ADDITIONAL SAMPLES FROM MNIST FAILURE SCENARIOS

We provide additional examples from 10 randomly selected (no cherry-picking) MNIST failure scenarios. We include the annotator consensus label for each failure scenario.

D.2 ADDITIONAL SAMPLES FROM GERMAN SIGNS FAILURE SCENARIOS

We provide samples from all of the German signs failure scenarios. We provide the names of the class labels in figure 25. For each failure scenario, we indicate our assigned class label in the caption and the classifier predictions in the upper right-hand corner of the image.

D.3 ADDITIONAL SAMPLES FROM SVHN FAILURE SCENARIOS

We provide additional samples from each of the SVHN failure scenarios. The digit in the upper left-hand corner is the classifier-predicted label. The caption includes the Ground Truth worker labels." } ]
2020
DEFUSE: DEBUGGING CLASSIFIERS THROUGH DIS-
SP:bbaedd5d8e7591fa3a5587260bf19f3d05779976
[ "The paper proposes a model for *variable selection* in *Mixed Integer Programming (MIP)* solvers. While this problem is clearly a sequential decision making task, modeling it as an MDP is challenging. As a result, existing works use other approaches such as ranking or imitation learning. This paper overcomes these challenges by introducing a new problem representation. " ]
Branch-and-Bound (B&B) is a general and widely used algorithm paradigm for solving Mixed Integer Programming (MIP). Recently there has been a surge of interest in designing learning-based branching policies as a fast approximation of strong branching, a human-designed heuristic. In this work, we argue that strong branching is not a good expert to imitate because of its poor decision quality once the side effects it has in solving the branch linear programs are turned off. To obtain policies that are more effective and less myopic than a local heuristic, we formulate the branching process in MIP as reinforcement learning (RL) and design a novel set representation and distance function for the B&B process associated with a policy. Based on such representation, we develop a novelty search evolutionary strategy for optimizing the policy. Across a range of NP-hard problems, our trained RL agent significantly outperforms expert-designed branching rules and the state-of-the-art learning-based branching methods in terms of both speed and effectiveness. Our results suggest that with carefully designed policy networks and learning algorithms, reinforcement learning has the potential to advance algorithms for solving MIPs.
[]
[ { "authors": [ "Tobias Achterberg" ], "title": "Conflict analysis in mixed integer programming", "venue": "Discrete Optimization4(1):,", "year": 2007 }, { "authors": [ "Tobias Achterberg" ], "title": "Scip: solving constraint integer programs", "venue": "Mathematical Programming Computation1", "year": 2009 }, { "authors": [ "Tobias Achterberg", "Timo Berthold" ], "title": "Hybrid branching. In International Conference on AI and OR Techniques in Constriant Programming for Combinatorial Optimization Problems", "venue": null, "year": 2009 }, { "authors": [ "Tobias Achterberg", "Roland Wunderling" ], "title": "Mixed integer programming: Analyzing 12 years of progress. In Facets of combinatorial optimization pp", "venue": null, "year": 2013 }, { "authors": [ "Tobias Achterberg", "Thorsten Koch", "Alexander Martin" ], "title": "Branching rules revisited", "venue": "Operations Research Letters33(1):,", "year": 2005 }, { "authors": [ "Réka Albert", "Albert-László Barabási" ], "title": "Statistical mechanics of complex networks", "venue": "Reviews of modern physics74(1):,", "year": 2002 }, { "authors": [ "Alejandro Marcos Alvarez", "Quentin Louveaux", "Louis Wehenkel" ], "title": "A machine learning-based approximation of strong branching", "venue": "INFORMS Journal on Computing29(1):,", "year": 2017 }, { "authors": [ "Karl J Astrom" ], "title": "Optimal control of markov processes with incomplete state information", "venue": "Journal of mathematical analysis and applications10(1):,", "year": 1965 }, { "authors": [ "Egon Balas", "Andrew Ho" ], "title": "Set covering algorithms using cutting planes, heuristics, and subgradient optimization: a computational study", "venue": "In Combinatorial Optimization pp. . Springer,", "year": 1980 }, { "authors": [ "Cynthia Barnhart", "Amy M Cohn", "Ellis L Johnson", "Diego Klabjan", "George L Nemhauser", "Pamela H Vance" ], "title": "Airline crew scheduling", "venue": "In Handbook of transportation science pp. . Springer,", "year": 2003 }, { "authors": [ "Yoshua Bengio", "Andrea Lodi", "Antoine Prouvost" ], "title": "Machine learning for combinatorial optimization: a methodological tour d’horizon", "venue": "European Journal of Operational Research,", "year": 2020 }, { "authors": [ "Timo Berthold" ], "title": "Primal heuristics for mixed integer programs", "venue": null, "year": 2006 }, { "authors": [ "Edoardo Conti", "Vashisht Madhavan", "Felipe Petroski Such", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of noveltyseeking agents", "venue": "In Advances in neural information processing systems", "year": 2018 }, { "authors": [ "Gérard Cornuéjols", "Ranjani Sridharan", "Jean-Michel" ], "title": "Thizy. A comparison of heuristics and relaxations for the capacitated plant location problem", "venue": "European journal of operational research50(3):,", "year": 1991 }, { "authors": [ "Giovanni Di Liberto", "Serdar Kadioglu", "Kevin Leo", "Yuri Malitsky. 
Dash" ], "title": "Dynamic approach for switching heuristics", "venue": "European Journal of Operational", "year": 2016 }, { "authors": [ "Marc Etheve", "Zacharie Alès", "Côme Bissuel", "Olivier Juan", "Safia Kedad-Sidhoum" ], "title": "Reinforcement learning for variable selection in a branch and bound algorithm", "venue": "arXiv preprint arXiv:2005.10026,", "year": 2020 }, { "authors": [ "Gerald Gamrath", "Daniel Anderson", "Ksenia Bestuzheva", "Wei-Kun Chen", "Leon Eifler", "Maxime Gasse", "Patrick Gemander", "Ambros Gleixner", "Leona Gottwald", "Katrin Halbig" ], "title": "The scip optimization suite", "venue": null, "year": 2020 }, { "authors": [ "Maxime Gasse", "Didier Chételat", "Nicola Ferroni", "Laurent Charlin", "Andrea Lodi" ], "title": "Exact combinatorial optimization with graph convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Christoph Hansknecht", "Imke Joormann", "Sebastian Stiller" ], "title": "Cuts, primal heuristics, and learning to branch for the time-dependent traveling salesman problem", "venue": "arXiv preprint arXiv:1805.01415,", "year": 2018 }, { "authors": [ "Elias Boutros Khalil", "Pierre Le Bodic", "Le Song", "George Nemhauser", "Bistra Dilkina" ], "title": "Learning to branch in mixed integer programming", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Rafael A Melo", "Laurence A Wolsey" ], "title": "Mip formulations and heuristics for two-level productiontransportation problems", "venue": "Computers & Operations", "year": 2012 }, { "authors": [ "Robert Rodosek", "Mark G Wallace", "Mozafar T Hajian" ], "title": "A new approach to integrating mixed integer programming and constraint logicprogramming", "venue": "Annals of Operations", "year": 1999 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Minjie Wang", "Lingfan Yu", "Da Zheng", "Quan Gan", "Yu Gai", "Zihao Ye", "Mufei Li", "Jinjing Zhou", "Qi Huang", "Chao Ma" ], "title": "Deep graph library: Towards efficient and scalable deep learning on graphs", "venue": "arXiv preprint arXiv:1909.01315,", "year": 2019 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Jan Peters", "Juergen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)", "year": 2008 }, { "authors": [ "Laurence A Wolsey", "George L Nemhauser" ], "title": "Integer and combinatorial optimization, volume 55", "venue": null, "year": 1999 } ]
[ { "heading": "1 INTRODUCTION", "text": "Mixed Integer Programming (MIP) has been applied widely in many real-world problems, such as scheduling (Barnhart et al., 2003) and transportation (Melo & Wolsey, 2012). Branch and Bound (B&B) is a general and widely used paradigm for solving MIP problems (Wolsey & Nemhauser, 1999). B&B recursively partitions the solution space into a search tree and compute relaxation bounds along the way to prune subtrees that provably can not contain an optimal solution. This iterative process requires sequential decision makings: node selection: selecting the next solution space to evaluate, variable selection: selecting the variable by which to partition the solution space (Achterberg & Berthold, 2009). In this work, we focus on learning a variable selection strategy, which is the core of the B&B algorithm (Achterberg & Wunderling, 2013).\nVery often, instances from the same MIP problem family are solved repeatedly in industry, which gives rise to the opportunity for learning to improve the variable selection policy (Bengio et al., 2020). Based on the human-designed heuristics, Di Liberto et al. (2016) learn a classifier that dynamically selects an existing rule to perform variable selection; Balcan et al. (2018) consider a weighted score of multiple heuristics and analyse the sample complexity of finding such a good weight. The first step towards learning a variable selection policy was taken by Khalil et al. (2016), who learn an instance customized policy in an online fashion, as well as Alvarez et al. (2017) and Hansknecht et al. (2018) who learn a branching rule offline on a collection of similar instances. Those methods need extensively feature engineering and require strong domain knowledge in MIP. To avoid that, Gasse et al. (2019) propose a graph convolutional neural network approach to obtain competitive performance, only requiring raw features provided by the solver. In each case, the branching policy is learned by imitating the decision of strong branching as it consistently leads to the smallest B&B trees empirically (Achterberg et al., 2005).\nIn this work, we argue that strong branching is not a good expert to imitate. The excellent performance (the smallest B&B tree) of strong branching relies mostly on the information obtained in solving branch linear programming (LP) rather than the decision it makes. This factor prevents learning a good policy by imitating only the decision made by strong branching. To obtain more effective and non-myopic policies,i.e. minimizing the total solving nodes rather than maximizing the immediate duality gap gap, we use reinforcement learning (RL) and model the variable selection process as a Markov Decision Process (MDP). Though the MDP formulation for MIP has been mentioned in the previous works (Gasse et al., 2019; Etheve et al., 2020), the advantage of RL has not been demonstrated clearly in literature.\nThe challenges of using RL are multi-fold. First, the state space is a complex search tree, which can involve hundreds or thousands of nodes (with a linear program on each node) and evolve over time. In the meanwhile, the objective of MIP is to solve problems faster. Hence a trade-off between decision quality and computation time is required when representing the state and designing a policy based on this state representation. Second, learning a branching policy by RL requires rolling out on a distribution of instances. 
Moreover, for each instance, the solving trajectory could contain thousands of steps, and actions can have long-lasting effects. These result in a large variance in gradient estimation. Third, each step of variable selection can have hundreds of candidates. The large action set makes exploration in MIP very hard.

In this work, we address these challenges by designing a policy network inspired by primal-dual iteration and employing a novelty search evolutionary strategy (NS-ES) to improve the policy. For the efficiency-effectiveness trade-off, the primal-dual policy ignores redundant information and makes high-quality decisions on the fly. For reducing variance, the ES algorithm is an attractive choice as its gradient estimation is independent of the trajectory length (Salimans et al., 2017). For exploration, we introduce a new representation of the B&B solving process employed by novelty search (Conti et al., 2018) to encourage visiting new states.

We evaluate our RL-trained agent over a range of problems (namely, set covering, maximum independent set, capacitated facility location). The experiments show that our approach significantly outperforms state-of-the-art human-designed heuristics (Achterberg & Berthold, 2009) as well as imitation-based learning methods (Khalil et al., 2016; Gasse et al., 2019). In the ablation study, we compare our primal-dual policy net with GCN (Gasse et al., 2019), and our novelty-based ES with vanilla ES (Salimans et al., 2017). The results confirm that both our policy network and the novelty search evolutionary strategy are indispensable for the success of the RL agent. In summary, our main contributions are the following:

• We point out the overestimation of the decision quality of strong branching and suggest that methods other than imitating strong branching are needed to find better variable selection policies. • We model the variable selection process as an MDP and design a novel policy net based on primal-dual iteration over the reduced LP relaxation. • We introduce a novel set representation and optimal transport distance for the branching process associated with a policy, based on which we train our RL agent using a novelty search evolution strategy and obtain substantial improvements in empirical evaluation." }, { "heading": "2 BACKGROUND", "text": "Mixed Integer Programming. MIP is an optimization problem, which is typically formulated as

min_{x ∈ R^n} { c^T x : Ax ≤ b, ℓ ≤ x ≤ u, x_j ∈ Z ∀j ∈ J }   (1)

where c ∈ R^n is the objective vector, A ∈ R^{m×n} is the constraint coefficient matrix, b ∈ R^m is the constraint vector, and ℓ, u ∈ R^n are the variable bounds. The set J ⊆ {1, · · · , n} is an index set for the integer variables. We denote the feasible region of x as X.

Linear Programming Relaxation. LP relaxation is an important building block for solving MIP problems, where the integer constraints are removed:

min_{x ∈ R^n} { c^T x : Ax ≤ b, ℓ ≤ x ≤ u }.   (2)

Branch and Bound. LP-based B&B is the most successful method for solving MIP. A typical LP-based B&B algorithm is given in Algorithm 1 (Achterberg et al., 2005).

Algorithm 1: Branch and Bound
Input: A MIP P in form Equation 1
Output: An optimal solution set x* and optimal value c*
1 Initialize the problem set S := {P_LP}, where P_LP is in form Equation 2. Set x* = ∅, c* = ∞;
2 If S = ∅, exit by returning x* and c*;
3 Select and pop an LP relaxation Q ∈ S;
4 Solve Q with optimal solution x̂ and optimal value ĉ;
5 If ĉ ≥ c*, go to 2;
6 If x̂ ∈ X, set x* = x̂, c* = ĉ, go to 2;
7 Select variable j, split Q into two subproblems Q_j^+ and Q_j^-, add them to S and go to 3;

It consists of two major decisions: node selection, in line 3, and variable selection, in line 7. In this paper, we will focus on variable selection. Given an LP relaxation and its optimal solution x̂, variable selection means selecting an index j. Then, branching splits the current problem into two subproblems, each representing the original LP relaxation with a new constraint: x_j ≤ ⌊x̂_j⌋ for Q_j^- and x_j ≥ ⌈x̂_j⌉ for Q_j^+, respectively. This procedure can be visualized by a binary tree, which is commonly called the search tree. We give a simple visualization in Section A.1.
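As a concrete illustration of Algorithm 1, the following is a compact Python sketch with a pluggable variable selection policy; the LP solver, the depth-first node selection, and the example rule are our choices, not the paper's setup:

import math
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds, int_idx, select_var):
    best_x, best_c = None, math.inf
    S = [list(bounds)]                                    # problem set (DFS node selection)
    while S:
        bnds = S.pop()
        res = linprog(c, A_ub=A, b_ub=b, bounds=bnds)     # solve the LP relaxation Q
        if not res.success or res.fun >= best_c:          # infeasible, or pruned (line 5)
            continue
        frac = [j for j in int_idx if abs(res.x[j] - round(res.x[j])) > 1e-6]
        if not frac:                                      # integral: new incumbent (line 6)
            best_x, best_c = res.x, res.fun
            continue
        j = select_var(res, frac)                         # the branching decision we learn (line 7)
        lo, hi = bnds[j]
        left, right = list(bnds), list(bnds)
        left[j] = (lo, math.floor(res.x[j]))              # Q_j^-: x_j <= floor(x_j)
        right[j] = (math.ceil(res.x[j]), hi)              # Q_j^+: x_j >= ceil(x_j)
        S += [left, right]
    return best_x, best_c

# e.g. a most-fractional rule:
# select_var = lambda res, frac: max(frac, key=lambda j: min(res.x[j] % 1, 1 - res.x[j] % 1))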
Evolution Strategy. Evolution Strategies (ES) are a class of black-box optimization algorithms (Rechenberg, 1978). In this work, we refer to the definition in Natural Evolution Strategies (NES) (Wierstra et al., 2008). NES represents the population as a distribution of parameter vectors θ characterized by parameters φ: p_φ(θ). NES optimizes φ to maximize the expectation of a fitness f(θ) over the population, E_{θ ∼ p_φ}[f(θ)]. In recent work, Salimans et al. (2017) outline a version of NES applied to standard RL benchmark problems, where θ parameterizes the policy π_θ, φ_t = (θ_t, σ) parameterizes a Gaussian distribution p_φ(θ) = N(θ_t, σ²I), and f(θ) is the cumulative reward R(θ) over a full agent interaction. At every iteration, Salimans et al. (2017) apply n additive Gaussian noises to the current parameter and update the population as

θ_{t+1} = θ_t + α (1 / (nσ)) Σ_{i=1}^{n} f(θ_t + σε_i) ε_i   (3)

To encourage exploration, Conti et al. (2018) propose the Novelty Search Evolution Strategy (NS-ES). In NS-ES, the fitness function f(θ) = λN(θ) + (1 − λ)R(θ) is selected as a combination of a domain-specific novelty score N and the cumulative reward R, where λ is the balancing weight." },
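A NumPy sketch of one update of Equation 3 combined with the NS-ES fitness; the callbacks R (cumulative reward) and N (novelty score) and all hyperparameter values are placeholders:

import numpy as np

def nses_step(theta, R, N, alpha=0.01, sigma=0.1, n=32, lam=0.5):
    # One update of Equation 3 with fitness f = lam * N + (1 - lam) * R.
    eps = np.random.randn(n, theta.size)
    f = np.array([lam * N(theta + sigma * e) + (1 - lam) * R(theta + sigma * e)
                  for e in eps])
    return theta + alpha / (n * sigma) * eps.T @ f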
{ "heading": "3 WHY IMITATING STRONG BRANCHING IS NOT GOOD", "text": "Strong branching is a human-designed heuristic that solves all possible branch LPs Q_j^+, Q_j^- ahead of branching. As strong branching usually produces the smallest B&B search trees (Achterberg, 2009), many learning-based variable selection policies are trained by mimicking strong branching (Gasse et al., 2019; Khalil et al., 2016; Alvarez et al., 2017; Hansknecht et al., 2018). However, we claim that strong branching is not a good expert: the reason strong branching can produce a small search tree is the reduction obtained in solving the branch LPs, rather than its decision quality. Specifically, (i) strong branching can check lines 5 and 6 in Algorithm 1 before branching. If the pruning condition is satisfied, strong branching does not need to add the subproblem into the problem set S. (ii) Strong branching can strengthen other LP relaxations in the problem set S via domain propagation (Rodosek et al., 1999) and conflict analysis (Achterberg, 2007). For example, if strong branching finds that x_1 ≥ 1 and x_2 ≥ 1 can be pruned while solving a branch LP, then any other LP relaxation containing x_1 ≥ 1 can be strengthened by adding x_2 ≤ 0. These two reductions are the direct consequence of solving the branch LPs, and they cannot be learned by a variable selection policy. (iii) Strong branching activates primal heuristics (Berthold, 2006) after solving LPs.

To examine the decision quality of strong branching, we employ vanilla full strong branching (Gamrath et al., 2020), which takes the same decisions as full strong branching while the side effects of solving the branch LPs are switched off. Experiments in Section 5.2 show that vanilla full strong branching has poor decision quality. Hence, imitating strong branching is not a wise choice for learning a variable selection policy." }, { "heading": "4 METHOD", "text": "Due to line 5 in Algorithm 1, a good variable selection policy can significantly improve solving efficiency. To illustrate how to improve the variable selection policy, we organize this section in three parts. First, we present our formulation of the variable selection process as an RL problem. Next, we introduce the LP relaxation based state representation and the primal-dual based policy network. Then, we introduce our branching process representation and the corresponding NS-ES training algorithm." }, { "heading": "4.1 RL FORMULATION", "text": "Let the B&B algorithm and the problem distribution D be the environment. The sequential decision making of variable selection can be formulated as a Markov decision process. We specify the state space S, action space A, transition P, and reward r as follows. • State Space. At iteration t, the node selection policy pops an LP relaxation P_LP from the problem set S. We set the representation of the state to s_t = {P_LP, J, S}, where J is the index set of integer variables. • Action Space. At iteration t, the action space is the index set of non-fixed integer variables determined by the relaxation: A(s_t) = {j ∈ J : ℓ_j < u_j}. • Transition. Given state s_t and action a_t, the new state is determined by the node selection policy. • Reward. As our target is solving the problem faster, we set the reward r_t = −1 with discount γ = 1. Maximizing the cumulative reward encourages the agent to solve problems in fewer steps.

In commercial solvers, the solving process is much more complicated than the B&B stated in Algorithm 1. For example, between lines 3 and 4, primal heuristics could be used to detect feasible solutions, and cutting planes could be applied to strengthen the LP relaxation. These solver components introduce more randomness in the transition, but our formulation is still valid." },
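To make the formulation concrete, here is a gym-style rollout under a hypothetical branching environment exposing this MDP; the env interface is illustrative only and not a real solver API:

def rollout(env, policy, instance):
    state = env.reset(instance)               # s_t = {P_LP, J, S}
    total_reward, done = 0.0, False
    while not done:
        action = policy(state)                # pick j from {j in J : l_j < u_j}
        state, reward, done = env.step(action)  # node selection drives the transition
        total_reward += reward                # r_t = -1, gamma = 1
    return total_reward                       # = -(number of branching steps)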
{ "heading": "4.2 PRIMAL DUAL POLICY NET", "text": "Reduced LP. In the solving process, the variable bounds keep changing due to branching. Thus, we obtain our reduced LP relaxation by the following two steps: 1) remove fixed variables x_j, where ℓ_j = u_j, and plug their values into the constraints; 2) remove trivial constraints, where max_{ℓ ≤ x ≤ u} Σ_j A_ij x_j ≤ b_i. In the view of primal-dual iteration, the LP relaxation has the Lagrangian form:

min_x max_λ  c^T x + λ^T (Ax − b),  s.t. ℓ ≤ x ≤ u, 0 ≤ λ   (4)

where variables and constraints naturally form a bipartite graph. In the primal-dual iteration over Equation 4, fixed variables and trivial constraints always pass zero and have no interaction with other variables.

PD policy net. We parameterize our policy network π_θ(a_t|s_t) as a primal-dual iteration over the reduced LP relaxation by message passing

Y_i ← f_C( Y_i, Σ_j A_ij m_C(X_j) ),   X_j ← f_V( X_j, Σ_i A_ij m_V(Y_i) )   (5)

where f_C, f_V are two-hidden-layer neural networks, m_C, m_V are one-hidden-layer neural networks, A_ij is the entry in the reduced constraint matrix A, and X, Y are the embeddings for variables and constraints initialized from P_LP and J. As mentioned above, the original primal-dual iteration only occurs on the reduced LP; hence, our message passing in Equation 5 is defined only on the reduced graph. For efficiency, we do not include the problem set S, which makes it a partially observable MDP (Astrom, 1965). After two iterations of Equation 5, the variable embedding X is passed to a two-hidden-layer neural network score function f_S, and the output is the final score for each variable. Since the state reduction and the message passing are both inspired by primal-dual iteration, we call it the PD policy. A more detailed discussion and comparison with GCN (Gasse et al., 2019) can be found in Section A.2.2." },
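A PyTorch sketch of the message passing in Equation 5 over the reduced bipartite graph; the embedding sizes and the dense matrix products are simplifications of ours:

import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=64, n_hidden=2):
    mods, d = [], d_in
    for _ in range(n_hidden):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    return nn.Sequential(*mods, nn.Linear(d, d_out))

class PDPolicy(nn.Module):
    def __init__(self, dx, dy, d=64):
        super().__init__()
        self.mC, self.mV = mlp(dx, d, n_hidden=1), mlp(dy, d, n_hidden=1)  # one hidden layer
        self.fC, self.fV = mlp(dy + d, dy), mlp(dx + d, dx)                # two hidden layers
        self.fS = mlp(dx, 1)                                               # score function

    def forward(self, X, Y, A, rounds=2):
        # A: (m, n) reduced constraint matrix; X: (n, dx) variable embeddings;
        # Y: (m, dy) constraint embeddings.
        for _ in range(rounds):
            Y = self.fC(torch.cat([Y, A @ self.mC(X)], dim=-1))    # constraint update
            X = self.fV(torch.cat([X, A.T @ self.mV(Y)], dim=-1))  # variable update
        return self.fS(X).squeeze(-1)   # one score per candidate variable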
[Figure 1: (left) three policies π1, π2 and π3 produce three sets of polytopes b1, b2 and b3 respectively for the same problem Q; (right) example cost matrix W and transportation matrix Γ.]

{ "heading": "4.3 SET REPRESENTATION FOR POLICY AND OPTIMAL TRANSPORT DISTANCE", "text": "We train the RL agent using an evolution strategy similar to NSR-ES (Conti et al., 2018), and we need to define a novelty score for the B&B process. In the general B&B algorithm, the solving process can be represented by a search tree, where each leaf is a solved subproblem. Given a branching policy π and an instance Q, we define our representation b(π, Q) = {R_1, · · · , R_H} as the collection of leaf subproblems on the complete search tree. Focusing on MIP, a subproblem R_i is an LP relaxation which can be represented by its feasible region, a polytope. For example, in Figure 1, b1, b2 and b3 are the sets of polytopes produced by three different policies π1, π2 and π3 respectively: b1 = {A1, B1} is a set of two polytopes (leaf subproblems), b2 = {A2, B2, C2} is a set of three polytopes, and b3 = {A3, B3} is a set of two polytopes. For computational efficiency, we ignore the constraints and only consider variable bounds, such that every polytope is a box.
For each polytope R_i (leaf subproblem), we define the weight function w(·) and the distance function d(·, ·) between two polytopes R_i and R_j as

• w(R_i) := #{x ∈ R_i : x is a feasible solution for Q}. • d(R_i, R_j) := ||g_i − g_j||_1, where g_i and g_j are the centers of mass of R_i and R_j respectively.

For example, in Figure 1, we have w(A1) = 12 and d(A1, A2) = 3/2. Then we can map the representation b = {R_1, · · · , R_H} to a simplex p(b) ∈ Δ^{H−1} by normalizing the weights, p(R_j) = w(R_j) / Σ_{i=1}^{H} w(R_i), and compute a cost matrix W_ij = d(R_i, R_j) (see Figure 1 for examples). Then, we can define the metric D between two representations as the Wasserstein distance (or optimal transport distance) (Villani, 2008; Peyré et al., 2019):

D(b1, b2) = min_Γ Σ_{i,j} Γ_ij W_ij,  s.t. Γ1 = p(b1), Γ^T 1 = p(b2)   (6)

For example, in Figure 1, the distances are D(b1, b2) = 3/2 and D(b1, b3) = 3/4, meaning b3 is closer to b1 than b2. Hence the corresponding policy π3 is closer to π1 than π2.
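A sketch of computing this distance for box representations; the helper names are ours, and we use the POT (Python Optimal Transport) package's exact solver as one implementation choice:

import numpy as np
import ot  # POT: Python Optimal Transport

def box_weight(lo, hi):
    # w(R): number of integer points inside the box [lo, hi]
    return float(np.prod(np.floor(hi) - np.ceil(lo) + 1))

def box_center(lo, hi):
    return (np.asarray(lo, float) + np.asarray(hi, float)) / 2

def representation_distance(b1, b2):
    # b1, b2: lists of (lo, hi) variable-bound boxes (leaf subproblems)
    p1 = np.array([box_weight(np.asarray(lo), np.asarray(hi)) for lo, hi in b1]); p1 /= p1.sum()
    p2 = np.array([box_weight(np.asarray(lo), np.asarray(hi)) for lo, hi in b2]); p2 /= p2.sum()
    W = np.array([[np.abs(box_center(*r1) - box_center(*r2)).sum()   # l1 distance d
                   for r2 in b2] for r1 in b1])
    return ot.emd2(p1, p2, W)          # Wasserstein distance of Equation 6

# With the Figure 1 example, p(b1) = (3/4, 1/4), p(b2) = (1/2, 1/4, 1/4) and
# W = [[3/2, 3/2, 5/2], [7/2, 5/2, 3/2]], this returns D(b1, b2) = 3/2.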
Here, we have provided a concrete method to measure the distance between two solving processes. It also provides a framework for the general B&B algorithm: we can choose the weight function w and the distance function d depending on the properties of the solution space and compute the distance between two B&B solving processes." }, { "heading": "4.4 NOVELTY SEARCH EVOLUTIONARY STRATEGY", "text": "Equipped with the metric D between representations, we can define the novelty score following Conti et al. (2018). Given a policy memory M (a collection of older policies) and an instance Q sampled from the problem distribution D, the novelty score is computed as:

N(θ, Q, M) = (1/k) Σ_{π_j ∈ kNN(M, θ)} D(b(π_θ, Q), b(π_j, Q))   (7)

where kNN(M, θ) is the set of the k nearest neighbors of π_θ in M. In this definition, the novelty score encourages policies whose representations are far from the representations already in the policy memory. Back to Algorithm 1, the B&B algorithm recursively splits the feasible region and obtains a set of polytopes when it finishes solving an instance. Notice that a polytope in the set representation is invariant to the generating order, i.e., branching on x_1 then x_2 gives the same polytope as branching on x_2 then x_1. As a result, our metric D and novelty score N are mostly determined by the pruning behavior during the solving process. Putting everything together, we summarize the training algorithm in Section A.3." }, { "heading": "5 EXPERIMENTS", "text": "We now present comparative experiments against two competing machine learning approaches and three of SCIP's branching rules to assess the value of our RL agent, as well as an ablation study to validate our choice of policy representation and training algorithm." }, { "heading": "5.1 SETUP", "text": "Benchmarks: We consider three classes of instances, Set Covering (Balas & Ho, 1980), Maximum Independent Set (Albert & Barabási, 2002), and Capacitated Facility Location (Cornuéjols et al., 1991), which are not only challenging for state-of-the-art solvers but also representative of problems encountered in practice. For each class, we set up a backbone based on which we randomly generate the dataset, as many real-world problems also share the same backbone. For example, a logistics company frequently solves instances on very similar transportation networks with different customer demands. We generate set covering instances using 1000 columns. We train on instances with 500 rows and evaluate on instances with 500 rows (test), 1000 rows (medium transfer), and 1500 rows (hard transfer). We train maximum independent set on graphs with 400 nodes and evaluate on graphs with 400 nodes (test), 1000 nodes (medium transfer), and 1500 nodes (hard transfer). We generate capacitated facility location with 100 facilities. We train on instances with 40 customers and evaluate on instances with 40 customers (test), 200 customers (medium transfer), and 400 customers (hard transfer). More details are provided in Section A.4.

Settings: Throughout all experiments, we use SCIP 7.0.1 as the backend solver, with a time limit of 1 hour. For SCIP parameters, we have two settings: clean and default. The clean setting switches off other SCIP components, such as the estimate node selection rule, cutting planes, and primal heuristics.
This way, the evaluation eliminates the interference from other components of the solver on the variable selection policy. Under the clean setting, the number of solving nodes reflects the decision quality of the variable selection policy only, so we compare the decision quality of different methods under the clean setting. The default setting of SCIP turns on all components inside SCIP and is tuned for solving real problems, so we compare the ability of different methods to solve challenging problems under the default setting.\nBaselines: We compare against: Reliability Pseudocost Branch (RPB) (Achterberg & Berthold, 2009), the human-designed state-of-the-art branching rule, which computes strong branching in the beginning and gradually switches to simpler heuristics; Full Strong Branching (FSB), a full version of strong branching; Vanilla Full Strong Branching (VFS), strong branching with branching LP information muted (Gamrath et al., 2020); and two recent machine learning policies, the support vector machine (SVM) rank approach (Khalil et al., 2016) and the GCN approach (Gasse et al., 2019)1. We denote our method as RL, which is the primal-dual net trained by NS-ES.\n1The source code has been released in Gasse et al. (2019).\nMetrics: To minimize the expected solving cost, the metrics are the average solving time (Tavg) over all instances and the average number of solving nodes (Navg) over instances solved by all methods. Since MIP instances can vary a lot in difficulty, we also count the number of times each method leads the performance over the number of times each method solves the instance within the time limit (Wins) as a third, robust metric.\nImplementation: Details of the implementation are provided in Section A.2." }, { "heading": "5.2 DECISION QUALITY", "text": "We evaluate the variable selection quality by solving 100 test instances under the clean setting. Since we are comparing decision quality, we say a method wins in this experiment if it results in the least number of solving nodes. As FSB and RPB benefit a lot from branching LP information (Section 3), we do not include them when counting Wins. Table 1 shows our RL agent leads the win counts on all datasets, and its average solving nodes on set covering and independent set are significantly better than those of the other methods." }, { "heading": "5.3 GENERALIZATION TO LARGER INSTANCES", "text": "It is very important for RL agents to transfer to larger unseen instances, as training on large instances is very expensive in the real world. We investigate the generalization ability of our RL agent by solving 100 transfer instances under the default setting. To meet the needs in practice, we say a method wins in this experiment if it results in the fastest solving time. As VFS is not able to solve any transfer instance within the time limit, we do not list its results in Table 4. We can see that, except for RPB and SVM having comparable performance on hard set covering and hard facility location, respectively, the RL agent leads the performance. On set covering (hard) and maximum independent set (hard), we do not compute the average number of nodes for full strong branching, as it solves too few instances." }, { "heading": "5.4 IMPROVEMENT ANALYSIS", "text": "Having seen the improvements brought by RL, we would like to ask what kind of decisions our agent learns. We answer this question in two aspects: finding a lower primal bound c∗, and obtaining a higher dual value ĉ that allows pruning in line 5 of Algorithm 1.
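For reference, the pruning rule the text refers to (Algorithm 1 appears earlier in the paper; the sketch below only restates the generic bound test for a minimization problem, so the function name and framing are ours):

def can_prune(local_dual_bound, incumbent):
    # close a subproblem whose relaxation bound cannot improve on the incumbent c*
    return local_dual_bound >= incumbent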
We compare our RL agent with GCN, SVM and VFS on 100 maximum independent set test instances under the clean setting.\nWe first examine the primal bound c∗. Figure 2 plots the feasible solutions found during the solving process. A point (n, y) means we find a feasible solution c∗ = y in a subproblem containing n branching constraints. Figure 2 shows that our RL agent is able to detect small c∗ at an early stage. Hence, it can prune more subproblems and solve the MIP faster. On the contrary, VFS fails to detect feasible solutions efficiently. One reason is that, traditionally, strong branching and other human-designed heuristics are mainly aimed at obtaining a higher ĉ. Our result suggests a new possibility for researchers: finding variable selection methods that are good at detecting feasible solutions.\nThen, we check the local dual value ĉ. To eliminate the influence of the changing primal bound c∗, we initialize c∗ = copt with the optimal value, like Khalil et al. (2016). We plot the curve of average width versus depth in Figure 3. The area under the curve equals the average number of solving nodes, and we report it in the legend. Also, as c∗ is fixed, the width-versus-depth plot characterizes how many branches are needed to increase the local dual value ĉ to c∗ so as to close a subproblem. A smaller width indicates the variable selection policy closes the gap faster. VFS performs better under this setting than in Figure 2, while it is still beaten by the learning-based methods. Figure 3 shows that although our RL agent has the worst width in the beginning, it has the lowest peak and leads the overall performance. This means our RL agent successfully employs a non-myopic policy to maximize ĉ in the long term.\nFigure 3: Average width versus depth (legend: RL=166, GCN=199, SVM=327, VFS=239).\nFigure 4: Comparison of 4 RL agents (average rewards versus number of total frames (x1e8); curves: PD_ES, PD_NSES, GCN_ES, GCN_NSES)." }, { "heading": "5.5 ABLATION STUDY", "text": "We present an ablation study of our method on the maximum independent set problem by comparing four types of RL agents: (1) PD policy + ES; (2) PD policy + NS-ES; (3) GCN + ES; (4) GCN + NS-ES. We sample V = 200 instances as our validation set and plot the average number of solving nodes on the validation set under the clean setting during the training process, for five random seeds. All agents are initialized by imitation learning. The results are plotted in Figure 4. That all curves obtain higher rewards shows that RL improves the variable selection policy. That (1) and (2) achieve larger rewards than (3) and (4) shows that the PD policy can obtain more improvement than GCN. Also, (2) and (4) achieving larger rewards than (1) and (3) shows that novelty search helps to find better policies. The results suggest that RL improves learning to branch, and that both the PD policy and NS-ES are indispensable to the success of the RL agent." }, { "heading": "6 DISCUSSION", "text": "In this work, we point out the overestimation of the decision quality of strong branching. The evidence in Table 1 shows VFS performs poorly on the synthetic datasets under the clean setting. An interesting phenomenon is that GCN can easily beat VFS after imitation learning (and our PD policy obtains similar results). One possible explanation is that the primal-dual message passing structure naturally learns the good decisions and ignores the noise brought by strong branching.
Another possible reason is biased sampling. To keep the diversity of the samples, Gasse et al. (2019) employ a mixed policy of RPB and VFS to sample the training data. VFS probably performs well on most of the states but has poor decision quality when trapped in certain regions. As a result, VFS has poor overall performance. Fortunately, using the mixed policy as the behavior policy helps to escape from these regions; hence, the collected data have good decision quality. More studies are needed before we can give a confident answer to this question." }, { "heading": "7 CONCLUSION", "text": "We present an NS-ES framework to automatically learn the variable selection policy for MIP. Central to our approach are the primal-dual policy network and the set representation of the B&B process. We demonstrate that our RL agent makes high-quality variable selections across different problem types and sizes. Our results suggest that, with carefully designed policy networks and learning algorithms, reinforcement learning has the potential to advance algorithms for solving MIP." }, { "heading": "A APPENDIX", "text": "A.1 BRANCH AND BOUND\nHere we give a simple illustration of the B&B algorithm in Figure 5. Given the LP relaxation, the polytope represents the feasible region of the LP relaxation and the red arrow represents the objective vector. We first solve the LP relaxation and obtain the solution x̂, shown as the red point. Noticing it is not feasible for the MIP, we branch the LP relaxation into two subproblems. In (a) we select to split variable x1 and in (b) we select to split variable x2. The subproblems obtained after branching are displayed by the shaded purple regions. After finishing solving these two MIPs, we obtain the search trees t1 and t2. We can see that a wise selection of variable x2 can solve the problem faster.\nA.2 IMPLEMENTATION\nA.2.1 HARDWARE\nAll experiments were run on an Ubuntu 18.04 machine with an Intel(R) Xeon(R) Silver 4116 CPU @ 2.10GHz, 256 GB of memory and Nvidia RTX 2080Ti graphics cards.\nA.2.2 PD POLICY\nComparison. The PD policy is similar to the GCN in Gasse et al. (2019) but has two major differences. First, we use a dynamic reduced graph where fixed variables and trivial constraints are removed as the variable bounds change during the solving process, while Gasse et al. (2019) do not consider this. The reduced graph not only saves computation, but also gives a more accurate description of the solving state by ignoring redundant information. The ablation in Section 5.5 shows it is indispensable to the success of RL. Second, we use a simple matrix multiplication in our PD policy, while Gasse et al. (2019) use a complicated edge embedding in GCN. In some sense, GCN can be seen as an overparameterized version of our method, and our success reveals that message passing on the LP relaxation is the truly helpful structure.\nDetails. We implement our primal-dual policy net using dgl (Wang et al., 2019), with hidden dimension h = 64 and ReLU activation. The feature X for a variable is a 17-dimensional vector and the feature Y for a constraint is a 5-dimensional vector. We list the details of the features in Table 3.\nA.2.3 BASELINE\nFSB. We use the implementation in SCIP Gamrath et al.
(2020).\nVFS. We use the implementation in SCIP Gamrath et al. (2020).\nRPB. We use the implementation in SCIP Gamrath et al. (2020).\nGCN. We tried to implement GCN in dgl (Wang et al., 2019); however, it is significantly slower than the original implementation in Gasse et al. (2019). Hence, we still use the implementation in Gasse et al. (2019).\nSVM. We use the implementation in Gasse et al. (2019).\nA.3 TRAINING\nWe have two settings: clean and default. In the experiments, we always train and test under the same setting.\nImitation Learning. We initialize our PD policy using imitation learning similar to Gasse et al. (2019). The difference is that we only use 10000 training samples, 2000 validation samples and 10 training epochs, as a warm start. In our setting, a policy from scratch can hardly solve an instance in a reasonable time; hence, a warm start is necessary.\nNovelty Search Evolution Strategy. We improve our RL agent using Algorithm 2. The parameters are set as α = 1e−4, σ = 1e−2, n = 40, V = 200, w = 0.25, β = 0.99, T = 1000, k = 10.\nAlgorithm 2: Evolutionary Strategy with Novelty Score.\nInput: Learning rate α, Noise std σ, number of workers n, Validation size V, Batch size M, Initial weight λ, Weight decay rate β, Iterations T, Parameter θ0, Policy memory M, Instance distribution D, Neighborhood size k.\nOutput: Best parameter θbest\n1 Sample validation instances Q1, · · · , QV ∼ D\n2 Set Rbest = (1/V) ∑_{j=1}^{V} R(θ0, Qj), θbest = θ0\n3 for t=0 to T do\n4 Sample instances P1, · · · , PM ∼ D\n5 Sample ε1, · · · , εn ∼ N(0, I) and compute θ^i_t = θt + σεi\n6 Set M = {θ^1_t, · · · , θ^n_t}\n7 for i=1 to n do\n8 Compute Ri = (1/M) ∑_{m=1}^{M} R(θ^i_t, Pm)\n9 Compute Ni = (1/M) ∑_{m=1}^{M} N(θ^i_t, Pm, M)\n10 end\n11 Set θt+1 = θt + α (1/(nσ)) ∑_{i=1}^{n} [λ · Ni εi + (1 − λ) · Ri εi]\n12 Compute R(t+1) = (1/V) ∑_{j=1}^{V} R(θt+1, Qj)\n13 if R(t+1) > Rbest then\n14 Set Rbest = R(t+1), θbest = θt+1, λ = β · λ\n15 end\n16 end\nA.4 DATA SET\nSet Covering. We generate a weighted set covering problem following Balas & Ho (1980). The problem is formulated as the following ILP.\nmin ∑_{S∈S} wS xS\nsubject to ∑_{S: e∈S} xS ≥ 1, ∀e ∈ U\nxS ∈ {0, 1}, ∀S ∈ S\nwhere U is the universe of elements, S is the universe of sets, and w is a weight vector. For any e ∈ U and S ∈ S, e ∈ S with probability 0.05, and we guarantee that every e is contained in at least two sets in S. Each wS is uniformly sampled from the integers from 1 to 100. We first generate a set covering problem with U0 = {e1, · · · , e400} and S0 = {S1, · · · , S1000} and set it as our backbone. Then, every time we want to generate a new problem with m elements, we let U = U0 ∪ {e401, e402, · · · , em} and add the new ei into sets S ∈ S following the pipeline mentioned above.\nMaximum Independent Set. We generate maximum independent set problems using Barabasi-Albert (Albert & Barabási, 2002) graphs. The problem is formulated as the following ILP.\nmax ∑_{v∈V} xv\nsubject to xu + xv ≤ 1, ∀euv ∈ E\nxv ∈ {0, 1}, ∀v ∈ V\nwhere V is the set of vertices and E is the set of edges. We generate the BA graph using preferential attachment with affinity coefficient 4.\nWe first generate a BA graph G0 with 350 nodes. Then, every time we want to generate a new problem with n variables, we expand G0 using preferential attachment.\nCapacitated Facility Location. We generate the capacitated facility location problem following Cornuéjols et al. (1991).
The problem with m customers and n facilities is formulated as the following MIP.\nmin ∑_{i=1}^{n} ∑_{j=1}^{m} cij dj yij + ∑_{i=1}^{n} fi xi\nsubject to ∑_{i=1}^{n} yij = 1, ∀j = 1, · · · , m\n∑_{j=1}^{m} dj yij ≤ ui xi, ∀i = 1, · · · , n\nyij ≥ 0, ∀i = 1, · · · , n and j = 1, · · · , m\nxi ∈ {0, 1}, ∀i = 1, · · · , n\nwhere xi = 1 indicates facility i is open, and xi = 0 otherwise; fi is the fixed cost if facility i is open; dj is the demand of customer j; cij is the transportation cost between facility i and customer j; yij is the fraction of the demand of customer j filled by facility i. Following Cornuéjols et al. (1991), we first sample the locations of the facilities and customers on a 2-dimensional map. Then cij is determined by the Euclidean distance between facility i and customer j, and the other parameters are sampled from the distributions given in Cornuéjols et al. (1991).\nWe first generate the locations of 100 facilities and 40 customers as our backbone. Then, every time we want to generate a new problem with m customers, we generate m − 40 new customer locations and follow the pipeline mentioned above.\nA.5 EXPERIMENTS ON BENCHMARK FROM GASSE ET AL.\nWe are mostly interested in improving the variable selection policy on similar problems; hence, we generate our benchmark based on a backbone. The backbone allows the instances to share some common structure, such that there exists a good policy for the given distribution of problems. Our experiments show that NS-ES is able to learn good policies for this purpose. However, it is also interesting to check the performance of our method on a more random distribution. Here, we conduct experiments on the benchmark from Gasse et al. (2019). We employ the same instance generator and SCIP setting as Gasse et al. (2019). For each category, we evaluate the policy on 20 instances with 5 random seeds. We report the average solving time Tavg and the shifted geometric mean solving time Tgeo over all instances, the average number of solving nodes Navg and the shifted geometric mean number of solving nodes Ngeo over instances solved by all methods, and Wins, the number of times a method leads over the number of instances solved to optimality. As VFS is too slow to solve challenging instances, we only report its performance on easy instances.\nWe can see that, in Table 4, the improvement from the RL method is smaller than in the main text. Intuitively, the randomly generated instances have less shared structure and leave less room for RL to improve the policy. How to improve branching policies for randomly generated problems is a question that needs more exploration in the future." } ]
2020
null
SP:a20769de2c7acf390c7e3bece904a17df6a991bd
[ "The work examines properties of Neural Processes (NPs). More precisely, of deterministic NPs and how they form finite-dimensional representations of infinite-dimensional function spaces. NPs learn functions f that best represent/fit discrete sets of points in space. Based on signal-theoretic aspects of discretisation, the authors infer a theoretical upper bound on the frequencies of functions f that can be used to represent the points. The bound depends on the latent dimension/representation size and the finite interval spanned by the points. Simulations are run to test the validity of the upper bound. The authors find that NPs behave like a Fourier transform and decompose the spectrum of the signal. Since the representation learns to represent the specific frequencies seen during training, NPs can be used as band-pass/band-stop filters." ]
Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and so far it has been unclear how these representations are learned and what kinds of functions can be represented. We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components, similar to a Fourier transform. In this context, we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce, depending on their representation size. This bound is confirmed empirically. Finally, we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others, which makes them programmable band-pass or band-stop filters.
[]
[ { "authors": [ "Christopher M. Bishop" ], "title": "Pattern recognition and machine learning", "venue": "Information science and statistics. Springer,", "year": 2006 }, { "authors": [ "Roberto Calandra", "Jan Peters", "Carl Edward Rasmussen", "Marc Peter Deisenroth" ], "title": "Manifold gaussian processes for regression", "venue": "In International Joint Conference on Neural Networks,", "year": 2016 }, { "authors": [ "Martin Engelcke", "Adam R. Kosiorek", "Oiwi Parker Jones", "Ingmar Posner" ], "title": "GENESIS: Generative scene inference and sampling with object-centric latent representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "S.M. Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S. Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A. Rusu", "Ivo Danihelka", "Karol Gregor", "David P. Reichert", "Lars Buesing", "Theophane Weber", "Oriol Vinyals", "Dan Rosenbaum", "Neil Rabinowitz", "Helen King", "Chloe Hillier", "Matt Botvinick", "Daan Wierstra", "Koray Kavukcuoglu", "Demis Hassabis" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Gallant", "White" ], "title": "There exists a neural network that does not make avoidable mistakes", "venue": "In IEEE International Conference on Neural Networks,", "year": 1988 }, { "authors": [ "Marta Garnelo", "Dan Rosenbaum", "Christopher Maddison", "Tiago Ramalho", "David Saxton", "Murray Shanahan", "Yee Whye Teh", "Danilo Rezende", "S.M. Ali Eslami" ], "title": "Conditional neural processes", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marta Garnelo", "Jonathan Schwarz", "Dan Rosenbaum", "Fabio Viola", "Danilo J. Rezende", "S.M. Ali Eslami", "Yee Whye Teh" ], "title": "Neural processes", "venue": "In International Conference on Machine Learning – Workshop on Theoretical Foundations and Applications of Deep Generative Models,", "year": 2018 }, { "authors": [ "Jonathan Gordon", "Wessel P. Bruinsma", "Andrew Y.K. Foong", "James Requeima", "Yann Dubois", "Richard E. Turner" ], "title": "Convolutional conditional neural processes", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "José Miguel Hernández-Lobato", "Ryan P. Adams" ], "title": "Probabilistic backpropagation for scalable learning of bayesian neural networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "D. Jimenez Rezende", "S. Mohamed", "D. Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih", "Jonathan Schwarz", "Marta Garnelo", "Ali Eslami", "Dan Rosenbaum", "Oriol Vinyals", "Yee Whye Teh" ], "title": "Attentive neural processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "D.P. Kingma", "J.
Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "D. P Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "V. Kotelnikov" ], "title": "On the transmission capacity of the “ether” and wire in electrocommunications", "venue": null, "year": 1933 }, { "authors": [ "Ananya Kumar", "S.M. Ali Eslami", "Danilo J. Rezende", "Marta Garnelo", "Fabio Viola", "Edward Lockhart", "Murray Shanahan" ], "title": "Consistent jumpy predictions for videos and scenes", "venue": "In Advances in Neural Information Processing Systems – Bayesian Deep Learning Workshop,", "year": 2018 }, { "authors": [ "Tuan Anh Le", "Hyunjik Kim", "Marta Garnelo", "Dan Rosenbaum", "Jonathan Schwarz", "Yee Whye Teh" ], "title": "Empirical evaluation of neural process objectives", "venue": "In Advances in Neural Information Processing Systems – Bayesian Deep Learning Workshop,", "year": 2018 }, { "authors": [ "Christos Louizos", "Xiahan Shi", "Klamer Schutte", "Max Welling" ], "title": "The functional neural process", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ben Mildenhall", "Pratul P. Srinivasan", "Matthew Tancik", "Jonathan T. Barron", "Ravi Ramamoorthi", "Ren Ng" ], "title": "NeRF: Representing scenes as neural radiance fields for view synthesis", "venue": "[cs],", "year": 2020 }, { "authors": [ "Radford M. Neal" ], "title": "Bayesian Learning for Neural Networks", "venue": "Lecture Notes in Statistics. Springer,", "year": 1996 }, { "authors": [ "Charles R. Qi", "Hao Su", "Mo Kaichun", "Leonidas J. Guibas" ], "title": "PointNet: Deep learning on point sets for 3d classification and segmentation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "PointNet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Carl Edward Rasmussen", "C.K.I. Williams" ], "title": "Gaussian Processes for Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "Danilo Jimenez Rezende", "Fabio Viola" ], "title": "Taming VAEs", "venue": "[cs, stat],", "year": 2018 }, { "authors": [ "C.E. Shannon" ], "title": "Communication in the presence of noise", "venue": "Proceedings of the IRE,", "year": 1949 }, { "authors": [ "Gautam Singh", "Jaesik Yoon", "Youngsung Son", "Sungjin Ahn" ], "title": "Sequential neural processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Vincent Sitzmann", "Michael Zollhoefer", "Gordon Wetzstein" ], "title": "Scene representation networks: Continuous 3d-structure-aware neural scene representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Vincent Sitzmann", "Julien N.P. Martel", "Alexander W. Bergman", "David B. Lindell", "Gordon Wetzstein" ], "title": "Implicit neural representations with periodic activation functions", "venue": null, "year": 2020 }, { "authors": [ "Matthew Tancik", "Pratul P. Srinivasan", "Ben Mildenhall", "Sara Fridovich-Keil", "Nithin Raghavan", "Utkarsh Singhal", "Ravi Ramamoorthi", "Jonathan T. 
Barron", "Ren Ng" ], "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Prudencio Tossou", "Basile Dura", "Francois Laviolette", "Mario Marchand", "Alexandre Lacoste" ], "title": "Adaptive deep kernel learning", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Edward Wagstaff", "Fabian B. Fuchs", "Martin Engelcke", "Ingmar Posner", "Michael Osborne" ], "title": "On the limitations of representing functions on sets", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "E.T. Whittaker" ], "title": "Xviii.—on the functions which are represented by the expansions of the interpolation-theory", "venue": "Proceedings of the Royal Society of Edinburgh,", "year": 1915 }, { "authors": [ "Timon Willi", "Jonathan Masci", "Jürgen Schmidhuber", "Christian Osendorfer" ], "title": "Recurrent neural processes", "venue": null, "year": 2019 }, { "authors": [ "Andrew G Wilson", "Zhiting Hu", "Russ R Salakhutdinov", "Eric P Xing" ], "title": "Stochastic variational deep kernel learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Andrew Gordon Wilson", "Zhiting Hu", "Ruslan Salakhutdinov", "Eric P. Xing" ], "title": "Deep kernel learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Wenxuan Wu", "Zhongang Qi", "Li Fuxin" ], "title": "PointConv: Deep convolutional networks on 3d point clouds", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Zichao Yang", "Andrew Wilson", "Alex Smola", "Le Song" ], "title": "A la carte – learning fast kernels", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Abylay Zhumekenov", "Malika Uteuliyeva", "Olzhas Kabdolov", "Rustem Takhanov", "Zhenisbek Assylbekov", "Alejandro J. Castro" ], "title": "Fourier neural networks: A comparative study", "venue": null, "year": 1902 } ]
[ { "heading": null, "text": "Neural Processes are a powerful tool for learning representations of function spaces purely from examples, in a way that allows them to perform predictions at test time conditioned on so-called context observations. The learned representations are finite-dimensional, while function spaces are infinite-dimensional, and so far it has been unclear how these representations are learned and what kinds of functions can be represented. We show that deterministic Neural Processes implicitly perform a decomposition of the training signals into different frequency components, similar to a Fourier transform. In this context, we derive a theoretical upper bound on the maximum frequency Neural Processes can reproduce, depending on their representation size. This bound is confirmed empirically. Finally, we show that Neural Processes can be trained to only represent a subset of possible frequencies and suppress others, which makes them programmable band-pass or band-stop filters." }, { "heading": "1 INTRODUCTION", "text": "Neural Processes (Garnelo et al., 2018a;b) are a class of models that can learn a distribution over functions, or more generally a function space. In contrast to many other approaches that do the same, for example Bayesian Neural Networks, Neural Processes learn an explicit representation of such a function space, which allows them to condition their predictions on an arbitrary number of observations that are only available at test time. This representation is finite-dimensional, while function spaces are infinite-dimensional, and so far it has not been understood how they are able to bridge this gap and under what conditions they can successfully do so.\nOur work reveals how Neural Processes learn to represent infinite-dimensional function spaces in a finite-dimensional space, and in the process describes constraints and conditions that decide what kinds of function spaces can be represented. We begin with an observation that prior art in the context of learning on sets can be reinterpreted from a signal-processing perspective, which allows us to derive a theoretical upper bound on the frequencies, i.e. Fourier components, of functions that can be represented. We subsequently confirm this bound empirically, which suggests that the learned representations should contain a notion of frequency. To further investigate this hypothesis, we continue with a visualization of the learned representations, which reveals that Neural Processes can decompose a function space into different frequency components, essentially finding a representation in Fourier space without any explicit supervision on the representations to elicit such behaviour. As further evidence of this we train Neural Processes to represent only certain frequencies, which results in them suppressing those frequencies that were not observed in the training data. Our contributions can be summarized as follows1:\n• We derive a theoretical upper bound on the signal frequency Neural Processes of a given representation size can reconstruct. As we show, the bound is observed either in the expected way—by suppressing high frequencies—or by implicitly limiting the signal interval.\n• We investigate learned representations qualitatively, presenting evidence that Neural Processes perform a frequency decomposition of the function space, akin to a Fourier transform. 
This behaviour is not incentivized externally but rather emerges naturally.\n• We show that by choosing the training distribution appropriately, Neural Processes can be made to represent certain frequencies and suppress others, which turns them into programmable band-pass or band-stop filters.\n1The complete source code to reproduce our experiments is available at https://github.com/***" }, { "heading": "2 BACKGROUND", "text": "Neural Processes (Garnelo et al., 2018a;b) are maps P : C,X → Y, where C is a set of tuples {(x, f(x))}_{c=1}^{N} =: (xc, f(xc))2 with arbitrary but positive cardinality N, and f ∈ F : X → Y. C is often called the context, because Neural Processes perform predictions for values xt ∈ X (t for target), conditioned on these points. F is the function space we would like to find a representation of. Note that some sources define function spaces as any set of functions with a shared domain and co-domain, while others require them to be vector spaces as well. We don't concern ourselves with this distinction and further restrict our work to X = Y = R, because it allows us to visualize learned representations. We only look at the original Neural Processes, namely the deterministic Conditional Neural Processes (CNP) (Garnelo et al., 2018a) and the variational Neural Processes (NP) (Garnelo et al., 2018b), because newer contributions in the field work in ways that preclude them from being analyzed in the same way. We discuss this further in Section 5. In CNPs and NPs, the map P is separated into two parts, a so-called encoding part E : C → Z and a decoding or generating part G : Z,X → Y. Z is referred to as the representation or latent space. To allow Neural Processes to approximate arbitrary3 function spaces F, E and G are typically chosen to be powerful approximators, specifically neural networks, as the name suggests.\nThe defining characteristic of CNPs and NPs is that E encodes individual pairs (x, f(x)) from the context separately, and the resulting representations are averaged to form a global representation, meaning one that is independent of the target points xt at which we then evaluate the Neural Process. This is often not the case in later work, for example in Attentive Neural Processes (Kim et al., 2019), where the individual representations are instead aggregated using an attention mechanism that depends on xt. In CNPs the representations are deterministic, while in NPs they parametrize mean and (log-)variance of a Gaussian distribution, so the latter are trained using variational inference. For details on implementation and training we refer to Appendix A.1. Our work will investigate how these global representations, which are finite-dimensional, represent infinite-dimensional function spaces.\nAs stated above, E, and by extension the Neural Process P, acts on set-valued inputs. This is contrary to the vast majority of machine learning work, where inputs are vectors of fixed dimension and ordering. Recall that sets are permutation invariant, so we must ensure that the same is true for the output of E. It is easy to see that this is given when we average individual encodings, but Zaheer et al. (2017) show that it is in fact the only way to ensure it: E is permutation-invariant if and only if it has a so-called sum-decomposition, i.e. it can be represented in the form\nE(x) = ρ(∑_{i=1}^{N} φ(xi)) (1)\nwhere ρ, φ are appropriately chosen functions. Wagstaff et al.
(2019) further show that to be able to represent all continuous permutation-invariant functions on sets with a cardinality of at most N, the dimension of the image Z must be at least N. This will become relevant in the following section.\n2We use boldface as a shorthand for sets, not vectors. 3This will depend on the implementation of E and G, and for neural networks F is practically restricted to continuous and differentiable functions." }, { "heading": "3 AN UPPER BOUND ON SIGNAL FREQUENCIES", "text": "We mentioned in the previous section that the encoder E in a Neural Process should have a sum-decomposition, so that the global representations are permutation-invariant, as shown in Zaheer et al. (2017). Expanding on this, Wagstaff et al. (2019) show that we require a representation size of at least N to be able to represent arbitrary continuous functions on sets of cardinality smaller or equal to N. What these works do not consider are the implications for situations where the elements of the sets are input-output tuples of some function f, as is typically the case in Neural Processes. We will use these previous findings to derive an upper bound on the frequencies ν any f ∈ F may contain so that they can be represented in a Neural Process. In order to do this, we must first define what it means to successfully learn a representation of a function space. Definition 3.1 (Representation of Function Spaces in Neural Processes). We say that a Neural Process P has learned a representation of a function space F, defined on an interval [a, b] ⊂ R, if, for some error tolerance ε, it holds for all x ∈ [a, b] and for all f ∈ F, represented as a suitable set of discrete measurements (xf, f(xf)), that |P((xf, f(xf)), x) − f(x)| < ε.\nThat means the learned representation must be such that we can encode a particular element of the function space f into it and are able to reconstruct it up to a predefined error tolerance. The choice of this tolerance is essentially arbitrary, but should reflect that for g ∉ F the reconstructions should generally not be accurate within ε. We also write that f is represented as a suitable set of discrete measurements, by which we mean that it must be possible to reconstruct f from those measurements.\nSwitching to signal-processing terminology, we know that to represent a continuous signal as a set of discrete measurements, we need to sample it at points with a distance of at most τ = 1/(2νmax), where νmax is the maximum frequency component of the signal. This is most commonly known as the Nyquist-Shannon sampling theorem (Whittaker, 1915; Kotelnikov, 1933; Shannon, 1949). For any finite real interval [a, b], this translates to a number of sampling points N > 2|b − a|νmax. The latter allows us to make a connection to the findings by Wagstaff et al. (2019), so that we can deduce an upper bound on the maximum signal frequency Neural Processes with a given representation size can reconstruct. Theorem 3.1 (Maximum Frequency in Neural Process Representations). A Neural Process P with latent dimension Dr can only learn a representation of some function space F defined on a finite interval [a, b] ⊂ R if for all f ∈ F with a maximum frequency content νmax,f it holds that:\nνmax,f < Dr / (2|b − a|) (2)\nNote that this means we should in theory be able to represent any function space that obeys Eq. (2) to within arbitrarily small ε. In practice, we will typically have less control over F, and we only find approximate representations.
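As a side illustration of the sampling-theorem step used in this derivation (not part of the paper's experiments): a signal with maximum frequency νmax can be recovered from equidistant samples with spacing τ < 1/(2νmax) by Whittaker-Shannon (sinc) interpolation. A minimal numpy sketch, ignoring the edge effects a finite interval introduces:

import numpy as np

def sinc_reconstruct(sample_xs, sample_ys, query_xs, tau):
    # Whittaker-Shannon interpolation from equidistant samples with spacing tau
    return np.array([np.sum(sample_ys * np.sinc((x - sample_xs) / tau)) for x in query_xs])

nu_max = 3.0                        # maximum signal frequency
tau = 0.9 / (2.0 * nu_max)          # sample slightly faster than the Nyquist rate
xs = np.arange(-3, 3, tau)
ys = np.cos(2 * np.pi * 2.5 * xs)   # a 2.5 Hz test signal, below nu_max
xq = np.linspace(-2, 2, 200)        # query away from the interval edges
yq = sinc_reconstruct(xs, ys, xq, tau)

With coarser sampling (τ above the Nyquist limit), the reconstruction aliases, which is exactly the regime the bound in Eq. (2) guards against.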
Part of our experiments will test how Neural Processes behave if the signals contain frequencies larger than those allowed by Eq. (2). It should also be noted that the Nyquist-Shannon theorem used for the above derivation assumes equidistant sampling points. During training, we work with randomly sampled inputs, but at test time equidistant points are used, as we outline in Appendix A.2." }, { "heading": "4 EXPERIMENTS & RESULTS", "text": "" }, { "heading": "4.1 VALIDATION OF THE FREQUENCY BOUND", "text": "Our experiments are grouped into three parts. The first experiment seeks to test the validity of the bound we just derived in Eq. (2). In particular, we train Neural Processes with varying representation sizes on two exemplary function spaces, so that for some models the representation size is insufficient to represent all frequencies. The function spaces we base our experiments on are those defined by Gaussian Process priors (for an introduction see for example Rasmussen & Williams (2006)) with an exponentiated-quadratic (EQ) kernel with lengthscale parameter l, as well as those defined by random real-valued Fourier series—for details we refer to Appendix A.2. While the Gaussian Process samples have an average Fourier magnitude that smoothly decays to zero, the distribution of Fourier magnitudes is uniform for the Fourier series, as shown in Fig. A.1. The Fourier series space also grants us precise control over the frequency domain, which will be useful in subsequent experiments.\nFigure 1 shows example reconstructions in a deterministic Neural Process (CNP) for samples from a Gaussian Process prior with EQ kernel (l = 0.05) and from a random Fourier series. For the GP example, the CNP essentially acts like a low-pass filter when the representation size is insufficient, which qualitatively confirms the bound we derived in Eq. (2). Interestingly, the bound can also be observed in a different way: for the Fourier series example, the CNP hardly suppresses high\nfrequencies, but instead limits the effective interval of the signal, simply ignoring the outer regions of it. Both behaviours are in agreement with the bound in Eq. (2). The Fourier example also serves as a good sanity check: with K = 19 (the maximum angular frequency) the data has a maximum frequency of νmax = K/(2π) = 3.02. For Dr = 32 this would limit the size of the interval to |b − a| < 5.29, for Dr = 16 to |b − a| < 2.65. The reconstructed signal regions in Fig. 1 are a bit narrower, and thus in good agreement with the bound we derived. For a variational Neural Process, we observe the same behaviour, but with stronger dampening of high frequencies in both cases, as seen in Fig. A.2. In Fig. A.3 we show the average reconstruction error for CNPs and NPs of different representation sizes, applied to GP examples with varying lengthscale, which results in a smooth decrease in error for larger representations and larger lengthscale parameters, as one would expect." }, { "heading": "4.2 HOW DO NEURAL PROCESSES REPRESENT FUNCTION SPACES?", "text": "Having found that Neural Processes do indeed observe the bound we derived in Eq. (2), we seek to understand how this happens. To this end, we visualize the learned representations in Neural Processes, which is possible because we restrict ourselves to X = Y = R. 
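For concreteness, the two function spaces used above can be sampled as follows (a sketch following Eq. (10) and Eq. (11) of Appendix A.2; we read the EQ kernel with the minus sign that the extracted formula appears to have dropped, and the helper names are ours):

import numpy as np

def random_fourier_series(x, K=19, rng=np.random):
    # Eq. (11): f(x) = a_0 + sum_k a_k cos(kx - phi_k), with a, phi ~ U[-1, 1]
    a = rng.uniform(-1, 1, size=K + 1)
    phi = rng.uniform(-1, 1, size=K)
    return a[0] + sum(a[k] * np.cos(k * x - phi[k - 1]) for k in range(1, K + 1))

def gp_eq_sample(x, l=0.05, rng=np.random):
    # Eq. (10): EQ kernel; a small jitter is added for numerical stability
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2 * l)) + 1e-6 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K)

x = np.sort(np.random.uniform(-3, 3, size=50))   # random training inputs
y_gp = gp_eq_sample(x)
y_fs = random_fourier_series(x)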
Again looking at the two function spaces from the previous experiment, we sample pairs (x, y) on a regular grid (50 × 50) with x ∈ [−3, 3], which is our training input range, and also y ∈ [−3, 3], as it suitably covers the value range of the outputs. We then encode each pair individually to a representation, thus constructing a map ri(x, y) for each representation channel. The latter allows us to uncover potential patterns and to gain a better understanding of how Neural Processes learn representations of function spaces.\nFigure 2 presents example representation channels for CNPs and NPs, trained on samples from a Gaussian Process with an EQ-kernel (l = 0.2) and on random Fourier series. The individual channels were selected to illustrate the general patterns of behaviour we observed. First, we find that representations are almost always anti-symmetric across y = 0. This is not surprising, as the function spaces we look at are on average symmetric (in the sense that f and −f will occur with the same probability), so the Neural Process learns the same representation, just with a different sign. More importantly, we find that both NPs and CNPs implicitly form a representation of the input space (i.e. the relevant interval of the function space domain), in the sense that different regions of the input space map to different representation channels. In CNPs this results in an oscillating pattern, with different channels exhibiting different frequencies. In other words, the CNP performs a frequency decomposition of the function space, not unlike a Fourier transform. At the same time, there is nothing that would enforce orthogonality between the different representation dimensions, and the Fourier series example highlights that we can generally expect a mixture of multiple frequencies for a given dimension. It should be noted that this frequency decomposition emerges naturally and is not incentivized externally (e.g. by a special loss).\nEven though NPs behaved very similarly to CNPs in the previous section, their learned representations look vastly different from those in a CNP. Instead of a frequency decomposition, they seem to partition the input space, so that a given representation dimension is written to by a specific, narrow region of the input space. Only for channels with a low average magnitude (i.e. a large index in Fig. 2) do we find behaviour similar to CNPs. We conclude that NPs can in principle learn a frequency decomposition, but their variational formulation (the only difference to CNPs) disincentivizes it. We show more representations for CNPs and NPs trained on GP data in Fig. A.4 and Fig. A.5, and for CNPs and NPs trained on Fourier series data in Fig. A.6 and Fig. A.7, sorting channels by their average magnitude." }, { "heading": "4.3 NEURAL PROCESSES AS BAND FILTERS", "text": "Our final experiment is designed to show that we can exert more control over the learned representations, and it will serve as additional evidence that deterministic Neural Processes (CNP) perform a frequency decomposition of the function space they represent. At the same time, it suggests a possible practical application of Neural Processes. We saw in Section 4.1 that CNPs sometimes act like low-pass filters, which could be a useful application, but the emergence of that behaviour is not reliable. We now train CNPs with a sufficiently large representation size (Dr = 128) to be band-pass and band-stop filters. To this end, we train the models on the Fourier series defined by Eq.
(11), but for the band-stop we set all components ak to zero for which 5 ≤ k ≤ 14, and likewise set all ak to zero outside of that range for the band-pass. We then look at the reconstructions of examples from the original series with all components present. For more details on the training procedure and how we sample points for function evaluation, please see Appendix A.1 and Appendix A.2.\nThe average Fourier magnitude of the training functions for the different models is given by the bottom left panel in Fig. 3. In the first model (Reference), all components are allowed; in the second (Band-stop), components in the middle of that range are suppressed; in the third (Band-pass) only components in the middle of the range are allowed. We then apply these models to examples from the reference data distribution, the result of which can be seen in the bottom-right panel of Fig. 3. The models that are only shown certain frequencies during training will suppress those frequencies that were not found in the training data, meaning they effectively become programmable band-stop or band-pass filters. This is confirmed by the example in the top rows of the figure, where we show\nboth the signal and its Fourier transform magnitude. Note that one needs to adjust the value range of the reference data before passing them through the band filters to prevent gain in the non-suppressed frequency regions. We give more details in Appendix A.2.\nUnfortunately, we were only partly able to elicit the same behaviour in variational NPs. While the trained band-stop filter worked exactly like the CNP band-stop, we were not able to train a bandpass filter. The models collapsed during training, meaning the loss plateaued and no meaningful representations were learned. There is no obvious reason why a band-pass shouldn’t work when a band-stop does, so we suspect our hyperparameter configuration was not ideal and that with more tuning it would be possible to train a band-pass as well. The NP results are shown in Fig. A.8." }, { "heading": "5 RELATED WORK", "text": "Neural Processes broadly relate to the topic of learning distributions of functions, even though we speak of the less restrictive term function space in our work. In this context, Bayesian Neural Networks (see for example Neal (1996); Graves (2011); Hernández-Lobato & Adams (2015)) are a popular choice, which place distributions on the weights of a network. However, in doing so they only implicitly represent distributions over functions, while Neural Processes learn an explicit finite-dimensional representation that can be leveraged for predictions, so as to condition on context observations given at test time. Perhaps the most well known class of methods that do the same are Gaussian Processes (for an introduction see Rasmussen & Williams (2006)). These are stochastic processes represented by a joint Gaussian distribution over context and target points, defined via the covariance matrix by a kernel. All flexibility of Gaussian Processes to represent different distributions of functions is decided by this kernel, so many works try to learn it (Yang et al., 2015; Wilson et al., 2016b;a; Tossou et al., 2019; Calandra et al., 2016). 
Even though Neural Processes were originally motivated by Gaussian Processes, they can be understood as orthogonal methods: Gaussian Processes represent a function space using a (potentially learned) kernel, while Neural Processes represent them in a learned finite-dimensional space.\nNeural Processes can also be interpreted from the perspective of deep learning on sets, the earliest work in the field being Zaheer et al. (2017). More theoretical contributions were made by Wagstaff et al. (2019), whose work we use to underpin our finding that the representation size in Neural Processes limits the maximum frequency of signals that can be represented. More applied work in the set-learning context has mostly been performed on point-cloud data (Qi et al., 2017b;a; Wu et al., 2019), which can be interpreted as a higher-dimensional instance of learning function spaces. Validating our findings in higher-dimensional spaces is an important direction for future work.\nNeural Processes have inspired a number of follow-up works. Perhaps the most well known addition are Attentive Neural Processes (Kim et al., 2019), which replace the averaging of individual representations with a learned attention mechanism (Vaswani et al., 2017). The aggregate representations are thus no longer independent of the target inputs, and no global representation is learned. This holds true for most follow-up work. Convolutional Conditional Neural Processes (Gordon et al., 2020) propose to no longer learn a finite-dimensional representation at all and instead work in function space by applying a CNN on suitable and variable discretizations of a kernel density estimate. Similar to ANP, Louizos et al. (2019) propose to not merge observations into a global latent space, but instead learn conditional relationships between them. Singh et al. (2019) and Willi et al. (2019) address the problem of overlapping and changing dynamics in time series data. Relating this to our work, it would be possible to test how the original Neural Processes would represent functions where the average frequency content is not constant over the domain. We leave this investigation for future work. Neural Processes have also been extended to scenarios where the function space maps to entire images, in the form of Generative Query Networks (GQN) (Eslami et al., 2018; Kumar et al., 2018). Employing vastly more powerful decoders, they can (re-)construct unseen views in 3D scenes, which relates Neural Processes to the field of 3D scene understanding, an area that has received a lot of attention more recently (Sitzmann et al., 2019; Engelcke et al., 2020; Mildenhall et al., 2020). Sitzmann et al. (2020) show that periodic activation functions make it easier for networks to learn so-called implicit representations—mappings from coordinates to a density, RGB values, etc.. We did in fact try periodic activation functions in our experiments, but found no difference to using tanh-activations. In the same context, Tancik et al. (2020) show that coordinates in Fourier space are often superior to coordinates in signal space to produce fine detail. We interpret this as an indication\nthat a representation in frequency space is more efficient for many signals, which could explain why Neural Processes implicitly perform a frequency decomposition. 
Note that the above introduces Fourier features explicitly as a form of inductive bias, while Neural Processes automatically learn this form of representation.\nIt is well known that neural networks, specifically an MLP with at least one hidden layer, can learn the Fourier transform of an input signal (Gallant & White, 1988). In fact, there have been a multitude of works that exploit this ability in one way or another, leading to the term Fourier Neural Networks. We refer to the recent review by Zhumekenov et al. (2019) for a comprehensive overview. The difference to Neural Processes is that these works typically apply networks directly to a sequence of points, while NPs learn a mapping that is only applied to individual (x, y) pairs, the representations of which are averaged. We emphasize again that the frequency decomposition occurs naturally in NPs, while these works usually employ direct supervision." }, { "heading": "6 DISCUSSION", "text": "The goal of this work was to gain a better understanding of the mechanisms that allow Neural Processes to form finite-dimensional representations of infinite-dimensional function spaces. To the best of our knowledge, ours is the first work to investigate this question, and our findings are both surprising and meaningful in this context. We first derived a theoretical upper bound on the frequency of signals that can be represented in Neural Processes with a given representation size. We empirically confirmed that the representation size does indeed pose such a limit and that this can result in Neural Processes acting like low-pass filters. Alternatively, models ignore parts of the signal to keep higher frequencies. Both behaviours are in agreement with the derived bound. We then visualized learned representations to understand how the models incorporate the concept of frequency into them. In all cases the models formed an implicit representation of the input space, in the sense that different x-values are mapped to different representation channels. For CNPs, an oscillating pattern emerges, such that different representation channels correspond to different frequencies, from which we concluded that CNPs perform a frequency decomposition of the function space they learn to represent. It should be noted that this behaviour emerges naturally and is not explicitly encouraged (e.g. by a special loss). In contrast to this, NPs tend to partition the space into more or less disjoint regions. They are still able to learn a frequency decomposition like CNPs, but we assume that the variational training objective makes it harder to do so, as sampling from the representation during training can also be understood as a random perturbation. For VAEs, which are conceptually similar to NPs, it was also suggested that models partition their latent space in a way that maximally spreads representations of individual data points under the prior distribution (Rezende & Viola, 2018). Finally, to further test the models' ability to distinguish frequencies, and also as an example of the possible practical benefits of our findings, we trained CNPs to be band-pass and band-stop filters. This worked extremely well: the Fourier component magnitudes of the training data are essentially "baked" into the models, and any frequency not found therein is subsequently suppressed in reconstructions from the models.
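A simple way to check this suppression is to compare the FFT magnitudes of reconstructions on an equidistant grid; in the sketch below, cnp_reconstruct is a hypothetical stand-in for querying a trained model:

import numpy as np

def fourier_magnitude(y, dx):
    # one-sided FFT magnitude of a signal sampled on an equidistant grid
    freqs = np.fft.rfftfreq(len(y), d=dx)
    return freqs, np.abs(np.fft.rfft(y)) / len(y)

# hypothetical usage with a trained band-stop model:
# x = np.linspace(-3, 3, 200)
# y_hat = cnp_reconstruct(model, context_x, context_y, x)
# freqs, mag = fourier_magnitude(y_hat, dx=x[1] - x[0])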
An obvious use case would be programmable frequency filters, for example when a more complex frequency response is desired.\nOverall, our work offers exciting new insights into the inner workings of Neural Processes and into the learning of representations of function spaces. Many applications of deep learning are concerned with representation learning in some way, and we hope that our findings inspire further research and forge a better understanding of the methods used in the field. Our work also opens up a number of exciting questions for future work. We only look at function spaces with scalar domain, and while we expect that our findings translate to higher dimensions, the same should be validated empirically. Seeing that variational Neural Processes can in principle learn frequency decompositions, it would be interesting to investigate how we can further incentivize this behaviour in them. Likewise, it should be possible to encourage orthogonality between the individual representation dimensions, so that frequencies are more cleanly separated. Further theoretical exploration of the conditions, besides frequency content, that allow function spaces to be represented could also be worthwhile. Finally, it is not immediately obvious how our findings translate to scenarios that disallow a classical definition of frequency, for example when the observations are entire images as in Eslami et al. (2018)." }, { "heading": "A APPENDIX", "text": "A.1 OPTIMIZATION & IMPLEMENTATION\nTo train Neural Processes, we represent individual examples f ∈ F as sets of randomly sampled evaluations (x, f(x) = y), which we partition into context set (xc, yc) and target set (xt, yt). We further have encoder E and decoder G of a Neural Process implemented as neural networks, for which we summarize the parameters in θ. In our implementation, both are multilayer perceptrons (MLP), meaning simple fully connected networks. Our goal is then to find the optimal set of parameters θ∗ that maximizes the likelihood of yt, given xc, yc and xt, over all f:\nθ∗ = argmax_θ ∑_{f∈F} log pθ(yt|xt, xc, yc) (3)\nwhere pθ is a placeholder for some parametrized likelihood function. We introduce the logarithm because we assume the likelihood factorizes across individual f, turning the expression into a sum. So what would this optimization look like in practice? For example, we could minimize the mean squared error between yt and the predictions ŷt from our network. This would implicitly assume a Gaussian likelihood with a fixed variance. However, we would like our model to predict a variance, so that it can indicate how uncertain it is about a prediction, and because Le et al. (2018) found that this results in overall better performance. We achieve this by implementing G as a network that predicts both the mean and the variance of a diagonal Gaussian distribution, and Eq. (3) becomes:\nθ∗ = argmax_θ ∑_{f∈F} ∑_t log N(yt; Gμθ(Z, xt), Gσθ(Z, xt)) (4)\nIn deterministic Neural Processes (CNP), we can directly optimize this with maximum likelihood training. In variational Neural Processes (NP), Z is also parametrized by a Gaussian, meaning, just like G, E predicts mean and variance of a Gaussian with Dr dimensions. In this case, we need to rewrite the summands of Eq. (3):\nlog pθ(yt|xt, xc, yc) = log E_{z∼p(Z|xc,yc)} pθ(yt|xt, z) (5)\nHere, p(Z|xc, yc) is not the distribution predicted by our encoder, but some true distribution we don't have access to.
The idea of variational inference (see for example Bishop (2006) for an introduction) is to approximate this $p$ by some other distribution $q_\theta$ and then to optimize $p_\theta$ and $q_\theta$ simultaneously. $q_\theta$ is what our encoder $E$ predicts, just like $p_\theta$ is what our decoder $G$ predicts. Continuing from Eq. (5):\n$$\mathrm{LHS} = \log \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)}\left[ p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{p(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right] \quad (6)$$\n$$\geq \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log\left( p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{p(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right) \quad (7)$$\n$$\approx \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log\left( p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z)\, \frac{q_\theta(z \mid \mathbf{x}_c, \mathbf{y}_c)}{q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t)} \right) \quad (8)$$\n$$= \mathbb{E}_{z \sim q_\theta(Z \mid \mathbf{x}_t, \mathbf{y}_t)} \log p_\theta(\mathbf{y}_t \mid \mathbf{x}_t, z) - D_{\mathrm{KL}}\big(q_\theta(z \mid \mathbf{x}_t, \mathbf{y}_t)\,\|\,q_\theta(z \mid \mathbf{x}_c, \mathbf{y}_c)\big) \quad (9)$$\nwhere LHS refers to the left-hand side of Eq. (5). In the first line, we have switched the underlying distribution from the true prior (meaning conditioned on the context) to an approximate posterior (meaning conditioned on both context and target; for notational simplicity we only write out the target set). The second line follows from Jensen's inequality, while in the third line we have replaced the true prior with the approximate prior. Finally, we have rewritten the right-hand side using the Kullback-Leibler (KL) divergence, a measure of distance between two distributions. Because we predict Gaussian distributions, the KL divergence has a closed-form expression; otherwise it would be impractical to use it in an optimization context. The last line is often called the evidence lower bound (ELBO) in variational inference.\nLet us put the above into more practical terms. When presented with an example consisting of context and target sets, we first use the encoder network $E$ to encode each context tuple separately. The encoder is an MLP with two input channels (for $X$ and $Y$), 6 hidden layers with 128 channels, and a final layer mapping to $D_r$ channels, i.e., to the representation. While all hidden layers have a fixed dimension of 128, we vary the representation dimension $D_r$ for our experiments (but never make it larger than 128). For the variational case, the final layer maps to $2D_r$ channels, half for the mean and half for the variance of the predicted Gaussian (in practice, we predict the log-variance to allow negative values). The individual representations are then averaged, and in the variational case we call this the prior ($q_\theta(z \mid \mathbf{x}_c, \mathbf{y}_c)$ in Eq. (9)). For the posterior, we also encode the target pairs and then average over all individual representations, including the context. During training forward passes, we sample once from the posterior and use this sample as the representation for the decoder. Ideally, we should sample many times to integrate the expectation in Eq. (9), but for stochastic mini-batch training it was found empirically that a single sample suffices (Jimenez Rezende et al., 2014; Kingma & Welling, 2014). The decoder predicts a Gaussian from the representation and an input $x_t$. It is implemented symmetrically to the encoder, meaning it is an MLP with $D_r + 1$ input channels, 6 hidden layers with 128 channels, and two output channels for mean and (log-)variance. We use tanh activations as well. As a loss we directly use the negative log-likelihood, meaning we evaluate the likelihood of a reference point $y_t$ under a Gaussian parametrized by the predicted mean and variance. Finally, we average over all predicted points, which are the target points as well as the context points. We use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001, repeatedly decaying it with a factor of 0.995 after 1000 batches.
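To make the training setup of A.1 concrete, the following is a minimal PyTorch sketch of the deterministic (CNP) variant: an encoder applied to each context pair, mean aggregation into the representation, and a decoder predicting mean and log-variance, trained with the Gaussian negative log-likelihood of Eq. (4). The tensor shapes, the toy batch, and the omission of the constant Gaussian normalization term are our own simplifications, not details from the paper.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=128, layers=6):
    # Fully connected network with tanh activations, as described in A.1.
    mods, d = [], d_in
    for _ in range(layers):
        mods += [nn.Linear(d, hidden), nn.Tanh()]
        d = hidden
    return nn.Sequential(*mods, nn.Linear(d, d_out))

class CNP(nn.Module):
    def __init__(self, d_r=128):
        super().__init__()
        self.encoder = mlp(2, d_r)      # one (x, y) pair -> representation
        self.decoder = mlp(d_r + 1, 2)  # (Z, x_t) -> (mean, log-variance)

    def forward(self, xc, yc, xt):
        # Encode each context tuple separately, then average the representations.
        r = self.encoder(torch.cat([xc, yc], dim=-1)).mean(dim=1, keepdim=True)
        out = self.decoder(torch.cat([r.expand(-1, xt.shape[1], -1), xt], dim=-1))
        return out.chunk(2, dim=-1)     # mu, log_var

def nll_loss(mu, log_var, y):
    # Negative log-likelihood under the predicted diagonal Gaussian (Eq. (4)),
    # dropping the constant 0.5*log(2*pi) term.
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

model = CNP(d_r=128)
xc, yc, xt, yt = (torch.randn(32, 10, 1) for _ in range(4))  # toy batch
mu, log_var = model(xc, yc, xt)
nll_loss(mu, log_var, yt).backward()
```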
We train with a batch size of 256 for a total of 600,000 batches.\nA.2 EXPERIMENT DETAILS\nWe conduct our experiments on two kinds of function spaces. The first is defined by a Gaussian Process prior using an EQ kernel given by:\n$$k(x_1, x_2) = \exp\left( -\frac{\|x_1 - x_2\|_2^2}{2l} \right) \quad (10)$$\nwhere $l$ is a lengthscale parameter. This example was also used in the original works (Garnelo et al., 2018a;b). The second are random Fourier series, defined by:\n$$f(x) = a_0 + \sum_{k=1}^{K} a_k \cos(kx - \varphi_k), \quad K = 19 \quad (11)$$\nwhere we sample $\varphi_k$ and $a_k$ (including $a_0$) randomly from the interval $[-1, 1]$. Note that $k$ is an angular frequency, while results are presented for regular frequencies.\nTo construct training examples, we sample $N$ context inputs and $M$ target input values uniformly from the range $[-3, 3]$. $N$ is a random integer from the range $[3, 100)$, while $M$ is a random integer from $[N, 100)$. This sampling strategy was adopted from the original works and Le et al. (2018). $y$-values are generated by evaluating the above functions (or drawing from the distribution in the case of the GP) on the random input values. The models are trained by letting them predict/reconstruct both context and target points, conditioned only on the context. At test time, we are only interested in reconstructions, meaning target points and context points are identical, and we work with 200 equally spaced input values across the full range.\nIn the band-filter experiment, we train models on Fourier series with some frequencies intentionally left out of the training data. When we train a model on data where some frequency components are blocked, the distribution of $y$-values a model sees during training becomes narrower. As a result, passing functions from the reference distribution (where no components are blocked) through a band-filter CNP will suppress the desired frequencies, but will also amplify non-blocked frequencies. To counteract this, we have to multiply the $y$-values of the reference data, which are approximately normally distributed, by $\sigma_{band}/\sigma_{ref}$, i.e., the ratio of the standard deviations of the respective $y$-distributions.\nA.3 ADDITIONAL VISUALIZATIONS" } ]
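As a sketch of the data pipeline in A.2, the snippet below generates a random Fourier series per Eq. (11) and draws a context/target split with the stated ranges. Whether the context points are a subset of the target points is our reading of the setup, and all names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fourier_series(K=19):
    # Eq. (11): amplitudes a_0..a_K and phases phi_1..phi_K drawn from [-1, 1].
    a = rng.uniform(-1.0, 1.0, size=K + 1)
    phi = rng.uniform(-1.0, 1.0, size=K)
    k = np.arange(1, K + 1)
    return lambda x: a[0] + np.sum(a[1:] * np.cos(np.outer(x, k) - phi), axis=-1)

def sample_example(f):
    # N in [3, 100), M in [N, 100), inputs uniform on [-3, 3].
    N = int(rng.integers(3, 100))
    M = int(rng.integers(N, 100))
    x = rng.uniform(-3.0, 3.0, size=M)
    y = f(x)
    return (x[:N], y[:N]), (x, y)  # context, target (context included in target)

f = sample_fourier_series()
(xc, yc), (xt, yt) = sample_example(f)
```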
2,020
null
SP:ba25b5b02701e01998e9dd22e4230c4e095f4542
[ "The paper deals with the problem of credit assignment and synchronous estimation in cooperative multi-agent reinforcement learning problems. The authors introduce marginal advantage functions and use them for the estimation of the counterfactual advantage function. These functions permit to decompose the Multi-Agent Policy Optimization Problem in Single Agent Policy Optimization subproblems, which are solved using TRPO." ]
Cooperative multi-agent tasks require agents to deduce their own contributions from shared global rewards, which is known as the challenge of credit assignment. General policy-based methods for multi-agent reinforcement learning solve this challenge by introducing differentiated value functions or advantage functions for individual agents. In a multi-agent system, the policies of different agents need to be evaluated jointly. In order to update policies synchronously, such value functions or advantage functions also need to be evaluated synchronously. However, in current methods, value functions or advantage functions use counter-factual joint actions which are evaluated asynchronously, and thus suffer from an inherent estimation bias. In this work, we propose approximatively synchronous advantage estimation. We first derive the marginal advantage function, an extension of the single-agent advantage function to multi-agent systems. Furthermore, we introduce a policy approximation for synchronous advantage estimation, and break down the multi-agent policy optimization problem into multiple sub-problems of single-agent policy optimization. Our method is compared with baseline algorithms on StarCraft multi-agent challenges, and shows the best performance on most of the tasks.
[]
[ { "authors": [ "Yu-Han Chang", "Tracey Ho", "Leslie P Kaelbling" ], "title": "All learning is local: Multi-agent learning in global reward games", "venue": "In Advances in neural information processing systems,", "year": 2004 }, { "authors": [ "Tianshu Chu", "Sandeep Chinchali", "Sachin Katti" ], "title": "Multi-agent reinforcement learning for networked system control", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jakob N Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Yasuhiro Fujita", "Shin-ichi Maeda" ], "title": "Clipped action policy gradient", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Evan Greensmith", "Peter L Bartlett", "Jonathan Baxter" ], "title": "Variance reduction techniques for gradient estimates in reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Marek Grzes" ], "title": "Reward shaping in episodic reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech M. Czarnecki", "Schaul Tom", "Leibo Joel Z", "Silver David", "Kavukcuoglu Koray" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Emilio Jorge", "Mikael Kågebäck", "Fredrik D Johansson", "Emil Gustavsson" ], "title": "Learning to play guess who? and inventing a grounded language as a consequence", "venue": "arXiv preprint arXiv:1611.03218,", "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Ryan Lowe", "Yi I Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multiagent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Patrick Mannion", "Sam Devlin", "Jim Duggan", "Enda Howley" ], "title": "Reward shaping for knowledgebased multi-objective multi-agent reinforcement learning", "venue": "The Knowledge Engineering Review,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Kavukcuoglu Koray", "Silver David", "Graves Alex", "Antonoglou Ioannis", "Wierstra Daan", "Riedmiller Martin" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Duc T. 
Nguyen", "Kumar Akshat", "Hoong Chuin L" ], "title": "Credit assignment for collective multiagent rl with global rewards", "venue": null, "year": 2018 }, { "authors": [ "Frans A Oliehoek", "Nikos Vlassis" ], "title": "Q-value functions for decentralized pomdps", "venue": "In Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems,", "year": 2007 }, { "authors": [ "Tabish Rashid", "Mikayel Samvelyan", "Christian Schroeder De Witt", "Gregory Farquhar", "Jakob Foerster", "Shimon Whiteson" ], "title": "Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1803.11485,", "year": 2018 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Peter Sunehag", "Guy Lever", "Audrunas Gruslys", "Wojciech Marian Czarnecki", "Vinı́cius Flores Zambaldi", "Max Jaderberg", "Marc Lanctot", "Nicolas Sonnerat", "Joel Z Leibo", "Karl Tuyls" ], "title": "Valuedecomposition networks for cooperative multi-agent learning based on team reward", "venue": "In AAMAS,", "year": 2018 }, { "authors": [ "Ming Tan" ], "title": "Multi-agent reinforcement learning: Independent vs. cooperative agents", "venue": "In Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "Paweł Wawrzyński" ], "title": "Real-time reinforcement learning by sequential actor–critics and experience replay", "venue": "Neural Networks,", "year": 2009 }, { "authors": [ "David H Wolpert", "Kagan Tumer" ], "title": "Optimal payoff functions for members of collectives. In Modeling complexity in economic and social systems, pp. 355–369", "venue": "World Scientific,", "year": 2002 }, { "authors": [ "Yaodong Yang", "Rui Luo", "Minne Li", "Ming Zhou", "Weinan Zhang", "Jun Wang" ], "title": "Mean field multiagent reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning(RL) algorithms have shown amazing performance on many singleagent(SA) environment tasks (Mnih et al., 2013)(Jaderberg et al., 2016)(Oh et al., 2018). However, for many real-world problems, the environment is much more complex where RL agents often need to cooperate with other agents. For example, taxi scheduling(Nguyen et al., 2018) and network control(Chu et al., 2019).\nIn cooperative multi-agent tasks, each agent is treated as an independent decision-maker, but can be trained together to learn cooperation. The common goal is to maximize the global return in the perspective of a team of agents. To deal with such tasks, the architecture of centralized training and decentralized executions(CTDE) is proposed(Oliehoek & Vlassis, 2007)(Jorge et al., 2016). The basic idea of CTDE is to construct a centralized policy evaluator, which only works during training and is accessable to global information. At the same time, each agent is assigned with a local policy for decentralized execution. The role of the evaluator is to evaluate agents’ local policies differentially from the global perspective.\nA challenge in construction of centralized evaluator is multi-agent credit assignment(Chang et al., 2004): in cooperative settings, joint actions typically generate only global rewards, making it difficult for each agent to deduce its own contribution to the team’s success. Credit assignment requires differentiate evaluation for agents’ local policies, but designing individual reward function for each agent is often complicated and lacks of generalization(Grzes, 2017)(Mannion et al., 2018). Current policy based MARL methods generally realize credit assignment by introducing differentiate value functions or advantage functions(Foerster et al., 2018)(Lowe et al., 2017). However, these value functions or advantage functions are estimated asynchronously but decentralized policies are updated synchronously, as shown in figure 1(b), which results in natural estimation bias.\nIn this paper, we propose a novel policy based MARL method called multi-agent policy optimization with approximatively synchronous advantage estimation(ASAE). In our work, we first define the counter-factual scenes, in which MA advantage estimation can be converted to SA advantage estimation. For certain agent, each counter-factual scene is assigned with a SA advantage. Then\nthe marginal advantage function is defined as the expectation of SA advantages on distribution of counter-factual scenes, and credit assignment is realized by constructing different scenes’ distribution for different agents. Moreover, in order to achieve synchronous advantage estimation, an approximation of other agents’ joint future policy is introduced. To ensure the approximation is reliable, a restriction is applied to the original multi-agent policy optimization(MAPO) problem. The approximate optimization problem is simplified and broken down into multiple sub-problems, which has a similar form to trust region policy optimization(TRPO) problem. And the sub-problems are finally solved by proximal policy optimization(PPO) method.\nWe have two contributions in this work: (1) A novel advantage estimation method called marginal advantage estimation, which realizes credit assignment for MARL is proposed. More importantly, this method provides a channel for various SA advantage functions expanding to multi-agent system. 
(2) We propose, for the first time, a simple yet effective method for approximatively synchronous advantage estimation." }, { "heading": "2 RELATED WORK", "text": "A common challenge in cooperative multi-agent tasks is credit assignment. RL algorithms designed for single-agent tasks ignore credit assignment and treat other agents as part of a partially observable environment. Such algorithms perform poorly in complex cooperative tasks which require high coordination (Lowe et al., 2017). To deal with the challenge, some value-based MARL methods estimate a local Q value for each agent, and the shared global Q value is then constructed from these local Q values. The value decomposition network (VDN) constructs the global Q value by simply adding all local Q values together (Sunehag et al., 2018). In the QMIX algorithm (Rashid et al., 2018), the global Q value is obtained by mixing the local Q values with a neural network. In mean field multi-agent methods, local Q values are defined on agent pairs. The mapping from local Q values to the global Q value is established by measuring the influence of each agent pair's joint action on the global return (Yang et al., 2018).\nSimilarly, for policy-based MARL methods, credit assignment is generally realized through differentiated evaluation with the CTDE structure. Some naive policy-based methods estimate local Q values for individual agents with a centralized critic (Lowe et al., 2017), resulting in large variance. Some other methods try to introduce advantage functions in MARL. The counter-factual multi-agent policy gradient (COMA) method (Foerster et al., 2018) is inspired by the idea of difference rewards (Wolpert & Tumer, 2002) and provides a naive yet effective approach for differentiated advantage estimation in cooperative MARL. In COMA, a centralized critic is used to predict the joint Q value function $Q^{\boldsymbol{\pi}}(s, \mathbf{u})$ of joint action $\mathbf{u}$ under state $s$, and the advantage for agent $a$ is defined as\n$$A^a(s, \mathbf{u}) = Q(s, \mathbf{u}) - \sum_{u'^a} \pi^a(u'^a \mid \tau^a)\, Q\big(s, (\mathbf{u}^{-a}, u'^a)\big) \quad (1)$$\nwhere $\tau$ and $\pi$ represent trajectory and policy, respectively, and $a$ and $-a$ denote the current agent and the set of other agents, respectively. COMA introduces a counter-factual baseline, which assumes that the other agents take fixed actions, as shown in figure 1(b). COMA performs synchronous updates with asynchronous estimation, which leads to lagging and biased advantage estimation. In contrast, asynchronous estimation & asynchronous updating is more reliable yet more complicated. An ideal approach is synchronous estimation & synchronous updating; however, it requires prediction of the other agents' future policies." }, { "heading": "3 BACKGROUND", "text": "We consider the most general setting of partially observable, fully cooperative multi-agent tasks, which can be described as a stochastic game defined by a tuple $G = \langle S, U, P, r, Z, O, n, \gamma \rangle$. The true state of the environment $s \in S$ is unavailable to all agents. At each time step, $n$ agents identified by $a \in A$ ($A = \{1, 2, \cdots, n\}$) receive their local observations $z^a \in Z$ and take actions $u^a \in U$ simultaneously. The joint observation $\mathbf{Z} = Z^n$ is acquired by the observation function $O(s, a): S \times A \to Z$. The next state is determined by the joint action $\mathbf{u} \in \mathbf{U}$ ($\mathbf{U} = U^n$) and the transition function $P(s' \mid s, \mathbf{u}): S \times \mathbf{U} \times S \to [0, 1]$. The reward function $r(s, \mathbf{u}): S \times \mathbf{U} \to \mathbb{R}$ is shared by all agents, as is the discounted return $G_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$, where $\gamma \in [0, 1)$ is a discount factor.
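As a small illustration of the shared discounted return defined above, the following Python sketch computes $G_t$ for every step of a reward sequence with a single backward pass; the reward values are placeholders.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    # G_t = sum_{i>=0} gamma^i * r_{t+i}, computed for every t backwards.
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

print(discounted_return(np.array([1.0, 0.0, 2.0]), gamma=0.9))
# -> [2.62, 1.8, 2.0]
```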
In policy-based MARL with the CTDE architecture, each agent has a local trajectory $\tau^a$ consisting of historical observations and actions $\{(z^a_0, u^a_0), (z^a_1, u^a_1), \cdots\}$, and an independent policy $\pi^a(u^a \mid \tau^a)$ is constructed for each agent on its local trajectory. The action-state value function $Q^{\boldsymbol{\pi}}(s, \mathbf{u})$ and the state value function $V^{\boldsymbol{\pi}}(s)$ are used to evaluate the joint policy. The advantage function is $A^{\boldsymbol{\pi}}(s, \mathbf{u}) = Q^{\boldsymbol{\pi}}(s, \mathbf{u}) - V^{\boldsymbol{\pi}}(s)$. For clarity, symbols in bold are used to denote the joint variables of the group of agents. In single-agent policy optimization problems (Schulman et al., 2015a), the objective is to maximize the expected action-state value function $\mathbb{E}_{\pi_\theta}[Q^{\pi_\theta}]$. Similarly, for MAPO with the CTDE structure, each agent optimizes its local policy individually with Q values estimated by the centralized critic. Under this circumstance, the overall objective is\n$$\text{for agent } a = 1 \text{ to } n: \quad \max_{\theta^a} \mathbb{E}_{(\pi_{\theta^a}, \boldsymbol{\pi}^{-a})}\left[ Q^a_{(\pi_{\theta^a}, \boldsymbol{\pi}^{-a})} \right] \quad (2)$$\nwhere the Q values can be substituted by advantages to reduce the variance." }, { "heading": "4 APPROXIMATIVELY SYNCHRONOUS ADVANTAGE ESTIMATION IN MULTI-AGENT SYSTEM", "text": "In this section, we first introduce marginal advantage estimation, which extends advantage functions from SARL to MARL and realizes credit assignment. We then describe how to achieve approximatively synchronous advantage estimation based on the marginal advantage function in the MAPO problem." }, { "heading": "4.1 MARGINAL ADVANTAGE ESTIMATION", "text": "In this subsection, we solve the challenge of credit assignment through the proposed marginal advantage estimation. We first consider a counter-factual way in which advantages are estimated asynchronously but policies are updated synchronously, as shown in figure 1(b). In this case, a counter-factual scene can be defined as follows: at a certain state, from the view of agent $a$, the other agents always take fixed actions. In partially observable, fully cooperative multi-agent settings, the counter-factual advantage of agent $a$'s action $u^a$ under state $s$ is derived from the joint action value (or joint Q value) function $Q(s, \mathbf{u})$:\n$$A^a(s, \mathbf{u}) = A^a\big(s, (u^a, \mathbf{u}^{-a})\big) = Q(s, \mathbf{u}) - \int_{u^a} Q(s, \mathbf{u}^{-a}, u^a)\, d\pi^a(u^a \mid \tau^a) \quad (3)$$\nFrom the view of agent $a$, the counter-factual advantage depends on the other agents' joint action $\mathbf{u}^{-a}$, which is a random variable with $\mathbf{u}^{-a} \sim \boldsymbol{\pi}^{-a}$. In order to remove the dependency, the marginal Q value function of agent $a$ is defined as\n$$Q^a(s, u^a) = \mathbb{E}_{\mathbf{u}^{-a} \sim \boldsymbol{\pi}^{-a}}\left[ Q\big(s, (u^a, \mathbf{u}^{-a})\big) \right] \quad (4)$$\nNotice that in the CTDE structure, the policies $\pi^a(u^a \mid \tau^a)$ and $\boldsymbol{\pi}^{-a}(\mathbf{u}^{-a} \mid \boldsymbol{\tau}^{-a})$ are independent. By replacing the joint Q value function with the marginal Q value function, the marginal advantage function is derived:\n$$A^a(s, u^a) = Q^a(s, u^a) - \int_{u^a} Q^a(s, u^a)\, d\pi^a(u^a \mid \tau^a) = \int_{\mathbf{u}^{-a}} Q(s, u^a, \mathbf{u}^{-a})\, d\boldsymbol{\pi}^{-a}(\mathbf{u}^{-a} \mid \boldsymbol{\tau}^{-a}) - \int_{u^a} \int_{\mathbf{u}^{-a}} Q(s, u^a, \mathbf{u}^{-a})\, d\boldsymbol{\pi}^{-a}(\mathbf{u}^{-a} \mid \boldsymbol{\tau}^{-a})\, d\pi^a(u^a \mid \tau^a) = \int_{\mathbf{u}^{-a}} \left[ Q(s, u^a, \mathbf{u}^{-a}) - \int_{u^a} Q(s, u^a, \mathbf{u}^{-a})\, d\pi^a(u^a \mid \tau^a) \right] d\boldsymbol{\pi}^{-a}(\mathbf{u}^{-a} \mid \boldsymbol{\tau}^{-a}) = \int_{\mathbf{u}^{-a}} A^a(s, \mathbf{u})\, d\boldsymbol{\pi}^{-a}(\mathbf{u}^{-a} \mid \boldsymbol{\tau}^{-a}) \quad (5)$$\nSuch a replacement does not change the result of advantage estimation because the joint Q value is substituted by its expectation. From equation (5), the value of the marginal advantage differs between agents, which realizes credit assignment. It can easily be proved that if the counter-factual advantage $A^a(s, \mathbf{u})$ is an unbiased estimation of the joint Q value $Q(s, \mathbf{u})$, then the marginal advantage is also an unbiased estimation of the marginal Q value (Appendix I).\nIn a counter-factual scene, from the view of agent $a$, the other agents and their fixed joint actions $\mathbf{u}^{-a}$ can be regarded as part of the environment.
Let $(s, \mathbf{u}^{-a}) = s_{ctf}$; the counter-factual advantage function can then be written as\n$$A^a(s, \mathbf{u}) = A^a(s_{ctf}, u^a) = Q(s_{ctf}, u^a) - \int_{u^a} Q(s_{ctf}, u^a)\, d\pi^a(u^a \mid \tau^a) \quad (6)$$\nIn counter-factual scenes, the counter-factual advantage function is identical to the advantage function in SARL, which means the counter-factual advantage in equation (5) can be replaced by any form of advantage function used in SARL. For example, considering the TD residual $\delta^a_t = r(s_t, u^a_t) + \gamma V(s_{t+1}) - V(s_t)$ as an estimation of the joint advantage $A^a(s_t, \mathbf{u}_t)$, the marginal advantages could be written as\n$$A^a(s_t, u_t) := \mathbb{E}_{\mathbf{u}^{-a} \sim \boldsymbol{\pi}^{-a}}\left[ \sum_{l=0}^{\infty} \gamma^l \delta^a_{t+l} \right], \qquad A^a(s_t, u_t) := \mathbb{E}_{\mathbf{u}^{-a} \sim \boldsymbol{\pi}^{-a}}\left[ \delta^a_t \right] \quad (7)$$\nThe former is an unbiased estimation but has high variance; the latter is a biased estimation for any $V \neq V^{\boldsymbol{\pi}}$ but has much lower variance. These two methods can be combined for a compromise between bias and variance (Schulman et al., 2015b).\nAs the agents' policies are independent, the expectation in equation (5) can be split into an $(n-1)$-layer integration, which is complicated. For simplicity and efficiency, Monte Carlo (MC) sampling can be applied as a substitution:\n$$A^a(s_t, u_t) = \int_{\mathbf{u}^{-a}} A^a(s_t, \mathbf{u}_t)\, d\boldsymbol{\pi}^{-a} \approx \frac{1}{m} \sum_{\mathbf{u}^{-a}}^{m} A^a(s_t, \mathbf{u}_t) \quad (8)$$\nwhere $m$ is the number of samples of the other agents' joint actions. The principle of the one-step process for calculating the marginal advantage with the TD residual is shown in figure 2. Firstly, based on the last true state $s_t$, $m$ joint action samples are drawn. These samples are then reorganized. Taking agent 1 as an example, the action $u_{1,t}$ from sample $Sa_1$ is combined with the other agents' action samples from $Sa_2$ to $Sa_m$, respectively. As a result, $m$ reorganized new samples are acquired. Based on these new samples, one-step simulations are executed and $m$ counter-factual rewards and states are acquired, which are used to calculate the estimation of the marginal advantage. At last, the next true state is selected from the counter-factual states.\nBoth methods in equation (7) use a V value predictor and require interactive simulation: agents need to interact with the environment to get extra samples. In this work, we instead consider using a centralized critic to predict joint Q values, so that the marginal advantages can be directly calculated from these Q values, which avoids interactive simulation." }, { "heading": "4.2 APPROXIMATIVELY SYNCHRONOUS ADVANTAGE ESTIMATION", "text": "In marginal advantage estimation, actions are sampled from the agents' past policies. The estimation is still asynchronous because it assumes the invariance of the others' policies. However, synchronous advantage estimation requires the prediction of the other agents' future actions, as shown in figure 1(c). In marginal advantage estimation, the problem of action prediction becomes one of policy prediction.\nDirect prediction of the others' future policies is very difficult. In iterative training, only the others' policies of the next iteration are needed. Assume the others' joint policy at iteration $i$ is $\boldsymbol{\pi}^{-a}_i(\mathbf{u}^{-a} \mid \boldsymbol{\tau}^{-a})$. The synchronous marginal advantage is given by\n$$A^a_{i,syn}(s, u^a) = \mathbb{E}_{\mathbf{u}^{-a} \sim \boldsymbol{\pi}^{-a}_i}\left[ A^a_i(s, u^a, \mathbf{u}^{-a}) \right] \quad (9)$$\nTo calculate the synchronous marginal advantage, we first introduce the approximation $A^a_i(s, \mathbf{u}) \approx A^a_{i-1}(s, \mathbf{u})$. The reliability of this approximation is ensured by a restriction $KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta$ (Schulman et al., 2015a). For simplicity, we use $\pi^a$ to represent $\pi^a(u^a \mid \tau)$. In marginal advantage estimation, we have introduced Monte Carlo (MC) sampling, and samples from the others' joint policy $\boldsymbol{\pi}^{-a}_i$ are needed. However, only the policies before iteration $i$ are available, so the second approximation $\boldsymbol{\pi}^{-a}_{i-1} \approx \boldsymbol{\pi}^{-a}_i$ is introduced.
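To make the Monte Carlo substitution of equation (8) concrete, here is a minimal Python sketch for discrete actions; the critic, the toy policies, and all names are placeholders rather than the paper's implementation. For each sampled joint action of the other agents, the inner term is the counter-factual advantage of equation (3), with the integral replaced by a sum over agent $a$'s actions.

```python
import numpy as np

rng = np.random.default_rng(0)

def marginal_advantage(q_critic, s, pi_a, u_a, sample_others, m=50):
    # MC estimate of Eq. (8): average the counter-factual advantage (Eq. (3))
    # of agent a's action u_a over m samples of the other agents' joint actions.
    n_actions = len(pi_a)
    total = 0.0
    for _ in range(m):
        u_others = sample_others()  # u^-a ~ pi^-a
        q = np.array([q_critic(s, u, u_others) for u in range(n_actions)])
        total += q[u_a] - np.dot(pi_a, q)  # Q(s, u) minus the policy baseline
    return total / m

# toy usage: a made-up critic and two other agents with 3 actions each
q_critic = lambda s, u_a, u_o: float(u_a) - 0.1 * float(np.sum(u_o))
adv = marginal_advantage(q_critic, s=0, pi_a=np.array([0.2, 0.3, 0.5]),
                         u_a=2, sample_others=lambda: rng.integers(0, 3, size=2))
print(adv)
```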
Similarly, in order to ensure that this approximation is reliable, a KL divergence restriction between $\boldsymbol{\pi}^{-a}_i$ and $\boldsymbol{\pi}^{-a}_{i-1}$ is applied as $KL\left[ \boldsymbol{\pi}^{-a}_i, \boldsymbol{\pi}^{-a}_{i-1} \right] < \delta$. The objective of the policy optimization problem with synchronous advantage estimation for agent $a$ is\n$$\max_{\pi^a_i} \mathbb{E}_{u^a \sim \pi^a_{i-1}}\left[ A^a_{i-1,syn}(s, u^a) \cdot \frac{\pi^a_i}{\pi^a_{i-1}} \right] = \max_{\pi^a_i} \mathbb{E}_{\mathbf{u} \sim \boldsymbol{\pi}_{i-1}}\left[ A^a_{i-1}(s, \mathbf{u}) \cdot \frac{\pi^a_i}{\pi^a_{i-1}} \right] \quad \text{subject to: } KL\left[ \boldsymbol{\pi}^{-a}_i, \boldsymbol{\pi}^{-a}_{i-1} \right] < \delta_1, \; KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta_2 \quad (10)$$\nThe first restriction involves the other agents' policies, which requires joint optimization of all agents' policies. The integral objective of multi-agent policy optimization with $n$ agents is\n$$\max_{\boldsymbol{\pi}_i} \sum_a^n \mathbb{E}_{\mathbf{u} \sim \boldsymbol{\pi}_{i-1}}\left[ A^a_{i-1}(s, \mathbf{u}) \cdot \frac{\pi^a_i}{\pi^a_{i-1}} \right] \quad \text{subject to: } \bigcup_a^n KL\left[ \boldsymbol{\pi}^{-a}_i, \boldsymbol{\pi}^{-a}_{i-1} \right] < \delta_1, \; \bigcup_a^n KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta_2 \quad (11)$$\nIt can be proved that $KL\left[ \boldsymbol{\pi}^{-a}_i, \boldsymbol{\pi}^{-a}_{i-1} \right] \leq \sum_o^{-a} KL\left[ \pi^o_i, \pi^o_{i-1} \right]$ (Appendix II). For simplification, a tighter form of the restriction $KL\left[ \boldsymbol{\pi}^{-a}_i, \boldsymbol{\pi}^{-a}_{i-1} \right] < \delta_1$ can be written as\n$$KL\left[ \pi^o_i, \pi^o_{i-1} \right] < \frac{\delta_1}{n-1} = \delta'_1, \quad \text{for } o \text{ in } \bigcup^{-a} \quad (12)$$\nBy replacing the restriction $KL\left[ \boldsymbol{\pi}^{-a}_i, \boldsymbol{\pi}^{-a}_{i-1} \right] < \delta_1$ with this tighter form, the first restriction in equation (11) is simplified to\n$$\bigcup_a^n \bigcup_o^{-a} \left\{ KL\left[ \pi^o_i, \pi^o_{i-1} \right] < \delta'_1 \right\}_a \quad (13)$$\nNotice that there are $n-1$ duplicate restrictions for each $KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta'_1$; removing the redundant duplicates, the first restriction in equation (11) finally equals\n$$\bigcup_a^n KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta'_1 \quad (14)$$\nSetting $\delta_1 = (n-1)\delta'_1 = (n-1)\delta_2$, the two restrictions in equation (11) can be combined into $\bigcup_a^n KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta_2$.\nThe integral problem of MAPO in equation (11) consists of $n$ individual policy optimization problems with $n$ sub-restrictions. In the CTDE structure, the policies of different agents are updated independently. For agent $a$, only the sub-restriction $KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta_2$ is effective. Thus, for further simplification, the integral objective can be split into $n$ sub-objectives:\n$$\text{for } a \text{ in } 1, 2, \cdots, n: \quad \max_{\pi^a_i} \mathbb{E}_{\mathbf{u} \sim \boldsymbol{\pi}_{i-1}}\left[ A^a_{i-1}(s, \mathbf{u}) \cdot \frac{\pi^a_i}{\pi^a_{i-1}} \right] \quad \text{subject to: } KL\left[ \pi^a_i, \pi^a_{i-1} \right] < \delta_2 \quad (15)$$\nThe sub-objectives above are similar to the objective in the trust region policy optimization problem (Schulman et al., 2015a). It has been proved that the KL divergence restriction can be effectively replaced by a clip operation (Schulman et al., 2017). The sub-objectives of MAPO with ASAE are finally acquired:\n$$\text{for } a \text{ in } 1, 2, \cdots, n: \quad \max_{\pi^a_i} \sum_1^m \left[ A^a_{i-1}(s, \mathbf{u}) \cdot \mathrm{clip}\left( \frac{\pi^a_i}{\pi^a_{i-1}},\, 1-\epsilon,\, 1+\epsilon \right) \right] \quad (16)$$" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we use the COMA advantage as the counter-factual advantage to estimate the approximatively synchronous advantage, and we compare our method with baseline algorithms on the benchmark StarCraft multi-agent challenge (SMAC)." }, { "heading": "5.1 EXPERIMENT SETTINGS", "text": "StarCraft II is a real-time strategy (RTS) game, and SMAC is a popular benchmark for cooperative MARL algorithms which provides an interface for RL agents to interact with StarCraft II, receiving rewards and observations and sending actions. In our experiments, we consider different types of battle game tasks involving both mixed and single types of agents. Specifically, our experiments are carried out on 8 tasks of different difficulty levels, as shown in table 1. In homogeneous tasks, agents are of the same type. In symmetric battle scenarios, each army is composed of the same units, and agents need to learn to focus fire without overkill. The asymmetric scenarios are more challenging because the enemy army always outnumbers the allied army by one or more units.
In micro-trick tasks, agents require a higher level of cooperation and a specific micromanagement trick to defeat the enemy, which makes these tasks the most difficult.\nIn our experiment settings, only ally units are considered to be MARL agents. The environment is set to be partially observable and each agent has a sight range, which is set to be a circular area around the agent. Only specific attributes of units in the sight range are observable. These attributes include distance, relative x coordinate, relative y coordinate, health and shield. Agents can also observe the terrain features surrounding them. In addition, the last actions of ally units are accessible to all agents. The global state includes information about all units on the map and is only available during centralized training. The action space is discrete and consists of 4 moving directions, k attack actions where k is the number of enemy units on the map, stop, and no-operation. The SMAC environment provides both sparse reward and shaped reward settings; we only consider the shaped reward setting, where rewards are much denser.\nIn ASAE, we use the COMA advantage as the counter-factual advantage to calculate the approximatively synchronous advantage. We adopt the CTDE architecture, and the structure of the actors and critic in our method is the same as in other policy-based MARL methods, such as COMA. The centralized critic is used to predict the joint Q value function of the reorganized samples. The state value is calculated from the average Q values of these reorganized samples, which avoids interactive simulations. The number of samples m is set to 50 and the clip range $\epsilon$ is 0.1.\nWe compare our method with baseline algorithms including IQL (Tan, 1993), VDN, QMIX and COMA. Among these baselines, only COMA is policy-based and most similar to our method. For all algorithms, good samples are added to the replay buffer in the initialization stage for early policy training, and all algorithms are trained for 3 million time-steps. The training batch size is set to 32 and the discount factor is 0.99. The learning rates for the critic and the actors are both 0.0005." }, { "heading": "5.2 EXPERIMENT RESULTS", "text": "The test winning rates during training are shown in figure 3. Compared to the other baseline algorithms, our algorithm converges fastest and performs best in most of the tasks. In particular, compared to the other policy-based MARL algorithm, COMA, our algorithm shows considerable improvement in 7 of the 8 tasks. According to the results, in homogeneous & symmetric tasks such as 3m and 8m, our algorithm converges after 1 million steps of training and reaches an approximately 100 percent test winning rate. For homogeneous & asymmetric tasks (10m vs 11m) and simple heterogeneous & symmetric tasks such as 3s5z and 1c3s5z, our algorithm converges more slowly, and the winning rate fluctuates slightly during training. However, our algorithm also reaches an approximately 100 percent test winning rate after 3 million steps of training. For the different micro-trick tasks, the performance and convergence speed of our algorithm vary greatly, while in harder tasks such as 10m vs 11m, MMM2 and 2m vs 1z, the COMA algorithm shows no performance. The winning rates after training are tested and shown in table 2. Our algorithm also shows the best performance in most of the tasks.\nAn interesting phenomenon is that the winning rate curve shows less fluctuation in tasks with homogeneous ally units, such as 3m, 2m vs 64zg, 2m vs 1z, etc.
It’s inferred that, in such tasks, different agents are functionally replaceable, which provides a higher fault tolerance rate for individual agent’s action. As a result, the performance fluctuation of certain agent during training has less influence on group’s joint policy.\nIn order to analyse the cooperative strategies learned by agents, we render the battle process between default AI and agents trained by our algorithms. Some key frames are showed in figure 4. Agents in red are ally units. In The first task 2s3z, cooperative agents learn to focus fire after training.\nWhile the enemy agents tend to attack the units nearby. After few rounds of crossfire, enemy group quickly lose the first unit. In the second task MMM2, besides focus fire, cooperative agents also learn to adjust formation and use skill to avoid being destroyed. Particularly, in micro-trick tasks, cooperative agents learn to take advantage of map features and units’ differences. As shown in the third sub-graph, in task 2m vs 64zg, only ally units are able to move across terrain. Take advantage of this, ally units can attack enemy and move across the terrain when enemy approaching thus avoid being attacked." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose a novel method of advantage estimation which address credit assignment and synchronous estimation in cooperative multi-agent systems. By introducing marginal Q value function, we derived the marginal advantage function and it’s relationship with counter-factual advantage function. Then, we define the counter-factual scene where counter-factual advantage can be replaced by single agent advantage. So we acquire a method for single agent advantage function expanding to multi-agent systems. Based on marginal advantage, we propose the approximatively synchronous advantage estimation. Through policy approximation and constrain simplification, the problem of MAPO is decomposed into multiple sub-problems of SA policy optimization and finally solved by PPO method. Our algorithms are evaluated on battle tasks of SMAC benchmark. Compared to baselines, our algorithms perform best in both training and testing. Moreover, visualized battle processes show that our agents acquire heuristic cooperation strategies after training.\nFor future work, we are interested in applying our algorithm to cooperative robot tasks. For example, two arms’ cooperation tasks where two arms are treated as individual agents and need to cooperate to accomplish complicate work. Moreover, because our method provides a channel for SA advantage function expanding to multi-agent system, it’s also interesting to investigate the application of various policy based RL algorithms on multi-agent scenarios." }, { "heading": "A APPENDIX I", "text": "Assume that joint advantage Aa(s,u) is a unbiased estimation of joint Q value function Q(s,u). 
Then\n$$A^a(s, \mathbf{u}) = Q(s, \mathbf{u}) - b_s, \quad \text{where } \nabla_\theta b_s \equiv 0 \quad (17)$$\nand therefore\n$$A^a(s, u^a) = \mathbb{E}^{-a}_{\boldsymbol{\pi}}\left[ A^a(s, \mathbf{u}) \right] = \mathbb{E}^{-a}_{\boldsymbol{\pi}}\left[ Q(s, \mathbf{u}) - b_s \right] = \mathbb{E}^{-a}_{\boldsymbol{\pi}}\left[ Q(s, \mathbf{u}) \right] - \mathbb{E}^{-a}_{\boldsymbol{\pi}}\left[ b_s \right] \quad (18)$$\nHere $\mathbb{E}^{-a}_{\boldsymbol{\pi}}\left[ Q(s, \mathbf{u}) \right]$ is exactly the marginal Q value function, and $\nabla_\theta \mathbb{E}^{-a}_{\boldsymbol{\pi}}\left[ b_s \right] = \mathbb{E}^{-a}_{\boldsymbol{\pi}}\left[ \nabla_\theta b_s \right] \equiv 0$." }, { "heading": "B APPENDIX II", "text": "Consider two agents, whose policies at iteration $i$ are represented by $\pi^1_i$ and $\pi^2_i$ respectively. Then\n$$KL\left[ \pi^1_i \pi^2_i, \pi^1_{i-1} \pi^2_{i-1} \right] = \int \pi^1_i \pi^2_i \log \frac{\pi^1_i \pi^2_i}{\pi^1_{i-1} \pi^2_{i-1}}\, d\mathbf{u} = \int \pi^1_i \pi^2_i \left( \log \frac{\pi^1_i}{\pi^1_{i-1}} + \log \frac{\pi^2_i}{\pi^2_{i-1}} \right) d\mathbf{u} \leq \int \pi^1_i \log \frac{\pi^1_i}{\pi^1_{i-1}}\, du + \int \pi^2_i \log \frac{\pi^2_i}{\pi^2_{i-1}}\, du = KL\left[ \pi^1_i, \pi^1_{i-1} \right] + KL\left[ \pi^2_i, \pi^2_{i-1} \right] \quad (19)$$\nThe relation can be extended to the joint distribution of the other agents' policies:\n$$KL\left[ \boldsymbol{\pi}^{-a}_i, \boldsymbol{\pi}^{-a}_{i-1} \right] = \int \prod_o^{-a} \pi^o_i \log \frac{\prod_o^{-a} \pi^o_i}{\prod_o^{-a} \pi^o_{i-1}}\, d\mathbf{u} \leq \sum_o^{-a} KL\left[ \pi^o_i, \pi^o_{i-1} \right] \quad (20)$$" } ]
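As an illustration of the clipped sub-objective in equation (16), here is a minimal PyTorch sketch of the per-agent surrogate loss, negated for gradient descent. It follows equation (16) as written, i.e., the advantage multiplies the clipped ratio directly; the toy tensors are placeholders, and $\epsilon = 0.1$ matches the clip range used in the experiments.

```python
import torch

def asae_loss(log_pi_new, log_pi_old, adv, eps=0.1):
    # Clipped surrogate of Eq. (16) for one agent, negated for minimization.
    ratio = torch.exp(log_pi_new - log_pi_old)
    return -(adv * torch.clamp(ratio, 1.0 - eps, 1.0 + eps)).mean()

# toy batch: log-probabilities of the taken actions and their advantages
log_new = torch.tensor([-1.0, -0.5], requires_grad=True)
log_old = torch.tensor([-1.1, -0.4])
adv = torch.tensor([0.65, -0.20])  # e.g., marginal advantages from Eq. (8)
asae_loss(log_new, log_old, adv).backward()
```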
2,020
null
SP:37bdb147b866b9e32a94d55dae82d7a42cea8da9
[ "This paper addresses the problem of vertex classification using a new Graph Convolutional Neural Network (NN) architecture. The linear operator within each of the layers of the GNNN is formed by a polynomial graph filter (i.e., a matrix polynomial of either the adjacency or the Laplacian novelty). Rather than working on the frequency domain, the paper focuses on learning the polynomial coefficients of the filter on the vertex domain. The key novelty is the consideration of a stack architecture for which the polynomial filter is formed by the successive application (i.e., matrix multiplication) of filters of order one. Numerical experiments with real datasets showcase the merits, including superior classification performance, of the proposed architecture. " ]
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high-homophily) assumptions in existing vertex classification models, resulting in a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
[]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Nazanin Alipourfard", "Kristina Lerman", "Hrayr Harutyunyan", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Filippo Maria Bianchi", "Daniele Grattarola", "Cesare Alippi", "Lorenzo Livi" ], "title": "Graph neural networks with convolutional arma filters", "venue": "arXiv preprint arXiv:1901.01343,", "year": 2019 }, { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Adversarial attacks on node embeddings via graph poisoning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Bruno Brosowski", "Frank Deutsch" ], "title": "An elementary proof of the stone-weierstrass theorem", "venue": "Proceedings of the American Mathematical Society,", "year": 1981 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: Fast learning with graph convolutional networks via importance", "venue": null, "year": 2018 }, { "authors": [ "Lei Chen", "Le Wu", "Richang Hong", "Kun Zhang", "Meng Wang" ], "title": "Revisiting graph based collaborative filtering: A linear residual graph convolutional network approach", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Elliott Ward Cheney" ], "title": "Introduction to approximation theory", "venue": null, "year": 1966 }, { "authors": [ "Eli Chien", "Jianhao Peng", "Pan Li", "Olgica Milenkovic" ], "title": "Adaptive universal generalized pagerank graph neural network, 2020", "venue": null, "year": 2020 }, { "authors": [ "Fan RK Chung", "Fan Chung Graham" ], "title": "Spectral graph theory", "venue": "Number 92 in CBMS Workshop on Spectral Graph Theory. 
American Mathematical Society,", "year": 1997 }, { "authors": [ "Hanjun Dai", "Hui Li", "Tian Tian", "Xin Huang", "Lin Wang", "Jun Zhu", "Le Song" ], "title": "Adversarial attack on graph structured data", "venue": "arXiv preprint arXiv:1806.02371,", "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "James Fox", "Sivasankaran Rajamanickam" ], "title": "How robust are graph neural networks to structural noise", "venue": "arXiv preprint arXiv:1912.10206,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "arXiv preprint arXiv:1704.01212,", "year": 2017 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Martin Grohe" ], "title": "word2vec, node2vec, graph2vec, x2vec: Towards a theory of vector embeddings of structured data", "venue": "In Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems,", "year": 2020 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "David K Hammond", "Pierre Vandergheynst", "Rémi Gribonval" ], "title": "Wavelets on graphs via spectral graph theory", "venue": "Applied and Computational Harmonic Analysis,", "year": 2011 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks. 2017", "venue": null, "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Qimai Li", "Xiao-Ming Wu", "Han Liu", "Xiaotong Zhang", "Zhichao Guan" ], "title": "Label efficient semisupervised learning via graph filtering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard S Zemel" ], "title": "Lanczosnet: Multi-scale deep graph convolutional networks. 
2019", "venue": null, "year": 2019 }, { "authors": [ "Meng Liu", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Non-local graph neural networks", "venue": "arXiv preprint arXiv:2005.14612,", "year": 2020 }, { "authors": [ "Andrew Y Ng", "Michael I Jordan", "Yair Weiss" ], "title": "On spectral clustering: Analysis and an algorithm", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Hoang NT", "Takanori Maehara" ], "title": "Revisiting graph neural networks: All we have is low-pass filters", "venue": "arXiv preprint arXiv:1905.09550,", "year": 2019 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "International Conference on Representation Learning,", "year": 2020 }, { "authors": [ "Hongbin Pei", "Bingzhe Wei", "Kevin Chen-Chuan Chang", "Yu Lei", "Bo Yang" ], "title": "Geom-gcn: Geometric graph convolutional networks", "venue": "arXiv preprint arXiv:2002.05287,", "year": 2020 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": null, "year": 2019 }, { "authors": [ "Benedek Rozemberczki", "Carl Allen", "Rik Sarkar" ], "title": "Multi-scale attributed node embedding", "venue": "arXiv preprint arXiv:1909.13021,", "year": 2019 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Uri Shaham", "Kelly Stanton", "Henry Li", "Boaz Nadler", "Ronen Basri", "Yuval Kluger" ], "title": "Spectralnet: Spectral clustering using deep neural networks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "Jianbo Shi", "Jitendra Malik" ], "title": "Normalized cuts and image segmentation", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2000 }, { "authors": [ "David I. Shuman", "Sunil K. Narang", "Pascal Frossard", "Antonio Ortega", "Pierre Vandergheynst" ], "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "venue": "arXiv preprint arXiv:1211.0053,", "year": 2012 }, { "authors": [ "Indro Spinelli", "Simone Scardapane", "Uncini Aurelio" ], "title": "Adaptive propagation graph convolutional network", "venue": "arXiv preprint arXiv:2002.10306,", "year": 2020 }, { "authors": [ "Jason Weston", "Frédéric Ratle", "Hossein Mobahi", "Ronan Collobert" ], "title": "Deep learning via semisupervised embedding", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Holanda de Souza Jr.", "Christopher Fifty", "Tao Yu", "Kilian Q. 
Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "S Yu Philip" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "arXiv preprint arXiv:1806.03536,", "year": 2018 }, { "authors": [ "Zhilin Yang", "William Cohen", "Ruslan Salakhudinov" ], "title": "Revisiting semi-supervised learning with graph embeddings", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "Graphsaint: Graph sampling based inductive learning method", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Daniel Zügner", "Stephan Günnemann" ], "title": "Adversarial attacks on graph neural networks via meta learning", "venue": "arXiv preprint arXiv:1902.08412,", "year": 2019 }, { "authors": [ "Daniel Zügner", "Amir Akbarnejad", "Stephan Günnemann" ], "title": "Adversarial attacks on neural networks for graph data", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Pei" ], "title": "2020) to split the traffic amount into 5 categories. The synthetic dataset is generated using NetworkX library and labeled by its bipartite parts", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fullyconnected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum." }, { "heading": "1 INTRODUCTION", "text": "The semi-supervised vertex classification problem (Weston et al., 2012; Yang et al., 2016) in attributed graphs has become one of the most fundamental machine learning problems in recent years. This problem is often associated with its most popular recent solution, namely Graph Convolutional Networks (Kipf & Welling, 2017). Since the GCN proposal, there has been a vast amount of research to improve its scalability (Hamilton et al., 2017; Chen et al., 2018; Wu et al., 2019) as well as performance (Liao et al., 2019; Li et al., 2019; Pei et al., 2020).\nExisting vertex classification models often (implicitly) assume that the graph has large vertex homophily (Pei et al., 2020), or equivalently, low-frequency property (Li et al., 2019; Wu et al., 2019); see Section 2.1 for graph frequency. However, this assumption is not true in general. For instance, let us take the Wisconsin dataset (Table 1), which captures a network of students, faculty, staff, courses, and projects. These categories naturally exhibit different frequency patterns1. Connections between people are often low-frequency, while connections between topics and projects are often midrange. This problem becomes apparent as GCN-like models show low accuracies on this dataset; for example, see (Pei et al., 2020; Chen et al., 2020b; Liu et al., 2020).\nThis paper aims at establishing a GCN model for the vertex classification problem (Definition 1) that does not rely on any frequency assumption. Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure.\nContributions. By observing the relation between label frequency and performance of existing GCN-like models, we propose to learn the graph filters coefficients directly rather than learning the MLP part of a GCN-like layer. We use filter stacking to implement a trainable graph filter, which is capable of learning any filter function. Our stacked filter construction with novel learnable filter parameters is easy to implement, sufficiently expressive, and less sensitive to the filters’ degree. By using only one hyper-parameter setting, we show that our model is more adaptive than existing work on a wide range of benchmark datasets.\nThe rest of our paper is organized as follows. Section 2 introduces notations and analytical tools. Section 3 provides insights into the vertex classification problem and motivations to our model’s design. Section 4 presents an implementation of our model. Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models. Section 6 compares our model and other existing methods empirically. 
We also provide additional experimental results in Appendix A.\n1"Frequency" is an equivalent concept to "homophily" and will be explained in Section 2." }, { "heading": "2 PRELIMINARIES", "text": "We consider a simple undirected graph $G = (V, E)$, where $V = \{1, \ldots, n\}$ is a set of $n$ vertices and $E \subseteq V \times V$ is a set of edges. A graph $G$ is called an attributed graph, denoted by $G(X)$, when it is associated with a vertex feature mapping $X: V \mapsto \mathbb{R}^d$, where $d$ is the dimension of the features. We define the following vertex classification problem, also known in the literature as the semi-supervised vertex classification problem (Yang et al., 2016). Definition 1 (Vertex Classification Problem). We are given an attributed graph $G(X)$, a set of training vertices $V_{tr} \subset V$, training labels $Y_{tr}: V_{tr} \to C$, and a label set $C$. The task is to find a model $h: V \to C$ using the training data $(V_{tr}, Y_{tr})$ that approximates the true labeling function $Y: V \to C$.\nLet $A$ be the adjacency matrix of the graph $G$, i.e., $A_{i,j} = 1$ if $(i,j) \in E$ and 0 otherwise. Let $d_i = \sum_j A_{ij}$ be the degree of vertex $i \in V$, and let $D = \mathrm{diag}(d_1, \ldots, d_n)$ be the $n \times n$ diagonal matrix of degrees. Let $L = D - A$ be the combinatorial graph Laplacian, and let $\mathcal{L} = D^{-1/2} L D^{-1/2}$ be the symmetric normalized graph Laplacian. We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties: (1) its eigenvalues range from 0 to 2; and (2) the spectral properties can be compared between different graphs (Chung & Graham, 1997). In recent literature, the normalized adjacency matrix with added self-loops, $\tilde{A} = I - \mathcal{L} + c$, is often used as the propagation matrix, where $c$ is some diagonal matrix." }, { "heading": "2.1 GRAPH FREQUENCY", "text": "Graph signal processing (Shuman et al., 2012) extends "frequency" concepts in classical signal processing to graphs using the graph Laplacian. Let $\mathcal{L} = U \Lambda U^\top$ be the eigendecomposition of the Laplacian, where $U \in \mathbb{R}^{n \times n}$ is the orthogonal matrix consisting of the orthonormal eigenvectors of $\mathcal{L}$ and $\Lambda$ is the diagonal matrix of eigenvalues. Then, we can regard each eigenvector $u_k$ as an "oscillation pattern" and its eigenvalue $\lambda_k$ as the "frequency" of the oscillation. This intuition is supported by the Rayleigh quotient as follows:\n$$r(\mathcal{L}, x) \triangleq \frac{x^\top \mathcal{L} x}{x^\top x} = \frac{\sum_{u \sim v} \mathcal{L}_{u,v}\,(x(u) - x(v))^2}{\sum_{u \in V} x(u)^2} \quad (1)$$\nwhere $\sum_{u \sim v}$ sums over all unordered pairs for which $u$ and $v$ are adjacent, $x(u)$ denotes the entry of vector $x$ corresponding to vertex $u$, and $\mathcal{L}_{u,v}$ is the $(u,v)$-entry of $\mathcal{L}$. From the definition we see that $r(x)$ is non-negative and $\mathcal{L}$ is positive semi-definite. $r(x)$ is also known as a variational characterization of the eigenvalues of $\mathcal{L}$ (Horn & Johnson, 2012, Chapter 4); hence $0 \leq r(x) \leq 2$ for any non-zero real vector $x$. We use the notation $r(x)$ to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context. The Rayleigh quotient $r(x)$ measures how the data $x$ is oscillating. Hence, in this study, we use the terms "frequency" and "Rayleigh quotient" interchangeably. By the definition, the eigenvector $u_i$ has frequency $\lambda_i$.\nThe labeling $y$ of the vertices is low-frequency if adjacent vertices are more likely to have the same label. This is a common assumption made by spectral clustering algorithms (Shi & Malik, 2000; Ng et al., 2002; Shaham et al., 2018). The terms homophily and heterophily, commonly used in network science, correspond to low-frequency and high-frequency, respectively." }, { "heading": "2.2 GRAPH FILTERING", "text": "In classical signal processing, a given signal is processed by filters in order to remove unwanted interference. Here, we first design a frequency response $f(\lambda)$ of the filter, and then apply the filter to the signal in the sense that each frequency component $\hat{x}(\lambda)$ of the data is modulated as $f(\lambda)\hat{x}(\lambda)$. Graph signal processing extends this concept as follows. As in classical signal processing, we design a filter $f(\lambda)$. Then, we represent a given graph signal $x \in \mathbb{R}^{|V|}$ as a linear combination of the eigenvectors, $x = \sum_i x_i u_i$, and modulate each frequency component by $f(\lambda)$ as $\bar{x} = \sum_i f(\lambda_i) x_i u_i$. An important fact is that this can be done without performing the eigendecomposition explicitly. Let $f(\mathcal{L})$ be the matrix function induced from $f(\lambda)$. Then, the filter is represented by $f(\mathcal{L})x$. As an extension of signal processing, graph signal processing deals with signals defined on graphs. In Definition 1, each column of the feature matrix $X \in \mathbb{R}^{n \times d}$ is a "graph signal". Let $\mathcal{L} = U \Lambda U^\top$ be
}, { "heading": "2.2 GRAPH FILTERING", "text": "In classical signal processing, a given signal is processed by filters in order to remove unwanted interference. Here, we first design a frequency response f(λ) of the filter, and then apply the filter to the signal in the sense that each frequency component x̂(λ) of the data is modulated as f(λ)x̂(λ). Graph signal processing extends this concept as follows. Same as in classical signal processing, we design a filter f(λ). Then, we represent a given graph signal x ∈ R|V | as a linear combination of the eigenvectors as x = ∑ i xiui. Then, we modulate each frequency component\nby f(λ) as x = ∑ i f(λi)xiui. An important fact is that this can be done without performing the eigendecomposition explicitly. Let f(L) be the matrix function induced from f(λ). Then, the filter is represented by f(L)x. As an extension of signal processing, graph signal processing deals with signals defined on graphs. In definition 1, each column of the feature matrix X ∈ Rn×d is a “graph signal”. Let L = UΛU> be\nthe eigendecomposition where U ∈ Rn×n consists of orthonormal eigenvectors. Signal X is filtered by function f of the eigenvalues as follow.\nX̄ = Uf(Λ)U>X = f(L)X (2)\nIn general, different implementations of f(L) lead to different graph convolution models. For instance, GCN and SGC (Wu et al., 2019) are implemented by f(L) = (I−L+(D+ I)−1/2L(D+ I)−1/2)k, where the constant term stems from the fact that self-loops are added to vertices and k is the filter order. Generally, the underlying principle is to learn or construct the appropriate filter function f such that it transforms X into a more expressive representation. The filter in GCN is called a low-pass filter because it amplifies low-frequency components (Li et al., 2018; NT & Maehara, 2019)." }, { "heading": "3 SPECTRAL PROPERTIES OF FILTERS", "text": "Towards building a ubiquitous solution, we take an intermediate step to study the vertex classification problem. Similar to the unsupervised clustering problem, an (implicit) low-frequency assumption is commonly made. However, the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns. Table 1 shows three groups of datasets, each with different label frequency ranges. Notably, WebKB datasets (Wisconsin, Cornell, Texas) have mixed label frequencies; some labels have low frequencies while others have midrange frequencies. Therefore, in order to relax the frequency assumptions, we need to learn the filtering function f(λ) in a similar way as proposed by Defferrard et al. (2016).\nThe filtering function f(λ) is often approximated using a polynomial of the graph Laplacian as\nf(L) ≈ poly(L) = K∑ i=0 θiLi. (3)\nBecause polynomials can uniformly approximate any real continuous function on a compact interval (see, e.g., (Brosowski & Deutsch, 1981)), such approximation scheme is well-justified.\nKipf & Welling (2017) derived their GCN formulation as follows. In their equation 5, they approximated a graph filter gθ by Chebyshev polynomials Tk as\ngθ ∗ x ≈ K∑ k=0 θkTk(D −1/2AD−1/2)x. 
Kipf & Welling (2017) derived their GCN formulation as follows. In their equation 5, they approximated a graph filter g_θ by Chebyshev polynomials T_k as

g_θ ∗ x ≈ ∑_{k=0}^{K} θ_k T_k(D^{−1/2} A D^{−1/2}) x.  (4)

Then, they took the first two terms and shared the parameters as θ_0 = −θ_1 to obtain their equation 7:

g_θ ∗ x ≈ θ (I_N + D^{−1/2} A D^{−1/2}) x = θ (2 I_N − L) x.  (5)

Finally, they extended the scalar θ to a matrix Θ to accommodate multiple feature dimensions:

Z = D̃^{−1/2} Ã D̃^{−1/2} X Θ.  (6)

Kipf & Welling (2017) claimed that the weight matrix Θ can learn different filters, and subsequent works (e.g., (Veličković et al., 2018; Spinelli et al., 2020; Chen et al., 2020b)) also learned filters through Θ. However, neither in theory nor in practice is this the case (Oono & Suzuki, 2020). As the construction suggests, a GCN layer only represents a filter of the form f(λ) ≈ 2 − λ. To properly learn different graph filters, we should learn the multiplying parameters θ_0, θ_1, . . . , θ_K in equation 3. In the next section, we propose a learning model which directly learns these multiplying parameters." }, { "heading": "4 MODEL DESCRIPTION", "text": "The previous discussion provided several insights: (1) a vertex classification model's frequency is decided by its filter; (2) a mechanism to match the frequency of the data is necessary; and (3) directly learning the polynomial filter's coefficients is desirable if we do not want to make any frequency assumption. Based on these observations, we implement an adaptive Stacked Graph Filter (SGF) model. Figure 1 visually describes SGF.
Design decisions. The novelty of our model is the stacked filter, and we directly learn the filtering function via the filter coefficients α and β, which makes SGF work well universally without frequency hyper-parameters. The deep filter module consists of filters stacked on top of each other with skip-connections, implementing the ideas in Proposition 2. Each filter layer has two learnable scalars, α_ℓ and β_ℓ, which control the shape of the linear filter (Figure 1). Two learnable linear layers W_in and W_out with a non-linear activation serve as a non-linear classifier (NT & Maehara, 2019).
The input part of our architecture resembles APPNP (Klicpera et al., 2019) in the sense that the input signals (vertex features) are passed through a learned weight and then fed into filtering. The output part of our architecture resembles SGC (Wu et al., 2019), where we learn the vertex labels from the filtered signals. This combination naturally takes advantage of both the bottom-up (APPNP) and top-down (SGC) approaches. Compared to APPNP and SGC, besides the difference in filter learning, our model performs filtering (propagation) on the latent representation and classifies the filtered representation, whereas APPNP propagates the predicted features and SGC classifies the filtered features.
From the spectral filtering viewpoint, our approach is most similar to ChebNet (Defferrard et al., 2016), since both models aim to learn the filtering polynomial via its coefficients. The Chebyshev polynomial basis is often used in signal processing because it provides optimal interpolation points (Cheney, 1966; Hammond et al., 2011). However, since we are learning the coefficients of an unknown polynomial filter, all polynomial bases are equivalent. To demonstrate this point, we implement the Stacked Filter module (Figure 1) using ChebNet's recursive formula in Section 6. We find that the Chebyshev polynomial basis approach has performance similar to the stacked approach, with one slight caveat on choosing λmax. We empirically demonstrate this problem by setting the scaling factor λmax = 1.5.
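Since we compare against a Chebyshev-basis filter, the sketch below shows where the scaling factor λmax enters. This is an illustrative NumPy version of the recursion T_k = 2 L̂ T_{k−1} − T_{k−2} (names are ours; it is not ChebNet's exact implementation, and it assumes len(theta) ≥ 2):

```python
import numpy as np

def cheby_filter(lap, X, theta, lam_max=2.0):
    """Chebyshev filter sum_k theta[k] T_k(L_hat) X, where
    L_hat = 2 L / lam_max - I rescales eigenvalues toward [-1, 1]."""
    n = lap.shape[0]
    L_hat = (2.0 / lam_max) * lap - np.eye(n)
    T_prev, T_curr = X, L_hat @ X            # T_0(L_hat) X and T_1(L_hat) X
    out = theta[0] * T_prev + theta[1] * T_curr
    for t in theta[2:]:
        # Chebyshev recursion: T_k = 2 L_hat T_{k-1} - T_{k-2}
        T_prev, T_curr = T_curr, 2.0 * (L_hat @ T_curr) - T_prev
        out = out + t * T_curr
    return out
```

If lam_max underestimates the true largest eigenvalue of L, some eigenvalues of L̂ leave [−1, 1] and T_k(L̂) grows quickly with k; this is the instability discussed next.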
Note that, as pointed out by Kipf & Welling (2017), this problem can be mitigated simply by assuming λmax = 2, so that all eigenvalues stay in [−1, 1].
Given an instance of Problem 1, let σ be an activation function (e.g., ReLU), Ã = I − (D + I)^{−1/2} L (D + I)^{−1/2} be the augmented adjacency matrix, and α_ℓ and β_ℓ be the filter parameters at layer ℓ. A K-layer SGF is given by:

SGF with input Ã:  H_0 = σ(X W_in);  H_ℓ = α_ℓ Ã H_{ℓ−1} + β_ℓ H_0 for ℓ = 1 . . . K;  ŷ = H_K W_out.
SGF with input L:  H_0 = σ(X W_in);  H_ℓ = α_ℓ L H_{ℓ−1} + β_ℓ H_0 for ℓ = 1 . . . K;  ŷ = H_K W_out.

SGF can be trained with conventional objectives (e.g., negative log-likelihood) to obtain a solution to Problem 1. We present our model using the augmented adjacency matrix to show its similarity to the existing literature; however, as noted in Figure 1, we can replace Ã with L.
The stacked filter is easy to implement. Moreover, it can learn any polynomial of order K, as follows. The closed form of the stacked filter (Figure 1) is

β_K I + ∑_{i=1}^{K} ( ∏_{j=i}^{K} α_j ) β_{i−1} L^{K−i+1},  (7)

where β_0 = 1. Because each term of equation 7 contains a unique parameter, we obtain the following. Proposition 2. Any polynomial poly(L) of order K can be represented in the form of equation 7.
Note that the same result holds if we replace L in equation 7 with Ã. In practice, we typically set the initial values to α_i = 0.5 and update them via back-propagation. The learned α_i are then likely to satisfy |α_i| < 1, which yields a further property of the stacked filter: it prefers a low-degree filter, because the coefficients of the higher-order terms are products of many α_i and hence vanish exponentially faster. This advantage is relevant when we compare with a trivial implementation of the polynomial filter that learns the θ_i directly (this approach corresponds to horizontal stacking and ChebNet (Defferrard et al., 2016)). In Appendix A.1, we compare these two implementations and confirm that the stacked filter is more robust in terms of filter degree than the trivial implementation."
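To make the model concrete, here is a minimal PyTorch sketch of the SGF forward pass (class and variable names are ours; dropout and the reduced learning rate for the linear layers described in Section 6 are omitted):

```python
import torch
import torch.nn as nn

class SGF(nn.Module):
    """Minimal Stacked Graph Filter: H_0 = relu(X W_in),
    H_l = alpha_l * P @ H_{l-1} + beta_l * H_0, and y_hat = H_K W_out,
    where P is either A_tilde or the normalized Laplacian L."""
    def __init__(self, in_dim, hid_dim, n_classes, K=16):
        super().__init__()
        self.w_in = nn.Linear(in_dim, hid_dim)
        self.w_out = nn.Linear(hid_dim, n_classes)
        self.alpha = nn.Parameter(0.5 * torch.ones(K))  # filter scalars
        self.beta = nn.Parameter(0.5 * torch.ones(K))   # skip-connection scalars

    def forward(self, X, P):
        h0 = torch.relu(self.w_in(X))
        h = h0
        for a, b in zip(self.alpha, self.beta):
            h = a * (P @ h) + b * h0   # one stacked filter layer
        return self.w_out(h)
```

The propagation matrix P is a precomputed (n × n) tensor holding either Ã or L, matching the two variants above; for large graphs a sparse tensor and torch.sparse.mm can be substituted for the dense product.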
}, { "heading": "5 RELATED WORK", "text": "GCN-like models cover a subset of an increasingly large literature on learning from graph-structured data with graph neural networks (Gori et al., 2005; Scarselli et al., 2008). In general, vertex classification and graph classification are the two main benchmark problems. The principles of representation learning behind modern graph learning models can also be split into two views: graph propagation/diffusion and graph signal filtering. In this section, we briefly summarize recent advances on the vertex classification problem with a focus on propagation and filtering methods. For a more comprehensive view, readers can refer to the review articles by Wu et al. (2020) and Grohe (2020), and also to recent workshops on graph representation learning2.
Feature Propagation. Feature propagation/message-passing and graph signal filtering are two equivalent views of graph representation learning (Defferrard et al., 2016; Kipf & Welling, 2017). From the viewpoint of feature propagation (Scarselli et al., 2008; Gilmer et al., 2017), researchers focus on novel ways to propagate and aggregate vertex features among neighbors. Klicpera et al. (2019) proposed the PPNP and APPNP models, which propagate the hidden representations of vertices. More importantly, they pioneered the decoupling of the graph part (propagation) from the classifier part (prediction). Abu-El-Haija et al. (2019) also proposed to use skip-connections to distinguish between 1-hop and 2-hop neighbors. Zeng et al. (2020) later proposed GraphSAINT, which aggregates features from random subgraphs to further improve the model's expressivity. Pei et al. (2020) proposed a more involved geometric aggregation scheme named Geom-GCN to address weaknesses of GCN-like models. Most notably, they discussed the relation between network homophily and GCN's performance, which is similar to the label frequency r(Y) in Table 1. Spinelli et al. (2020) introduced an adaptive model named AP-GCN, in which each vertex can learn the number of “hops” over which to propagate its features via a trainable halting probability. Similar to our discussion in Section 3, they still use a fully-connected layer to implement the halting criterion, which controls feature propagation. AP-GCN's architecture resembles horizontal stacking of graph filters where the coefficients θ are learned directly; however, their construction only allows binary coefficients3. We later show that fully horizontal stacking models (more expressive than AP-GCN) are less stable in terms of polynomial order than our approach (Appendix A.1). More recently, Liu et al. (2020) continued to address the difficulty of low-homophily datasets and proposed a non-local aggregation based on 1D convolution and the attention mechanism, which has a “reconnecting” effect that increases homophily.
Graph Filtering. GCN-like models can also be viewed as graph signal filters where vertex feature vectors are signals and the graph structure defines the graph Fourier bases (Shuman et al., 2012; Defferrard et al., 2016; Li et al., 2018; Wu et al., 2019). This graph signal processing view addresses label efficiency (Li et al., 2019) and provides an analogue for understanding graph signal processing using traditional signal processing techniques. For example, the Lanczos algorithm is applied to learning graph filters by Liao et al. (2019), and Bianchi et al. (2019) apply the ARMA filter to graph neural networks. Similar to Klicpera et al. (2019), Wu et al. (2019) and NT & Maehara (2019) also follow the decoupling principle but in reversed order (filter-then-classify). Chen et al. (2020b) built a deep GCN named GCNII which holds the current best results for the original splits of Cora, Citeseer, and Pubmed. They further showed that their model can estimate any filter function under the assumption that the fully-connected layers can learn the filter coefficients (Chen et al., 2020b, Proof of Theorem 2).

2See, e.g., https://grlplus.github.io/
3In the manuscript, they showed a construction using coefficients of the graph Laplacian, but the actual implementation used GCNConv (which is I − L + c) from pytorch-geometric." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "We conduct experiments on benchmark and synthetic data to empirically evaluate our proposed models. First, we compare our models with several existing models in terms of average classification accuracy; our results show that our single model can perform well across all frequency ranges. Second, we plot the learned filter functions of our model to show that it can learn the frequency range from the data — such visualization is difficult in existing works, as those models' filters are fixed before the training process." }, { "heading": "6.1 DATASETS", "text": "We use three groups of datasets corresponding to three types of label frequency (low, midrange, high).
The first group consists of low-frequency labeled data: the citation networks Cora, Citeseer, and Pubmed (Sen et al., 2008), and the co-purchase networks Amazon-Photo and Amazon-Computer (Shchur et al., 2018). The second group consists of network datasets with midrange label frequency (close to 1): Wisconsin, Cornell, Texas (Pei et al., 2020), and Chameleon (Rozemberczki et al., 2019). The last group consists of a synthetic dataset with high label frequency (close to 2). For the Bipartite dataset, we generate a connected bipartite graph on 2,000 vertices (1,000 in each part) with an edge density of 0.025, and we use the bipartite parts as binary vertex labels. Table 1 gives an overview of these datasets; see Appendix B.3 for more detail." }, { "heading": "6.2 VERTEX CLASSIFICATION", "text": "We compare our method with some of the best models in the current literature. A two-layer MLP (our model without graph filters), GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019) are used as baselines. Geom-GCN-(I,P,S) (Pei et al., 2020), JKNet+DE (Xu et al., 2018; Rong et al., 2019), and GCNII (Chen et al., 2020a) are currently among the best models. We implement the Chebyshev polynomial filter as in (Defferrard et al., 2016) and set λmax = 1.5. The Literature sections of Tables 2 and 3 show the best results found in the literature, where each model is run with its recommended hyper-parameters and recommended variant for each dataset. In our experiment, we fix the graph-related hyper-parameters of each model and report the classification results. Our model contains 16 layers of stacked filters (Ã) and has 64 hidden dimensions. The learning rate is set to 0.01, the weight decay is 5 × 10−4, and the dropout rate for linear layers is 0.7. From the intuition that the filter should discover the required frequency pattern before the linear layers do, we set the learning rate of the linear layers to one-fourth of the main learning rate. This experimental setup shows that SGF can adapt to the label frequency without setting specific hyper-parameters. In Table 2, SGF performs comparably with the current state of the art. In Table 3, SGF is not only better than the other models in our experiments but also surpasses the best results in the literature. Note that we use the exact same SGF model across all experiments.
The results in Table 3 also suggest that the adaptivity of the state-of-the-art model GCNII is sensitive to its parameters α and θ. In our experiment, we fix the θ parameter to 0.5 for all datasets, while in their manuscript the recommended values are around 1.5 depending on the dataset. With the recommended hyper-parameters, GCNII achieves an average accuracy of 81.57% on the Wisconsin data; however, its performance drops by around 3–10% with different θ values. This comparison highlights our model's ability to adapt to a wider range of datasets without any graph-related hyper-parameters.
The Chebyshev polynomial basis performs comparably to the stacking implementation, as discussed in the previous sections. The value λmax = 1.5 is chosen because the maximum eigenvalue of real-world networks is often near this value. However, in practice, one should set λmax = 2, as discussed by Kipf & Welling (2017).
[Figure 2: Learned filter functions on Cora, Wisconsin, and Bipartite, shown at initialization (“Init.”) and after training, with the resulting test accuracies (e.g., Acc: 87.1 on Cora, Acc: 89.0 on Wisconsin, Acc: 100 on Bipartite); only this caption-level information is recoverable from the extracted plot data.]
Our experiments here intend to highlight the potential numerical instability caused by the arbitrarily large leading coefficients of the Chebyshev polynomial basis. Since any polynomial basis is equivalent for vertex classification, numerically stable ones, such as our implementation of SGF, are certainly preferable in practice." }, { "heading": "6.3 FILTER VISUALIZATION", "text": "Another advantage of our model is the ability to visualize the filter function using an inversion of Proposition 2. The first row of Figure 2 shows the filtering functions at initialization and after training when the input is the normalized augmented adjacency matrix; the second row shows the results when the input is the normalized Laplacian matrix. These two cases can be interpreted as starting with a low-pass filter (Ã) or starting with a high-pass filter (L). Figure 2 clearly shows that our method can learn suitable filtering shapes from data regardless of the initialization. We expect the visualizations here to serve as an effective exploratory tool and baseline method for future graph data." }, { "heading": "6.4 ADAPTIVITY TO STRUCTURAL NOISE", "text": "Recently, Fox & Rajamanickam (2019) raised a problem regarding the structural robustness of graph neural networks for graph classification. Zügner et al. (2018) posed a similar problem related to adversarial attacks on graphs via perturbations of vertex features or graph structure in the vertex classification setting (Dai et al., 2018; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2019). Here, we evaluate the robustness of the models against structural noise, where we perturb a fraction of edges while preserving the degree sequence4. This structural noise destroys the relation between the features and the graph structure; hence, it pushes the dataset toward the midrange frequency. This experimental setting shows that adaptive models like ours and GCNII are more robust to structural noise. In the worst-case scenario (90% of edges swapped), the adaptive models are at least as good as an MLP on the vertex features. Figure 3 shows vertex classification results for each amount of edge perturbation, from 10% to 90%. APPNP with α = 0.2 and SGC with k = 2 behave similarly under structural noise, since these models give more weight to the filtered features. On the other hand, APPNP with α = 0.8 is much more robust to structural noise, as it depends more on the vertex features. This result suggests that adaptive models like ours and GCNII can serve as good baselines for future graph adversarial attack studies (SGF's advantage here is that it is much simpler).
6.5 DYNAMICS OF α'S AND β'S
In addition to Section 6.3, this section studies the dynamics of α and β during training for two representative datasets: Cora (low-frequency) and Wisconsin (mid-frequency). We record the values of α and β in SGF (Ã) every 20 training epochs and plot the results. Figure 4 shows the values of α and β in the 16 layers of SGF, in top-to-bottom then left-to-right order (reshaped into 4-by-4 blocks). For the Cora dataset, we see that the over-smoothing effect is quickly mitigated, as the α's automatically go to zero with the exception of the last three layers. Similarly, the weights for the skip-connections – the β's – quickly go to zero with the exception of a few last layers. For the Wisconsin dataset, we can see that there is almost no filtering, because all α's go to zero quickly and there is only one active skip-connection in the last layer. This single active skip-connection phenomenon is further confirmed by the experiment on MLP (Table 3), where MLP performs comparably to the graph-based models. These results further explain our model's ability to adapt.

4https://en.wikipedia.org/wiki/Degree-preserving_randomization
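For reproducibility of the structural-noise setting in Section 6.4, degree-preserving edge perturbation can be implemented with NetworkX's double_edge_swap; a minimal sketch (the helper name and the swap-count convention are ours):

```python
import networkx as nx

def perturb_edges(G, fraction, seed=0):
    """Rewire roughly `fraction` of the edges of G while preserving the
    degree sequence, via NetworkX's double_edge_swap."""
    H = G.copy()
    # Each double-edge swap rewires two edges at once.
    n_swaps = int(fraction * H.number_of_edges() / 2)
    nx.double_edge_swap(H, nswap=n_swaps, max_tries=100 * max(n_swaps, 1), seed=seed)
    return H
```

Applying this with fraction = 0.1, . . . , 0.9 reproduces the perturbation levels evaluated in Figure 3.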
Additional Experiments. We provide several other experimental results in Appendix A. Section A.1 discusses the advantages of vertical stacking (SGF) versus naive horizontal stacking (learning θ in equation 3 directly). Section A.2 discusses the difficulty of estimating the frequency range (Rayleigh quotient) of vertex labels when the training set is small. Section A.3 provides additional experiments in which the α's and β's are initialized randomly; we show that our model remains adaptive even with uniform [−1, 1] initialization." }, { "heading": "7 CONCLUSION", "text": "We show that simply learning the polynomial coefficients, rather than the linear layers, in the GCN formulation leads to a highly adaptive vertex classification model. Our experiments show that, using only one setting, SGF is comparable with all current state-of-the-art methods. Furthermore, SGF also adapts to structural noise extremely well, promising a robust model in practice. Since our objective is to relax the frequency assumption, one should expect our model to perform weakly when the amount of training data is limited. Because the estimation of label frequency becomes difficult with a small amount of data (Appendix A.2), designing a learning model that is both adaptive and data-efficient is an exciting challenge. We believe an unbiased estimator (Proposition 4) combined with a more involved filter learning scheme will be needed to address this problem in the future." }, { "heading": "A EXTRA EXPERIMENTAL RESULTS", "text": "A.1 VERTICAL AND HORIZONTAL STACKING
Horizontal stacking is equivalent to learning the θ's in equation 3 directly instead of stacking them vertically. In parallel to our work, Chien et al. (2020) explored the horizontal stacking idea with the PageRank matrix instead of the Laplacian matrix discussed here. We find that both vertical and horizontal stacking can learn a degree-K polynomial, but vertical stacking is naturally robust to the high-order terms; the horizontally stacked filter even loses its ability to adapt when learning order-64 polynomials. Table 4 shows a comparison between vertical stacking (SGF) and horizontal stacking. We also report the average number of iterations until early stopping and the average training time per epoch for the 64-filter case. All hyper-parameters are the same as in Tables 2 and 3. Figure 5 gives an example of 4-layer stacking to clarify the difference between horizontal and vertical stacking.
A.2 RAYLEIGH QUOTIENT ESTIMATION FROM TRAINING DATA
To obtain an accurate classification solution, the frequency of the model's output must be close to the frequency of the true labels, as follows. Proposition 3. Let ŷ, y ∈ R^N be unit-length vectors whose signs of entries indicate the predicted labels and true labels of the vertices in graph G. Let L ∈ R^{n×n} be the symmetric normalized graph Laplacian of G. Suppose the graph frequency gap is at least δ: |r(ŷ) − r(y)| = |ŷ^⊤ L ŷ − y^⊤ L y| ≥ δ. Then we have

‖ŷ − y‖_2^2 ≥ δ/4.  (8)

This proposition explains why a model designed for a specific frequency range (e.g., GCN, SGC, GAT, APPNP, etc. for the low-frequency range) gives poor performance on the other frequency ranges. It also suggests a way to seek a model (i.e., a filter) whose output matches the frequency of the true labels.
Because the true label frequency is unknown in practice, we must estimate this quantity from the training data. Below, we discuss the difficulty of this estimation.
A naive strategy for estimating the frequency is to compute the Rayleigh quotient on the training set. However, the training features X and training labels y_n often have Rayleigh quotients close to 1 (as shown in Table 1 for r(X)), and Figure 7 (Appendix) shows the results when we compute the Rayleigh quotient of the labels based on the training data alone. This means that the naive strategy yields undesirable results, and we need a more involved process for estimating the frequency.
If we can assume that (1) the training vertices are sampled i.i.d., and (2) we know the number of vertices in the whole graph (N = |V|), we can obtain an unbiased estimate of the frequency of the true labels as follows.
Proposition 4. Let p be the proportion of vertices used as training data, q be the proportion of label y, N be the total number of vertices in the graph, L_n be the symmetric normalized Laplacian of the subgraph induced by the training vertices, and y_n be the training labels. Assuming the training set is obtained by sampling the vertices i.i.d. with probability p, we can estimate the Rayleigh quotient of the true labels by

E(r(y_n)) = 4 N^{−1} p^{−2} ( y_n^⊤ L_n y_n − (1 − p) y_n^⊤ diag(L_n) y_n ).  (9)

Figure 6 shows unbiased estimation results using Proposition 4. Unfortunately, at a 10% training ratio the observed variances are high across datasets; thus, we conclude that estimating the label frequency is generally difficult, especially with small training data.
Thus far, we have shown that estimating the label frequency given limited training data is difficult even with an unbiased estimator. The high data efficiency of GCN-like models can be attributed to the fact that they already assume the labels are low-frequency. Without such an assumption, we need more data in order to correctly estimate the frequency patterns.
A.3 RANDOM INITIALIZATION
While the main content of our paper showed the results for α and β initialized at 0.5, our results generally hold even if we initialize them randomly. Table 5 demonstrates this claim by showing our model's performance with α and β initialized randomly. SGF (0.5) is the setting shown in the main part of our paper; SGF (U[-1,1]) initializes α and β using a uniform distribution on [−1, 1].
Both Table 5 and Figure 8 show that our model behaves similarly to the fixed initialization at 0.5. It is worth mentioning that Figures 8a and 8b show SGF initialized randomly with the same seed but converging to two different solutions. The accuracies for these two particular cases are 89.7% for Cora and 92.0% for Wisconsin. This result and the filter visualization in Section 6.3 refute the argument that our model is also biased toward “low-frequency”.

Table 5 (accuracy, %; first group's columns assumed to follow Table 2's order: Cora, Citeseer, Pubmed, Photo, Computer):
SGF (0.5):      88.97 ± 1.21 | 77.58 ± 1.11 | 90.12 ± 0.40 | 95.58 ± 0.55 | 92.15 ± 0.41
SGF (U[-1,1]):  88.47 ± 1.40 | 77.50 ± 1.88 | 88.23 ± 1.12 | 92.23 ± 0.53 | 87.15 ± 3.63
Columns: Wisconsin, Cornell, Texas, Chameleon, Bipartite
SGF (0.5):      87.06 ± 4.66 | 82.45 ± 6.19 | 80.56 ± 5.63 | 58.77 ± 1.90 | 100.0 ± 0.00
SGF (U[-1,1]):  88.66 ± 3.40 | 79.13 ± 1.60 | 79.67 ± 3.62 | 57.83 ± 2.47 | 100.0 ± 0.00" }, { "heading": "B EXPERIMENTAL DETAILS", "text": "B.1 SOURCE CODE
The source code is provided in src.zip; instructions for installing the Python environment and running the examples can be found in README.md. All results in this paper are obtained using a single machine with an RTX Titan GPU (24GB).
We also confirm the results on CPU and on another machine with a GeForce 1080Ti GPU (11GB). The provided source code works on both CPU and GPU.
B.2 EVALUATION PROCEDURE
For each dataset and each run, the following training procedure is used: split the data, and use the train and validation vertices to estimate the Rayleigh quotient; train the model on the train set and choose the hyper-parameters (dropout rate, learning rate, and number of layers) using the validation set; save the model every time the best validation accuracy is reached; and load the best model on the validation set to evaluate on the test set. The search set for each hyper-parameter is:
• Dropout rate: {0.4, 0.5, 0.6, 0.7, 0.8}
• Weight decay: {1e−2, 1e−3, 5e−4, 1e−4, 5e−5}
• Learning rate: {0.001, 0.01, 0.02, 0.1}
• Number of layers: {4, 8, 16, 32, 64}
We use the hyper-parameters in bold text to report the results in the main part of our paper.
B.3 DATA SOURCE
Our datasets are obtained from the pytorch-geometric repository and the node-classification-dataset repository on GitHub. These datasets are “re-packed” with pickle and stored in src/data. The original URLs are:
• https://github.com/rusty1s/pytorch_geometric
• https://github.com/ryutamatsuno/node-classification-dataset
Citation Networks. Cora (ML), Citeseer, and Pubmed (Sen et al., 2008) are the three most commonly used networks for benchmarking vertex classification models. Vertices in these graphs represent papers, each with a bag-of-words vector indicating the content of the paper. Edges are citations between papers. Originally these edges are directed, but they are converted to undirected edges as a trade-off between information loss and efficiency of methods.
WebKB. The WebKB dataset is a collection of university websites collected by CMU5. As mentioned in previous sections, this dataset is special because it contains many different types of vertices with mixed frequencies. We use the Wisconsin, Cornell, and Texas subsets of this dataset.
Wikipedia. The Chameleon dataset belongs to a collection of Wikipedia pages where edges are references and vertex labels indicate internet traffic. Originally this dataset was created for vertex regression, but here we follow Pei et al. (2020) and split the traffic amounts into 5 categories.
The synthetic dataset is generated using the NetworkX library and labeled by its bipartite parts. The features are generated randomly from a Gaussian N(0, 1).

5http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb
B.4 OTHER METHODS
Other methods are obtained from their respective repositories on GitHub. The following are the parameter settings for the “Our experiment” sections of Tables 2 and 3. Since each dataset has different hyper-parameter values, we follow the authors' recommendations for any hyper-parameters not mentioned here. We confirm the results with the recommended hyper-parameters and report them in the “Literature” sections.
• GCNII: θ = 0.5, α = 0.5.
• SGC: k = 2, lr = 0.01, wd = 5 × 10−4, dropout = 0.7.
• APPNP: K = 2, α = 0.2 and 0.8, lr = 0.02, wd = 5 × 10−4, dropout = 0.7.
• SGF-Cheby (our implementation): λmax = {1.5, 2.0}, K = 16, and other hyper-parameters are the same as SGF." } ]
2020
null
SP:f19be0fdce321827638f91d57607ba340b1c3e4b
[ "The main objective of this paper is to reduce the model stability, in particular, the prediction churn of neural networks. The prediction churn is defined as the changed prediction w.r.t. model randomness, e.g. multiple runs of networks. The paper proposed to use a interpolated version of global label smoothing and k-NN label smoothing. Theoretically it is shown that k-NN rule converges to the Bayes rule when k is small, and converges to a kernel smoothed version of Bayes rule when k is linear in n. Experiments are conducted that show the proposed method gives highest test accuracy and lowest churn rate in most cases." ]
Training modern neural networks is an inherently noisy process that can lead to high prediction churn – disagreements between re-trainings of the same model due to factors such as randomization in the parameter initialization and mini-batches – even when the trained models all attain high accuracies. Such prediction churn can be very undesirable in practice. In this paper, we present several baselines for reducing churn and show that utilizing the k-NN predictions to smooth the labels results in a new and principled method that often outperforms the baselines on churn while improving accuracy on a variety of benchmark classification tasks and model architectures.
[]
[ { "authors": [ "Ehsan Amid", "Manfred KK Warmuth", "Rohan Anil", "Tomer Koren" ], "title": "Robust bi-tempered logistic loss based on bregman divergences", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Rohan Anil", "Gabriel Pereyra", "Alexandre Passos", "Robert Ormandi", "George E Dahl", "Geoffrey E Hinton" ], "title": "Large scale distributed neural network training through online distillation", "venue": "arXiv preprint arXiv:1804.03235,", "year": 2018 }, { "authors": [ "Dara Bahri", "Heinrich Jiang", "Maya Gupta" ], "title": "Deep k-nn for noisy labels", "venue": null, "year": 2020 }, { "authors": [ "Kamalika Chaudhuri", "Sanjoy Dasgupta" ], "title": "Rates of convergence for the cluster tree", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Kamalika Chaudhuri", "Sanjoy Dasgupta" ], "title": "Rates of convergence for nearest neighbor classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Andrew Cotter", "Heinrich Jiang", "Maya R Gupta", "Serena Wang", "Taman Narayan", "Seungil You", "Karthik Sridharan" ], "title": "Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Thomas M Cover" ], "title": "Rates of convergence for nearest neighbor procedures", "venue": "In Proceedings of the Hawaii International Conference on Systems Sciences,", "year": 1968 }, { "authors": [ "Luc Devroye", "Laszlo Gyorfi", "Adam Krzyzak", "Gábor Lugosi" ], "title": "On the strong universal consistency of nearest neighbor regression function estimates", "venue": "The Annals of Statistics,", "year": 1994 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Mahdi Milani Fard", "Quentin Cormier", "Kevin Canini", "Maya Gupta" ], "title": "Launch and iterate: Reducing prediction churn", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Evelyn Fix", "Joseph L Hodges Jr." ], "title": "Discriminatory analysis-nonparametric discrimination: consistency properties", "venue": "Technical report, California Univ Berkeley,", "year": 1951 }, { "authors": [ "Stanislav Fort", "Huiyi Hu", "Balaji Lakshminarayanan" ], "title": "Deep ensembles: A loss landscape perspective", "venue": "arXiv preprint arXiv:1912.02757,", "year": 2019 }, { "authors": [ "Gabriel Goh", "Andrew Cotter", "Maya Gupta", "Michael P Friedlander" ], "title": "Satisfying real-world goals with dataset constraints", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "H. Jiang", "B. Kim", "M.Y. Guan", "M.R. 
Gupta" ], "title": "To trust or not to trust a classifier", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Heinrich Jiang" ], "title": "Non-asymptotic uniform rates of consistency for k-nn regression", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Large-scale celebfaces attributes (celeba) dataset", "venue": "Retrieved August,", "year": 2018 }, { "authors": [ "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Decoupling” when to update” from” how to update", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Rafael Müller", "Simon Kornblith", "Geoffrey E Hinton" ], "title": "When does label smoothing help", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel" ], "title": "Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning", "venue": "arXiv preprint arXiv:1803.04765,", "year": 2018 }, { "authors": [ "Henry WJ Reeve", "Ata Kaban" ], "title": "Fast rates for a kNN classifier robust to unknown asymmetric label noise", "venue": null, "year": 1906 }, { "authors": [ "Aarti Singh", "Clayton Scott", "Robert Nowak" ], "title": "Adaptive Hausdorff estimation of density level sets", "venue": "The Annals of Statistics,", "year": 2009 }, { "authors": [ "Guocong Song", "Wei Chai" ], "title": "Collaborative learning for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Charles J Stone" ], "title": "Consistent nonparametric regression", "venue": "The Annals of Statistics, pp", "year": 1977 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sunil Thulasidasan", "Tanmoy Bhattacharya", "Jeff Bilmes", "Gopinath Chennupati", "Jamal MohdYusof" ], "title": "Combating label noise in deep learning using abstention", "venue": null, "year": 1905 }, { "authors": [ "Alexandre B Tsybakov" ], "title": "On nonparametric estimation of density level sets", "venue": "The Annals of Statistics,", "year": 1997 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Ying Zhang", "Tao Xiang", "Timothy M Hospedales", "Huchuan Lu" ], "title": "Deep mutual learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xiatian Zhu", "Shaogang Gong" ], "title": "Knowledge distillation by on-the-fly native ensemble", "venue": "In Advances in neural information processing systems,", 
"year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have proved to be immensely successful at solving complex classification tasks across a range of problems. Much of the effort has been spent towards improving their predictive performance (i.e. accuracy), while comparatively little has been done towards improving the stability of training these models. Modern DNN training is inherently noisy due to factors such as the random initialization of network parameters, the mini-batch ordering, and effects of various data augmentation or pre-processing tricks, all of which are exacerbated by the non-convexity of the loss surface. This results in local optima corresponding to models that have very different predictions on the same data points. This may seem counter-intuitive, but even when the different runs all produce very high accuracies for the classification task, their predictions can still differ quite drastically as we will show later in the experiments. Thus, even an optimized training procedure can lead to high prediction churn, which refers to the proportion of sample-level disagreements between classifiers caused by different runs of the same training procedure1.\nIn practice, reducing such predictive churn can be critical. For example, in a production system, models are often continuously improved on by being trained or retrained with new data or better model architectures and training procedures. In such scenarios, a candidate model for release must be compared to the current model serving in production. Oftentimes, this decision is conditioned on more than just overall offline test accuracy– in fact, oftentimes the offline metrics are not completely aligned with actual goal, especially if these models are used as part of a larger system (e.g. maximizing offline click-through rate vs. maximizing revenue or user satisfaction). As a result, these comparisons oftentimes require extensive and costly live experiments, requiring human evaluation in situations where the candidate and the production model disagree (i.e. in many situations, the true labels are not available without a manual labeler). In these cases, it can be highly desirable to lower prediction churn.\nDespite the practical relevance of lowering predictive churn, there has been surprisingly little work done in this area, which we highlight in the related work section. In this work, we focus on predictive churn reduction under retraining the same model architecture on an identical train and test set. Our main contributions are as follows:\n• We provide one of the first comprehensive analyses of baselines to lower prediction churn, showing that popular approaches designed for other goals are effective baselines for churn reduction, even compared to methods designed for this goal.\n1Concretely, given two classifiers applied to the same test samples, the prediction churn between them is the fraction of test samples with different predicted labels.\n• We improve label smoothing, a global smoothing method popular for improving model confidence scores, by utilizing the local information leveraged by the k-NN labels thus introducing k-NN label smoothing which we show to often outperform the baselines on a wide range of benchmark datasets and model architectures.\n• We show new theoretical results for the k-NN labels suggesting the usefulness of the k-NN label. 
We show under mild nonparametric assumptions that for a wide range of k, the k-NN labels uniformly approximate the Bayes-optimal label and, when k is tuned optimally, achieve the minimax-optimal rate. We also show that when k is linear in n, the distribution implied by the k-NN label approximates the original distribution smoothed with an adaptive kernel." }, { "heading": "2 RELATED WORKS", "text": "Our work spans multiple sub-areas of machine learning. The main problem this paper tackles is reducing prediction churn. In the process, we show that label smoothing is an effective baseline, and we improve upon it in a principled manner using deep k-NN label smoothing.
Prediction Churn. There are only a few works which explicitly address prediction churn. Fard et al. (2016) proposed training a model so that it has small prediction instability with future versions of the model by modifying the data that the future versions are trained on. They furthermore propose turning the classification problem into a regression towards the corrected predictions of an older model, as well as regularizing the new model towards the older model using example weights. Cotter et al. (2019); Goh et al. (2016) use constrained optimization to directly lower prediction churn across model versions. Simultaneously training multiple identical models (apart from initialization) while tethering their predictions together via regularization has been proposed in the context of distillation (Anil et al., 2018; Zhang et al., 2018; Zhu et al., 2018; Song & Chai, 2018) and robustness to label noise (Malach & Shalev-Shwartz, 2017; Han et al., 2018). This family of methods was termed “co-distillation” by Anil et al. (2018), who also noted that it can be used to reduce churn in addition to improving accuracy. In this paper, we show much more extensively that co-distillation is indeed a reasonable baseline for churn reduction.
Label smoothing. Label smoothing (Szegedy et al., 2016) is a simple technique that trains the model on soft labels obtained by a convex combination of the hard true label and the soft uniform distribution across all the labels. It has been shown to prevent the network from becoming over-confident and to lead to better confidence calibration (Müller et al., 2019). Here we show that label smoothing is a reasonable baseline for reducing prediction churn, and we moreover enhance it for this task by smoothing the labels locally via k-NN rather than by the purely global approach of mixing with the uniform distribution.
k-NN Theory. The theory of k-NN classification has a long history (e.g., Fix & Hodges Jr (1951); Cover (1968); Stone (1977); Devroye et al. (1994); Chaudhuri & Dasgupta (2014)). To our knowledge, the most relevant k-NN classification result is by Chaudhuri & Dasgupta (2014), who show statistical risk bounds under assumptions similar to those used in our work. Our analysis shows finite-sample L∞ bounds on the k-NN labels, which is a stronger notion of consistency, as it provides a uniform guarantee rather than an average guarantee as shown in previous works under standard risk measures such as L2 error. We do this by leveraging techniques recently developed in Jiang (2019) for k-NN regression, which assumes an additive noise model instead of classification. Moreover, we provide, to our knowledge, the first consistency guarantee for the case where k grows linearly with n.
Deep k-NN.
k-NN is a classical method in machine learning which has recently been shown to be useful when applied to the intermediate embeddings of a deep neural network (Papernot & McDaniel, 2018) to obtain more calibrated and adversarially robust networks. This is because standard distance measures are often better behaved in these representations, leading to better performance of k-NN on these embeddings than on the raw inputs. Jiang et al. (2018) use nearest neighbors on the intermediate representations to obtain better uncertainty scores than softmax probabilities, and Bahri et al. (2020) use the k-NN label disagreement to filter noisy labels for better training. Like these works, we also leverage k-NN on the intermediate representations, but we show that utilizing the k-NN labels leads to lower prediction churn." }, { "heading": "3 ALGORITHM", "text": "Suppose that the task is multi-class classification with L classes and that the training datapoints are (x_1, y_1), ..., (x_n, y_n), where x_i ∈ X, X is a compact subset of R^D, and y_i ∈ R^L is the one-hot vector encoding of the label – that is, if the i-th example has label j, then y_i has 1 in the j-th entry and 0 everywhere else. We now give the formal definition of the smoothed labels:
Definition 1 (Label Smoothing). Given a label smoothing parameter 0 ≤ a ≤ 1, the smoothed label is

y^{LS}_a := (1 − a) · y + (a/L) · 1_L,

where 1_L denotes the vector of all 1's in R^L.
We next formally define the k-NN label, which is the average label of the example's k nearest neighbors in the training set. We use the shorthand X := {x_1, ..., x_n}; recall y_i ∈ R^L.
Definition 2 (k-NN label). Let the k-NN radius of x ∈ X be r_k(x) := inf{r : |B(x, r) ∩ X| ≥ k}, where B(x, r) := {x′ ∈ X : |x − x′| ≤ r}, and let the k-NN set of x ∈ X be N_k(x) := B(x, r_k(x)) ∩ X. Then for all x ∈ X, the k-NN label is defined as

η_k(x) := (1/|N_k(x)|) ∑_{i=1}^{n} y_i · 1[x_i ∈ N_k(x)].

The label smoothing method can be seen as performing a global smoothing: every label is equally transformed towards the uniform distribution over all labels. While it seems almost deceptively simple, it has only recently been shown to be effective in practice, specifically for better-calibrated networks. However, since this smoothing technique is applied equally to all datapoints, it fails to incorporate local information about each datapoint. To this end, we propose using the k-NN label, which smooths the label across its nearest neighbors. We show theoretically that the k-NN label can be a strong proxy for the Bayes-optimal label, that is, the best possible prediction one can make given the uncertainty. In other words, compared to the true label (or even label smoothing), the k-NN label is robust to variability in the data distribution and provides a more stable estimate of the label than the original hard label, which may be noisy. Training on such noisy labels has been shown to hurt model performance (Bahri et al., 2020), and using the smoothed labels can help mitigate these effects. To this end, we define k-NN label smoothing as follows:
Definition 3 (k-NN label smoothing). Let 0 ≤ a, b ≤ 1 be the k-NN label smoothing parameters. Then the k-NN smoothed label of a datapoint (x, y) is defined as

y^{kNN}_{a,b} = (1 − a) · y + a · ( b · (1/L) · 1_L + (1 − b) · η_k(x) ).

We see that a weights between using the true labels vs. using smoothing, and b weights between global vs. local smoothing. Algorithm 1 shows how k-NN label smoothing is applied to deep learning models.
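A minimal sketch of the smoothed-label computation in Definitions 2 and 3, using scikit-learn for the neighbor search (function names are ours; the full two-model training procedure is given in Algorithm 1 below). Note that querying the training points against themselves includes each point in its own neighborhood, matching Definition 2:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_smooth_labels(Z, Y, k, a, b):
    """k-NN label smoothing (Definition 3).
    Z: (n, d) feature or logit vectors, Y: (n, L) one-hot labels,
    a: weight on smoothing, b: weight on global vs. k-NN smoothing."""
    n, L = Y.shape
    nbrs = NearestNeighbors(n_neighbors=k).fit(Z)
    _, idx = nbrs.kneighbors(Z)            # (n, k) indices; includes self
    eta_k = Y[idx].mean(axis=1)            # k-NN label of each point (Def. 2)
    uniform = np.full((n, L), 1.0 / L)
    return (1 - a) * Y + a * (b * uniform + (1 - b) * eta_k)
```

Setting b = 1 recovers global label smoothing (Definition 1), while b = 0 smooths purely with the k-NN labels.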
Like Bahri et al. (2020), we perform k-NN on the network's logits layer.
Algorithm 1 Deep k-NN label smoothing
Inputs: 0 ≤ a, b ≤ 1, training data (x_1, y_1), ..., (x_n, y_n), model training procedure M.
1. Train model M_0 on (x_1, y_1), ..., (x_n, y_n) with M.
2. Let z_1, ..., z_n ∈ R^L be the logits of x_1, ..., x_n, respectively, w.r.t. M_0.
3. Let ỹ_i be the k-NN smoothed label of (z_i, y_i) computed w.r.t. the dataset (z_1, y_1), ..., (z_n, y_n).
4. Train model M on (x_1, ỹ_1), ..., (x_n, ỹ_n) with M." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "In this section, we provide theoretical justification for why the k-NN labels are useful. In particular, we show results for two settings, where n is the number of datapoints:
• When k ≪ n, we show that with an appropriate setting of k, the k-NN smoothed labels approximate the predictions of the Bayes-optimal classifier at a minimax-optimal rate.
• When k = O(n), we show that the distribution implied by the k-NN smoothed labels is equivalent to the original distribution convolved with an adaptive smoothing kernel.
Our results may also reveal insights into why distillation methods (training a model on another model's predictions instead of the true labels) can work. Another way to view the result is that the k-NN smoothed label is the soft prediction of the k-NN classifier. Thus, training on the k-NN labels is essentially distillation of the k-NN classifier, and our theoretical results show that the labels implied by k-NN approximate the predictions of the optimal classifier (in the k ≪ n setting). Learning the optimal classifier may indeed be a better goal than learning from the true labels, because the latter may lead to overfitting to the sampling noise rather than to the true signal implied by the optimal classifier. While distillation is not the topic of this work, our results in this section may be of independent interest to that area.
For the analysis, we assume the binary classification setting, but it is understood that our results can be straightforwardly generalized to the multi-class setting. The feature vectors are defined on a compact support X ⊆ R^D, and datapoints are drawn as follows: the feature vector is drawn from a density p_X on X, and the labels are drawn according to the label function η : X → [0, 1], i.e., η(x) = P(Y = 1 | X = x).
4.1 k ≪ n
We make a few mild regularity assumptions for our analysis to hold, which are standard in works analyzing non-parametric methods, e.g., Singh et al. (2009); Chaudhuri & Dasgupta (2014); Reeve & Kaban (2019); Jiang (2019); Bahri et al. (2020). The first ensures that the support X does not become arbitrarily thin anywhere, the second ensures that the density does not vanish anywhere in the support, and the third ensures that the label function η is smooth w.r.t. its input.
Assumption 1. The following three conditions hold:
• Support regularity: there exist ω > 0 and r_0 > 0 such that Vol(X ∩ B(x, r)) ≥ ω · Vol(B(x, r)) for all x ∈ X and 0 < r < r_0, where B(x, r) := {x′ ∈ X : |x − x′| ≤ r}.
• Non-vanishing density: p_{X,0} := inf_{x∈X} p_X(x) > 0.
• Smoothness of η: there exist 0 < α ≤ 1 and C_α > 0 such that |η(x) − η(x′)| ≤ C_α |x − x′|^α for all x, x′ ∈ X.
We have the following result, which provides a uniform bound between the smoothed k-NN label η_k and the Bayes-optimal label η.
Theorem 1.
Let 0 < δ < 1 and suppose that Assumption 1 holds and that k satisfies the following:\n28 ·D log2(4/δ) · log n ≤ k ≤ 1 2 · ω · pX,0 · vD · rD0 · n,\nwhere vD := π D/2\nΓ(d/2+1) is the volume of a D-dimensional unit ball. Then with probability at least 1− δ, we have\nsup x∈X |ηk(x)− η(x)| ≤ Cα\n( 2k\nω · vD · n · pX,0\n)α/D + √ 2 log(4D/δ) + 2D log(n)\nk .\nIn other words, there exists constants C1, C2, C depending on η and δ such that if k satisfies" }, { "heading": "C1 log n ≤ k ≤ C2 · n,", "text": "then with probability at least 1− δ, ignoring logarithmic factors in n and 1/δ:\nsup x∈X |ηk(x)− η(x)| ≤ C ·\n(( k\nn\n)α/D +\n1√ k\n) .\nChoosing k ≈ n2α/(2α+D), gives us a bound of supx∈X |ηk(x) − η(x)| ≤ Õ(n−1/(2α+D)), which is the minimax optimal rate as established by Tsybakov et al. (1997).\nTherefore, the advantage of using the smoothed labels ηk(x1), ..., ηk(xn) instead of the true labels y1, ..., yn, is that the smoothed labels approximate the Bayes-optimal classifier. Moreover, as shown above, with appropriate setting of k, the smoothed labels are a minimax-optimal estimator of the true label function η. Thus, the smoothed labels provide as good of a proxy for η as any estimator possibly can.\nAs suggested earlier, another way of considering this result is that the original labels may contain considerable noise and thus no single label can be guaranteed reliable. Using the smoothed label instead mitigates this effect and allows us to train the model to match the label function η.\n4.2 k LINEAR IN n\nIn the previous subsection, we showed the utility of k-NN label smoothing as a theoretically sound proxy for the Bayes-optimal labels, which attains statistical consistency guarantees as long as k grows faster than log n and k/n → 0. Now, we analyze the case where k grows linearly with n. In this case, the k-NN smoothed labels no longer recover the Bayes-optimal label function η, but instead an adaptive kernel smoothed version of η. We make this relationship precise here.\nSuppose that k = bβ · nc for some 0 < β < 1. We define the β-smoothed label function: Definition 4 (β-smoothed label function). Let rβ(x) := inf{r > 0 : P(B(x, r)) ≥ β}, that is the radii of the smallest ball centered at x with probability mass β w.r.t. PX . Then, let η̃β(x) be the expectation of η on B(x, rβ(x)) w.r.t. PX :\nη̃β(x) := 1\nβ ∫ B(x,rβ(x)) η(x) · PX(x)dx.\nWe can view η̃β as an adaptively kernel smoothed version of η, where adaptivity arises from the density of the point (the more dense, the smaller the bandwidth we smooth it across) and the kernel is based on the density.\nWe now prove the following result which shows that in this setting ηk estimates η̃β(x). It is worth noting that we need very little assumption on η as compared to the previous result because the βsmoothing of η provides a more regular label function; moreover, the rates are fast i.e. Õ( √ D/n).\nTheorem 2. Let 0 < δ < 1 and k = bβ · nc. Then with probability at least 1 − δ, we have for n sufficiently large depending on β, δ:\nsup x∈X |ηk(x)− η̃β(x)| ≤ 3\n√ 2 log(4D/δ) + 2D log(n)\nβ · n ." }, { "heading": "5 EXPERIMENTS", "text": "We now describe the experimental methodology and results for validating our proposed method." }, { "heading": "5.1 BASELINES", "text": "We next detail the suite of baselines we compare against. We tune baseline hyper-parameters extensively, with the precise sweeps and setups available in the Appendix.\n• Control: Baseline where we train for accuracy without regards to lower churn. 
• ℓp Regularization: We control the stability of a model's predictions by simply regularizing them (independently of the ground-truth label) using classical ℓp regularization. The loss function is given by

L_{ℓp}(x_i, y_i) = L(x_i, y_i) + a ‖f(x_i)‖_p^p.

We experiment with both ℓ1 and ℓ2 regularization.
• Bi-tempered: This is a baseline by Amid et al. (2019), originally designed for robustness to label noise. It modifies the standard logistic loss function by introducing two temperature-scaling parameters t_1 and t_2. We apply their “bi-tempered” loss here, suspecting that methods which make model training more robust to noisy labels may also be effective at reducing prediction churn.
• Anchor: This is based on a method proposed by Fard et al. (2016) specifically for churn reduction. It uses the predicted probabilities from a preliminary model to smooth the training labels of the second model. We first train a preliminary model f_prelim using regular cross-entropy loss. We then retrain the model using the smoothed labels (1 − a) y_i + a f_prelim(x_i), thus “anchoring” on the preliminary model's predictions. In our experiments, we train one preliminary model and fix it across the runs of this baseline to reduce prediction churn.
• Co-distillation: We use the co-distillation approach presented by Anil et al. (2018), who touched upon its utility for churn reduction. We train two identical models M_1 and M_2 (apart from their random initialization) in tandem while penalizing divergence between their predictions. The overall loss is

L_codistill(x_i, y_i) = L(f_1(x_i), y_i) + L(f_2(x_i), y_i) + a Ψ(f_1(x_i), f_2(x_i)).

In their paper, the authors set Ψ to be the cross-entropy

Ψ(p^{(1)}, p^{(2)}) = − ∑_{i∈[K]} p^{(1)}_i log(p^{(2)}_i),

but they note that KL divergence can also be used. We experiment with both cross-entropy and KL divergence. We also tune w_codistill, the number of burn-in training steps before turning on the regularizer.
• Label Smoothing: This is the method of Szegedy et al. (2016) defined earlier in the paper. Our proposed method augments global label smoothing by leveraging local k-NN estimates. Naturally, we compare against doing global smoothing only, and this serves as a key ablation for measuring the added benefit of leveraging the k-NN labels.
• Mixup: This method, proposed by Zhang et al. (2017), generates synthetic training examples on the fly by convexly combining random training inputs and their associated labels, where the combination weights are random draws from a Beta(a, a) distribution. Mixup improves generalization, increases robustness to adversarial examples as well as label noise, and also improves model calibration (Thulasidasan et al., 2019).
• Ensemble: Ensembling deep neural networks can improve the quality of their uncertainty estimation (Lakshminarayanan et al., 2017; Fort et al., 2019). We consider the simple case where m identical deep neural networks are trained independently on the same training data, and at inference time their predictions are uniformly averaged together." }, { "heading": "5.2 DATASETS AND MODELS.", "text": "For all datasets, we do not use any data augmentation, in order to guarantee that the training data is held fixed across different trainings. For all datasets we use the Adam optimizer with the default learning rate of 0.001. We use a minibatch size of 128 throughout.
• MNIST: We train a two-layer MLP with 256 hidden units per layer and ReLU activations for 20 epochs.
• Fashion MNIST: We use the same architecture as the one used for MNIST.
• SVHN: We train the LeNet5 CNN (LeCun et al., 1998) for 30 epochs on the Google Street View Housing Numbers (SVHN) dataset, where each image is cropped to 32 × 32 pixels.
• CelebA: CelebA (Liu et al., 2018) is a large-scale face attributes dataset with more than 200k celebrity images, each with 40 attribute annotations. We use the standard train and test splits, which consist of 162,770 and 19,962 images respectively. Images were resized to 28 × 28 × 3. We select the “smiling” and “high cheekbone” attributes and perform binary classification, training LeNet5 for 20 epochs.
• Phishing: To validate our method beyond the image classification setting, we train a two-layer MLP with 256 hidden units per layer on the UCI Phishing dataset (Dua & Graff, 2017), which consists of 7,406 train and 3,649 test examples with 30-dimensional input features." }, { "heading": "5.3 EVALUATION METRICS AND HYPERPARAMETER TUNING", "text": "For each dataset, baseline, and hyper-parameter setting, we run each method on the same train and test split exactly 5 times. We then report the average test accuracy as well as the test-set churn averaged across every possible pair (i, j) of runs (10 pairs in total). To give a more complete picture of the sources of churn, we also slice the churn by whether or not the test predictions of the first run in the pair were correct. Lowering the churn on the correct predictions is desirable (i.e., if the base model is correct, we clearly don't want the predictions to change), while churn reduction on incorrect predictions is less relevant (i.e., if the base model was incorrect, then it may be better for there to be higher churn – however, at the same time, some examples may be inherently difficult to classify, or the label may be such an outlier that we don't expect an optimal model to classify it correctly, in which case lower churn may be desirable). This is why, in the results for Table 1, we bold the best-performing baseline for churn on correct examples, but not for churn on incorrect examples.
In the results (Table 1), for each dataset and baseline, we chose the optimal hyperparameter setting by first sorting by accuracy and choosing the setting with the highest accuracy; if there were multiple settings very close to the top accuracy (defined as within less than a 0.1% difference in test accuracy), then we chose the setting with the lowest churn among them. There is often no principled way to trade off the two sometimes-competing objectives of accuracy and churn (e.g., Cotter et al. (2019) offer a heuristic to trade off the two objectives in a more balanced manner on the Pareto frontier). In this case, however, biasing towards higher accuracy is most realistic, because in practice, when given a choice between two models, it is usually best to go with the more accurate one. Fortunately, we will see that accuracy and churn are not necessarily competing objectives, and our proposed method usually gives the best result for both simultaneously." }, { "heading": "5.4 RESULTS", "text": "We see from Table 1 that mixup and our method, k-NN label smoothing, are consistently the most competitive; mixup outperforms on SVHN and Fashion MNIST while k-NN label smoothing outperforms on all the remaining datasets. Notably, both methods do well on accuracy and churn metrics simultaneously, suggesting that there is no inherent trade-off between predictive performance and churn reduction.
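For reference, a minimal sketch of the pairwise churn metric described in Section 5.3 (function names are ours):

```python
import numpy as np
from itertools import combinations

def churn(pred_a, pred_b):
    """Fraction of test points on which two runs' predicted labels differ."""
    return float(np.mean(pred_a != pred_b))

def average_pairwise_churn(preds):
    """Average churn over all pairs of runs, as in Section 5.3.
    preds: list of (n_test,) predicted-label arrays from re-trainings."""
    pairs = list(combinations(range(len(preds)), 2))
    return sum(churn(preds[i], preds[j]) for i, j in pairs) / len(pairs)
```

Slicing by correctness, as in Table 1, amounts to restricting the mean in churn() to the indices where the first run's predictions match (or do not match) the true labels.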
Due to space constraints, ablations on SVHN for our method’s hyperparameters (a, b, and k), along with results for the ensemble baseline, can be found in the Appendix. While we found ensembling to be remarkably effective, it comes with a higher cost (more trainable parameters and higher inference cost), so we discourage a direct comparison with the other methods." }, { "heading": "6 CONCLUSION", "text": "Modern DNN training is a noisy process: randomization arising from stochastic minibatches, weight initialization, and data preprocessing techniques can lead to models with drastically different predictions on the same datapoints when using the same training procedure, and this phenomenon happens even when all the models attain similarly high accuracies.

Reducing such prediction churn is important in practical problems, as production ML models are constantly updated and improved upon. Since offline metrics usually can only serve as proxies for the live metrics, comparing the models in A/B tests and live experiments oftentimes must involve manual labeling of the disagreements between the models, making it a costly procedure. Thus, controlling the amount of predictive churn can be crucial for more efficiently iterating on and improving models in a production setting.

Despite the practical importance of this problem, there has been little work in the literature on this topic. We provide one of the first comprehensive analyses of reducing the predictive churn that arises from retraining a model on the same dataset and model architecture. We show that numerous methods developed for other goals, such as learning with noisy labels and improving model calibration, serve as reasonable baselines for lowering prediction churn. Moreover, we propose a new technique, k-NN label smoothing, which is shown to be a principled approach that leverages local smoothing from the deep k-NN labels to enhance the global smoothing of the vanilla label smoothing procedure. We further show that it often outperforms the baselines across a range of datasets and model architectures." }, { "heading": "A PROOFS", "text": "For the proofs, we make use of the following result from Jiang (2019), which bounds the number of distinct k-NN sets on the sample across all k:

Lemma 1 (Lemma 3 of Jiang (2019)). Let $M$ be the number of distinct k-NN sets over $\mathcal{X}$, that is, $M := |\{N_k(x) : x \in \mathcal{X}\}|$. Then $M \le D \cdot n^D$.

Proof of Theorem 1. By the triangle inequality and the smoothness condition in Assumption 1, we have:
$$|\eta_k(x) - \eta(x)| \le \left|\sum_{i=1}^{n} (\eta(x_i) - \eta(x)) \cdot \frac{\mathbb{1}[x_i \in N_k(x)]}{|N_k(x)|}\right| + \left|\sum_{i=1}^{n} (y_i - \eta(x_i)) \cdot \frac{\mathbb{1}[x_i \in N_k(x)]}{|N_k(x)|}\right| \le C_\alpha \cdot r_k(x)^\alpha + \left|\sum_{i=1}^{n} (y_i - \eta(x_i)) \cdot \frac{\mathbb{1}[x_i \in N_k(x)]}{|N_k(x)|}\right|.$$
We now bound each of the two terms separately.

To bound $r_k(x)$, let $r = \left(\frac{2k}{\omega \cdot v_D \cdot n \cdot p_{X,0}}\right)^{1/D}$. We have $\mathcal{P}(B(x, r)) \ge \omega \inf_{x' \in B(x,r) \cap \mathcal{X}} p_X(x') \cdot v_D r^D \ge \omega\, p_{X,0}\, v_D\, r^D = \frac{2k}{n}$, where $\mathcal{P}$ is the distribution function w.r.t. $p_X$. By Lemma 7 of Chaudhuri & Dasgupta (2010) and the condition on k, it follows that with probability $1 - \delta/2$, uniformly in $x \in \mathcal{X}$, $|B(x, r) \cap X| \ge k$, where $X$ is the sample of feature vectors. Hence, $r_k(x) < r$ for all $x \in \mathcal{X}$ uniformly with probability at least $1 - \delta/2$.

Define $\xi_i := y_i - \eta(x_i)$. Then $-1 \le \xi_i \le 1$, and thus by Hoeffding’s inequality, $A_x := \sum_{i=1}^{n}(y_i - \eta(x_i)) \cdot \frac{\mathbb{1}[x_i \in N_k(x)]}{|N_k(x)|} = \sum_{i=1}^{n} \xi_i \cdot \frac{\mathbb{1}[x_i \in N_k(x)]}{|N_k(x)|}$ satisfies $\mathbb{P}(|A_x| > t/k) \le 2\exp(-t^2/(2k))$.
Then setting $t = \sqrt{2k} \cdot \sqrt{\log(4D/\delta) + D \log(n)}$ gives
$$\mathbb{P}\left(|A_x| \ge \sqrt{\frac{2\log(4D/\delta) + 2D\log(n)}{k}}\right) \le \frac{\delta}{2D \cdot n^D}.$$
By Lemma 3 of Jiang (2019), the number of unique random variables $A_x$ across all $x \in \mathcal{X}$ is bounded by $D \cdot n^D$. Thus, by a union bound,
$$\mathbb{P}\left(\sup_{x \in \mathcal{X}} |A_x| \ge \sqrt{\frac{2\log(4D/\delta) + 2D\log(n)}{k}}\right) \le \delta/2.$$
The result follows.

Proof of Theorem 2. Let $X$ be the $n$ sampled feature vectors and let $x \in \mathcal{X}$. Define $k'(x) := |X \cap B(x, r_\beta(x))|$. We have:
$$|\eta_k(x) - \tilde\eta_\beta(x)| \le |\eta_{k'(x)}(x) - \eta_k(x)| + |\eta_{k'(x)}(x) - \tilde\eta_\beta(x)|.$$
We bound each of the two terms separately. We have
$$|k'(x) - k| = \left|\sum_{x_i \in X} \mathbb{1}[x_i \in B(x, r_\beta(x))] - \beta \cdot n\right|.$$
By Hoeffding’s inequality we have
$$\mathbb{P}(|k'(x) - k| \ge t \cdot n) \le 2\exp(-2t^2 n).$$
Choosing $t = \sqrt{\frac{\log(4D/\delta) + D\log(n)}{2n}}$ gives us
$$\mathbb{P}\left(|k'(x) - k| \ge \sqrt{\frac{n}{2} \cdot (\log(4D/\delta) + D\log(n))}\right) \le \frac{\delta}{2D \cdot n^D}.$$
By Lemma 3 of Jiang (2019), the number of unique sets of points consisting of balls intersected with the sample is bounded by $D \cdot n^D$, and thus by a union bound, we have with probability at least $1 - \delta/2$:
$$\sup_{x \in \mathcal{X}} |k'(x) - k| \le \sqrt{\frac{n}{2} \cdot (\log(4D/\delta) + D\log(n))}.$$
We now have
$$|\eta_{k'(x)}(x) - \eta_k(x)| \le \left|\frac{1}{k} - \frac{1}{k'(x)}\right| \min\{k, k'(x)\} + \min\left\{\frac{1}{k}, \frac{1}{k'(x)}\right\} |k - k'(x)| \le \frac{2}{k} \cdot |k - k'(x)| \le \sqrt{\frac{2\log(4D/\delta) + 2D\log(n)}{\beta \cdot n}},$$
where the first inequality follows by comparing the difference contributed by the neighbors shared between the k-NN and k'(x)-NN sets (first term on the RHS) and by the neighbors that are not shared (second term on the RHS).

For the second term, define $A_x := X \cap B(x, r_\beta(x))$. For any $x'$ sampled from $B(x, r_\beta(x))$, the expected label is $\tilde\eta_\beta(x)$. Since $\eta_{k'(x)}(x)$ is the mean label among the datapoints in $A_x$, we have by Hoeffding’s inequality that
$$\mathbb{P}\big(|\eta_{k'(x)}(x) - \tilde\eta_\beta(x)| \ge t/k'(x)\big) \le 2\exp\big(-t^2/(2k'(x))\big).$$
Then setting $t = \sqrt{2k'(x)} \cdot \sqrt{\log(4D/\delta) + D\log(n)}$ gives
$$\mathbb{P}\left(|\eta_{k'(x)}(x) - \tilde\eta_\beta(x)| \ge \sqrt{\frac{2\log(4D/\delta) + 2D\log(n)}{k'(x)}}\right) \le \frac{\delta}{2D \cdot n^D}.$$
By Lemma 3 of Jiang (2019), the number of unique sets $A_x$ across all $x \in \mathcal{X}$ is bounded by $D \cdot n^D$. Thus, by a union bound, with probability at least $1 - \delta/2$,
$$|\eta_{k'(x)}(x) - \tilde\eta_\beta(x)| \le \sqrt{\frac{2\log(4D/\delta) + 2D\log(n)}{k'(x)}}.$$
The result follows immediately for $n$ sufficiently large." }, { "heading": "B ENSEMBLE RESULTS", "text": "In Table 2 we present the experimental results for the ensemble baseline. The method performs remarkably well, beating the proposed method and the other baselines on both accuracy and churn reduction across datasets. We do note, however, that ensembling comes at a cost which may prove prohibitive in many practical applications: with m times the number of trainable parameters, training (if done sequentially) takes m times as long, as does inference, since each subnetwork must be evaluated before aggregation." }, { "heading": "C ABLATION STUDY", "text": "In Table 3, we report SVHN results ablating k-NN label smoothing’s hyperparameters: k, a, and b. We observe the following trends: with a fixed to 1, both accuracy and churn improve with increasing b, and a similar relationship holds as a increases with b fixed to 0.9. Lastly, both key metrics are stable with respect to k." }, { "heading": "D HYPERPARAMETER SEARCH", "text": "Our experiments involved performing a grid search over hyperparameters.
We detail the search ranges per method below.

k-NN label smoothing.

• k ∈ [5, 10, 100, 500]
• a ∈ [0.005, 0.01, 0.2, 0.05, 0.1, 0.5, 0.8, 0.9, 1.0]
• b ∈ [0, 0.05, 0.1, 0.5, 0.9]

Anchor.

• a ∈ [0.005, 0.01, 0.02, 0.05, 0.1, 0.5, 0.8, 0.9, 1.0]

$\ell_1$, $\ell_2$ Regularization.

• a ∈ [0.001, 0.01, 0.05, 0.1, 0.2, 0.5]

Co-distill

• a ∈ [0.001, 0.01, 0.05, 0.1, 0.2, 0.5]
• nwarm ∈ [1000, 2000]

Bi-tempered

• t1 ∈ [0.3, 0.5, 0.7, 0.9]
• t2 ∈ [1., 2., 3., 4.]
• niters always set to 5.

Mixup

• a ∈ [0.2, 0.3, 0.4, 0.5]

Ensemble

• m ∈ [3, 5]" }
]
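For illustration, the sketch below computes the k-NN label estimate $\eta_k(x)$ analyzed in Appendix A and blends it with global label smoothing. The excerpt above does not spell out the exact combination rule used by k-NN label smoothing, so the blend with weights a and b below is a hypothetical form, labeled as such in the code.

```python
import numpy as np

def knn_label_estimates(features, onehot_labels, k):
    """eta_k(x): mean one-hot label over the k nearest training neighbors
    (Euclidean distance), matching the estimator analyzed in Appendix A."""
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)           # exclude each point itself
    nn = np.argsort(dists, axis=1)[:, :k]     # indices of the k nearest neighbors
    return onehot_labels[nn].mean(axis=1)     # (n, num_classes) soft labels

def knn_smoothed_targets(features, onehot_labels, k, a, b):
    """Hypothetical blend of global label smoothing (weight a) with the local
    k-NN estimate (weight b); the paper's exact combination is not given in
    this excerpt."""
    num_classes = onehot_labels.shape[1]
    eta_k = knn_label_estimates(features, onehot_labels, k)
    globally_smoothed = (1 - a) * onehot_labels + a / num_classes
    return (1 - b) * globally_smoothed + b * eta_k

# Toy usage on random 2-D features with 3 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 2))
onehot = np.eye(3)[rng.integers(0, 3, size=50)]
print(knn_smoothed_targets(feats, onehot, k=5, a=0.1, b=0.5)[:2])
```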
2020
DEEP k-NN LABEL SMOOTHING IMPROVES STABIL-
SP:5751b2abad772e44e69e125a769f25892c2a2e30
[ "This paper proposes Adversarial Feature Desensitization (AFD) as a defense against adversarial examples. AFD employs a min-max adversarial learning framework where the classifier learns to encode features of both clean and adversarial images as the same distribution, thereby desensitizing adversarial features. With the aim of fooling a separate discriminator model into categorizing the classifier’s adversarial features as from clean images, the classifier is trained with the standard cross-entropy loss and adversarial loss terms. The authors showed through experiments on MNIST, CIFAR10 and CIFAR100 datasets that AFD mostly outperform previous defenses across different adversarial attacks under white- and black-box conditions.", "This paper applied domain adaption ideas into the adversarial settings. Based on it, they proposed Adversarial Feature Desensitization (AFD) by leveraging a discriminator network to minimize the distance between adversarial feature and natural feature. They show that their method could improve the robustness compared to TRADE and AT and could also generalize well to other attacks (with different epsilon). ", "This paper proposes a feature desensitization method for adversarial defense. The authors formulate robust representation learning problems from the domain adaptation perspective, the algorithm to solve which is based on two-player minmax methods (i.e., GAN). The evaluation results on multiple datasets demonstrate the advantage over baseline methods. The learned feature also shows to be spare. ", "The paper proposes Adversarial Feature Desensitization (AFD) to train classifiers robust to adversarial attacks. In particular, AFD trains jointly a feature extractor, a classifier for the original task and a discriminator which distinguishes between natural and adversarial inputs, in order to find features which are effective for classification but not sensitive to adversarial attacks. In the experimental evaluation on four datasets, AFD outperforms standard methods in most of the cases." ]
Neural networks are known to be vulnerable to adversarial attacks – slight but carefully constructed perturbations of the inputs which can drastically impair the network’s performance. Many defense methods have been proposed for improving robustness of deep networks by training them on adversarially perturbed inputs. However, these models often remain vulnerable to new types of attacks not seen during training, and even to slightly stronger versions of previously seen attacks. In this work, we propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field. Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs. This is achieved through a game where we learn features that are both predictive and robust (insensitive to adversarial attacks), i.e. cannot be used to discriminate between natural and adversarial data. Empirical results on several benchmarks demonstrate the effectiveness of the proposed approach against a wide range of attack types and attack strengths. Our code is available at https://github.com/BashivanLab/afd.
[ { "affiliations": [], "name": "Pouya Bashivan" }, { "affiliations": [], "name": "Reza Bayat" }, { "affiliations": [], "name": "Adam Ibrahim" }, { "affiliations": [], "name": "Kartik Ahuja" }, { "affiliations": [], "name": "Mojtaba Faramarzi" }, { "affiliations": [], "name": "Touraj Laleh" }, { "affiliations": [], "name": "Blake Richards" }, { "affiliations": [], "name": "Irina Rish" } ]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yang Bai", "Yuyuan Zeng", "Yong Jiang", "Shu-Tao Xia", "Xingjun Ma", "Yisen Wang" ], "title": "Improving Adversarial Robustness via Channel-wise Activation Suppressing", "venue": "In ICLR,", "year": 2021 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine Learning,", "year": 2010 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Fernando Pereira" ], "title": "Analysis of representations for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards Evaluating the Robustness of Neural Networks", "venue": "Proceedings - IEEE Symposium on Security and Privacy,", "year": 2017 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "John C Duchi", "Percy S Liang" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alvin Chan", "Yi Tay", "Yew-Soon Ong" ], "title": "What it thinks is important is important: Robustness transfers through input gradients", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Alvin Chan", "Yi Tay", "Yew Soon Ong", "Jie Fu" ], "title": "Jacobian Adversarially Regularized Networks for Robustness", "venue": "ICLR, 2020", "year": 2020 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "J. 
Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "arXiv preprint arXiv:2003.01690,", "year": 2020 }, { "authors": [ "Gavin Weiguang Ding", "Luyu Wang", "Xiaomeng Jin" ], "title": "AdverTorch v0.1: An adversarial robustness toolbox based on pytorch", "venue": "arXiv preprint arXiv:1902.07623,", "year": 2019 }, { "authors": [ "Yinpeng Dong", "Zhijie Deng", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Adversarial Distributional Training for Robust Deep Learning", "venue": "In Neural Information Processing Systems (NIPS), number NeurIPS,", "year": 2020 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting Adversarial Attacks with Momentum", "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Harris Drucker", "Yann Le Cun" ], "title": "Improving generalization performance using double backpropagation", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Sayna Ebrahimi", "Franziska Meier", "Roberto Calandra", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Adversarial continual learning", "venue": "arXiv preprint arXiv:2003.09553,", "year": 2020 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "Exploring the landscape of spatial robustness", "venue": "36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": "32nd International Conference on Machine Learning, ICML 2015,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Dan Boneh", "Patrick Mcdaniel" ], "title": "Ensemble Adversarial Training: Attacks and Defenses", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Sven Gowal", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "Uncovering the limits of adversarial training against norm-bounded adversarial examples", "venue": null, "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using pre-training can improve model robustness and uncertainty. 
NeurIPS", "venue": null, "year": 2019 }, { "authors": [ "Qiuyuan Huang", "Paul Smolensky", "Xiaodong He", "Li Deng", "Dapeng Wu" ], "title": "Tensor product generation networks for deep NLP modeling", "venue": null, "year": 2017 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing", "venue": "arXiv preprint arXiv:1803.06373,", "year": 2018 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "Hong Liu", "Mingsheng Long", "Jianmin Wang", "Michael Jordan" ], "title": "Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Michael Mathieu", "Camille Couprie", "Yann LeCun" ], "title": "Deep multi-scale video prediction beyond mean square error", "venue": "arXiv preprint arXiv:1511.05440,", "year": 2015 }, { "authors": [ "Alexander Matyasko", "Lap-Pui Chau" ], "title": "Improved network robustness with adversary critic", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Koyama Masanori", "Yoshida Yuichi" ], "title": "Spectral normalization for generative adversarial networks", "venue": null, "year": 2018 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "cgans with projection discriminator", "venue": "arXiv preprint arXiv:1802.05637,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Shin Ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Seyed Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks", "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Krishnamurthy Dvijotham", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial Robustness Through Local Linearization", "venue": null, "year": 2020 }, { "authors": [ "Jonas Rauber", "Wieland 
Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "In Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jerome Rony", "Luiz G. Hafemann", "Luiz S. Oliveira", "Ismail Ben Ayed", "Robert Sabourin", "Eric Granger" ], "title": "Decoupling direction and norm for efficient gradient-based l2 adversarial attacks and defenses", "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Andrew Slavin Ros", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Evgenia Rusak", "Lukas Schott", "Roland S. Zimmermann", "Julian Bitterwolf", "Oliver Bringmann", "Matthias Bethge", "Wieland Brendel" ], "title": "Increasing the robustness of DNNs against image corruptions by playing the Game of Noise", "venue": null, "year": 2020 }, { "authors": [ "Lukas Schott", "Jonas Rauber", "Matthias Bethge", "Wieland Brendel" ], "title": "Towards the first adversarially robust neural network model on mnist", "venue": "arXiv preprint arXiv:1805.09190,", "year": 2018 }, { "authors": [ "Samrath Sinha", "Sayna Ebrahimi", "Trevor Darrell" ], "title": "Variational adversarial active learning", "venue": "Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Chawin Sitawarin", "Supriyo Chakraborty", "David Wagner" ], "title": "Improving adversarial robustness through progressive hardening", "venue": "arXiv preprint arXiv:2003.09347,", "year": 2020 }, { "authors": [ "Chuanbiao Song", "Kun He", "Liwei Wang", "John E Hopcroft" ], "title": "Improving the generalization of adversarial training with domain adaptation", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Chuanbiao Song", "He Kun", "Lin Jiadong", "John E Hopcroft", "Liwei Wang" ], "title": "Robust local features for improving the generalization of adversarial training", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Gaurang Sriramanan", "Sravanti Addepalli", "Arya Baburaj", "R. 
Venkatesh Babu" ], "title": "Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses", "venue": "In Neural Information Processing Systems (NIPS),", "year": 2020 }, { "authors": [ "Christian Szegedy", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": null, "year": 2013 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Huaxia Wang", "Chun-Nam Yu" ], "title": "A direct approach to robust deep learning using adversarial networks", "venue": "arXiv preprint arXiv:1905.09591,", "year": 2019 }, { "authors": [ "Dongxian Wu", "Yisen Wang", "Xia Shu-Tao" ], "title": "Revisiting Loss Landscape for Adversarial Robustness", "venue": null, "year": 2019 }, { "authors": [ "Runtian Zhai", "Chen Dan", "Di He", "Huan Zhang", "Boqing Gong", "Pradeep Ravikumar", "Cho-Jui Hsieh", "Liwei Wang" ], "title": "Macer: Attack-free and scalable robust training via maximizing certified radius", "venue": null, "year": 2001 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": null, "year": 1901 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Chaowei Xiao", "Sven Gowal", "Robert Stanforth", "Bo Li", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Towards Stable and Efficient Training of Verifiably", "venue": "Robust Neural Networks", "year": 2019 }, { "authors": [ "Jingfeng Zhang", "Jianing Zhu", "Gang Niu", "Bo Han", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Geometry-aware instance-reweighted adversarial training", "venue": "In ICLR,", "year": 2021 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Sicheng Zhu", "Xiao Zhang", "David Evans" ], "title": "Learning adversarially robust representations via worst-case mutual information maximization", "venue": null, "year": 2020 } ]
[ { "heading": "1 Introduction", "text": "When training a classifier, it is common to assume that the training and test samples are drawn from the same underlying distribution. In adversarial machine learning, however, this assumption is intentionally violated by using the classifier itself to perturb the samples from the original (natural) data distribution towards a new distribution over which the classifier’s error rate is increased [52]. As expected, when tested on such adversarially generated input distribution, the classifier severely underperforms. To date, various methods have been proposed to defend the neural networks against adversarial attacks [34, 2], additive noise patterns and corruptions [24, 25, 45], and transformations [17]. Among these methods, two of the most successful adversarial defense methods to date are adversarial training [34], which trains the neural network with examples that are perturbed to maximize the loss on the target model, and TRADES [57], which regularizes the classifier to push the decision boundary away from the data. While past adversarial defence methods have successfully improved the neural network robustness against adversarial examples, it has also been shown that these robust networks remain susceptible to even slightly larger adversarial perturbations or other forms of attacks [19, 46, 48].\nIn this paper, we propose to view the problem of adversarial robustness through the lens of domain adaptation, and to consider distributions of natural and adversarial images as distinct input domains\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nthat a classifier is expected to perform well on. We then focus our attention on learning features that are invariant under such domain shifts. Building upon domain adaptation literature [4], we use the classification-basedH∆H-divergence to quantify the distance between the natural and adversarial domains. The theory of domain adaptation allows us to formulate a bound on the adversarial classification error (i.e. the error under the distribution of adversarial examples) in terms of the classification error on natural images and the divergence between the natural and adversarial features.\nWe further propose an algorithm for minimizing the adversarial error using this bound. For this, we train a classifier and a domain discriminator to respectively minimize their losses on the label classification and domain discrimination tasks. The feature extractor is trained to minimize the label classifier’s loss and maximise the discriminator’s loss. In this way, the feature extractor network is encouraged to learn features that are both predictive for the classification task and insensitive to the adversarial attacks. The proposed setup is conceptually similar to prior work in adversarial domain adaptation [18, 53], where domain-invariant features are learned through an adversarial game between the domain discriminator and a feature extractor network.\nThis setup is similar to the adversarial learning paradigm widely used in image generation and transformation [20, 28, 60], unsupervised and semi-supervised learning [39], video prediction [35, 31], active learning [47], and continual learning [16]. Some prior work have also considered adversarial learning to tackle the problem of adversarial examples [54, 36, 9, 8]. These methods used generative models to learn the distribution of the adversarial images[54, 36], or to learn the distribution of input gradients[9, 8]. 
In contrast, our method learns a discriminator function between the distributions of adversarial and natural features and updates the feature extractor to reduce the discriminability of those distributions.

The main contributions of this work are as follows:

• We apply domain-adaptation theory to the problem of adversarial robustness; this allows us to bound the adversarial error in terms of the error on the natural inputs and the divergence between the feature (representation) distributions of the adversarial and natural domains.

• Aiming to minimize this bound, we propose a method which learns adversarially robust features that are both predictive and insensitive to adversarial attacks, i.e. cannot be used to discriminate between natural and adversarial data.

• We empirically demonstrate the effectiveness of the proposed method in learning robust models against a wide range of attack types and attack strengths, and show that our proposed approach often significantly outperforms most previous defense methods." }, { "heading": "2 Related Work", "text": "There is an extensive literature on mitigating susceptibility to adversarial perturbations [34, 57, 13, 59, 3, 22, 7]. Adversarial training [34] is one of the earliest successful attempts to improve the robustness of learned representations to potential perturbations of the input pattern by solving a min-max optimization problem. TRADES [57] adds a regularization term to the cross-entropy loss which penalizes the network for assigning different labels to natural images and their corresponding perturbed images. [41] proposed an additional regularization term (a local linearity regularizer) that encourages the classification loss to behave linearly around the training examples. [55, 51] proposed to regularize the flatness of the loss to improve adversarial robustness.

Our work is closely related to the domain adaptation literature, in which adversarial optimization has recently gained much attention [18, 32, 53]. From this viewpoint, one could consider the clean and perturbed inputs as two distinct domains for which a network aims to learn an invariant feature set. Our setting differs in two ways: i) the perturbed domain continuously evolves while the parameters of the feature network are tuned; and ii) unlike the usual setting in domain-adaptation problems, here we have access to the labels associated with some samples from the perturbed (target) domain. Recent work [49] regularized the network to have similar logit values for clean and perturbed inputs and showed that this additional term leads to better robust generalization to unseen perturbations. Related to this, Adversarial Logit Pairing [27] increases robustness by directly matching the logits for clean and adversarial inputs, and JARN [9] adversarially regularizes the model’s input gradients to resemble natural images. Another line of work is on developing certified defenses, which consist of methods with provable bounds over which the network is certified to operate robustly [58, 56, 10]. While these approaches provide a sense of guarantee about the proposed defenses, they are usually prohibitively expensive to train, drastically reduce the performance of the network on natural images, and the empirical robustness gained against standard attacks is low." }, { "heading": "3 Our approach", "text": "We will now make a connection between domain adaptation and adversarial robustness, and build upon this connection to develop an approach for improving the network’s robustness against adversarial attacks."
}, { "heading": "3.1 Preliminaries", "text": "Let Fθ(x) : X → Z , where X ⊆ Rn, Z ⊆ Rm, be a feature extractor (e.g. a neural network with parameters θ) mapping the input x ∈ X into the feature vector (representation) z ∈ Z , and let Cφ : Z → Y , where Y = {1, . . . ,K} are the class labels, be a classifier, with parameters φ (e.g., the last linear layer of a neural network plus the softmax function, on top of the extracted features).\nAdversarial attack: Let π(x, ) denote a perturbation function (an adversarial attack) which, for a given (x, y) ∈ X × Y , generates a perturbed sample x′ ∈ B(x, ) within the -neighborhood of x, B(x, ) = {x′ ∈ X : ‖x′ − x‖ < }, by solving the following maximization problem\nmax t∈B(x, )\nL(Cφ(Fθ(t)), y), (1)\nwhere L is the task classification loss function. In practice, however, the perturbed sample x′ found by an attacker is typically an approximate rather than the exact solution to this maximization problem.\nIn order to characterize the distance between the natural and adversarial data distributions, the following notion of distance between two probability distributions, defined in [4, 18], will be used later to make a connection with domain adaptation theory.\nH∆H-distance: Let H be a set of binary classifiers (hypotheses), called a hypothesis space; then the symmetric difference hypothesis space H∆H defines the set of hypotheses that capture the disagreements between two hypotheses inH, as in [4]:\ng ∈ H∆H ⇐⇒ g(x) = h(x)⊕ h′(x) for some h, h′ ∈ H, (2)\nwhere ⊕ denotes the XOR function. Then theH∆H-distance [4, 18] between two data distributions (domains) S and T , with respect to the hypothesis spaceH, is defined as:\ndH∆H(S, T ) = 2 sup h∈H∆H\n|Px∼S [ h(x) = 1 ] − Px∼T [ h(x) = 1 ] |. (3)\nThis equation turns into an inequation when the supremum is taken over the hypothesis space H instead ofH∆H [18]." }, { "heading": "3.2 A Domain Adaptation View of Adversarial Robustness", "text": "A domain is defined as a data distribution D on the set of inputs X [5]. In the adversarial robustness setting, we consider two domains – the natural and the adversarial domains, corresponding respectively\nto the source and target domains in domain adaptation. We denote by DX and D′X the natural and adversarial distributions of input instances respectively and by DZ and D′Z their corresponding induced distributions over the feature space Z . As in domain adaptation, we assume that f : X → Y is a labeling function common to both domains. The expected classification error Z of the classifier Cφ over DZ is defined as the probability that the classifier Cφ disagrees with the function f̃ :\nZ(Cφ) = Ez∼DZ [ y 6= Cφ(z) ] , (4)\nwhere f̃ : Z → Y is a mapping from the features to the class label such that f(x) = f̃(Fθ(x)). We similarly define ′Z as the expected error of Cφ over DZ′ . Using theorem 2 from [4] that relates the source and the target domain errors, we get an upper bound on the expected adversarial error ′Z as:\n′Z(h) ≤ Z(h) + 1\n2 dH∆H(DZ ,D′Z) + c, (5)\nwhere c is a constant term w.r.t. h. Eq. 5 essentially gives a bound on the adversarial error ′Z in terms of the natural error Z and a divergence dH∆H between the natural and adversarial domains with respect to their induced representation distributions DZ and D′Z . In the next section, we will describe an algorithm for improving adversarial robustness of a model by iteratively estimating and minimizing these two components of the error bound." 
}, { "heading": "3.3 Adversarial Feature Desensitization", "text": "Based on Eq. 5, the expected adversarial error could be reduced by jointly minimizing the natural error and the divergence between the distributions of natural and adversarial representations dH∆H(DZ ,D′Z). While minimizing the natural error X is straightforward, minimizing the crossdomain divergence requires us to estimate dH∆H(DZ ,D′Z). As was shown before [18], training a domain discriminator Dψ is closely related to estimating the dH∆H(DZ ,D′Z). The domain discriminator is a classifier trained to assign a label of 1 to samples from DZ , and -1 to samples from D′Z . Namely, it is shown [18] that\ndH∆H(DZ ,D′Z) ≤ 2 sup h∈H |αDZ ,D′Z (h)− 1|, (6)\nwhere αDZ ,D′Z (h) = Pz∼DZ [ h(z) = 1 ] +Pz∼D′Z [ h(z) = −1 ] combines the true positives and true negatives, and is thus maximized by the optimal domain discriminator h = Dψ. Note that, if the domain distributions DZ and D′Z are the same, then even the best choice of domain discriminator Dψ will achieve chance-level accuracy, corresponding to αDZ ,D′Z (Dψ) = 1. Our approach will aim at minimizing this estimated distance dH∆H(DZ ,D′Z) by tuning the feature extractor network parameters θ in the direction that pushes the distributions DZ and D′Z closer together. In parallel, we train the domain discriminator to estimate and guide the progress of the feature extractor’s tuning.\nWe now describe the proposed approach (see Algorithm 1) which essentially involves simultaneous training of the feature extractor Fθ, the task classifier Cφ and the domain discriminator Dψ (see Figure 1a)1. One iteration of the training procedure consists of the following three steps.\nFirst, parameters of the feature extractor Fθ and classifier Cφ are updated aiming to minimize the natural error X using the cross-entropy loss on natural inputs:\nLC = − 1\nm m∑ i=1 ỹi · log ( softmax(Cφ(Fθ(xi))) ) , (7)\nwhere ỹi is a one-hot encoding of the true label of the i-th sample xi.\nNext, steps two and three essentially implement a two-player minimax game similar to that in Generative Adversarial Networks (GAN) [20], carried out between the feature extractor network Fθ and the domain discriminator Dψ , with a value function\nV (Fθ, Dψ) = Ep(y) [ Ep(x|y)[S(−Dψ(Fθ(x), y))] ] + Eq(y) [ Eq(x|y)[S(Dψ(Fθ(x), y))] ] , (8)\n1Note that we will somewhat abuse the notation, assuming that Cφ and Dψ below correspond to the logits (last-layer output) of the corresponding networks. Also, we will use class-conditional discriminators, Dψ(Fθ(x, y)), i.e. train different domain discriminator for different label values y.\nAlgorithm 1: AFD training procedure Input: Adversarial perturbation function (attack) π, feature extractor Fθ, task classifier Cφ, domain discriminator Dψ , learning rates α, β, and γ. repeat\ninput next mini-batch {(xi, yi), ..., (xm, ym)} for i=1 to m: x′i ← π(xi, ) Compute LC according to Eq. 7 Compute LD according to Eq. 9 Compute LF according to Eq. 10 (θ, φ)← (θ, φ)− α∇θ, φLC % update feature extractor and task classifier ψ ← ψ − β∇ψLD % update domain discriminator θ ← θ − γ∇θLF % update feature extractor\nuntil convergence;\nwhere S is the softplus function. In particular, parameters of the domain discriminator Dψ are updated to minimize the cross-entropy loss associated with discriminating natural and adversarial inputs, maximizing α(h) in Eq. 
$$L_D = \frac{1}{m} \sum_{i=1}^{m} \big[ S(-D_\psi(F_\theta(x_i), y_i)) + S(D_\psi(F_\theta(x'_i), y_i)) \big], \quad (9)$$
while the parameters of the feature extractor $F_\theta$ are adversarially updated to maximize the domain discriminator’s loss from Eq. 9:
$$L_F = \frac{1}{m} \sum_{i=1}^{m} S(-D_\psi(F_\theta(x'_i), y_i)). \quad (10)$$
In Figure 1b, we visually compare the learning dynamics of adversarial training, TRADES, and AFD. Essentially, adversarial training solves the classification problem by pushing the representations of adversarial examples from different classes away from each other. TRADES regularizes the normal classification loss on the natural inputs with an additional term that encourages the representations of adversarial and natural images to match. Similar to TRADES, in AFD the regular classification loss on natural inputs is augmented, but with an adversarial game: a domain discriminator is trained to distinguish between the adversarial and natural inputs for each class, and the feature extractor is then updated to make the representations of natural and adversarial examples indistinguishable from each other. Notably, because the parameter update for the feature extractor network is done to maximize the domain discriminator loss, and not to decrease the loss on particular adversarial examples (as is done in adversarial training or TRADES), it potentially increases the network’s robustness against any perturbation that could be correctly classified using the same domain discriminator. This could potentially lead to a broader form of generalization learned by the network.

Discussion: Relation to Adversarial Training. Adversarial training minimizes the expected error on adversarial examples (the perturbed versions of the natural samples), generated by an attacker in order to maximize the classification loss. The adversarial training procedure involves a minimax optimization problem consisting of an inner maximization to find adversarial examples that maximize the classification loss and an outer minimization to find model parameters that minimize the adversarial loss. From the domain adaptation point of view, the inner optimization of adversarial training amounts to a sampling procedure that generates samples from the target domain. Intuitively, direct training of the classifier on samples from the target domain would be the best way to improve the accuracy in that domain (i.e. adversarial classification accuracy). However, it is important to note that the adversarial examples found through the inner optimization only approximately maximize the classification loss; the adversarial error associated with these samples therefore only acts as a lower bound on the true adversarial error, and the outer loop of adversarial training essentially minimizes a lower bound on the adversarial classification error. In contrast, our proposed method minimizes a conservative upper bound on the adversarial error and is therefore more likely to generalize to a larger set of unseen attacks, and to stronger versions of previously seen attacks (i.e. ones that generate higher-loss samples in the inner optimization loop)." }, { "heading": "4 Experiments", "text": "" }, { "heading": "4.1 Experimental setup", "text": "Datasets. We validated our proposed method on several common datasets, including MNIST [30], CIFAR10, CIFAR100 [29], and tiny-Imagenet [26].
The inputs for all datasets were used at their original resolution, except for tiny-Imagenet, where the inputs were resized to 32 × 32 to allow the experiments to finish within a reasonable time on two GPUs.

Adversarial attacks. To fairly assess the generalization ability of each defense method across attack types, we tested each network on 9 well-known adversarial attacks from the literature, using existing implementations from the Foolbox [42] and Advertorch [12] Python packages. Namely, we tested the models against different variations of Projected Gradient Descent (PGD) [34] (L∞, L2, L1), the Fast Gradient Sign Method (FGSM) [21], the Momentum Iterative Method (MIM) [14], Decoupled Direction and Norm (DDN) [43], Deepfool [40], C&W [6], and AutoAttack [11]. Also, to assess the generalization of robustness to stronger adversarial attacks, for each attack we varied the ε value across a wide range and validated the different models on each. The specific hyperparameters used for each attack are listed in Table-A2.

Feature extractor network $F_\theta$ and classifier $C_\phi$. We used the same network architecture, ResNet18 [23], for the feature extractor and classifier networks in experiments on all datasets, and only increased the number of features for the more challenging datasets. The number of base filters in the ResNet architecture was set to 16 for MNIST and 64 for the other datasets. We used the activations before the last linear layer as the output of the feature extractor network (Z) and the last linear layer as the classifier network $C_\phi$. We added an activation normalization layer to the output of the feature extractor network to provide normalized features to both the $C_\phi$ and $D_\psi$ networks.

Domain discriminator network $D_\psi$. We compared several variations of the domain discriminator architecture and evaluated their effect on robust classification on the MNIST dataset (Table A5). Overall, we found that using deeper networks for the domain discriminator and adding a projection discriminator layer improves the robust classification accuracy. The number of hidden units was the same in all layers of $D_\psi$ (64 for MNIST and 512 for the other datasets). Following common design principles from the Generative Adversarial Networks literature, we used spectral normalization [37] on all layers of $D_\psi$. In all experiments, the domain discriminator ($D_\psi$) consisted of three fully connected layers with Leaky ReLU nonlinearities, followed by a projection discriminator layer that incorporated the labels into the adversarial discriminator through a dot product operation [38]. Further details of training for each experiment are listed in Table-A1.

Training parameters and baselines. All networks, including the baselines, were trained with an adaptive version of the PGD attack [11] that adaptively tunes the step size during the attack with virtually no computational overhead compared to the standard PGD attack. We used ε = 0.3, 0.031, and 0.016 for the MNIST, CIFAR, and Tiny-Imagenet datasets respectively. To find the best learning rates, we randomly split the CIFAR10 train set into train and validation sets (45000 and 5000 images respectively). We then carried out a grid search using these train-validation sets and picked the learning rates with the highest validation performance.
Based on this analysis, we selected the learning rate $\gamma = 0.5$ for tuning the feature extractor $F_\theta$, and $\alpha = \beta = 0.1$ for tuning the parameters of the domain discriminator $D_\psi$ and the task classifier $C_\phi$.

In all experiments we trained two versions of the AFD model, one with losses $L_D$ and $L_F$ according to Eq. 9 and 10, which we call AFD-DCGAN, and another version where we substitute these losses with those from the Wasserstein GAN [1], dubbed AFD-WGAN (see Eq. 11 and 12 in the Appendix). We mainly compared the performance of our proposed method with two prominent defense methods, adversarial training and TRADES. We used a re-implementation of the adversarial training (AT) method [34] and the official code for TRADES² [57], and denoted these results with † in the tables. All experiments were run on NVIDIA V100 GPUs. We used one GPU for experiments on MNIST and 2 GPUs for the other datasets.

²https://github.com/yaodongyu/TRADES.git" }, { "heading": "4.2 Robust classification against nominal attacks", "text": "We first evaluated our method against adversarial attacks under settings similar to those used during training (ε = 0.3, 0.031, and 0.015 for the MNIST, CIFAR, and Tiny-Imagenet datasets respectively).

Table 1 compares the robust classification performance of AFD and several other defense methods against PGD-L∞, C&W-L2, and AutoAttack white-box and black-box attacks. The black-box attacks were carried out by constructing the adversarial examples using a ResNet18 architecture trained on the natural inputs $x \sim \mathcal{D}_X$. Overall, both versions of AFD (AFD-DCGAN and AFD-WGAN) were highly robust against all five tested attacks while maintaining a higher “Clean” accuracy (on natural data) compared to strong baseline models like TRADES and Adversarial Training. AFD-WGAN was consistently at the top on the MNIST and CIFAR10 datasets. On CIFAR100 and Tiny-Imagenet, AFD performed better than or similarly to Adversarial Training on all the attacks and better than TRADES on most of the attacks, although it was occasionally behind TRADES (on the PGD-L∞ and AA white-box attacks). An analysis of feature sensitivity showed that on MNIST and CIFAR10, where AFD outperformed the other baselines by a larger margin, the features were significantly less sensitive to adversarial perturbations, and over a larger range of attack strengths (Figure-A4). In addition to these tests, we also evaluated the AFD model against transfer black-box attacks from the Adversarial Training and TRADES models, which further demonstrated AFD’s higher robustness to those attacks too (Table-A3)." }, { "heading": "4.3 Robust classification against stronger and unseen attacks", "text": "To evaluate how each network generalizes to unseen domains of adversarial inputs (i.e. adversarial examples generated with unseen forms of adversarial attacks), we additionally validated the classification robustness against a range of possible ε values for several widely used attacks that were not used during training. To fairly compare different models while considering both attack types and ε values, we computed the area under the curve (AUC) of accuracy vs. ε for each attack (similar to Figure 2). Table-2 summarizes the AUC values for all 9 attack methods on the four tested datasets. Compared with the baselines, we found that AFD-trained networks consistently performed better on the various datasets and on almost all the tested attacks, even for substantially larger ε values (Figure 2; also see Figures A1 and A3 in the appendix).
These results show that, compared to other baselines, AFD-trained networks are robust against a wider range of attacks and attack strengths (ε). This further suggests that the features learned through AFD generalize better across various forms of attacks and can sustain larger perturbations.

We also observed that AFD-WGAN performs better than AFD-DCGAN under most tested conditions. This is potentially due to: 1) WGAN’s ability to avoid vanishing gradients when the discriminator becomes too good compared to the generator (the feature extractor function in our case) [5]; and 2) WGAN’s ability to avoid mode collapses during training. In training GANs, mode collapse leads the generator network to output only a limited set of patterns instead of learning to produce a diverse set of natural-looking images that fool the discriminator. Under our setting, WGAN potentially leads to learning a feature extractor that can produce a more diverse set of features for perturbed inputs, instead of focusing on a subset of latent dimensions. This suggests that applying more advanced GAN training algorithms could further improve the robust performance of AFD-type models." }, { "heading": "4.4 Estimated H∆H-distance and adversarial-vs-natural generalization gap", "text": "As stated in Eq. 5, the upper bound on the adversarial error can be stated in terms of the natural error, the divergence between the two domains, and a constant term. In practice, this means that the smaller the divergence term $d_{\mathcal{H}\Delta\mathcal{H}}$ is, the smaller the gap between the adversarial and natural errors ($\epsilon'_Z - \epsilon_Z$) can be. We empirically tested this prediction using the domain discriminator trained on the CIFAR10 dataset with the PGD-L∞ attack. Figure-3a shows that the estimated $d_{\mathcal{H}\Delta\mathcal{H}}$ obtained from the domain discriminator (i.e., using the corresponding empirical value of $\alpha$ in Eq. 6) trained on PGD-L∞ with ε = 0.031 is closely related to the adversarial-vs-natural generalization gap over different ε values, as predicted by Eq. 5. Moreover, estimates from the same domain discriminator also predict the generalization-error gap attained for other forms of attacks (even ones not seen during AFD training) and ε values with high accuracy (Figure-3b). This further supports the proposal that minimizing the estimated distance between the natural and adversarial representations is an efficient way to improve model robustness against various adversarial attacks." }, { "heading": "4.5 Learning a sparse representation", "text": "Because the AFD method aims to learn a representation that is insensitive to adversarial attacks, we expected the learned representational space to potentially be of lower dimensionality (i.e. to have a smaller number of orthogonal features). To test this, we compared the dimensionality of the learned representations using two measures: i) the number of non-zero features over the test set within each dataset, and ii) the number of Principal Component Analysis (PCA) dimensions that explain more than 99% of the variance in the representation, computed over the test set of each dataset. We found that the same network architecture (i.e. ResNet18), when trained with the AFD method, learns a much sparser and lower-dimensional representational space (Table A4) compared to the naturally trained, adversarial training, and TRADES models. The representational spaces learned with AFD on the MNIST, CIFAR10, and CIFAR100 datasets had only 6, 9, and 76 principal components respectively."
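A minimal sketch of the two dimensionality measures used in this section (the non-zero feature count and the number of principal components explaining over 99% of the variance); the function is our own illustration of the measurement, not the authors' code.

```python
import numpy as np

def representation_dimensionality(features, var_threshold=0.99):
    """features: (num_examples, num_features) test-set activations Z."""
    # Measure (i): features that are non-zero anywhere on the test set.
    nonzero = int((np.abs(features) > 0).any(axis=0).sum())
    # Measure (ii): principal components explaining > var_threshold of variance,
    # computed from the singular values of the centered data matrix.
    centered = features - features.mean(axis=0, keepdims=True)
    svals = np.linalg.svd(centered, compute_uv=False)
    var_ratio = svals**2 / (svals**2).sum()
    n_components = int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)
    return nonzero, n_components

# Toy usage: a rank-3 signal embedded in 512 features.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 512))
print(representation_dimensionality(z))  # expect (512, n) with n <= 3
```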
}, { "heading": "4.6 Adversarial and norm-based desensitization", "text": "To investigate whether the same level of robustness could be achieved by encouraging the network to produce similar representations in response to natural and adversarial inputs, we ran an additional experiment on the MNIST dataset in which we added a regularization term to the classification loss to directly minimize the representation sensitivity Se = 1n ∑ x‖F (x)− F (x′)‖, during training. We observed that although this augmented loss led to learning robustness representations, it achieved modest levels of robustness (∼ 80%) and showed only weak generalization to stronger and other unseen attacks (Figure-A5). This result suggests that more direct forms of enforcing representational similarity may not lead to the same form of robustness with generalization properties similar to that achieved using an adversarial training with domain discriminator (e.g. as in AFD)." }, { "heading": "5 Conclusion and limitations", "text": "Decreasing the input-sensitivity of features has long been desired in training neural networks [15] and has been suggested as a way to improve adversarial robustness [44, 61]. In this work we proposed an\nalgorithm to decrease the sensitivity of neural network representations using an adversarial learning paradigm that involves joint training of a domain discriminator, a feature extractor, and a task classifier. Essentially, our proposed algorithm iteratively estimates a bound on the adversarial error in terms of the natural error and a classification-based measure of distance between the distributions of natural and adversarial features and then minimizes the adversarial error by concurrently reducing the natural error as well as the distance between the two feature distributions.\nLimitations. The empirical results presented here suggest that AFD-trained models are robust against a wide range of adversarial attacks (distributions) even compared to strong baselines like Adversarial Training and TRADES. However, it is not guaranteed that the model would remain robust against any unseen attacks that we have not tested or may be invented in the future - as is the case in domain adaptation literature and the lack of theoretical guarantees for cross-domain generalization. With regards to the computational cost, when measuring the average per-epoch training time on the CIFAR10 dataset (using 2 NVIDIA V100 GPUs), we found that the AFD training time is 31% longer than adversarial training and only 4% longer than TRADES. This shows that while AFD requires three SGD updates per batch, the additional computational cost is not significantly higher than many prior methods when considering that most of the computational cost is associated with generating the adversarial examples during training." }, { "heading": "6 Acknowledgements", "text": "We would like to thank Isabela Albuquerque, Joao Monteiro, and Alexia Jolicoeur-Martineau for their valuable comments on the manuscript. Pouya Bashivan was partially supported by the Unifying AI and Neuroscience – Québec (UNIQUE) Postdoctoral fellowship and NSERC Discovery grant RGPIN-2021-03035. Irina Rish acknowledges the support from Canada CIFAR AI Chair Program and from the Canada Excellence Research Chairs Program." } ]
2021
Adversarial Feature Desensitization
SP:95ba9ad102adafaabf9671737e6549728d104629
[ "This paper derives various types of graph embeddings to encode aspects of syntactic information that the brain may be processing during real-time sentence comprehension. These embeddings, along with indicators of punctuation, POS and dependency tags, and BERT embeddings, are used to predict brain activity recorded via fMRI. The authors argue that this is an improvement over use of effort-based metrics to predict brain activity, as these embeddings contain richer information than is captured by distilling down to a single measure of effort. They show that various brain regions are significantly better predicted by the syntactic embeddings than by the effort-based metrics and POS+dependency indicators. BERT embeddings, however, prove to be a better predictor (than syntactic and other predictors) across much more substantial areas of activity. " ]
We are far from having a complete mechanistic understanding of the brain computations involved in language processing and of the role that syntax plays in those computations. Most language studies do not computationally model syntactic structure and most studies that do model syntactic processing use effort-based metrics. These metrics capture the effort needed to process the syntactic information given by every word (Brennan et al., 2012; Hale et al., 2018; Brennan et al., 2016). They can reveal where in the brain syntactic processing occurs, but not what features of syntax are processed by different brain regions. Here, we move beyond effort-based metrics and propose explicit features capturing the syntactic structure that is incrementally built while a sentence is being read. Using these features and functional Magnetic Resonance Imaging (fMRI) recordings of participants reading a natural text, we study the brain representation of syntax. We find that our syntactic structure-based features are better than effort-based metrics at predicting brain activity in various parts of the language system. We show evidence of the brain representation of complex syntactic information such as phrase and clause structures. We see that regions well-predicted by syntactic features are distributed in the language system and are not distinguishable from those processing semantics. Our results call for a shift in the approach used for studying syntactic processing.
[]
[ { "authors": [ "Link" ], "title": "BERT-Large, Cased: 24-layer, 1024-hidden, 16-heads, 340M parameters. URL https://storage.googleapis.com/bert_models/2018_10_18/cased_ L-24_H-1024_A-16.zip", "venue": null, "year": 2018 }, { "authors": [ "Bijaya Adhikari", "Yao Zhang", "Naren Ramakrishnan", "B Aditya Prakash" ], "title": "Sub2vec: Feature learning for subgraphs", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2018 }, { "authors": [ "Y. Benjamini", "Y. Hochberg" ], "title": "Controlling the false discovery rate: a practical and powerful approach to multiple testing", "venue": "Journal of the Royal Statistical Society. Series B (Methodological), pp", "year": 1995 }, { "authors": [ "Idan Blank", "Zuzanna Balewski", "Kyle Mahowald", "Evelina Fedorenko" ], "title": "Syntactic processing is distributed across the language system", "venue": null, "year": 2016 }, { "authors": [ "Marisa Boston", "John Hale", "Reinhold Kliegl", "Umesh Patil", "Shravan Vasishth" ], "title": "Parsing costs as predictors of reading difficulty: An evaluation using the potsdam sentence corpus", "venue": "The Mind Research Repository (beta),", "year": 2008 }, { "authors": [ "Jonathan Brennan", "Yuval Nir", "Uri Hasson", "Rafael Malach", "David J Heeger", "Liina Pylkkänen" ], "title": "Syntactic structure building in the anterior temporal lobe during natural story listening", "venue": "Brain and language,", "year": 2012 }, { "authors": [ "Jonathan R Brennan", "Edward P Stabler", "Sarah E Van Wagenen", "Wen-Ming Luh", "John T Hale" ], "title": "Abstract linguistic structure correlates with temporal activity during naturalistic comprehension", "venue": "Brain and Language,", "year": 2016 }, { "authors": [ "Fatma Deniz", "Anwar O Nunez-Elizalde", "Alexander G Huth", "Jack L Gallant" ], "title": "The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality", "venue": "Journal of Neuroscience,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "E. Fedorenko", "A. Nieto-Castanon", "N. Kanwisher" ], "title": "Lexical and syntactic representations in the brain: An fMRI investigation with multi-voxel pattern", "venue": "analyses. Neuropsychologia,", "year": 2012 }, { "authors": [ "Evelina Fedorenko", "Sharon L Thompson-Schill" ], "title": "Reworking the language network", "venue": "Trends in cognitive sciences,", "year": 2014 }, { "authors": [ "Evelina Fedorenko", "Po-Jang Hsieh", "Alfonso Nieto-Castañón", "Susan Whitfield-Gabrieli", "Nancy Kanwisher" ], "title": "New method for fmri investigations of language: defining rois functionally in individual subjects", "venue": "Journal of neurophysiology,", "year": 2010 }, { "authors": [ "Evelina Fedorenko", "Idan Blank", "Matthew Siegelman", "Zachary Mineroff" ], "title": "Lack of selectivity for syntax relative to word meanings throughout the language", "venue": "network. 
bioRxiv,", "year": 2020 }, { "authors": [ "Stefan L Frank", "Leun J Otten", "Giulia Galli", "Gabriella Vigliocco" ], "title": "The erp response to the amount of information conveyed by words in sentences", "venue": "Brain and language,", "year": 2015 }, { "authors": [ "Angela D Friederici" ], "title": "The brain basis of language processing: from structure to function", "venue": "Physiological reviews,", "year": 2011 }, { "authors": [ "Angela D Friederici", "Shirley-Ann Rüschemeyer", "Anja Hahne", "Christian J Fiebach" ], "title": "The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes", "venue": "Cerebral cortex,", "year": 2003 }, { "authors": [ "James S Gao", "Alexander G Huth", "Mark D Lescroart", "Jack L Gallant" ], "title": "Pycortex: an interactive surface visualizer for fmri", "venue": "Frontiers in neuroinformatics,", "year": 2015 }, { "authors": [ "Jon Gauthier", "Roger Levy" ], "title": "Linking artificial and human neural representations of language", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Christopher R. Genovese" ], "title": "A Bayesian time-course model for functional magnetic resonance imaging data", "venue": "Journal of the American Statistical Association,", "year": 2000 }, { "authors": [ "Yosef Grodzinsky" ], "title": "The neurology of syntax: Language use without broca’s area", "venue": "Behavioral and brain sciences,", "year": 2000 }, { "authors": [ "Yosef Grodzinsky", "Angela D Friederici" ], "title": "Neuroimaging of syntax and syntactic processing", "venue": "Current opinion in neurobiology,", "year": 2006 }, { "authors": [ "John Hale", "Chris Dyer", "Adhiguna Kuncoro", "Jonathan R Brennan" ], "title": "Finding syntax in human encephalography with beam search", "venue": "arXiv preprint arXiv:1806.04127,", "year": 2018 }, { "authors": [ "John M Henderson", "Wonil Choi", "Matthew W Lowder", "Fernanda Ferreira" ], "title": "Language structure in the brain: A fixation-related fmri study of syntactic surprisal in reading", "venue": null, "year": 2016 }, { "authors": [ "John Hewitt", "Christopher D. Manning" ], "title": "A structural probe for finding syntax in word representations", "venue": "In Proceedings of the", "year": 2019 }, { "authors": [ "Alexander G Huth", "Wendy A de Heer", "Thomas L Griffiths", "Frédéric E Theunissen", "Jack L Gallant", "Wendy a De Heer" ], "title": "Natural speech reveals the semantic maps that tile human cerebral cortex", "venue": "doi: 10.1038/nature17637. Natural", "year": 2016 }, { "authors": [ "Shailee Jain", "Alexander Huth" ], "title": "Incorporating context into language encoding models for fmri", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Nikita Kitaev", "Dan Klein" ], "title": "Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Quoc V. Le", "Tomas Mikolov" ], "title": "Distributed representations of sentences and documents", "venue": "CoRR, abs/1405.4053,", "year": 2014 }, { "authors": [ "William Matchin", "Gregory Hickok" ], "title": "The cortical organization of syntax", "venue": "Cerebral Cortex,", "year": 2020 }, { "authors": [ "T.M. Mitchell", "S.V. 
Shinkareva", "A. Carlson", "K.M. Chang", "V.L. Malave", "R.A. Mason", "M.A. Just" ], "title": "Predicting human brain activity associated with the meanings of nouns", "venue": null, "year": 2008 }, { "authors": [ "S. Nishimoto", "A.T. Vu", "T. Naselaris", "Y. Benjamini", "B. Yu", "J.L. Gallant" ], "title": "Reconstructing visual experiences from brain activity evoked by natural movies", "venue": "Current Biology,", "year": 2011 }, { "authors": [ "Liina Pylkkänen" ], "title": "Neural basis of basic composition: what we have learned from the red–boat studies and their extensions", "venue": "Philosophical Transactions of the Royal Society B,", "year": 2019 }, { "authors": [ "Brian Roark" ], "title": "Probabilistic top-down parsing and language modeling", "venue": "Computational linguistics,", "year": 2001 }, { "authors": [ "Brian Roark", "Asaf Bachrach", "Carlos Cardenas", "Christophe Pallier" ], "title": "Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing", "venue": "In Proceedings of the 2009 conference on empirical methods in natural language processing,", "year": 2009 }, { "authors": [ "Mariya Toneva", "Leila Wehbe" ], "title": "Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Leila Wehbe", "Brian Murphy", "Partha Talukdar", "Alona Fyshe", "Aaditya Ramdas", "Tom Mitchell" ], "title": "Simultaneously uncovering the patterns of brain regions involved in different story reading Subprocesses", "venue": "PloS one,", "year": 2014 }, { "authors": [ "Leila Wehbe", "Aaditya Ramdas", "Rebecca C Steorts", "Cosma Rohilla Shalizi" ], "title": "Regularized brain reading with shrinkage and smoothing", "venue": "The Annals of Applied Statistics,", "year": 1997 }, { "authors": [ "Roel M Willems", "Stefan L Frank", "Annabel D Nijhof", "Peter Hagoort", "Antal Van den Bosch" ], "title": "Prediction during natural language comprehension", "venue": "Cerebral Cortex,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neuroscientists have long been interested in how the brain processes syntax. To date, there is no consensus on which brain regions are involved in processing it. Classically, only a small number of regions in the left hemisphere were thought to be involved in language processing. More recently, the language system was proposed to involve a set of brain regions spanning the left and right hemisphere (Fedorenko & Thompson-Schill, 2014). Similarly, some findings show that syntax is constrained to specific brain regions (Grodzinsky & Friederici, 2006; Friederici, 2011), while other findings show syntax is distributed throughout the language system (Blank et al., 2016; Fedorenko et al., 2012; 2020).\nThe biological basis of syntax was first explored through studies of the impact of brain lesions on language comprehension or production (Grodzinsky, 2000) and later through non-invasive neuroimaging experiments that record brain activity while subjects perform language tasks, using methods such as functional Magnetic Resonance Imaging (fMRI) or electroencephalography (EEG). These experiments usually isolate syntactic processing by contrasting the activity between a difficult syntactic condition and an easier one and by identifying regions that increase in activity with syntactic effort (Friederici, 2011). An example of these conditions is reading a sentence with an object-relative clause (e.g. “The rat that the cat chased was tired\"), which is more taxing than reading a sentence with a subject-relative clause (e.g. “The cat that chased the rat was tired\"). In the past decade, this approach was extended to study syntactic processing in naturalistic settings such as when reading or listening to a story (Brennan et al., 2012; Hale et al., 2018; Willems et al., 2015). Because such complex material is not organized into conditions, neuroscientists have instead devised effort-based metrics capturing the word-by-word evolving syntactic demands required to understand the material. Brain regions with activity correlated with those metrics are suggested to be involved in processing syntax.\nWe use the term effort-based metrics to refer to uni-dimensional measures capturing word-by-word syntactic demands. A standard approach for constructing a syntactic effort-based metric is to assume\na sentence’s syntactic representation and estimate the number of syntactic operations performed at each word. Node Count is popular such metric. It relies on constituency trees (structures that capture the hierarchical grammatical relationship between the words in a sentence). While traversing the words of the sentence in order, subtrees of the constituency tree get completed; Node Count refers to the number of such subtrees that get completed at each word, effectively capturing syntactic load or effort. Brennan et al. (2012) use Node Count to support the theory that the Anterior Temporal Lobe (ATL) is involved in syntactic processing. Another example of an effort-based metric is given by an EEG study by Hale et al. (2018). They show that parser action count (the number of possible actions a parser can take at each word) is predictive of the P600, a positive peak in the brain’s electrical activity occurring around 600ms after word onset. The P600 is hypothesized to be driven by syntactic processing (to resolve incongruencies), and the results of Hale et al. 
(2018) align with this hypothesis.
Though effort-based metrics are a good proposal for capturing the effort involved in integrating a word into the syntactic structure of a sentence, they are not reflective of the entire syntactic information in play. Hence, these metrics cannot be used to study the brain representation of syntactic constructs such as nouns, verbs, relationships and dependencies between words, and the complex hierarchical structure of phrases and sentences.
Constituency trees and dependency trees are the two main structures that capture a sentence’s syntactic structure. Constituency trees are derived using phrase structure grammars that encode valid phrase and clause structure (see Figure 1(A) for an example). Dependency trees encode relations between pairs of words such as subject-verb relationships. We use representations derived from both types of trees. We derive word-level dependency (DEP) labels from dependency trees, and we focus on encoding the structural information given by constituency trees since we want to analyze whether the brain builds hierarchical representations of phrase structure. We characterize the syntactic structure inherent in sentence constituency trees by computing an evolving vector representation of the syntactic structure processed at each word using the subgraph embedding algorithm by Adhikari et al. (2018). We show that our syntactic structure embeddings – along with other simpler syntactic structure embeddings built using conventional syntactic features such as part-of-speech (POS) tags and DEP tags – are better than effort-based metrics at predicting the fMRI data of subjects reading text. This indicates that representations of syntax, and not just syntactic effort, can be observed in fMRI.
We also address the important question of whether regions that are predicted by syntactic features are selective for syntax, meaning they are only responsive to syntax and not to other language properties such as semantics. To answer this question, we model the semantic properties of words using a contextual word embedding space (Devlin et al., 2018). We find that regions that are predicted by syntactic features are also predicted by semantic features and thus are not selective for syntax.
Scientific questions We ask three main questions: • How can scientists construct syntactic structure embeddings that capture the syntactic structure inherent in phrases and sentences? • Are these embeddings better at predicting brain activity compared to effort-based metrics when used as inputs to encoding models? • Which brain regions are involved in processing complex syntactic structure, and are they different from regions involved in semantic processing?
Contributions We make four main contributions: • We propose a subgraph embeddings-based method to model the syntactic structure inherent in phrases and sentences. • We show that effort-based metrics can be complemented by syntactic structure embeddings, which predict brain activity to a larger extent than effort-based metrics. • Using our syntactic structure embeddings, we find some evidence supporting the hypothesis that the brain processes and represents complex syntactic information such as phrase and clause structure. • We find evidence supporting the existing hypothesis that syntactic processing appears to be distributed in the language network in regions that are not selective for syntax."
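Since Node Count figures centrally in what follows, here is a minimal sketch of one common reading of the metric (the number of constituents closed by each word), computed over an NLTK-style constituency tree. Whether POS preterminals count as closed constituents is a convention choice, and this is not the authors' released code:

```python
from nltk import Tree

def node_counts(tree: Tree):
    # Node Count per word: the number of subtrees (constituents) whose
    # rightmost leaf is that word, i.e. that are "completed" by reading it.
    n_leaves = len(tree.leaves())
    counts = [0] * n_leaves
    pos = 0  # index of the next leaf in left-to-right order

    def walk(t):
        nonlocal pos
        if isinstance(t, str):          # a leaf: consume one word
            pos += 1
            return pos - 1              # index of this subtree's last leaf
        last = None
        for child in t:
            last = walk(child)
        counts[last] += 1               # this constituent closes at `last`
        return last

    walk(tree)
    return counts

t = Tree.fromstring("(S (NP (DT The) (NN cat)) (VP (VBD slept)))")
print(list(zip(t.leaves(), node_counts(t))))
# [('The', 1), ('cat', 2), ('slept', 3)]: DT closes at 'The';
# NN and NP close at 'cat'; VBD, VP and S close at 'slept'.
```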
}, { "heading": "2 METHODS", "text": "We first describe the syntactic features used in this study and their generation. All of the features we use are incremental i.e. they are computed per word. We then describe our fMRI data analyses.\nEffort-based metrics We use four effort-based metrics in our analyses - Node Count, Syntactic Surprisal, word frequency and word length. Node Count is an effort-based metric popular in neuroscience. To compute it, we obtain the constituency tree of each sentence using the self-attentive encoder-based constituency parser by Kitaev & Klein (2018). We compute Node Count for each word as the number of subtrees that are completed by incorporating this word into its sentence. Syntactic Surprisal is another effort-based metric proposed by Roark et al. (2009) and is computed using an incremental top down parser (Roark, 2001). Both of these metrics aim to measure the amount of effort that is required to integrate a word into the syntactic structure of its sentence. The word frequency metric is computed using the wordfreq package (Speer et al., 2018) as the Zipf frequency of a word. This is the base-10 logarithm of the number of occurrences per billion of a given word in a large text corpus. Finally, word length is the number of characters in the presented word. The last two metrics approximate the amount of effort that is required to read a word.\nConstituency tree-based Graph Embeddings (ConTreGE) Constituency trees are a rich source of syntactic information. We build three representations of these trees that encode this information:\n(a) The largest subtree which is completed upon incorporating a word into a sentence (see figure 1(B)) is representative of the implicit syntactic information given by the word. Given that Node Count reduces all of the information present in these subtrees to just one number, it is easy to see that it cannot effectively capture this information. POS tags (categorize words into nouns, verbs, adjectives, etc.) also capture some of the information present in these trees as they encode phrase structure to a certain extent. But, they are incapable of completely encoding their hierarchical structure and the parsing decisions which are made while generating them. In order to better encode their structure, we first build subgraph embeddings of these completed subtrees called ConTreGE Comp vectors.\n(b) We hypothesize that the brain not only processes structure seen thus far but also predicts future structure from structure it already knows. To test this, we construct embeddings, simply called ConTreGE vectors, using incomplete subtrees that are constructed by retaining all the phrase structure grammar productions that are required to derive the words seen till now, thereby allowing us to capture future sentence structure (in the form of future constituents) before the full sentence is read (see figure 1 (C)). These subtrees contain leaves that are non-terminal symbols unlike complete subtrees that only have terminal symbols (words and punctuation) as leaves. In this context, a non-terminal symbol is a symbol that can be derived further using some rule in the phrase structure grammar (ex. NP, VP, etc.). If incomplete subtrees are more representative of the brain’s processes, it would mean that the brain expects certain phrase structures even before the entire phrase or sentence is read. ConTreGE Comp and ConTreGE vectors need to be built using accurate constituency trees constructed using the whole sentence. 
Thus, we reuse the trees generated to compute Node Count to build them.
(c) Further, the brain could be computing several possible top-down partial parses that can derive the words seen thus far (see Figures 1(D) and (E)) and modifying the list of possible parses as future words are read. To test this hypothesis, we designed Incremental ConTreGE (InConTreGE) vectors that are representative of the most probable parses so far. For a given word, its InConTreGE vector is computed as $v = \sum_{i=1}^{5} e^{-s_i} W_i$, where $W_i$ is the subgraph embedding of a partial parse tree built by an incremental top-down parser (Roark, 2001) after reading the word, and $s_i$ is the score assigned to this partial parse, which is inversely proportional to the parser’s confidence in this tree.
To effectively capture the structure of all subtrees, we encode them using the subgraph embeddings proposed by Adhikari et al. (2018), which preserve the neighbourhood properties of subgraphs. A long fixed-length random walk on a subgraph is generated to compute its embedding. Since consecutive nodes in a random walk are neighbours, a long walk can effectively inform us about the neighbourhoods of nodes in the subgraph. Each node in a walk is identified using its unique ID. So, a random walk can be interpreted as a “paragraph" where the words are the node IDs. Finally, the subgraph’s embedding is computed as the Paragraph Vector (Le & Mikolov, 2014) of this paragraph, which is representative of the subgraph’s structure. Note that all of the subtrees of a given type (complete, incomplete or partial parse) are encoded together. This ensures that all ConTreGE Comp vectors, all ConTreGE vectors and all InConTreGE vectors are each in their own space.
Figure 2 illustrates the subtree encoding process. First, every unique non-terminal in the subtrees is mapped to a unique number (e.g., S is mapped to 1, NP is mapped to 2, etc.) and every terminal is mapped to a unique number that is representative of the order in which they were presented (the first presented token is mapped to 10000, the second token is mapped to 10001 and so on). We did not map each unique terminal to a unique number (for instance, we did not map all instances of "Harry" to one number) because a random walk through the tree could give us word co-occurrence information and thus lead to the inclusion of some semantic information in the vectors.
Every tree node’s label is then replaced by the number it was mapped to in the previous step. The edge lists of these subtrees are supplied to the subgraph embedding generation algorithm to finally obtain 15-dimensional vectors for every presented word. The length of the random walks is set to 100000 and we use an extension of the Distributed Bag of Nodes (DBON) model proposed by Le & Mikolov (2014) for generating Paragraph Vectors, called Sub2Vec-DBON, by Adhikari et al. (2018). The length of the sliding window is set to 5 and the model is trained for 20 epochs. Since ConTreGE Comp, ConTreGE and InConTreGE encode information about the neighbourhoods of all nodes in the constituency trees, they can capture their hierarchical structure. Thus, brain regions predicted by these vectors are likely to be involved in building and encoding hierarchical sentence structure.
Punctuation We create one-hot binary vectors indicating the type of punctuation that was presented along with a word (e.g. "." or ","). For example, a sentence might have ended with "Malfoy.". In this punctuation-based feature space, the column corresponding to "."
will be set to 1 for this word. While punctuation is seldom considered a syntactic feature, sentence boundaries are highly correlated with changes in working memory load. These changes are bound to be a great source of variability in the fMRI signal (as we will observe later). Failing to account for sentence boundaries and working memory might be a source of confounding that has been ignored in the literature.
Part-of-speech tags and dependency tags We use two standard word-level syntactic features: POS and DEP tags. The POS tag of a word is read off previously generated constituency trees. The DEP tag of a word (e.g., subject, object, etc.) corresponds to its assigned role in the dependency trees of the presented sentences, which were generated using the spaCy English dependency parser (2). We create one-hot binary vectors indicating the POS tag and the DEP tag of each word and concatenate them to create one feature space which we refer to as simple syntactic structure embeddings.
Semantic features We adapt the vectors obtained from layer 12 of a pretrained (1) cased BERT-large model (Devlin et al., 2018) to identify regions that process semantics. We use layer 12 because of previous work showing that middle layers of sentence encoders are optimal for predicting brain activity (Jain & Huth, 2018; Toneva & Wehbe, 2019). We obtain the contextual embeddings for a word by running the pretrained model only on the words seen thus far, preventing the inclusion of future semantic information. Since a presented word can be broken up into multiple subtokens, we compute its embedding as the average of the subtokens’ embeddings. Using principal component analysis (PCA), we reduce their dimensionality to 15 to match the ConTreGE vectors’ dimensionality.
fMRI data We use the fMRI data of 9 subjects reading chapter 9 of Harry Potter and the Sorcerer’s Stone (Rowling, 2012), collected and made available by Wehbe et al. (2014). Words are presented one at a time at a rate of 0.5s each. All the brain plots shown here are averages over the 9 subjects in the Montreal Neurological Institute (MNI) space. Preprocessing details are in Appendix B.
Predicting brain activity The applicability of a given syntactic feature in studying syntactic processing is determined by its efficacy in predicting the brain data described above. Ridge regression is used to perform these predictions, and the coefficient of determination (R2 score) of the predictions measures the feature’s efficacy. For each voxel of each subject, the regularization parameter is chosen independently. We use Ridge regression because of its computational efficiency and because of the Wehbe et al. (2015) results showing that with such fMRI data, as long as the regularization parameter is chosen by cross-validation for each voxel independently, different regularization techniques lead to similar results. Indeed, Ridge regression is a common regularization technique used for predictive fMRI models (Mitchell et al., 2008; Nishimoto et al., 2011; Wehbe et al., 2014; Huth et al., 2016).
For every voxel, a model is fit to predict the signals Y = [y1, y2, . . . , yn] recorded in that voxel, where n is the number of time points (TR, or repetition time). The words are first grouped by the TR in which they were presented. Then, the features of words in every group are summed to form a sequence of features X = [x1, x2, . . . , xn] aligned with the brain signals.
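As a rough illustration of this alignment step, here is a minimal sketch that sums word-level feature vectors within each TR; the variable names and the TR length are illustrative assumptions, not the study's preprocessing code:

```python
import numpy as np

def features_per_tr(word_feats, word_onsets, tr=2.0, n_trs=None):
    # Sum word-level feature vectors within each TR so the feature
    # sequence X aligns with the fMRI time series Y.
    # word_feats : (n_words, d) per-word feature vectors
    # word_onsets: (n_words,) onset time of each word in seconds
    word_feats = np.asarray(word_feats, dtype=float)
    word_onsets = np.asarray(word_onsets, dtype=float)
    if n_trs is None:
        n_trs = int(word_onsets.max() // tr) + 1
    X = np.zeros((n_trs, word_feats.shape[1]))
    bins = (word_onsets // tr).astype(int)  # TR index of each word
    np.add.at(X, bins, word_feats)          # unbuffered per-TR summation
    return X
```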
The response measured by fMRI is an indirect consequence of brain activity that peaks about 6 seconds after stimulus onset. A common solution to account for this delay is to express brain activity as a function of the features of the preceding time points (Nishimoto et al., 2011; Wehbe et al., 2014; Huth et al., 2016). Thus, we train our models to predict any yi using xi−1, xi−2, xi−3 and xi−4.
We test the models in a cross-validation loop: the data is first split into 4 contiguous and equal-sized folds. Each model uses three folds of the data for training and one fold for evaluation. We remove from the training folds the data from the 5 TRs that either precede or follow the test fold. This is done to avoid any unintentional data leaks since consecutive yi are correlated with each other because of the lag and continuous nature of the fMRI signal. The brain signals and the word features which comprise the training and testing data for each model are individually Z-scored. After training, we obtain the predictions for the validation fold. The predictions for all folds are concatenated (to form a prediction for the entire experiment in which each time point is predicted from a model trained without the data for that time point). Note that since all 3 ConTreGE vectors are stochastic, we construct them 5 times each, and learn a different model each time. The predictions of the 5 models are averaged together into a single prediction. The R2 score is computed for every voxel using the predictions and the real signals.
We run a permutation test to test if R2 scores are significantly higher than chance. We permute blocks of contiguous fMRI TRs, instead of individual TRs, to account for the slowness of the underlying hemodynamic response. We choose a common value of 10 TRs (Deniz et al., 2019). The predictions are permuted within each fold 5000 times, and the resulting R2 scores are used as an empirical distribution of chance performance, from which the p-value of the unpermuted performance is estimated. We also run a bootstrap test to test if a model has a higher R2 score than another. The difference is that in each iteration, we permute (using the same indices) the predictions of both models and compute the difference of their R2 and use the resulting distribution to estimate the p-value of the unpermuted difference. Finally, the Benjamini-Hochberg False Discovery Rate correction (Benjamini & Hochberg, 1995) is used for all tests (appropriate because fMRI data is considered to have positive dependence (Genovese, 2000)). The correction is performed by grouping together all the voxel-level p-values (i.e. across all subjects and feature groups) and choosing one threshold for all of our results. The correction is done in this way since we test multiple prediction models across multiple voxels and subjects. To compute Region of Interest (ROI) statistics, left-hemisphere ROI masks for the language system, derived from a “sentence vs. non-word" fMRI contrast (Fedorenko et al., 2010), are obtained from (3) and mirrored to produce the right-hemisphere ROIs." }, { "heading": "3 RESULTS", "text": "Figures 3 and 4 summarize our results (Appendix A has the raw prediction results). Many of our features have overlapping information. POS tags include punctuation, BERT vectors have been shown to encode syntactic information (Hewitt & Manning, 2019), and ConTreGE vectors, built from constituency trees, encode some POS tag information.
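The block permutation test described in the Methods above can be sketched as follows for a single voxel, using the stated 10-TR blocks and 5000 permutations; this is a minimal illustration, not the authors' released code:

```python
import numpy as np
from sklearn.metrics import r2_score

def block_permutation_pvalue(y_true, y_pred, block=10, n_perm=5000, seed=0):
    # Permute predictions in contiguous blocks of `block` TRs (to respect
    # the slow hemodynamic response) and estimate the p-value of the
    # observed R^2 against the resulting chance distribution.
    rng = np.random.default_rng(seed)
    n = len(y_true) // block * block       # trim to a whole number of blocks
    obs = r2_score(y_true[:n], y_pred[:n])
    blocks = y_pred[:n].reshape(-1, block)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = blocks[rng.permutation(len(blocks))].reshape(-1)
        null[i] = r2_score(y_true[:n], perm)
    return (np.sum(null >= obs) + 1) / (n_perm + 1)
```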
To detect brain regions sensitive to the distinct information given by a feature space, we build hierarchical feature groups in increasing order of syntactic information and test for significant differences in performance between two consecutive groups. We start with the simplest feature, punctuation, and then add more complex features in order: the effort-based metrics, POS and DEP tags, one of the ConTreGE vectors, and the vectors derived from BERT (which can be thought of as a super-set of semantics and syntax). At each step, we test if the introduction of the new feature space leads to a significantly larger-than-chance improvement in R2.
Syntactic structure embeddings are more predictive of brain activity than effort-based metrics Figures 3 (b)-(e) show that there are a small number of voxels that are predicted by the effort-based metrics when taken in isolation. Figures 3 (f)-(i) indicate that although the information provided by the effort metrics combined is predictive of brain activity to some degree (when controlling for punctuation), there is still a considerable amount of structural information contained in the POS and DEP tags and in ConTreGE that predicts additional portions of the activity. These results are made even clearer by Figure 4. Many voxels show a significant increase in R2 scores (above what is predicted by the effort metrics) after including POS and DEP tags and ConTreGE. We also notice that ConTreGE Comp is not as predictive as ConTreGE, hinting that future syntactic information helps in predicting current brain activity. Additionally, InConTreGE is not as predictive as ConTreGE, suggesting that the top-down parser might be generating partial parses that are not reflective of brain representations.
ConTreGE results suggest that complex syntactic information is encoded in the brain In this section we analyze the information in ConTreGE to interpret its brain prediction performance. We estimate how much of the constituency tree is captured by each feature by using it to predict the level-N ancestor of a word (in its constituency tree). We vary N from 2 to 9 and train a logistic regression model for each N (a minimal sketch of this probe is given after the main text). Since POS tags are the level-1 ancestors of words, we start the analysis at N=2. Because there are many phrase labels, we group them into 7 larger buckets: noun phrases, verb phrases, adverb phrases, adjective phrases, prepositional phrases, clauses and other miscellaneous labels. Also, if a word’s depth in its tree is less than N, the root is considered its level-N ancestor.
Table 1 shows the results of this analysis. We use the constituency trees generated by the Kitaev & Klein (2018) parser. Given the skewed label distribution, the optimal strategy for a predictor that takes random noise as input is to always output the majority-class ancestor at that level. Chance performance is thus equal to the frequency of the majority label. The effort-based metrics are not as predictive as ConTreGE at any level. POS and DEP tags are predictive of labels at all levels and produce the highest accuracies for lower levels. The InConTreGE vectors are not as predictive as ConTreGE or ConTreGE Comp, hinting that the top-down parser might not be very accurate. ConTreGE is the best predictor of higher-level ancestors but ConTreGE Comp is better than ConTreGE at predicting lower-level ancestors.
This may be because graph embeddings of a tree tend to capture more of the information near the tree’s root (a random walk through a somewhat balanced tree is likely to contain more occurrences of nodes near the root). ConTreGE Comp vectors, created from shallow complete trees, likely over-represent lower-level ancestors while ConTreGE vectors, created from relatively deeper trees, likely over-represent higher-level ancestors. Given that ConTreGE is predictive of brain activity and contains information about the higher-level ancestors of a word, this suggests that the brain represents complex hierarchical syntactic information such as phrase and clause structure.
Syntax and semantics are processed in a distributed way in overlapping regions across the language system Our results indicate that syntactic and semantic information are processed in a distributed fashion across the language network. Most of the regions in the language system are better predicted by the BERT embeddings after controlling for all our other feature spaces, and these regions overlap with the regions that are predicted by the syntactic feature spaces. While the BERT embeddings include both semantic and syntactic information, it is likely that the semantic information is at least partially predictive of the brain activity, given that we have already controlled for a lot of syntactic information." }, { "heading": "4 DISCUSSION AND RELATED WORK", "text": "Syntactic representations Apart from Brennan et al. (2012) and Hale et al. (2018), many others (Brennan et al., 2016; Henderson et al., 2016; Frank et al., 2015; Boston et al., 2008; Willems et al., 2015) use effort-based metrics to study syntactic processing during natural reading or listening. However, a few studies do explicitly encode syntactic structure: Wehbe et al. (2014) find that POS and DEP tags are the most predictive out of a set of word, sentence and discourse-level features. Moving away from popular approaches that are dependent on effort-based metrics, we extended the work of Wehbe et al. (2014) by developing a novel graph embeddings-based approach to explicitly capture the syntactic information provided by constituency trees. Our results showed that these explicit features contain substantially more information that is predictive of brain activity than effort-based metrics. Given these results, we believe that future work in this area should supplement effort-based metrics with features that explicitly encode syntactic structure.
Syntax in the brain Traditionally, studies have associated a small number of brain regions, usually in the left hemisphere, with syntactic processing. These include parts of the inferior frontal gyrus (IFG), ATL and Posterior Temporal Lobe (PTL) (Grodzinsky & Friederici, 2006; Friederici, 2011; Friederici et al., 2003; Matchin & Hickok, 2020). However, some works point to syntactic processing being distributed across the language system. Blank et al. (2016) show that activity in most of the language system is significantly greater when reading hard-to-parse sentences than easier phrases. Wehbe et al. (2014) use POS and DEP tags to arrive at similar conclusions.
Previous work generally did not use naturalistic stimuli to study syntax. Instead, subjects are usually presented with sentences or even short phrases that have subtle syntactic variations or violations. Regions with activity well correlated with the presentation of such variations/violations are thought to process syntax (Friederici, 2011).
Observations from such studies have limited scope since these variations often cannot be representative of the wide range of variations seen in natural language. This is possibly why such studies report specific regions: it might be that the reported region is particularly sensitive to the exact conditions used. By using one type of stimulus which evokes only one aspect of syntactic processing, syntax might appear more localized than it really is. Our results support the hypothesis that it is instead processed in a distributed fashion across the language system. We believe that our results have a wider applicability since we use naturalistic stimuli, and we leave for future work the study of whether different syntactic computations are delegated to different regions.
Some studies have also doubted the importance of syntactic composition for the brain. Pylkkänen (2020) proposes that there is no conclusive evidence to indicate that the brain puts a lot of weight on syntactic composition, and that even though studies (some with effort-based metrics) have associated certain regions like the left ATL with syntactic processing, numerous studies have later shown that the left ATL might instead be involved in a more conceptually driven process. Gauthier & Levy (2019) showed that BERT embeddings which were fine-tuned on tasks that removed dependency tree-based syntactic information were more reflective of brain activity than those which contained this information. In contrast, our work uses purely syntactic embeddings to show that we can indeed significantly predict many regions of the language system. We attribute these differences in conclusions to our naturalistic stimuli and word-by-word evolving representations of syntax. The conclusions of Pylkkänen (2020) are mostly based on studies that present a phrase with just two words (like "red boat"). Gauthier & Levy (2019) use data averaged over entire sentences instead of modeling word-by-word comprehension. Since the syntactic structure of a sentence evolves with every word that is read, this approach is not necessarily adept at capturing such information.
Furthermore, our analysis of the syntactic information contained in various features highlighted that our ConTreGE vectors are good at encoding complex phrase or clause-level syntactic information whereas POS and DEP tags are good at encoding local word-level syntactic information. Several regions of the brain’s language system were predicted by ConTreGE, hinting that the brain does indeed encode complex syntactic information. Another potentially interesting observation is that including ConTreGE increases prediction performance in the PTL and IFG more than including POS and DEP tags does (Figure 4), but not in the ATL and the Angular Gyrus (AG). These observations very loosely support the theory by Matchin & Hickok (2020): that parts of the PTL are involved in hierarchical lexical-syntactic structure building, the ATL is a knowledge store of entities and the AG is a store of thematic relations between entities. This is because ConTreGE encodes hierarchical syntactic information and word-level POS and DEP tags are very indicative of the presence of various entities (various types of nouns) and the thematic relations between entities (verbs associated with noun pairs). This hypothesis should be tested more formally in future work.
We also observe that ConTreGE is more predictive than ConTreGE Comp and InConTreGE, with the latter two being very weakly predictive.
Thus, future syntactic information appears to be very useful while predicting BOLD signals, indicating that the brain anticipates the eventual sentence structure while reading, and does so more accurately than an incremental top-down parser.
Syntactic vs. semantic processing in the brain Finally, our results support the theory that syntax processing is distributed throughout the language network in regions that also process semantics. This theory is supported by other studies (Fedorenko et al., 2012; Blank et al., 2016; Fedorenko et al., 2020). However, Friederici et al. (2003) among others argue that syntax and semantics are processed in specific and distinct regions by localizing the effects of semantic and syntactic violations. Again, these differences might be due to the specialized stimuli and high statistical thresholds that only reject the null hypotheses in the regions with the strongest effect size, thereby precisely identifying small regions. A less conservative threshold might have revealed a more distributed pattern without leading to type I errors." } ]
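Below is a minimal sketch of the level-N ancestor probe referenced in the Results above; the classifier settings and cross-validation scheme are assumptions made for illustration, not the authors' released analysis code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ancestor_probe(features, ancestor_labels):
    # Probe how much constituency-tree information a feature space holds:
    # predict each word's bucketed level-N ancestor label (NP, VP, ...,
    # clause, other) from its feature vector. Chance performance equals
    # the frequency of the majority label.
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, features, ancestor_labels, cv=4).mean()
    _, counts = np.unique(ancestor_labels, return_counts=True)
    chance = counts.max() / counts.sum()
    return acc, chance
```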
2,020
null
SP:7327dc440b5c193c1dda156276860f89594721fa
[ "This paper explores the problem of generalizing to novel combinations of verbs and nouns in a task for captioning video stills from videos about cooking. The paper introduces a new dataset based off of EPIC-Kitchens (Damen et al. 2018) which masks out verbs and nouns and splits the evaluation data into seen combinations of verb/noun pairs and unseen combinations of verb/noun pairs, challenging a model to generate captions for pairs which were not seen during training. " ]
Children acquire language subconsciously by observing the surrounding world and listening to descriptions. They can discover the meaning of words even without explicit language knowledge, and generalize to novel compositions effortlessly. In this paper, we bring this ability to AI, by studying the task of multimodal compositional generalization within the context of visually grounded language acquisition. We propose a multimodal transformer model augmented with a novel mechanism for analogical reasoning, which approximates novel compositions by learning semantic mapping and reasoning operations from previously seen compositions. Our proposed method, Analogical Reasoning Transformer Networks (ARTNET), is trained on raw multimedia data (video frames and transcripts), and after observing a set of compositions such as “washing apple” or “cutting carrot”, it can generalize and recognize new compositions in new video frames, such as “washing carrot” or “cutting apple”. To this end, ARTNET refers to relevant instances in the training data and uses their visual features and captions to establish analogies with the query image. Then it chooses a suitable verb and noun to create a new composition that describes the new image best. Extensive experiments on an instructional video dataset demonstrate that the proposed method achieves significantly better generalization capability and recognition accuracy compared to state-of-the-art transformer models.
[]
[ { "authors": [ "Chris Baber" ], "title": "Designing smart objects to support affording situations: Exploiting affordance through an understanding of forms of engagement", "venue": "Frontiers in psychology,", "year": 2018 }, { "authors": [ "Fabien Baradel", "Natalia Neverova", "Christian Wolf", "Julien Mille", "Greg Mori" ], "title": "Object level visual reasoning in videos", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Marco Baroni", "Roberto Zamparelli" ], "title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space", "venue": "In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing,", "year": 2010 }, { "authors": [ "Joost Bastings", "Marco Baroni", "Jason Weston", "Kyunghyun Cho", "Douwe Kiela" ], "title": "Jump to better conclusions: Scan both left and right", "venue": "arXiv preprint arXiv:1809.04640,", "year": 2018 }, { "authors": [ "Michael B Chang", "Abhishek Gupta", "Sergey Levine", "Thomas L Griffiths" ], "title": "Automatically composing representation transformations as a means for generalization", "venue": "arXiv preprint arXiv:1807.04640,", "year": 2018 }, { "authors": [ "Ciprian Chelba", "Tomas Mikolov", "Mike Schuster", "Qi Ge", "Thorsten Brants", "Phillipp Koehn", "Tony Robinson" ], "title": "One billion word benchmark for measuring progress in statistical language modeling", "venue": "arXiv preprint arXiv:1312.3005,", "year": 2013 }, { "authors": [ "Yen-Chun Chen", "Linjie Li", "Licheng Yu", "Ahmed El Kholy", "Faisal Ahmed", "Zhe Gan", "Yu Cheng", "Jingjing Liu" ], "title": "Uniter: Learning universal image-text representations", "venue": null, "year": 1909 }, { "authors": [ "Dima Damen", "Hazel Doughty", "Giovanni Maria Farinella", "Sanja Fidler", "Antonino Furnari", "Evangelos Kazakos", "Davide Moltisanti", "Jonathan Munro", "Toby Perrett", "Will Price", "Michael Wray" ], "title": "Scaling egocentric vision: The epic-kitchens dataset", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Dedre Gentner", "Linsey Smith" ], "title": "Analogical reasoning", "venue": "Encyclopedia of human behavior,", "year": 2012 }, { "authors": [ "James J Gibson" ], "title": "The ecological approach to visual perception: classic edition", "venue": null, "year": 2014 }, { "authors": [ "Lieve Hamers" ], "title": "Similarity measures in scientometric research: The jaccard index versus salton’s cosine formula", "venue": "Information Processing and Management,", "year": 1989 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Xisen Jin", "Junyi Du", "Xiang Ren" ], "title": "Visually grounded continual learning of compositional semantics", "venue": "arXiv preprint arXiv:2005.00785,", "year": 2020 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens Van Der Maaten", "Judy Hoffman", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Inferring and executing programs for visual reasoning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Keizo Kato", "Yin Li", "Abhinav Gupta" ], "title": "Compositional learning for human object interaction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Daniel Keysers", "Nathanael Schärli", "Nathan Scales", "Hylke Buisman", "Daniel Furrer", "Sergii Kashubin", "Nikola Momchev", "Danila Sinopalnikov", "Lukasz Stafiniak", "Tibor Tihon" ], "title": "Measuring compositional generalization: A comprehensive method on realistic data", "venue": "arXiv preprint arXiv:1912.09713,", "year": 2019 }, { "authors": [ "Douwe Kiela", "Alexis Conneau", "Allan Jabri", "Maximilian Nickel" ], "title": "Learning visually grounded sentence representations", "venue": "arXiv preprint arXiv:1707.06320,", "year": 2017 }, { "authors": [ "Satwik Kottur", "Ramakrishna Vedantam", "José MF Moura", "Devi Parikh" ], "title": "Visual word2vec (visw2v): Learning visually grounded word embeddings using abstract scenes", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Brenden M Lake" ], "title": "Compositional generalization through meta sequence-to-sequence learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Brenden M Lake", "Marco Baroni" ], "title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", "venue": "arXiv preprint arXiv:1711.00350,", "year": 2017 }, { "authors": [ "Brenden M. 
Lake", "Marco Baroni" ], "title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Yuanpeng Li", "Liang Zhao", "Jianyu Wang", "Joel Hestness" ], "title": "Compositional generalization for primitive substitutions", "venue": "arXiv preprint arXiv:1910.02612,", "year": 2019 }, { "authors": [ "Jeannette Littlemore" ], "title": "The relationship between associative thinking, analogical reasoning, image formation and", "venue": "Confronting metaphor in use: An applied linguistic approach,", "year": 2008 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "arXiv preprint arXiv:1711.05101,", "year": 2017 }, { "authors": [ "Joao Loula", "Marco Baroni", "Brenden M Lake" ], "title": "Rearranging the familiar: Testing compositional generalization in recurrent networks", "venue": "arXiv preprint arXiv:1807.07545,", "year": 2018 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gary F Marcus" ], "title": "Rethinking eliminative connectionism", "venue": "Cognitive psychology,", "year": 1998 }, { "authors": [ "Cynthia Matuszek" ], "title": "Grounded language learning: Where robotics and nlp meet", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Jeff Mitchell", "Mirella Lapata" ], "title": "Vector-based models of semantic composition", "venue": "In proceedings of ACL-08: HLT, pp", "year": 2008 }, { "authors": [ "Mitja Nikolaus", "Mostafa Abdou", "Matthew Lamm", "Rahul Aralikatte", "Desmond Elliott" ], "title": "Compositional generalization in image captioning", "venue": "arXiv preprint arXiv:1909.04402,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. 
In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Candace Ross", "Andrei Barbu", "Yevgeni Berzak", "Battushig Myanganbayar", "Boris Katz" ], "title": "Grounding language acquisition by training semantic parsers using captioned videos", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Jake Russin", "Jason Jo", "Randall C O’Reilly", "Yoshua Bengio" ], "title": "Compositional generalization in a deep seq2seq model by separating syntax and semantics", "venue": null, "year": 1904 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Dı́dac Surı́s", "Dave Epstein", "Heng Ji", "Shih-Fu Chang", "Carl Vondrick" ], "title": "Learning to learn words from narrated video", "venue": null, "year": 1911 }, { "authors": [ "Dı́dac Surı́s", "Dave Epstein", "Heng Ji", "Shih-Fu Chang", "Carl. Vondrick" ], "title": "Learning to learn words from visual scenes", "venue": null, "year": 1911 }, { "authors": [ "Andrew Trask", "Felix Hill", "Scott E Reed", "Jack Rae", "Chris Dyer", "Phil Blunsom" ], "title": "Neural arithmetic logic units", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xenia Vamvakoussi" ], "title": "The use of analogies in mathematics instruction: Affordances and challenges", "venue": "In Cognitive Foundations for Improving Mathematical Learning,", "year": 2019 }, { "authors": [ "Stella Vosniadou", "Andrew Ortony" ], "title": "Similarity and analogical reasoning", "venue": null, "year": 1989 }, { "authors": [ "Kilian Q Weinberger", "John Blitzer", "Lawrence K Saul" ], "title": "Distance metric learning for large margin nearest neighbor classification", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Haonan Yu", "Jeffrey Mark Siskind" ], "title": "Grounded language learning from video described with sentences", "venue": "In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2013 }, { "authors": [ "Chunting Zhou", "Chonglin Sun", "Zhiyuan Liu", "Francis Lau" ], "title": "A c-lstm neural network for text classification", "venue": "arXiv preprint arXiv:1511.08630,", "year": 2015 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Rich Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "AdamW optimizer Loshchilov", "Hutter" ], "title": "2017) with the learning rate 3e-5. For AdamW optimizer Loshchilov & Hutter (2017), we set the update coefficients, for averages of gradient and its square (β1, β2), and on denominator", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "Children acquire language subconsciously by observing the surrounding world and listening to descriptions. They can discover the meaning of words even without explicit language knowledge, and generalize to novel compositions effortlessly. In this paper, we bring this ability to AI, by studying the task of multimodal compositional generalization within the context of visually grounded language acquisition. We propose a multimodal transformer model augmented with a novel mechanism for analogical reasoning, which approximates novel compositions by learning semantic mapping and reasoning operations from previously seen compositions. Our proposed method, Analogical Reasoning Transformer Networks (ARTNET), is trained on raw multimedia data (video frames and transcripts), and after observing a set of compositions such as “washing apple” or “cutting carrot”, it can generalize and recognize new compositions in new video frames, such as “washing carrot” or “cutting apple”. To this end, ARTNET refers to relevant instances in the training data and uses their visual features and captions to establish analogies with the query image. Then it chooses a suitable verb and noun to create a new composition that describes the new image best. Extensive experiments on an instructional video dataset demonstrate that the proposed method achieves significantly better generalization capability and recognition accuracy compared to state-of-the-art transformer models." }, { "heading": "1 INTRODUCTION", "text": "Visually grounded Language Acquisition (VLA) is an innate ability of the human brain. It refers to the way children learn their native language from scratch, through exploration, observation, and listening (i.e., self-supervision), and without taking language training lessons (i.e., explicit supervision). 2-year-old children are able to quickly learn semantics of phrases and their constituent words after repeatedly hearing phrases like “washing apple”, or “cutting carrot” and observing such situations. More interestingly, they will also understand new compositions such as “washing carrot” or “cutting apple”, even before experiencing them. This ability of human cognition is called compositional generalization (Montague (1970); Minsky (1988); Lake et al. (2017)). It helps humans use a limited set of known components (vocabulary) to understand and produce unlimited new compositions (e.g. verb-noun, adjective-noun, or adverb-verb compositions). This is also one of the long-term goals of Artificial Intelligence (AI), e.g. in robotics, where it enables the robot to learn new instructions that they have never heard before.\nNevertheless, contemporary machine intelligence needs to overcome several major challenges of the task. On one hand, learning compositional generalization can be difficult without using datahungry models. The power of existing language models mainly rely on large-scale language corpora (Lake & Baroni (2017); Pennington et al. (2014); Devlin et al. (2018)). They are still inadequate at compositional generalization (Marcus (1998); Lake & Baroni (2018); Surı́s et al. (2019)). Their goal is to recognize training examples rather than focusing on what is missing from training data. On the other hand, the designed model should close the paradigmatic gap (Nikolaus et al. (2019)) between seen compositions and new compositions. 
For instance, given seen verb-noun compositions "1A" and "2B" (the digit indicates the verb, the letter indicates the noun), the model should be able to link seen compositions to new compositions (like "1B" or "2A") in completely new cases.
Different from previous work (Johnson et al. (2017); Baradel et al. (2018); Santoro et al. (2017)), we bring the power of compositional generalization to state-of-the-art language models by incorporating Analogical Reasoning (AR) (Gentner & Smith (2012); Littlemore (2008); Vosniadou & Ortony (1989)). An analogy is a comparison between similar concepts or situations, and AR is analogical semantic reasoning that relies upon an analogy. The human brain spontaneously engages in AR to make sense of unfamiliar situations in everyday life (Vamvakoussi (2019)). Inspired by the AR process in the human brain, we design its counterpart for machine language acquisition. To this end, we create a language model that generates appropriate novel compositions from relevant seen compositions, by forming analogies and learning appropriate arithmetic operations to express the new compositions (e.g. "washing carrot" = "washing apple" + "cutting carrot" - "cutting apple"). We describe this process in three steps: association, reasoning, and inference, as shown in Figure 1.
Given an image (a video frame in our case) and a narrative sentence describing it, we mask the main verb-noun composition from the sentence, and ask the model to guess the correct composition that completes the sentence, considering the provided image. To this end, we propose a novel self-supervised and reasoning-augmented framework, Analogical Reasoning Transformer Networks (ARTNET). ARTNET adopts a multimodal transformer (similar to ViLBERT (Lu et al. (2019))) as its backbone to represent visual-textual data in a common space. Then it builds three novel modules on top of the backbone that correspond to the aforementioned AR steps: association, reasoning, and inference. First, we design the Analogical Memory Module (AMM), which discovers analogical exemplars for a given query scenario from a reference pool of observed samples. Second, we propose the Analogical Reasoning Networks (ARN), which take the retrieved samples as input, select candidate analogy pairs from the relevant reference samples, and learn proper reasoning operations over the selected analogy pairs, resulting in an analogy context vector. Third, we devise a Conditioned Composition Engine (CCE), which combines the analogy context vector with the representations of the query sample to predict the masked words and complete the target sentence with a novel composition.
We show how ARTNET generalizes to new compositions and excels in visually grounded language acquisition through experiments with various evaluations: novel composition prediction, assessment of affordance, and sensitivity to data scarcity. The results on the ego-centric video dataset (EPIC-Kitchens) demonstrate the effectiveness of the proposed solution in various aspects: accuracy, capability, robustness, etc.
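To make the analogy arithmetic above concrete, here is a minimal sketch (not the actual ARTNET implementation) showing why such arithmetic is exact when a composition is represented by concatenating the embeddings of its constituents; the toy vocabulary, random embeddings, and dimensionality are illustrative assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["washing", "cutting", "apple", "carrot"]
emb = {w: rng.normal(size=16) for w in vocab}   # toy, randomly initialized embeddings

def compose(verb, noun):
    # Represent a composition by concatenating its constituents' embeddings.
    return np.concatenate([emb[verb], emb[noun]])

# "washing carrot" ~ "washing apple" + "cutting carrot" - "cutting apple"
approx = (compose("washing", "apple") + compose("cutting", "carrot")
          - compose("cutting", "apple"))
target = compose("washing", "carrot")
cos = approx @ target / (np.linalg.norm(approx) * np.linalg.norm(target))
print(f"{cos:.3f}")  # 1.000 here: the "cutting" and "apple" parts cancel exactly
```

In the real model the embeddings are contextualized rather than static, so the cancellation is only approximate; this is precisely why ARTNET learns the reasoning operations instead of hard-coding them.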
The project code is publicly available at https://github.com/XX.
The main contributions of this paper include the following:
• We call attention to a challenging problem, compositional generalization, in the context of machine language acquisition, which has seldom been studied.
• We propose ideas supported by human analogical reasoning: approximating new verb-noun compositions by learned arithmetic operations over relevant compositions seen before.
• We propose a novel reasoning-augmented architecture for visually grounded language acquisition, which addresses the compositional generalization problem through association and analogical reasoning.
• We evaluate the proposed model in various aspects, such as composition prediction, validity test, and robustness against data scarcity. The results show that ARTNET achieves significant performance improvements in new composition accuracy over a large-scale video dataset." }, { "heading": "2 ARTNET: ANALOGICAL REASONING TRANSFORMER NETWORKS", "text": "Our goal is to develop a framework that can support multimodal compositional generalization through learning in a visual-textual environment. The proposed framework learns to acquire the meaning of phrases and words from image-sentence pairs and to create novel compositions via reasoning. We call the framework Analogical Reasoning Transformer Networks (ARTNET), due to its ability to establish analogies with previously seen, relevant scenarios, and perform reasoning operations to generalize a composition for the new scenario. Figure 2 illustrates an overview of ARTNET, which is composed of a multimodal encoder backbone, followed by three main modules: Analogical Memory Module (AMM), Analogical Reasoning Networks (ARN), and Conditioned Composition Engine (CCE). We elaborate on each component in the rest of this section." }, { "heading": "2.1 MULTIMODAL ENCODER BACKBONE", "text": "The backbone network is responsible for encoding image-sentence pairs into compositional semantic representations. To achieve this, we utilize the emerging multimodal transformers (e.g. UNITER (Chen et al. (2019)) or ViLBERT (Lu et al. (2019))), which have recently achieved great success in various vision-language tasks. These models take a set of visual and textual tokens (e.g. objects and words), and extract a multimodal embedding for each token, which is contextualized by all the other tokens through layers of multi-head attention. We follow the architecture of UNITER, as it performs slightly better than ViLBERT and other similar models. Note that since our goal is language acquisition, we intentionally do not use the pretrained weights of UNITER, which are trained on a large-scale corpus. Instead, we train the backbone from scratch on our limited data." }, { "heading": "2.2 ANALOGICAL MEMORY MODULE", "text": "AMM plays the role of analogical association. Like finding a useful puzzle piece, we propose AMM to discover the most useful reference samples for analogical reasoning in a target scenario. Given a target image-sentence pair (query), where some tokens in the sentence are masked, we randomly select N (N = 200 in our experiments) sample image-sentence pairs from the training data to create a reference pool, and find the Top-K most relevant exemplars from that pool. To this end, we measure a multimodal relevance score between the query and each reference. Here, we use the initial embedding of each token of the query and reference samples as described in Section 2.1.
Given a target and a reference sample, we define the multimodal relevance score as a combination of the visual and textual relevance between the corresponding sets of tokens. For the visual tokens, we compute the mean cosine similarity over every pair of tokens from the query and reference token sets. For the language part, the contextual background words that are not masked provide linguistic clues for semantic relevance, so we compute the Jaccard index (Hamers et al. (1989)) between the two sentences as textual relevance. Specifically, the multimodal relevance score is

$$s_{vl} = \frac{1}{2}\cdot\Bigg(\frac{|W_T \cap W_R|}{|W_T \cup W_R|} + \frac{1 + \frac{1}{N_v}\sum_i\sum_j \cos(\mathbf{v}_{T_i}, \mathbf{v}_{R_j})}{2}\Bigg) \quad (1)$$

where $W_T$ and $W_R$ are the sets of target and reference words, $N_v$ is the number of visual token pairs, and $\mathbf{v}_{T_i}$ and $\mathbf{v}_{R_j}$ represent the visual embeddings of the $i$-th visual token of the query and the $j$-th visual token of the reference. After computing the scores, AMM ranks the reference samples by their relevance scores and selects the Top-K most relevant samples for the given query.
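A minimal sketch of how this scoring and Top-K retrieval might be implemented is given below; it follows Eq. (1) directly, while the dictionary layout of the samples is an assumption of the sketch, not the paper's actual API:

```python
import numpy as np

def relevance_score(target_words, ref_words, target_vis, ref_vis):
    """Multimodal relevance of Eq. (1): Jaccard index over unmasked words,
    plus the mean pairwise cosine similarity of visual tokens rescaled from
    [-1, 1] to [0, 1], averaged with equal weight."""
    wt, wr = set(target_words), set(ref_words)
    jaccard = len(wt & wr) / len(wt | wr)
    sims = [vt @ vr / (np.linalg.norm(vt) * np.linalg.norm(vr))
            for vt in target_vis for vr in ref_vis]   # all visual token pairs
    visual = (1 + np.mean(sims)) / 2
    return 0.5 * (jaccard + visual)

def top_k_references(target, pool, k=3):
    """Rank the reference pool by relevance and keep the Top-K exemplars."""
    return sorted(pool, reverse=True,
                  key=lambda r: relevance_score(target["words"], r["words"],
                                                target["vis"], r["vis"]))[:k]

t = {"words": ["wash", "the"], "vis": [np.ones(4)]}
r = {"words": ["cut", "the"], "vis": [np.ones(4)]}
print(relevance_score(t["words"], r["words"], t["vis"], r["vis"]))  # ~0.667
```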
" }, { "heading": "2.3 ANALOGICAL REASONING NETWORKS", "text": "Given the retrieved analogical exemplars, we devise a neural network with reasoning ability to enrich the original representation of the masked compositions by making analogies with the seen compositions. The idea is to exploit the semantic relation mapping between the candidate analogy compositions and the target composition. To this end, we represent the target masked composition as a query vector $\mathbf{q}$, by concatenating the multimodal transformer embeddings of the masked words of that composition (typically a verb and a noun from the target sentence) and learning the representations of the ordered constituents in a composition with a Long Short-Term Memory (LSTM) (Zhou et al. (2015)). Next, we apply the multimodal encoder backbone (as mentioned above) to the retrieved analogy samples, and parse each sample into candidate analogy compositions (pairs of tokens). Since the goal is language acquisition, we do not rely on predefined grammar rules or pretrained models to generate the candidate compositions, such as applying part-of-speech tagging and taking each verb-noun pair. Instead, we enumerate all pairs of adjacent words from each retrieved sentence, and all pairs of detected image regions from each retrieved image. The resulting multimodal set of pairs is called analogy pairs hereafter.
The core of ARN consists of three neural network modules for analogical attention, analogical reasoning, and analogy transformation. Analogical attention learns the importance of each pair of candidate analogy composition and query vector, and generates an analogy aggregation from each modality independently. Analogical reasoning is designed to learn the appropriate arithmetic operations over analogy compositions. It consists of modality-wise transformations and Neural Arithmetic Logic Units (Trask et al. (2018)) with multiple layers of Neural Accumulators (NAC) (Trask et al. (2018)). NAC is a simple but effective operator that supports learning addition and subtraction. This module is applied on the analogy pairs, and computes a single vector that represents the output of the reasoning operations, optimized for our task through gradient descent. Through the analogy transformation, ARN generates the sequential representations of the final analogy context vector. Specifically, ARN can be denoted as

$$\mathbf{c}^m_a = \sum_j \alpha^m_{ij}\mathbf{h}^m_j, \quad \alpha^m_{ij} = \frac{\exp a([\mathbf{r}^m_i; \mathbf{r}^m_{i+1}; \mathbf{q}], [\mathbf{r}^m_j; \mathbf{r}^m_{j+1}; \mathbf{q}])}{\sum_k \exp a([\mathbf{r}^m_i; \mathbf{r}^m_{i+1}; \mathbf{q}], [\mathbf{r}^m_k; \mathbf{r}^m_{k+1}; \mathbf{q}])} \quad \text{Analogical Attention,} \quad (2)$$
$$\mathbf{h}_c = f_{NAC}([g_v(\mathbf{c}^v_a), g_l(\mathbf{c}^l_a)]^T) \quad \text{Analogical Reasoning,} \quad (3)$$
$$\mathbf{c} = \mathrm{LSTM}(\mathbf{h}_c) \quad \text{Analogy Transformation,} \quad (4)$$

where $v$ and $l$ represent the vision and language modalities, and $\mathbf{r}^m_i$ and $\mathbf{r}^m_{i+1}$ ($m$ is the modality indicator) are the image regions or text words of the candidate analogical compositions. $g_v$ and $g_l$ are modality transformations that contain two-layer fully connected networks with ReLU activation and dropout, and $T$ denotes matrix transpose. The output of ARN is the vector $\mathbf{c}$, called the analogical context vector, which is used to augment the composition representations." }, { "heading": "2.4 CONDITIONED COMPOSITION ENGINE", "text": "After analogical reasoning, we create a potentially novel composition based on both the initial comprehension of the given scenario and the result of analogical reasoning. To this end, CCE is designed to enrich the representations of the masked tokens through a conditioned learning network that takes the analogical context vector as contextual knowledge. Let $x = \{x_1, \dots, m_i, m_{i+1}, \dots, x_N\}$ be the input elements of the multimodal encoder, where $\mathbf{m}^l_i$ and $\mathbf{m}^l_{i+1}$ are the $l$-th layer representations of the masked words. CCE uses the multimodal transformer to transform the embedding features of each masked word together with the analogical context vector, and then predicts the masked word by aggregating linguistic clues from all the other unmasked elements. The embedding feature $\mathbf{h}_i$ of the $i$-th masked word is computed by:

$$\mathbf{h}^{l+1}_i = \mathbf{W}^{l+1}_2 \cdot \mathrm{ReLU}(\mathbf{W}^{l+1}_1[\mathbf{m}^l_i; \mathbf{c}] + \mathbf{b}^{l+1}_1) + \mathbf{b}^{l+1}_2, \quad \text{Context-conditioned} \quad (5)$$
$$\mathbf{h}^{l+2}_i = \mathrm{GeLU}(\mathbf{W}^{l+2}_i\mathbf{h}^{l+1}_i + \mathbf{b}^{l+2}_i), \quad \text{Feed-forward} \quad (6)$$
$$\mathbf{h}_i = \mathrm{LayerNorm}(\mathbf{h}^{l+2}_i), \quad \text{Feed-forward} \quad (7)$$

where $\mathbf{W}^{l+1}_1$, $\mathbf{W}^{l+1}_2$, $\mathbf{b}^{l+1}_1$ and $\mathbf{b}^{l+1}_2$ are learnable weights and biases for the context-conditioned transformation, and $\mathbf{W}^{l+2}_i$ and $\mathbf{b}^{l+2}_i$ are learnable weights and biases for the feed-forward transformation. Given the contextual representation $\mathbf{h}_i$ of the masked word, the model predicts the masked word by multiplying its contextual representation with a word embedding matrix $\phi_w$ that is trained with the rest of the network: $\hat{w}_i = \phi_w^T\mathbf{h}_i$." }, { "heading": "2.5 LEARNING OBJECTIVES", "text": "Masked Composition Acquisition Our model learns to perform language acquisition by filling in masked words. At both train and test time, we give the model a target example, with a reference set sampled from the training dataset. Note that our masking policy differs between training time and test time (details in Section 3.1). During training and validation, we randomly mask multiple words and visual tokens; during testing, we only mask one verb-noun composition.
Objective Function We train the model to acquire words by directly predicting them. We measure the Cross-Entropy loss between the predicted word $\hat{w}_i$ and the true word $w_i$ over a vocabulary of size $C$, denoted as $L_l$. This objective is the same as in the original BERT (Devlin et al. (2018)). We also learn visual reconstruction via a regression task. The visual loss $L_v$ is a Triplet Metric loss (Weinberger et al. (2006)) that forces a linear projection of $\mathbf{v}_i$ to be closer to $\phi_v(\mathbf{v}_i)$ than to $\phi_v(\mathbf{v}_{k\neq i})$, where $\phi_v(\cdot)$ is the visual representation network (ResNet).
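A minimal numpy sketch of this triplet term follows; the inputs are assumed to be precomputed projections and representations, and the margin value is illustrative:

```python
import numpy as np

def triplet_visual_loss(v_i, phi_pos, phi_neg, margin=0.2):
    """Triplet metric term of L_v: push v_i at least `margin` closer to its
    own visual representation phi_v(v_i) than to another token's phi_v(v_k)."""
    return max(0.0, margin
               + np.linalg.norm(v_i - phi_pos)
               - np.linalg.norm(v_i - phi_neg))

print(triplet_visual_loss(np.zeros(4), np.zeros(4) + 0.1, np.ones(4)))  # 0.0
```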
Because the objectives are complementary, we train the parameters of the proposed model by minimizing the sum of the losses:

$$\min_{\Theta}\Bigg(-\sum_{i}^{C} w_i\log(\hat{w}_i) + \lambda\max\big(0, m + \|\mathbf{v}_i - \phi_v(\mathbf{v}_i)\| - \|\mathbf{v}_i - \phi_v(\mathbf{v}_{k\neq i})\|\big)\Bigg) \quad (8)$$

where $\lambda \in \mathbb{R}$ is the parameter that balances the loss terms (modalities), $m$ is the triplet margin, and $\Theta$ represents the trainable parameters of the entire network." }, { "heading": "3 EXPERIMENTS", "text": "We compare our method (ARTNET) and the baselines on new and seen composition acquisition tasks. To present quantitative and qualitative results, we evaluate our method in a variety of aspects, including performance comparison, validity test, incremental data settings, and case studies." }, { "heading": "3.1 EXPERIMENT SETTINGS", "text": "Dataset We use the instructional video dataset EPIC-Kitchens (Damen et al. (2018)) in our experiments. The dataset consists of 55 hours of egocentric videos across 32 kitchens. Each video clip has a narrative sentence, so we create image-sentence pairs by selecting one key frame of each video clip. For each video frame, we use the object bounding boxes officially provided with EPIC-Kitchens, which were produced by running Faster R-CNN (Ren et al. (2015)). We discard the object labels due to the strict constraints of the language acquisition scenario. Importantly, we train our model on our video dataset only, without any language pre-training. We partition the dataset to ensure that the new compositions used in testing have never been seen in training. The test subsets of the new/seen composition tasks are disjoint (without overlap) and equal in size (each set contains 29K samples related to 238/861 unique new/seen compositions); the train subsets of the tasks are identical (142K samples with 3675 unique compositions). The dataset was prepared by the following steps: (1) We took all annotated compositions from EPIC-Kitchens and split them into two groups of compositions - seen, consisting of 3675 unique compositions, 124 verbs and 254 nouns (vocabulary size), and new, consisting of 238 new compositions, 90 verbs and 149 nouns. All the verbs and nouns of the new compositions also appear in the seen compositions. (2) We randomly sampled 140K samples from the seen composition group as a shared train set, such that each seen composition has at least one sample in the train set. We also sampled a set of seen composition samples (about 20% of the train set) as the test set of seen compositions. (3) The samples of the new compositions were used to create the test set of new compositions (size about 20% of the train set). To ensure that the focus is on new compositions, rather than new words, we removed new compositions that contain new words not seen in the train set.
Evaluation Metrics To compare the performance of our model against the baselines, we evaluate the ability to acquire new or seen compositions (e.g. verb-noun word pairs). During testing, the model takes paired image-sentence data as input, and the target verb-noun composition is replaced with a special "mask" token. The objective is to predict the masked words from the complete vocabulary of all words (unaware of the part of speech). We adopt Top-1 and Top-5 accuracy to measure performance, calculated over the predicted nouns and verbs with the top N highest probability. The prediction is correct when both noun and verb are correctly predicted."
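The composition-level metric just described can be sketched as follows; one plausible reading (assumed here) is that a prediction counts as correct when both the ground-truth verb and noun fall within the Top-N scored words:

```python
def composition_topn_accuracy(verb_scores, noun_scores, gold_pairs, n=5):
    """Top-N accuracy over verb-noun compositions: a sample is correct only
    when both the gold verb and the gold noun appear among the n words with
    the highest predicted scores."""
    correct = 0
    for v_s, n_s, (gv, gn) in zip(verb_scores, noun_scores, gold_pairs):
        top_v = sorted(range(len(v_s)), key=lambda i: v_s[i], reverse=True)[:n]
        top_n = sorted(range(len(n_s)), key=lambda i: n_s[i], reverse=True)[:n]
        correct += int(gv in top_v and gn in top_n)
    return correct / len(gold_pairs)

# Example with two test samples over a 4-word vocabulary and n = 1:
acc = composition_topn_accuracy([[0.1, 0.7, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1]],
                                [[0.2, 0.1, 0.6, 0.1], [0.1, 0.1, 0.2, 0.6]],
                                [(1, 2), (0, 0)], n=1)
print(acc)  # 0.5: the first sample is correct, the second misses the noun
```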
}, { "heading": "3.2 PERFORMANCE COMPARISON WITH BASELINES", "text": "We compare our model with state-of-the-art methods of language modeling or multimodal language modeling (implementation and more training details of our model and baselins are in the Section 6 Appendix). We consider three variants of the BERT family (Devlin et al. (2018); Chen et al. (2019)):\nBERT (language only) is a powerful language transformer model (Devlin et al. (2018)) that achieved state-of-the-art performance in multiple tasks. We adopt two learning settings: (1) train the BERT model from scratch on the sentences of our task data; (2) utilize the pre-trained BERT model (Zhu et al. (2015); Chelba et al. (2013)) and fine-tune BERT on the task data. Note that the pretrained BERT has the advantage of large-scale data, which is different with language acquisition.\nMultimodal BERT (language + vision) is a generalization of BERT that receives both visual and textual tokens as input, which usually come from an image-sentence pair. Here we use the same architecture as our backbone UNITER (Chen et al. (2019)), and we adopt a pre-trained ResNet-18 (He et al. (2016)) to extract visual features. We train it from scratch on our task dataset.\nThe quantitative and qualitative results are illustrated in Table 1 and Figure 7 (in the appendix section), respectively. For new composition prediction, as shown in Table 1, our model can significantly achieve 2.03% and 4.45% improvement over state-of-the-art baseline on Top-1 and Top-5 accuracy respectively. Such improvement results from the analogical reasoning based on relevant reference samples. For seen composition prediction, the proposed model is nearly unchanged. To understand what the analogical reasoning module has learned, we visualize the attention distribution over multimodal analogy pairs in Figure 3 (detailed analysis and more examples are in Figure 8 and para-\nComposition Acquisition Affordance Accuracy Methods/Settings Overall New Comp Seen Comp\nBERT (w/o vision) 81.67% 81.59% 81.72% Multimodal BERT 85.79% 84.78% 86.36% Proposed ARTNET 86.48% 86.01% 86.75%\nTable 2: Validity Test: Performance comparison of validity accuracy.\ngraphs in the appendix section). As shown in Figure 7, the model can successfully retrieve relevant reference samples that contain analogical pairs (detailed analysis is in the appendix section)." }, { "heading": "3.3 EXPERIMENTS ON DIFFERENT TRAINING SIZES", "text": "To evaluate our model on low-data regime and simulate the language acquisition process of a child, we train on different scales of data. Specifically, we consider 5 different training data size percentages (TPs) (100%, 80%, 60%, 40% and 20%), and plot the performance of our method compared to the stringest baseline in Figure 4. The ARTNET achieves 13.44% Top-5 accuracy with only 20% of training data, which has larger gap with the baseline, suggesting our stronger generalization ability.\nFigure 4: Robustness: Performance curve with different sizes of training data.\nFigure 5: Case Study: Top-5 prediction results for new compositions (highlight words are masked ground-truth compositions to be predicted)." }, { "heading": "3.4 VALIDITY TEST EVALUATION", "text": "We propose a way to aim at assessing whether the predicted results follow human commonsense in the real world, called “Validity Test”, based on the definition of affordance (Gibson (2014)). 
Affordance is a binary relationship between each noun and verb, indicating whether a certain verb typically takes a certain noun as an argument (Baber (2018)). For example, "cutting pizza" and "cutting carrot" happen in our daily life, but we never see "cutting plate" or "cutting water". We annotate the validity of 8400 Top-1 verb-noun compositions (5704 of which appear in our dataset) predicted by each model given any test image. Type-mismatched compositions, such as "meat apple" (noun-noun) or "peel cut" (verb-verb), are also counted as invalid predictions. In Table 2, we show the Top-1 validity accuracy (the ratio of affordable, i.e. valid, predictions to all predictions) for each model. Overall, ARTNET achieves the highest validity (86.5%) among all methods. On new compositions, its validity outperforms Multimodal BERT/BERT by 1.23%/4.42%, which is significant. We attribute these gains in composition validity to the novel components of our model - discovery of multimodal analogy reference samples and reasoning over those samples to generate the language description. The method's emphasis on analogical relations to realistic reference samples helps improve the validity of the compositions predicted by the model." }, { "heading": "3.5 CASE STUDY", "text": "To analyze the strengths and failure cases, we select some examples shown in Figure 5. Intuitively, vague visual clues, such as ambiguous actions and tiny objects, are hard to recognize. However, we observe that ARTNET has a unique ability to disambiguate such difficult cases, as shown in the top row of Figure 5. Nevertheless, the model fails in some cases, such as take/stir and close/open. This is mainly because we used still keyframes rather than dynamic videos, which discards motion information. Future work should incorporate spatio-temporal features to alleviate this limitation. Moreover, we provide several cases with Top-5 prediction results to show the affordance and rationality of our method's predictions. Considering Figure 6, we observe that not only the Top-1 results, but also most of the Top-5, are in line with human commonsense, and superior to the baseline." }, { "heading": "4 RELATED WORK", "text": "Compositional Generalization Compositional generalization is an important open reasoning challenge (Keysers et al. (2019); Chang et al. (2018); Loula et al. (2018)) and a priority for achieving human-like abilities (Bastings et al. (2018)); it concerns the ability to compose simple constituents into complex structures (Nikolaus et al. (2019)). From the traditional view, numerous studies utilize linguistic principles to explore compositionality (Mitchell & Lapata (2008); Baroni & Zamparelli (2010)). With recent advances, there are more attempts to solve compositional generalization with neural networks for tasks with synthetic commands or interactions (Lake (2019); Russin et al. (2019); Li et al. (2019); Kato et al. (2018)). Current machine intelligence still lacks compositional generalization ability, as such models are prone to fail on these tests in realistic or natural scenarios (Nikolaus et al. (2019); Loula et al. (2018); Keysers et al. (2019)). Moreover, several recent works also try to enhance compositional generalization for natural language (Nikolaus et al. (2019)) with pretrained representations or lexical knowledge.
However, few works address this emerging and valuable challenge for language acquisition from a multimodal reasoning view.
Visually Grounded Language Acquisition (VLA) Like the language acquisition of children, VLA is the task of acquiring language constituents from scratch within a visual-textual environment. Although work on grounded language learning has achieved good progress in many tasks (such as visual captioning (Yu & Siskind (2013); Kiela et al. (2017); Ross et al. (2018)) or robotics (Matuszek (2018))), there are major differences between VLA and multimodal pretraining, including the goal of the learned model, the amount of data needed, and the architectural assumptions (e.g., whether predefined language parsing is used). The former attempts to learn aspects of language models (e.g. compositional semantics) from scratch and to use limited samples effectively, while the latter uses context from very large training corpora (Lu et al. (2019); Chen et al. (2019)) to learn language representations with a data-hungry model. Several recent works further address acquiring concrete language, including word representations (Kottur et al. (2016); Surís et al. (2019)) and compositional semantics (Jin et al. (2020)), from scratch via visual clues, and are more closely related to our task. Our work seeks to improve, through reasoning, constituent generalization to new compositions." }, { "heading": "5 CONCLUSION", "text": "In this paper, we take a step towards visually grounded language acquisition, by studying the problem of compositional generalization in state-of-the-art multimedia language models. Inspired by the human brain's analogical reasoning process, we propose to form new compositions by recalling observed compositions and relating them to the new scenario through learned arithmetic operations. Our proposed reasoning-augmented method, Analogical Reasoning Transformer Networks, achieves superior compositional generalization capability compared to state-of-the-art transformers, which results in significant and stable performance gains on unseen compositions over a large-scale instructional video dataset." }, { "heading": "6 APPENDIX", "text": "Implementation Details In all experiments, training is conducted on 2 GPUs for 200 epochs. In each mini-batch, 256 samples are drawn for each GPU and, in each sample, image regions are cropped from the whole image and resized to 112 × 112. The transformer encoder in all models has the same configuration: 4 layers, a hidden size of 384, and 4 self-attention heads in each layer. We use the AdamW optimizer (Loshchilov & Hutter (2017)) with a learning rate of 3e-5, setting the update coefficients for the running averages of the gradient and its square to (β1, β2) = (0.9, 0.999) and the denominator term to 1e-4. During training, we mask out text tokens 1/3 of the time and image tokens 1/6 of the time, following the random masking strategy of BERT (Devlin et al. (2018)).
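A minimal sketch of this masking policy is given below; note that BERT's full strategy additionally mixes in random and unchanged replacements, which this simplified sketch omits:

```python
import random

def mask_tokens(text_tokens, visual_tokens, mask_token="[MASK]",
                p_text=1/3, p_visual=1/6):
    """Randomly mask text tokens 1/3 of the time and visual tokens 1/6 of
    the time, mirroring the training-time policy described above."""
    masked_text = [mask_token if random.random() < p_text else t
                   for t in text_tokens]
    masked_vis = [None if random.random() < p_visual else v
                  for v in visual_tokens]
    return masked_text, masked_vis

print(mask_tokens(["wash", "the", "carrot"], ["region_1", "region_2"]))
```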
During training and testing, for each target sample, we randomly select 200 remaining samples as the corresponding reference set. In our experiments, the Top-K (K = 3) reference samples are involved in analogical reasoning.
In evaluation, we calculate the accuracy based on the same random masking strategy as in training. An early-stopping strategy based on the validation Top-5 accuracy is applied: training terminates if the validation Top-5 accuracy does not increase again.
Target-Reference Case Study From the two examples shown in Figure 7, we can observe that the model successfully retrieves relevant reference samples containing analogical pairs, by computing their visual and language similarity to the target composition. For the correct prediction "peel carrot", the model discovers "stir carrot", "peel potato" and "cut potato" for analogical reasoning, while for "wash knife", the model retrieves "wash plate", "take knife" and "rinse knife" as reference samples.
Reasoning Attention Distribution over Multimodal Analogy Pairs In the four examples shown in Figure 8, we provide two examples of correct predictions and another two of wrong predictions. From the bottom two examples, the model learns compositional semantics from both visual and textual constituents. For the correct prediction of the new composition "put sausage", the model learns to acquire and approximate the novel composition through our multimodal reasoning. The reasoning attends more to the textual phrase of the first reference sample, "put oil in pan", and the visual regions "sausage, whole image" of the second reference sample. This implies that the model learns textual and visual semantics from reference samples and composes them under similar scenarios as context. A similar phenomenon also appears in the second prediction example, for the seen composition "chop onions": the model is able to learn the phrase "chop onion" from different modalities. For wrong predictions, minor visual differences between several verbs lead to wrong reasoning (e.g. "remove skin of garlic"), even though the model successfully retrieves relevant reference samples with the aid of contextual information. While the chopped garlic is not visually recognized by the adopted vision model, the attention distribution of the visual analogy pairs for the example seems to focus on the garlic but is also confused by "cream, whole image". Meanwhile, the accuracy of the reasoning is also impacted by the relevant samples (e.g., "fry in pan"): when the model does not find "fry" among the relevant references and cannot distinguish the actions ("fry" and "chop") between a target and a reference sample, the reasoning is more likely to produce a wrong prediction." } ]
2020
null
SP:5be9a3c39234c10c226c42eec95e29cbddbaf8c0
[ "This paper presents a unified framework for graph convolutional neural networks based on regularized optimization, connecting different variants of graph neural networks including vanilla, attention-based, and topology-based approaches. The authors also propose a novel regularization technique to approach the oversmoothing problem in graph convolution. Experiments on the standard settings of node classification on Citeseer, Cora, and Pubmed prove the effectiveness of the proposed regularization techniques. " ]
Graph Convolutional Networks (GCNs) have attracted a lot of research interest in the machine learning community in recent years. Although many variants have been proposed, we still lack a systematic view of different GCN models and a deep understanding of the relations among them. In this paper, we take a step towards establishing a unified framework for convolution-based graph neural networks, by formulating the basic graph convolution operation as an optimization problem in the graph Fourier space. Under this framework, a variety of popular GCN models, including the vanilla GCNs, attention-based GCNs and topology-based GCNs, can be interpreted as the same optimization problem but with different carefully designed regularizers. This novel perspective enables a better understanding of the similarities and differences among many widely used GCNs, and may inspire new approaches for designing better models. As a showcase, we also present a novel regularization technique under the proposed framework to tackle the oversmoothing problem in graph convolution. The effectiveness of the newly designed model is validated empirically.
[]
[ { "authors": [ "S. Bai", "F. Zhang", "P. Torr" ], "title": "Hypergraph convolution and hypergraph attention", "venue": "ArXiv, abs/1901.08150,", "year": 2019 }, { "authors": [ "Xavier Bresson", "Thomas Laurent" ], "title": "Residual gated graph convnets", "venue": "arXiv preprint arXiv:1711.07553,", "year": 2017 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "S. Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Jian Du", "Shanghang Zhang", "Guanhang Wu", "José MF Moura", "Soummya Kar" ], "title": "Topology adaptive graph convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "M. Fey", "J.E. Lenssen", "F. Weichert", "H. Müller" ], "title": "Splinecnn: Fast geometric deep learning with continuous b-spline kernels", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen" ], "title": "Fast graph representation learning with pytorch geometric", "venue": null, "year": 1903 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "NT Hoang", "Takanori Maehara" ], "title": "Revisiting graph neural networks: All we have is low-pass filters", "venue": "arXiv preprint arXiv:1905.09550,", "year": 2019 }, { "authors": [ "Amol Kapoor", "Aram Galstyan", "Bryan Perozzi", "Greg Ver Steeg", "Hrayr Harutyunyan", "Kristina Lerman", "Nazanin Alipourfard", "Sami Abu-El-Haija" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Johannes Klicpera", "Stefan Weißenberger", "Stephan Günnemann" ], "title": "Diffusion improves graph learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus 
Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "arXiv preprint arXiv:1707.01926,", "year": 2017 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Y. Rong", "W. Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards the very deep graph convolutional networks for node classification", "venue": null, "year": 2019 }, { "authors": [ "M. Schlichtkrull", "Thomas Kipf", "P. Bloem", "R.V. Berg", "Ivan Titov", "M. Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In ESWC,", "year": 2018 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": null, "year": 2018 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Dynamic edge-conditioned filters in convolutional neural networks on graphs", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Kiran K Thekumparampil", "Chong Wang", "Sewoong Oh", "Li-Jia Li" ], "title": "Attention-based graph neural network for semi-supervised learning", "venue": null, "year": 2018 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Petar Veličković", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Holanda de Souza Jr.", "Christopher Fifty", "Tao Yu", "Kilian Q Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Shu Wu", "Yuyuan Tang", "Yanqiao Zhu", "Liang Wang", "Xing Xie", "Tieniu Tan" ], "title": "Session-based recommendation with graph neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "CoRR, abs/1810.00826,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "arXiv preprint arXiv:1806.03536,", "year": 2018 }, { "authors": [ "Liang Yao", "Chengsheng Mao", "Yuan Luo" ], "title": "Graph convolutional networks for text 
classification", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lingxiao Zhao", "Leman Akoglu" ], "title": "Pairnorm: Tackling oversmoothing in gnns", "venue": "arXiv preprint arXiv:1909.12223,", "year": 2019 }, { "authors": [ "Marinka Zitnik", "Jure Leskovec" ], "title": "Predicting multicellular function through multi-layer tissue", "venue": "networks. Bioinformatics,", "year": 2017 }, { "authors": [ "Marinka Zitnik", "Monica Agrawal", "Jure Leskovec" ], "title": "Modeling polypharmacy side effects with graph convolutional", "venue": "networks. Bioinformatics,", "year": 2018 }, { "authors": [ "C. graphs" ], "title": "DATA STATISTICS AND EXPERIMENTAL SETUPS We conduct experiments on four real-world graph datasets, whose statistics are listed in Table 5. For transductive learning, we evaluate our method on the Cora, Citeseer, Pubmed datasets, following the experimental setup in (Sen et", "venue": null, "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent years have witnessed a fast development in graph processing by generalizing convolution operation to graph-structured data, which is known as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017). Due to the great success, numerous variants of GCNs have been developed and extensively adopted in the field of social network analysis (Hamilton et al., 2017; Wu et al., 2019a; Veličković et al., 2018), biology (Zitnik et al., 2018), transportation forecasting (Li et al., 2017) and natural language processing (Wu et al., 2019b; Yao et al., 2019).\nInspired by GCN, a wide variety of convolution-based graph learning approaches are proposed to enhance the generalization performance of graph neural networks. Several research aim to achieve higher expressiveness by exploring higher-order information or introducing additional learning mechanisms like attention modules. Although proposed from different perspectives, their exist some connections between these approaches. For example, attention-based GCNs like GAT (Veličković et al., 2018) and AGNN (Thekumparampil et al., 2018) share the similar intention by adjusting the adjacency matrix with a function of edge and node features. Similarly, TAGCN (Du et al., 2017) and MixHop (Kapoor et al., 2019) can be viewed as particular instances of PPNP (Klicpera et al., 2018) under certain approximation. However, the relations among these graph learning models are rarely studied and the comparisons are still limited in analyzing generalization performances on public datasets. As a consequence, we still lack a systematic view of different GCN models and deep understanding of the relations among them.\nIn this paper, we resort to the techniques in graph signal processing and attempt to understand GCN-based approaches from a general perspective. Specifically, we present a unified graph convolution framework by establishing graph convolution operations with optimization problems in the graph Fourier domain. We consider a Laplacian regularized least squares optimization problem and show that most of the convolution-based approaches can be interpreted in this framework by adding carefully designed regularizers. Besides vanilla GCNs, we also extend our framework to formulating non-convolutional operations (Xu et al., 2018a; Hamilton et al., 2017), attention-based GCNs (Veličković et al., 2018; Thekumparampil et al., 2018) and topology-based GCNs (Klicpera et al., 2018; Kapoor et al., 2019), which cover a large fraction of the state-of-the-art graph learning ap-\nproaches. This novel perspective provides a re-interpretation of graph convolution operations and enables a better understanding of the similarities and differences among many widely used GCNs, and may inspire new approaches for designing better models.\nAs a conclusion, we summarize our contributions as follow:\n1. We introduce a unified framework for convolution-based graph neural networks and interpret various convolution filters as carefully designed regularizers in the graph Fourier domain, which provides a general methodology for evaluating and relating different graph learning modules.\n2. Based on the proposed framework, we provide new insights on understanding the limitations of GCNs and show new directions to tackle common problems and improve the generalization performance of current graph neural networks in the graph Fourier domain. 
Additionally, the unified framework can serve as a once-for-all platform for expert-designed modules on convolution-based approaches, where newly designed modules can be easily implemented on other networks as plug-in modules with trivial adaptations. We believe that our framework can facilitate the design of new graph learning modules and the search for better combinations.
3. As a showcase, we present a novel regularization technique under the proposed framework to alleviate the oversmoothing problem in graph representation learning. As shown in Section 4, the newly designed regularizer can be implemented on several convolution-based networks and effectively improves the generalization performance of graph learning models." }, { "heading": "2 PRELIMINARY", "text": "We start with an overview of the basic concepts of graph signal processing. Let $G = (V, A)$ denote a graph with node feature vectors, where $V$ represents the vertex set consisting of nodes $\{v_1, v_2, \dots, v_N\}$ and $A = (a_{ij}) \in \mathbb{R}^{N\times N}$ is the adjacency matrix encoding the connectivity between nodes in the graph. Let $D = \mathrm{diag}(d(1), \dots, d(N)) \in \mathbb{R}^{N\times N}$ be the degree matrix of $A$, where $d(i) = \sum_{j\in V} a_{ij}$ is the degree of vertex $i$. Then $L = D - A$ is the combinatorial Laplacian and $\tilde{L} = I - D^{-1/2}AD^{-1/2}$ is the normalized Laplacian of $G$. Additionally, we let $\tilde{A} = A + I$ and $\tilde{D} = D + I$ denote the augmented adjacency and degree matrices with added self-loops. Then $\tilde{L}_{sym} = I - \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ ($\tilde{A}_{sym} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$) and $\tilde{L}_{rw} = I - \tilde{D}^{-1}\tilde{A}$ ($\tilde{A}_{rw} = \tilde{D}^{-1}\tilde{A}$) are the augmented symmetric normalized and random-walk normalized Laplacians (augmented adjacency matrices) of $G$, respectively.
Let $x \in \mathbb{R}^N$ be a signal on the vertices of the graph. The spectral convolution is defined as a function of a filter $g_\theta$ parameterized in the Fourier domain (Kipf & Welling, 2017):

$$g_\theta \star x = Ug_\theta(\Lambda)U^Tx, \quad (1)$$

where $U$ and $\Lambda$ are the eigenvectors and eigenvalues of the normalized Laplacian $\tilde{L}$. Also, we follow Hoang & Maehara (2019) and define the variation $\Delta$ and the $\tilde{D}$-inner product as:

$$\Delta(x) = \sum_{i,j\in V} a_{ij}(x(i) - x(j))^2 = x^TLx, \qquad (x, y)_{\tilde{D}} = \sum_{i\in V}(d(i)+1)x(i)y(i) = x^T\tilde{D}y, \quad (2)$$

which specify the smoothness and importance of the signal, respectively.
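Before presenting the framework, these operators can be set up concretely; a minimal numpy sketch for a dense adjacency matrix follows (illustrative, not the paper's reference code):

```python
import numpy as np

def graph_operators(A):
    """Augmented operators used throughout: A_rw = D_tilde^{-1} A_tilde and
    L_sym = I - D_tilde^{-1/2} A_tilde D_tilde^{-1/2}, with self-loops added."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)             # add self-loops
    d = A_tilde.sum(axis=1)             # augmented degrees
    A_rw = A_tilde / d[:, None]         # random-walk normalization
    D_inv_sqrt = np.diag(d ** -0.5)
    L_sym = np.eye(n) - D_inv_sqrt @ A_tilde @ D_inv_sqrt
    return A_rw, L_sym

def variation(x, A):
    """Variation Delta(x) = x^T L x with the combinatorial Laplacian
    L = D - A, as in Eq. (2)."""
    L = np.diag(A.sum(axis=1)) - A
    return float(x @ L @ x)
```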
}, { "heading": "3.1 UNIFIED GRAPH CONVOLUTION FRAMEWORK", "text": "Several researches have proved that, in the field of graph signal processing, the representative features are mostly preserved in the low-frequency signals while noises are mostly contained in the high-frequency signals (Hoang & Maehara, 2019). Based on this observation, numerous graph representation learning methods are designed to decrease the high-frequency components, which can be viewed as low-pass filters in the graph Fourier space. With similar inspiration, we consider a Laplacian regularized least squares optimization problem with graph signal regularizers and attempt to build connections with these filters.\nDefinition 1 Unified Graph Convolution Framework. Graph convolution filters can be achieved by solving the following Laplacian regularized least squares optimization:\nmin X̄ ∑ i∈V ‖x̄(i)− x(i)‖2 D̃ + λLreg, (3)\nwhere ‖x‖D̃ = √ (x,x)D̃ denotes the norm induced by D̃.\nIn the following sections, we will show that a wide range of convolution-based graph neural networks can be derived from Definition 1 with different carefully designed regularizers, and provide new insights on understanding different graph learning modules from the graph signal perspective." }, { "heading": "3.1.1 GRAPH CONVOLUTIONAL NETWORKS", "text": "Graph convolutional networks (GCNs) (Kipf & Welling, 2017) are the foundation of numerous graph learning models and have received widespread concerns. Several researches have demonstrated that the vanilla GCN is essentially a type of Laplacian smoothing over the whole graph, which makes the features of the connected nodes similar. Therefore, to reformulate GCNs in the graph Fourier space, we consider utilizing the variation ∆(x) as the regularizer.\nDefinition 2 Vanilla GCNs. Let x̄(i)i∈V be the estimation of the input observation x(i)i∈V. A low-pass filter:\nX̄ = ÃrwX, (4) is the first-order approximation of the optimal solution of the following optimization:\nmin X̄ ∑ i∈V ‖x̄(i)− x(i)‖2 D̃ + ∑ i,j∈V aij‖x̄(i)− x̄(j)‖22. (5)\nDerivations of the definitions are presented in Appendix A.\nAs the eigenvalues of the approximated filter Ãrw are bounded by 1, it resembles a low-pass filter that removes the high-frequency signals. By exchanging Ãrw with Ãsym (which has the same eigenvalues as Ãrw), we obtain the same formulation adopted in GCNs.\nIt has been stated that the second term ∆(x) in Eq.(5) measures the variation of the estimation x̄ over the graph structure. By adding this regularizer to the objective function, the obtained filter emphasizes the low-frequency signals through minimizing the variation over the local graph structure, while keeping the estimation close to the input in the graph Fourier space." }, { "heading": "3.1.2 NON-CONVOLUTIONAL OPERATIONS", "text": "Residual Connection. Residual connection is first proposed by He et al. (2016) and has been widely adopted in graph representation learning approaches. In the vanilla GCNs, norms of the eigenvalues of the filter Ãrw (or Ãsym) are bounded by 1 which ensures numerical stability in the training procedure. However, on the other hand, signals in all frequency band will shrink as the convolution layer stacks, leading to a consistent information loss. Therefore, adding the residual connection is deemed to preserve the strength of the input signal.\nDefinition 3 Residual Connection. 
" }, { "heading": "3.1.2 NON-CONVOLUTIONAL OPERATIONS", "text": "Residual Connection. The residual connection was first proposed by He et al. (2016) and has been widely adopted in graph representation learning approaches. In the vanilla GCNs, the norms of the eigenvalues of the filter $\tilde{A}_{rw}$ (or $\tilde{A}_{sym}$) are bounded by 1, which ensures numerical stability during training. On the other hand, however, signals in all frequency bands shrink as convolution layers stack, leading to consistent information loss. Adding a residual connection therefore helps preserve the strength of the input signal.
Definition 3 Residual Connection. A graph convolution filter with residual connection:

$$\bar{X} = \tilde{A}_{rw}X + \epsilon X, \quad (6)$$

where $\epsilon > 0$ controls the strength of the residual connection, is the first-order approximation of the optimal solution of the following optimization:

$$\min_{\bar{X}} \sum_{i\in V}\big(\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} - \epsilon\|\bar{x}(i)\|^2_{\tilde{D}}\big) + \sum_{i,j\in V} a_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2. \quad (7)$$

By adding the negative regularizer to penalize estimations with small norms, we induce the same formulation as the vanilla graph convolution with residual connection.
Concatenation. Concatenation is practically a residual connection with different learning weights.
Definition 3' Concatenation. A graph convolution filter concatenated with the input signal:

$$\bar{X} = \tilde{A}_{rw}X + \epsilon X\Theta\Theta^T, \quad (8)$$

is the first-order approximation of the optimal solution of the following optimization:

$$\min_{\bar{X}} \sum_{i\in V}\big(\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} - \epsilon\|\bar{x}(i)\Theta\|^2_{\tilde{D}}\big) + \sum_{i,j\in V} a_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2, \quad (9)$$

where $\epsilon > 0$ controls the strength of the concatenation and $\Theta$ is the learning weight. Although the learning weights $\Theta\Theta^T$ have constrained expressive capability, this can be compensated by the subsequent feature learning modules." }, { "heading": "3.1.3 ATTENTION-BASED CONVOLUTIONAL NETWORKS", "text": "Since the convolution filters in GCNs depend only on the graph structure, GCNs have been proved to have restricted expressive power and may cause the oversmoothing problem. Several works introduce an attention mechanism into the convolution filter, learning to assign different edge weights at each layer based on nodes and edges. GAT (Veličković et al., 2018) and AGNN (Thekumparampil et al., 2018) compute the attention coefficients as a function of the features of connected nodes, while ECC (Simonovsky & Komodakis, 2017) and GatedGCN (Bresson & Laurent, 2017) consider the activations of each connected edge. Although these approaches are motivated by different insights, they can all be formulated as (see details in Appendix A):

$$p_{ij} = a_{ij}f_\theta(x(i), x(j), e_{ij}), \quad i, j \in V, \quad (10)$$

where $e_{ij}$ denotes the edge representation, if applicable. Therefore, we replace $a_{ij}$ in Definition 2 with learned coefficients to enforce different regularization strengths on the connected edges.
Definition 4 Attention-based GCNs. An attention-based graph convolution filter:

$$\bar{X} = PX, \quad (11)$$

is the first-order approximation of the optimal solution of the following optimization:

$$\min_{\bar{X}} \sum_{i\in V}\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} + \sum_{i,j\in V} p_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2, \quad \text{s.t.} \sum_{j\in V} p_{ij} = \tilde{D}_{ii}, \forall i \in V. \quad (12)$$

Notice that we use a normalization trick to constrain the degrees of the attention matrix to match the original degree matrix $\tilde{D}$, as we want to preserve the regularization strength for each node. The resulting filter corresponds to the row-stochastic matrix $\tilde{D}^{-1}P$ (row sums equal to 1), which is also consistent with most attention-based approaches after normalization. By adjusting the regularization strength on edges, nodes with higher attention coefficients tend to have similar features, while nodes with low attention coefficients are allowed to remain further apart.
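A minimal sketch of this attention-normalized filter follows; the exponentiated dot-product score is an illustrative choice (GAT and AGNN use their own learned scoring functions):

```python
import numpy as np

def attention_filter(A, X, score):
    """Attention-based filter of Definition 4 (Eqs. (10)-(12)): learned
    coefficients re-weight existing edges (plus self-loops), rows are scaled
    so that sum_j p_ij = D_tilde_ii, and the applied filter is D_tilde^{-1} P."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)
    d_tilde = A_tilde.sum(axis=1)
    raw = A_tilde * np.array([[score(X[i], X[j]) for j in range(n)]
                              for i in range(n)])
    P = raw / raw.sum(axis=1, keepdims=True) * d_tilde[:, None]  # degree trick
    return (P / d_tilde[:, None]) @ X                            # Eq. (11)

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.5).astype(float); A = np.triu(A, 1); A += A.T
X = rng.normal(size=(5, 4))
out = attention_filter(A, X, lambda xi, xj: np.exp(xi @ xj / np.sqrt(len(xi))))
```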
" }, { "heading": "3.1.4 TOPOLOGY-BASED CONVOLUTIONAL NETWORKS", "text": "Attention-based approaches are mostly designed around the local structure. Beyond the first-order adjacency matrix, several approaches (Klicpera et al., 2018; 2019; Kapoor et al., 2019; Du et al., 2017) adopt structural information from the multi-hop neighborhood; these are referred to as topology-based convolutional networks. We start with an analysis of PPNP (Klicpera et al., 2018) and then derive a general formulation for topology-based approaches.
PPNP. PPNP provides insight into the propagation scheme by combining a message-passing function with personalized PageRank. As proved in (Xu et al., 2018b), the influence of node $i$ on node $j$ is proportional to a $k$-step random walk, which converges to the limit distribution as convolution layers stack. By involving a restart probability, PPNP is able to preserve the starting node $i$'s information. Similarly, in Definition 2, the first term can also be viewed as a regularizer that preserves the original signal information. Therefore, we can achieve the same purpose by adjusting the regularization strength.
Definition 5 PPNP. A graph convolution filter with personalized propagation (PPNP):

$$\bar{X} = \alpha(I_n - (1-\alpha)\tilde{A}_{rw})^{-1}X, \quad (13)$$

is equivalent to the optimal solution of the following optimization:

$$\min_{\bar{X}} \alpha\sum_{i\in V}\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} + (1-\alpha)\sum_{i,j\in V} a_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2, \quad (14)$$

where $\alpha \in (0, 1]$ is the restart probability. A higher $\alpha$ means a higher probability of teleporting back to the starting node, which is consistent with stronger regularization on the original signal in Eq. (14).
Multi-hop PPNP. One possible weakness of the original PPNP is that personalized PageRank only regularizes over the local structure. We can therefore improve the expressive capability by involving multi-hop information, which is equivalent to adding regularizers on higher-order variations.
Definition 6 Multi-hop PPNP. Let $t$ be the highest order adopted in the algorithm. A graph convolution filter with multi-hop personalized propagation (Multi-hop PPNP):

$$\bar{X} = \alpha_0\Big(I_n - \sum_{k=1}^{t}\alpha_k\tilde{A}^k_{rw}\Big)^{-1}X, \quad (15)$$

where $\sum_{k=0}^{t}\alpha_k = 1$, $\alpha_0 > 0$ and $\alpha_k \geq 0$, $k = 1, 2, \dots, t$, is equivalent to the optimal solution of the following optimization:

$$\min_{\bar{X}} \alpha_0\sum_{i\in V}\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} + \sum_{k=1}^{t}\alpha_k\sum_{i,j\in V} a^{(k)}_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2, \quad (16)$$

where $a^{(k)}_{ij}$ is proportional to the transition probability of the $k$-step random walk and the same normalization trick as in Section 3.1.3 is adopted on $\{a^{(k)}_{ij}\}$. Solving Eq. (15) directly is computationally expensive; we therefore derive a first-order approximation by Taylor expansion, resulting in:

$$\bar{X} = \Big(\sum_{i=0}^{T}\alpha_i\tilde{A}^i_{rw}\Big)X + O(\tilde{A}^T_{rw}X). \quad (17)$$

As the norms of the eigenvalues of $\tilde{A}_{rw}$ are bounded by 1, we can keep the first term in Eq. (17) as a close approximation.
By comparing the approximated solution with topology-based graph convolutional networks, we find that most of these approaches can be reformulated as particular instances of Definition 6. For example, the formulation of MixHop (Kapoor et al., 2019) can be derived as an approximation of Eq. (17) by letting $t = 2$ and $\alpha_0 = \alpha_1 = \alpha_2 = 1/3$. Different learning weights can be applied to each hop, as in Section 3.1.2, to concatenate multi-hop signals. See more examples in Appendix B.
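The PPNP filter of Eq. (13) is commonly approximated by exactly this kind of truncated power iteration (the APPNP scheme); a minimal sketch with an illustrative restart probability:

```python
import numpy as np

def ppnp_filter(A_rw, X, alpha=0.1, t=10):
    """Approximate alpha (I - (1 - alpha) A_rw)^{-1} X of Eq. (13) with a
    truncated power series (cf. the expansion in Eq. (17)): each step
    propagates and then teleports back to the input with probability alpha."""
    Z = X.copy()
    for _ in range(t):
        Z = (1 - alpha) * A_rw @ Z + alpha * X
    return Z

# Sanity check against the exact (dense) solution.
rng = np.random.default_rng(1)
A = (rng.random((8, 8)) < 0.3).astype(float); A = np.triu(A, 1); A += A.T
A_tilde = A + np.eye(8); A_rw = A_tilde / A_tilde.sum(1, keepdims=True)
X = rng.normal(size=(8, 2))
exact = 0.1 * np.linalg.solve(np.eye(8) - 0.9 * A_rw, X)
print(np.linalg.norm(ppnp_filter(A_rw, X) - exact))  # shrinks as t grows
```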
" }, { "heading": "3.2 REMARKS", "text": "In this section, we have built a bridge between graph convolution operations and optimization problems in the graph Fourier space, interpreting graph convolution operations through regularizers. To conclude, we rewrite the general form of the unified framework as follows.
Definition 1' Unified Graph Convolution Framework. Convolution-based graph neural networks can be reformulated (after approximation) as particular instances of the optimal solution of the following optimization problem:

$$\min_{\bar{X}} \alpha_0\sum_{i\in V}\Big(\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} - \underbrace{\epsilon\|\bar{x}(i)\Theta\|^2_{\tilde{D}}}_{Non\text{-}Conv}\Big) + \underbrace{\sum_{k=1}^{t}\alpha_k\sum_{i,j\in V}\underbrace{p^{(k)}_{ij}\|\bar{x}(i)\Theta^{(k)} - \bar{x}(j)\Theta^{(k)}\|^2_2}_{Attention\text{-}based}}_{Topology\text{-}based} + \lambda L_{reg}, \quad (18)$$

where $\sum_{k=0}^{t}\alpha_k = 1$, $\alpha_k \geq 0$ and $\sum_{j\in V} p_{ij} = \tilde{D}_{ii}, \forall i \in V$.
If we let $d$ be the feature dimension of $X$, then $\Theta, \Theta^{(k)} \in \mathbb{R}^{d\times d}$ are the corresponding learning weights. $L_{reg}$ corresponds to a personalized regularizer based on the framework, which can be effective when carefully designed, as we show in Section 4.
By establishing the unified framework, we interpret various convolution filters as carefully designed regularizers in the graph Fourier domain, which provides new insights into graph learning modules from the graph signal perspective. Several graph learning modules are reformulated as smoothing regularizers over the graph structure with different intentions. While vanilla GCNs focus on minimizing the variation over the local graph structure, attention-based and topology-based GCNs go a step further, concentrating on the differences between connected edges and on graph structure with a larger receptive field. This novel perspective enables a better understanding of the similarities and differences among many widely used GCNs, and may inspire new approaches for designing better models." }, { "heading": "4 TACKLING OVERSMOOTHING UNDER THE UNIFIED FRAMEWORK", "text": "Based on the proposed framework, we provide new insights into the limitations of GCNs and inspire a new line of work towards designing better graph learning models. As a showcase, we present a novel regularization technique under the framework to tackle the oversmoothing problem. We show that the newly designed regularizer can be easily implemented on other convolution-based networks with trivial adaptations and effectively improves the generalization performance of graph learning approaches." }, { "heading": "4.1 REGULARIZATION ON FEATURE VARIANCE", "text": "Here, we adopt the definition of feature-wise oversmoothing in (Zhao & Akoglu, 2019). Due to multiple layers of Laplacian smoothing, all features fall into the same subspace spanned by the dominant eigenvectors of the normalized adjacency matrix, which also corresponds to the situation described in (Klicpera et al., 2018). To tackle this problem, we propose to penalize features when they are close to each other. Specifically, we consider the pairwise distance between normalized features:

$$\delta(X) = \frac{1}{d^2}\sum_{i,j\in d}\big\|x_{\cdot i}/\|x_{\cdot i}\| - x_{\cdot j}/\|x_{\cdot j}\|\big\|^2_2, \quad (19)$$

where $d$ is the feature dimension and $x_{\cdot i} \in \mathbb{R}^n$ represents the $i$-th feature dimension over all nodes. Eq. (19) can thus be interpreted as a feature-variance regularizer, representing the distance between features after normalization. By adding this regularizer to the unified framework, the resulting filter drives different features apart.
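The quantity $\delta(X)$ is cheap to compute via a Gram-matrix identity, and a short numeric experiment (illustrative graph and signals) shows the collapse it is designed to detect:

```python
import numpy as np

def feature_variance(X, eps=1e-12):
    """delta(X) of Eq. (19): mean pairwise squared distance between the
    l2-normalized feature columns, using ||u - v||^2 = 2 - 2 u^T v for
    unit vectors. Values near 0 indicate oversmoothing."""
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + eps)
    d = X.shape[1]
    return float(2.0 - 2.0 * (Xn.T @ Xn).sum() / d ** 2)

# On a connected graph with nonnegative signals, repeated smoothing with
# A_rw collapses all feature columns toward multiples of the all-ones
# vector, driving delta(X) toward 0.
n = 20
A = np.zeros((n, n))
for i in range(n):                      # ring graph: guaranteed connected
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A_tilde = A + np.eye(n)
A_rw = A_tilde / A_tilde.sum(axis=1, keepdims=True)
X = np.random.default_rng(0).random((n, 8))
print(feature_variance(X))              # noticeably above 0 before smoothing
for _ in range(100):
    X = A_rw @ X
print(feature_variance(X))              # near 0 after many layers
```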
A graph convolution filter with regularized feature variance:
$$\mathrm{vec}(\bar{X}) = \Big(I_d \otimes \big[(\alpha_1+\alpha_2)I - \alpha_2 \tilde{A}_{rw}\big] - \alpha_3 \big[D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1}\big] \otimes \tilde{D}^{-1}\Big)^{-1} \mathrm{vec}(X) \quad (20)$$
is equivalent to the optimal solution of the following optimization:
$$\min_{\bar{X}} \; \alpha_1 \sum_{i \in V} \|\bar{x}(i)-x(i)\|^2_{\tilde{D}} + \alpha_2 \sum_{i,j \in V} a_{ij}\|\bar{x}(i)-\bar{x}(j)\|^2_2 - \alpha_3 \frac{1}{d} \sum_{i,j=1}^{d} \big\|\bar{x}_{\cdot i}/\|x_{\cdot i}\| - \bar{x}_{\cdot j}/\|x_{\cdot j}\|\big\|^2_2, \quad (21)$$
where $\alpha_1 > 0$ and $\alpha_2, \alpha_3 \geq 0$. For computational efficiency, we approximate $\|\bar{x}_{\cdot i}\|$ with $\|x_{\cdot i}\|$, as we assume that a single convolution filter has little effect on the norms of the features.

Computing the Kronecker products and the matrix inverse directly is expensive. Nevertheless, we can approximate Eq.(20) via Taylor expansion with an iterative algorithm. If we let:
$$A = (\alpha_1+\alpha_2)I - \alpha_2 \tilde{A}_{rw}, \qquad B = I_d, \quad (22)$$
$$C = -\alpha_3 \tilde{D}^{-1}, \qquad D = D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1}, \quad (23)$$
then a $t$-order approximation is given by:
$$\bar{X}^{(0)} = X, \quad (24)$$
$$\bar{X}^{(k+1)} = X + \bar{X}^{(k)} - A\bar{X}^{(k)}B - C\bar{X}^{(k)}D, \quad k = 0, 1, \dots, t-1. \quad (25)$$
Through this approximation, the computational overhead is greatly reduced. See details in Appendix A.
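The iterative update in Eqs.(24)-(25) avoids ever forming the $nd \times nd$ Kronecker matrix. A minimal NumPy sketch of this $t$-order approximation follows; the function name is ours, the hyperparameter defaults mirror the values reported in Appendix C, and we assume $\tilde{A}_{rw}$ and $\tilde{D}^{-1}$ are precomputed.

```python
import numpy as np

def variance_regularized_filter(X, A_rw, D_tilde_inv, alphas=(0.2, 0.8, 0.05), t=4):
    """t-order iterative approximation of the filter in Eq.(20), using the
    update X^(k+1) = X + X^(k) - A X^(k) B - C X^(k) D from Eqs.(24)-(25)."""
    a1, a2, a3 = alphas
    n, d = X.shape
    Dx_inv = np.diag(1.0 / np.linalg.norm(X, axis=0))   # D_X^{-1}, per-column norms
    A = (a1 + a2) * np.eye(n) - a2 * A_rw               # Eq.(22); B = I_d
    C = -a3 * D_tilde_inv                               # Eq.(23)
    D = Dx_inv @ (np.eye(d) - np.ones((d, d)) / d) @ Dx_inv
    X_bar = X.copy()
    for _ in range(t):
        X_bar = X + X_bar - A @ X_bar - C @ X_bar @ D   # Eq.(25)
    return X_bar
```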
The advantages of the feature variance regularization are threefold. First, the regularizer measures the difference between features, and therefore explicitly prevents all features from collapsing into the same subspace. Second, the modified convolution filter does not require additional training parameters, avoiding the risk of overfitting. Third, the regularizer is designed under the proposed unified framework, which means it can be easily implemented on other convolution-based networks as a plug-in module." }, { "heading": "4.2 DISCUSSION", "text": "Several works have shared insights on understanding and tackling oversmoothing. It is shown in (Li et al., 2018) that the graph convolution of GCN is a special form of Laplacian smoothing, and the authors compensate for long-range dependencies by co-training GCN with a random walk model. JKNet (Xu et al., 2018b) proved that the influence score between nodes converges to a fixed distribution as layers stack, thereby losing local information. As a remedy, they proposed concatenating layer-wise representations to mix structural information. More recently, Oono & Suzuki (2020) theoretically demonstrated that graph neural networks lose expressive power exponentially due to oversmoothing. Compared to the aforementioned works, our proposed method acts explicitly on the graph signals and can be easily implemented on other convolution-based networks as a plug-in module with trivial adaptations." }, { "heading": "4.3 EXPERIMENT", "text": "To verify the effectiveness of the regularizer, we empirically validate the proposed method on several widely used semi-supervised node classification benchmarks, covering both transductive and inductive settings. As stated in Section 4.1, our regularizer can be implemented on various convolution-based approaches under the unified graph convolution framework. We therefore consider three different versions, implementing the regularizer on vanilla GCNs, attention-based GCNs, and topology-based GCNs. We achieve state-of-the-art results on almost all settings, showing the effectiveness of tackling oversmoothing on graph-structured data.

Dataset and Experimental Setup. We conduct experiments on four real-world graph datasets. For transductive learning, we evaluate our method on the Cora, Citeseer, and Pubmed datasets, following the experimental setup in (Sen et al., 2008). PPI (Zitnik & Leskovec, 2017) is adopted for inductive learning. Dataset statistics and further experimental setup are presented in Appendix C. For comparison, we categorize state-of-the-art convolution-based graph neural networks into three classes, corresponding to the three versions of our proposed method. The first category is based on the vanilla GCN proposed by Kipf & Welling (2017), and includes GCN, FastGCN (Chen et al., 2018), SGC (Wu et al., 2019a), GIN (Xu et al., 2018a), and DGI (Veličković et al., 2019). Since GIN was not originally evaluated on citation networks, we implement GIN following the setting in (Xu et al., 2018a). The second category corresponds to attention-based approaches, including GAT (Veličković et al., 2018), AGNN (Thekumparampil et al., 2018), MoNet (Monti et al., 2017) and GatedGCN (Bresson & Laurent, 2017). The last category is topology-based GCNs, which utilize structural information in the multi-hop neighborhood. We consider APPNP (Klicpera et al., 2018), TAGCN (Du et al., 2017) and MixHop (Kapoor et al., 2019) as the baselines.

Transductive Learning. Table 1 presents the performance of our method and several state-of-the-art graph neural networks on the transductive learning datasets. For the three classes of convolution-based approaches, we implement our regularizer with GCN, GAT and APPNP, respectively, as comparisons with the other baselines. For a fair comparison, we adopt the same network structure, hyperparameters and training configurations as the baseline models. The proposed model achieves state-of-the-art results in all three settings. On all of the datasets, we observe a 0.5∼1.0% performance improvement after adopting the proposed regularizer. Notably, the proposed model achieves the largest improvement on the vanilla GCNs, as this simplest version suffers most from the oversmoothing problem. Meanwhile, when combined with GAT, the model achieves the highest results compared with almost all the baselines. Considering that the attention mechanism and the regularization against oversmoothing focus on local and global properties respectively, this can be an ideal combination for graph representation learning. We also conduct experiments on the three citation networks with random splits and present the results in Appendix D.

Inductive Learning. For the inductive learning task, we implement our method on the vanilla GCN and GAT, and adopt the same experimental setup. Table 2 presents the comparison results on the inductive learning dataset. Our model compares favorably with all the competitive baselines. On the PPI dataset, our model achieves a 0.5∼1% higher test Micro-F1 score, showing the effectiveness of applying our method in inductive settings.

Comparison with Other Related Works. To further validate our model, we compare the proposed regularizer with two state-of-the-art approaches for tackling oversmoothing, DropEdge (Rong et al., 2019) and PairNorm (Zhao & Akoglu, 2019). For a fair comparison, all approaches are applied to the vanilla GCN with 2∼8 layers, and we report the best performance on each of the three transductive datasets. As shown in Table 3, our regularizer achieves the best performance in all three settings. As PairNorm is more suitable when a subset of the nodes lack feature vectors, it is less competitive in the general setting.

Analysis. As stated above, the regularizer can be interpreted as the mean feature variance, which prevents different features from collapsing into the same subspace. To verify the effect of our method, we compute the mean pairwise distance (Eq.(19)) of the last hidden layer of GCN and GAT, with and without the regularizer, on the Cora dataset. Figure 1 shows the results for models with 2∼8 layers. The feature variances and the accuracies of the regularized models are consistently higher than those of the vanilla models, with clear gaps. Therefore, after applying the regularizer, features are more separated from each other and the oversmoothing problem is alleviated.
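For reference, the statistic in Eq.(19) used for this analysis can be computed in closed form from the Gram matrix of the normalized feature columns. The following NumPy sketch (the function name is ours) is one way to do it:

```python
import numpy as np

def feature_variance(H):
    """Mean pairwise distance between normalized feature columns, Eq.(19).
    H: (n, d) activation matrix of a hidden layer with nonzero columns."""
    Z = H / np.linalg.norm(H, axis=0, keepdims=True)  # normalize each column
    d = Z.shape[1]
    G = Z.T @ Z                                       # Gram matrix of columns
    # ||z_i - z_j||^2 = 2 - 2 z_i^T z_j for unit columns, summed over all pairs
    return (2 * d * d - 2 * G.sum()) / d ** 2
```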
" }, { "heading": "5 CONCLUSION", "text": "In this paper, we develop a unified graph convolution framework by connecting graph convolution filters with optimization problems in the graph Fourier space. We show that most convolution-based graph learning models are equivalent to adding carefully designed regularizers. Beyond the vanilla GCN, our framework extends to non-convolutional operations, attention-based GCNs and topology-based GCNs, which cover a large fraction of state-of-the-art graph learning models. On this basis, we propose a novel regularizer for tackling the oversmoothing problem as a showcase, demonstrating the effectiveness of designing new modules based on the framework. Through the unified framework, we provide a general methodology for understanding and relating different graph learning modules, with new insights on tackling common problems and improving the generalization performance of current graph neural networks in the graph Fourier domain. Meanwhile, the unified framework can also serve as a once-for-all platform for expert-designed modules on convolution-based approaches. We hope our work promotes the understanding of graph convolutional networks and inspires more insights in this field." }, { "heading": "A. PROOFS OF THE DEFINITIONS", "text": "Definition 2 Vanilla GCNs. Let $\{\bar{x}(i)\}_{i\in V}$ be the estimate of the input observation $\{x(i)\}_{i\in V}$. A low-pass filter:
$$\bar{X} = \tilde{A}_{rw} X, \quad (26)$$
is the first-order approximation of the optimal solution of the following optimization:
$$\min_{\bar{X}} \; \sum_{i \in V} \|\bar{x}(i) - x(i)\|^2_{\tilde{D}} + \sum_{i,j \in V} a_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2. \quad (27)$$

Proof. Let $l$ denote the objective function. We have
$$l = \mathrm{tr}[(\bar{X} - X)^T \tilde{D} (\bar{X} - X)] + \mathrm{tr}(\bar{X}^T L \bar{X}).$$
Then,
$$\frac{\partial l}{\partial \bar{X}} = 2\tilde{D}(\bar{X} - X) + 2L\bar{X}.$$
Setting $\frac{\partial l}{\partial \bar{X}} = 0$:
$$(\tilde{D} + L)\bar{X} = \tilde{D}X, \qquad (I + \tilde{L}_{rw})\bar{X} = X.$$
Since the eigenvalues of $\tilde{A}_{rw} = I - \tilde{L}_{rw}$ are bounded in norm by 1, $I + \tilde{L}_{rw}$ has eigenvalues in the range $[1, 3]$, which proves that $I + \tilde{L}_{rw}$ is positive definite. Therefore,
$$\bar{X} = (I + \tilde{L}_{rw})^{-1} X. \quad (28)$$
Unfortunately, solving the closed form of Eq.(28) is computationally expensive. Nevertheless, we can derive a simpler form, $\bar{X} \approx (I - \tilde{L}_{rw})X = \tilde{A}_{rw}X$, via a first-order Taylor approximation, which establishes the definition.
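As a quick numerical sanity check on this first-order approximation (our addition, not part of the original proof), one can compare the closed form of Eq.(28) with $\tilde{A}_{rw}X$ on a small random graph:

```python
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.maximum(A, A.T) + np.eye(6)            # symmetrize and add self-loops
A_rw = np.diag(1.0 / A.sum(axis=1)) @ A       # row-normalized adjacency
L_rw = np.eye(6) - A_rw                       # random-walk Laplacian
X = rng.standard_normal((6, 3))

exact = np.linalg.solve(np.eye(6) + L_rw, X)  # Eq.(28)
approx = A_rw @ X                             # first-order approximation
print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```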
Definition 3 Residual Connection. A graph convolution filter with residual connection:
$$\bar{X} = \tilde{A}_{rw}X + \epsilon X, \quad (29)$$
where $\epsilon > 0$ controls the strength of the residual connection, is the first-order approximation of the optimal solution of the following optimization:
$$\min_{\bar{X}} \; \sum_{i \in V} \big(\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} - \epsilon\|\bar{x}(i)\|^2_{\tilde{D}}\big) + \sum_{i,j \in V} a_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2. \quad (30)$$

Proof. Let $l$ denote the objective function. We have
$$l = \mathrm{tr}[(\bar{X} - X)^T \tilde{D}(\bar{X} - X)] - \epsilon\,\mathrm{tr}(\bar{X}^T \tilde{D}\bar{X}) + \mathrm{tr}(\bar{X}^T L\bar{X}).$$
Then,
$$\frac{\partial l}{\partial \bar{X}} = 2\tilde{D}(\bar{X} - X) + 2(L - \epsilon\tilde{D})\bar{X}.$$
Setting $\frac{\partial l}{\partial \bar{X}} = 0$:
$$[(1-\epsilon)\tilde{D} + L]\bar{X} = \tilde{D}X, \qquad \bar{X} = [(1-\epsilon)I + \tilde{L}_{rw}]^{-1}X = [I + (\tilde{L}_{rw} - \epsilon I)]^{-1}X.$$
Therefore, the first-order approximation of the optimal solution is
$$\bar{X} \approx [I - (\tilde{L}_{rw} - \epsilon I)]X = \tilde{A}_{rw}X + \epsilon X.$$

Definition 3’ Concatenation. A graph convolution filter concatenated with the input signal:
$$\bar{X} = \tilde{A}_{rw}X + \epsilon X\Theta\Theta^T, \quad (31)$$
is the first-order approximation of the optimal solution of the following optimization:
$$\min_{\bar{X}} \; \sum_{i \in V} \big(\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} - \epsilon\|\bar{x}(i)\Theta\|^2_{\tilde{D}}\big) + \sum_{i,j \in V} a_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2, \quad (32)$$
where $\epsilon > 0$ controls the strength of the concatenation and $\Theta$ contains the learning coefficients for the concatenated signal.

Proof. Let $l$ denote the objective function. We have
$$l = \mathrm{tr}[(\bar{X} - X)^T \tilde{D}(\bar{X} - X)] - \epsilon\,\mathrm{tr}\big((\bar{X}\Theta)^T \tilde{D}(\bar{X}\Theta)\big) + \mathrm{tr}(\bar{X}^T L\bar{X}).$$
Then,
$$\frac{\partial l}{\partial \bar{X}} = 2\tilde{D}(\bar{X} - X) + 2L\bar{X} - 2\epsilon\tilde{D}\bar{X}\Theta\Theta^T.$$
Setting $\frac{\partial l}{\partial \bar{X}} = 0$:
$$(\tilde{D} + L)\bar{X} - \epsilon\tilde{D}\bar{X}\Theta\Theta^T = \tilde{D}X, \qquad (I + \tilde{L}_{rw})\bar{X} - \epsilon\bar{X}\Theta\Theta^T = X.$$
With the Kronecker product operator $\otimes$ and a first-order Taylor expansion, we have
$$\mathrm{vec}(\bar{X}) = \big[(I \otimes (I + \tilde{L}_{rw})) - \epsilon((\Theta\Theta^T) \otimes I)\big]^{-1}\mathrm{vec}(X) \approx \big[2I - (I \otimes (I + \tilde{L}_{rw})) + \epsilon((\Theta\Theta^T) \otimes I)\big]\mathrm{vec}(X) = \mathrm{vec}\big(2X - (I + \tilde{L}_{rw})X + \epsilon X\Theta\Theta^T\big) = \mathrm{vec}(\tilde{A}_{rw}X + \epsilon X\Theta\Theta^T).$$

Definition 4 Attention-based GCNs. An attention-based graph convolution filter:
$$\bar{X} = PX, \quad (33)$$
is the first-order approximation of the optimal solution of the following optimization:
$$\min_{\bar{X}} \; \sum_{i \in V} \|\bar{x}(i) - x(i)\|^2_{\tilde{D}} + \sum_{i,j \in V} p_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2, \quad \text{s.t.} \; \sum_{j \in V} p_{ij} = \tilde{D}_{ii}, \forall i \in V. \quad (34)$$

Proof. Let $l$ denote the objective function. We have
$$l = \mathrm{tr}[(\bar{X} - X)^T \tilde{D}(\bar{X} - X)] + \mathrm{tr}(\bar{X}^T L\bar{X}).$$
Then,
$$\frac{\partial l}{\partial \bar{X}} = 2\tilde{D}(\bar{X} - X) + 2(\tilde{D} - \tilde{D}P)\bar{X}.$$
Setting $\frac{\partial l}{\partial \bar{X}} = 0$:
$$(2\tilde{D} - \tilde{D}P)\bar{X} = \tilde{D}X, \qquad (2I - P)\bar{X} = X.$$
Similarly, we can show that $(2I - P)$ is a positive definite matrix with eigenvalues in the range $[1, 3]$. Therefore,
$$\bar{X} = (2I - P)^{-1}X \approx PX.$$

Definition 5 & 6 Topology-based GCNs. Since most topology-based models adopt non-convolutional operations such as concatenation, we derive a more general objective function by combining the multi-hop terms with the non-convolutional operations:
$$\min_{\bar{X}} \; \alpha_0 \sum_{i \in V} \|\bar{x}(i) - x(i)\|^2_{\tilde{D}} + \sum_{k=1}^{t} \alpha_k \sum_{i,j \in V} a^{(k)}_{ij}\|\bar{x}(i)\Theta^{(k)} - \bar{x}(j)\Theta^{(k)}\|^2_2, \quad (35)$$
where $\sum_{k=0}^{t} \alpha_k = 1$, $\alpha_0 > 0$ and $\alpha_k \geq 0$, $k = 1, 2, \dots, t$. If we let $d$ be the feature dimension of $X$, then $\Theta^{(k)} \in \mathbb{R}^{d \times d}$ are the learning weights for the $k$-th hop neighborhood. Let $l$ denote the objective function; we have:
$$\frac{\partial l}{\partial \bar{X}} = \alpha_0\tilde{D}(\bar{X} - X) + \sum_{k=1}^{t} \alpha_k(\tilde{D} - \tilde{D}\tilde{A}^k_{rw})\bar{X}\Theta^{(k)}(\Theta^{(k)})^T.$$
Setting $\frac{\partial l}{\partial \bar{X}} = 0$, we have:
$$\alpha_0\bar{X} + \sum_{k=1}^{t} \alpha_k(I_n - \tilde{A}^k_{rw})\bar{X}\Theta^{(k)}(\Theta^{(k)})^T = \alpha_0 X.$$
Therefore, with the Kronecker product operator $\otimes$, we have
$$\Big[\alpha_0 I + \sum_{k=1}^{t} \big(\alpha_k\Theta^{(k)}(\Theta^{(k)})^T\big) \otimes (I_n - \tilde{A}^k_{rw})\Big]\mathrm{vec}(\bar{X}) = \alpha_0\,\mathrm{vec}(X). \quad (36)$$
We can observe that $\Theta^{(k)}(\Theta^{(k)})^T$ and $(I_n - \tilde{A}^k_{rw})$ have non-negative eigenvalues. Since the eigenvalues of a Kronecker product $(A \otimes B)$ are the products of the eigenvalues of $A$ and $B$, the matrix $\alpha_0 I + \sum_{k=1}^{t}(\alpha_k\Theta^{(k)}(\Theta^{(k)})^T) \otimes (I_n - \tilde{A}^k_{rw})$ is positive definite. Therefore, with a first-order Taylor expansion,
$$\mathrm{vec}(\bar{X}) = \alpha_0\Big[\alpha_0 I + \sum_{k=1}^{t}\big(\alpha_k\Theta^{(k)}(\Theta^{(k)})^T\big) \otimes (I_n - \tilde{A}^k_{rw})\Big]^{-1}\mathrm{vec}(X) \approx \alpha_0\,\mathrm{vec}\Big[(2-\alpha_0)X - \sum_{k=1}^{t}\alpha_k(I_n - \tilde{A}^k_{rw})X\Theta^{(k)}(\Theta^{(k)})^T\Big].$$
If we let
$$W^{(0)} = \frac{2-\alpha_0}{\alpha_0} I_d - \sum_{k=1}^{t}\frac{\alpha_k}{\alpha_0}\Theta^{(k)}(\Theta^{(k)})^T; \quad (37)$$
$$W^{(k)} = \Theta^{(k)}(\Theta^{(k)})^T, \quad k = 1, 2, \dots, t; \quad (38)$$
we can denote the convolution filter as:
$$\bar{X} = \sum_{k=0}^{t} \alpha_k \tilde{A}^k_{rw} X W^{(k)}. \quad (39)$$
As stated in Section 2.2.2, although the learning weights have a constrained expressive capability, this can be compensated by the subsequent feature learning module.
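The vectorization steps above all rely on the identity $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$ for column-stacked vectorization. A quick numerical check of this identity (our addition, for the reader's convenience):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 3))

lhs = (A @ X @ B).flatten(order="F")          # vec() stacks columns
rhs = np.kron(B.T, A) @ X.flatten(order="F")
assert np.allclose(lhs, rhs)
```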
We omit the proofs of Definitions 5 and 6, as they can be viewed as particular instances of (35).

Definition 7 Regularized Feature Variance. Let $\otimes$ be the Kronecker product operator and $\mathrm{vec}(X) \in \mathbb{R}^{nd}$ be the vectorized signal $X$. Let $D_X$ be the diagonal matrix with entries $D_X(i, i) = \|x_{\cdot i}\|_2$. A graph convolution filter with regularized feature variance:
$$\mathrm{vec}(\bar{X}) = \Big(I_d \otimes \big[(\alpha_1+\alpha_2)I - \alpha_2\tilde{A}_{rw}\big] - \alpha_3\big[D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1}\big] \otimes \tilde{D}^{-1}\Big)^{-1}\mathrm{vec}(X) \quad (40)$$
is equivalent to the optimal solution of the following optimization:
$$\min_{\bar{X}} \; \alpha_1\sum_{i \in V}\|\bar{x}(i) - x(i)\|^2_{\tilde{D}} + \alpha_2\sum_{i,j \in V}a_{ij}\|\bar{x}(i) - \bar{x}(j)\|^2_2 - \alpha_3\frac{1}{d}\sum_{i,j=1}^{d}\big\|\bar{x}_{\cdot i}/\|x_{\cdot i}\| - \bar{x}_{\cdot j}/\|x_{\cdot j}\|\big\|^2_2, \quad (41)$$
where $\alpha_1 > 0$ and $\alpha_2, \alpha_3 \geq 0$. For computational efficiency, we approximate $D_{\bar{X}}$ with $D_X$, as we assume that a single convolution filter has little effect on the norms of the features.

Proof. Let $l$ denote the objective function. We have
$$l = \alpha_1\mathrm{tr}[(\bar{X} - X)^T\tilde{D}(\bar{X} - X)] + \alpha_2\mathrm{tr}(\bar{X}^T L\bar{X}) - \alpha_3\mathrm{tr}\big[\bar{X}D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1}\bar{X}^T\big].$$
Then,
$$\frac{\partial l}{\partial \bar{X}} = 2\alpha_1\tilde{D}(\bar{X} - X) + 2\alpha_2 L\bar{X} - 2\alpha_3\bar{X}D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1}.$$
Setting $\frac{\partial l}{\partial \bar{X}} = 0$:
$$\big[(\alpha_1+\alpha_2)I - \alpha_2\tilde{A}_{rw}\big]\bar{X} - \alpha_3\tilde{D}^{-1}\bar{X}D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1} = \alpha_1 X.$$
Absorbing the scalar factor $\alpha_1$ into $\bar{X}$ (a constant rescaling that can be compensated by the subsequent feature learning module) and applying the Kronecker product operator $\otimes$, we have
$$\Big(I_d \otimes \big[(\alpha_1+\alpha_2)I - \alpha_2\tilde{A}_{rw}\big] - \alpha_3\big[D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1}\big] \otimes \tilde{D}^{-1}\Big)\mathrm{vec}(\bar{X}) = \mathrm{vec}(X). \quad (42)$$
By choosing a small positive $\alpha_3$, the matrix in Eq.(42) remains positive definite, which completes the proof.

Similarly, we can derive a simpler form via Taylor approximation. If we let:
$$A = (\alpha_1+\alpha_2)I - \alpha_2\tilde{A}_{rw}, \qquad B = I_d, \quad (43)$$
$$C = -\alpha_3\tilde{D}^{-1}, \qquad D = D_X^{-1}\big(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T\big)D_X^{-1}, \quad (44)$$
then the first-order approximation of Eq.(40) is:
$$\mathrm{vec}(\bar{X}) = (B^T \otimes A + D^T \otimes C)^{-1}\mathrm{vec}(X) \approx (2I - B^T \otimes A - D^T \otimes C)\mathrm{vec}(X) = \mathrm{vec}(2X - AXB - CXD).$$
Additionally, we can derive a $t$-order approximation:
$$\mathrm{vec}(\bar{X}^{(t)}) = \Big(I + \sum_{i=1}^{t}\big[I - (B^T \otimes A + D^T \otimes C)\big]^i\Big)\mathrm{vec}(X).$$
However, it is computationally expensive to calculate the Kronecker product directly. Therefore, we use an iterative algorithm. For any $0 \leq k < t$:
$$\mathrm{vec}(\bar{X}^{(k+1)}) = \Big(I + \sum_{i=1}^{k+1}\big[I - (B^T \otimes A + D^T \otimes C)\big]^i\Big)\mathrm{vec}(X) = \big[I - (B^T \otimes A + D^T \otimes C)\big]\mathrm{vec}(\bar{X}^{(k)}) + \mathrm{vec}(X) = \mathrm{vec}\big(X + \bar{X}^{(k)} - A\bar{X}^{(k)}B - C\bar{X}^{(k)}D\big). \quad (45)$$" }, { "heading": "B. REFORMULATION EXAMPLES", "text": "The reformulation examples of GCN derivatives are presented in Table 4." }, { "heading": "C. DATA STATISTICS AND EXPERIMENTAL SETUPS", "text": "We conduct experiments on four real-world graph datasets, whose statistics are listed in Table 5. For transductive learning, we evaluate our method on the Cora, Citeseer, and Pubmed datasets, following the experimental setup in (Sen et al., 2008). There are 20 labeled nodes per class for training, and all node features are available. 500 nodes are used for validation, and generalization performance is tested on 1000 nodes with unseen labels. PPI (Zitnik & Leskovec, 2017) is adopted for inductive learning; it is a protein-protein interaction dataset containing 20 graphs for training, 2 for validation and 2 for testing, where the testing graphs remain unobserved during training.

To ensure a fair comparison with other methods, we implement our module without interfering with the original network structure. In all three settings, we use two convolution layers with hidden dimension h = 64. We set $\alpha_1 = 0.2$, $\alpha_2 = 0.8$ and $\alpha_3 = 0.05$ for all four datasets. We apply L2 regularization with $\lambda = 0.0005$ and use dropout on both layers.
For the training strategy, we initialize weights using the scheme described in (Glorot & Bengio, 2010) and follow the method proposed in GCN, adopting early stopping if the validation loss does not decrease for a certain number of consecutive epochs. The implementations of the baseline models are based on the PyTorch-Geometric library (Fey & Lenssen, 2019) in all experiments." }, { "heading": "D. RANDOM SPLITS", "text": "As illustrated in (Shchur et al., 2018), using the same train/validation/test splits of the same datasets precludes a fair comparison of different architectures. Therefore, we follow the setup in (Shchur et al., 2018) and evaluate the performance of our model on the three citation networks with random splits. For each dataset, we use 20 labeled nodes per class as the training set, 30 nodes per class as the validation set, and the rest as the test set. For every model, we choose the hyperparameters that achieve the best average accuracy on the Cora and Citeseer datasets and apply them to the Pubmed dataset.

Table 6 shows the results on the three citation networks under the random-split setting. As we can observe, our model consistently achieves higher performance on all datasets. On Citeseer, our model achieves higher accuracy than on the original split. On Cora and Pubmed, the test accuracies of our model are comparable to the original split, while most of the baselines suffer a serious decline." }, { "heading": "E. TIME CONSUMPTION", "text": "As shown in Eq.(20), the computational cost of the graph filter with the regularizer is greatly increased by the Kronecker product and matrix inverse operations. Nevertheless, we approximate the filter with the iterative algorithm stated in Eq.(25), yielding an efficient implementation. To empirically verify the computational efficiency, we conduct experiments on Cora and report the training and test time of several GCN models on a single RTX 2080 Ti GPU. Due to the early stopping rule (see details in Appendix C), the number of training epochs differs across models. The results are shown in Table 7. As we can observe, when combined with vanilla GCNs, the training and test time of our model is similar to GAT and AGNN and faster than APPNP." }, { "heading": "F. ABLATION STUDY", "text": "To analyze the effect of the regularization strength, we conduct experiments on the three transductive datasets and present the results in Table 8. As we can observe, with a reasonable choice of the regularization strength, our approach achieves consistent improvements under all settings. However, when the regularization strength is too large, the training procedure becomes unstable and the model performance decreases severely." } ]
2020
null
SP:dd2a50abff85d2b52b02dfe27cd42e443ea265cf
[ "This article proposes a benchmark of off-policy evaluation, which provides different metrics for policy ranking, evaluation and selection. Offline metrics are provided by evaluating the value function of logged data, and then evaluating absolute error, rank correlation and regret. Verify the effectiveness of different offline evaluation methods. This article provides two evaluation scenarios, one is DOPE RL unplugged, and the other is D4RL. In the experiment, the author verified the benchmark proposed in this article in the MuJoCo environment to evaluate the effectiveness of different offline evaluation methods." ]
Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making. The ability to learn offline is particularly important in many real-world domains, such as in healthcare, recommender systems, or robotics, where online data collection is an expensive and potentially dangerous process. Being able to accurately evaluate and select high-performing policies without requiring online interaction could yield significant benefits in safety, time, and cost for these applications. While many OPE methods have been proposed in recent years, comparing results between papers is difficult because currently there is a lack of a comprehensive and unified benchmark, and measuring algorithmic progress has been challenging due to the lack of difficult evaluation tasks. In order to address this gap, we present a collection of policies that in conjunction with existing offline datasets can be used for benchmarking off-policy evaluation. Our tasks include a range of challenging high-dimensional continuous control problems, with wide selections of datasets and policies for performing policy selection. The goal of our benchmark is to provide a standardized measure of progress that is motivated from a set of principles designed to challenge and test the limits of existing OPE methods. We perform an evaluation of state-of-the-art algorithms and provide open-source access to our data and code to foster future research in this area†.
[ { "affiliations": [], "name": "Justin Fu" }, { "affiliations": [], "name": "Mohammad Norouzi" }, { "affiliations": [], "name": "Ofir Nachum" }, { "affiliations": [], "name": "George Tucker" }, { "affiliations": [], "name": "Ziyu Wang" }, { "affiliations": [], "name": "Alexander Novikov" }, { "affiliations": [], "name": "Mengjiao Yang" }, { "affiliations": [], "name": "Michael R. Zhang" }, { "affiliations": [], "name": "Yutian Chen" }, { "affiliations": [], "name": "Aviral Kumar" }, { "affiliations": [], "name": "Cosmin Paduraru" }, { "affiliations": [], "name": "Sergey Levine" }, { "affiliations": [], "name": "Tom Le Paine" } ]
[ { "authors": [ "Gabriel Barth-Maron", "Matthew W. Hoffman", "David Budden", "Will Dabney", "Dan Horgan", "Dhruva TB", "Alistair Muldal", "Nicolas Heess", "Timothy Lillicrap" ], "title": "Distributional policy gradients", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Léon Bottou", "Jonas Peters", "Joaquin Quiñonero-Candela", "Denis X Charles", "D Max Chickering", "Elon Portugaly", "Dipankar Ray", "Patrice Simard", "Ed Snelson" ], "title": "Counterfactual reasoning and learning systems: The example of computational advertising", "venue": "The Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shayan Doroudi", "Philip S Thomas", "Emma Brunskill" ], "title": "Importance sampling for fair policy selection", "venue": "Grantee Submission,", "year": 2017 }, { "authors": [ "Miroslav Dudík", "Dumitru Erhan", "John Langford", "Lihong Li" ], "title": "Doubly robust policy evaluation and optimization", "venue": "Statistical Science,", "year": 2014 }, { "authors": [ "Alain Dutech", "Timothy Edmunds", "Jelle Kok", "Michail Lagoudakis", "Michael Littman", "Martin Riedmiller", "Bryan Russell", "Bruno Scherrer", "Richard Sutton", "Stephan Timmer" ], "title": "Reinforcement learning benchmarks and bake-offs ii", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 2005 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Ofir Nachum", "George Tucker", "Sergey Levine" ], "title": "D4rl: Datasets for deep data-driven reinforcement learning", "venue": "arXiv preprint arXiv:2004.07219,", "year": 2020 }, { "authors": [ "Scott Fujimoto", "Herke Hoof", "David Meger" ], "title": "Addressing function approximation error in actorcritic methods", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alexandre Gilotte", "Clément Calauzènes", "Thomas Nedelec", "Alexandre Abraham", "Simon Dollé" ], "title": "Offline a/b testing for recommender systems", "venue": "In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining,", "year": 2018 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy Lillicrap", "Sergey Levine" ], "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "Caglar Gulcehre", "Ziyu Wang", "Alexander Novikov", "Tom Le Paine", "Sergio Gómez Colmenarejo", "Konrad Zolna", "Rishabh Agarwal", "Josh Merel", "Daniel Mankowitz", "Cosmin Paduraru" ], "title": "Rl unplugged: Benchmarks for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.13888,", "year": 2020 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Josiah Hanna", "Scott Niekum", "Peter 
Stone" ], "title": "Importance sampling policy evaluation with an estimated behavior policy", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Milos Hauskrecht", "Hamish Fraser" ], "title": "Planning treatment of ischemic heart disease with partially observable markov decision processes", "venue": "Artificial Intelligence in Medicine,", "year": 2000 }, { "authors": [ "Matt Hoffman", "Bobak Shahriari", "John Aslanides", "Gabriel Barth-Maron", "Feryal Behbahani", "Tamara Norman", "Abbas Abdolmaleki", "Albin Cassirer", "Fan Yang", "Kate Baumli" ], "title": "Acme: A research framework for distributed reinforcement learning", "venue": "arXiv preprint arXiv:2006.00979,", "year": 2020 }, { "authors": [ "Alexander Irpan", "Kanishka Rao", "Konstantinos Bousmalis", "Chris Harris", "Julian Ibarz", "Sergey Levine" ], "title": "Off-policy evaluation via off-policy classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nan Jiang", "Lihong Li" ], "title": "Doubly robust off-policy value evaluation for reinforcement learning", "venue": "arXiv preprint arXiv:1511.03722,", "year": 2015 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke" ], "title": "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation", "venue": "arXiv preprint arXiv:1806.10293,", "year": 2018 }, { "authors": [ "Alex Kendall", "Jeffrey Hawke", "David Janz", "Przemyslaw Mazur", "Daniele Reda", "John-Mark Allen", "Vinh-Dieu Lam", "Alex Bewley", "Amar Shah" ], "title": "Learning to drive in a day", "venue": "In 2019 International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Jens Kober", "J Andrew Bagnell", "Jan Peters" ], "title": "Reinforcement learning in robotics: A survey", "venue": "The International Journal of Robotics Research,", "year": 2013 }, { "authors": [ "Ilya Kostrikov", "Ofir Nachum" ], "title": "Statistical bootstrapping for uncertainty estimation in off-policy evaluation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Hoang M Le", "Cameron Voloshin", "Yisong Yue" ], "title": "Batch policy learning under constraints", "venue": "arXiv preprint arXiv:1903.08738,", "year": 2019 }, { "authors": [ "Lihong Li", "Wei Chu", "John Langford", "Robert E Schapire" ], "title": "A contextual-bandit approach to personalized news article recommendation", "venue": "In Proceedings of the 19th international conference on World wide web,", "year": 2010 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Travis Mandel", "Yun-En Liu", "Sergey Levine", "Emma Brunskill", "Zoran Popovic" ], "title": "Offline policy evaluation across representations with applications to educational games", "venue": "In AAMAS,", "year": 2014 }, { "authors": [ "Martino Migliavacca", "Alessio Pecorino", "Matteo Pirotta", "Marcello Restelli", "Andrea Bonarini" ], "title": "Fitted policy 
search: Direct policy search using a batch reinforcement learning approach", "venue": "In 3rd International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS", "year": 2010 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "In NIPS Deep Learning Workshop", "year": 2013 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc G. Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "arXiv preprint arXiv:1606.02647,", "year": 2016 }, { "authors": [ "Xinkun Nie", "Emma Brunskill", "Stefan Wager" ], "title": "Learning when-to-treat policies", "venue": "arXiv preprint arXiv:1905.09751,", "year": 2019 }, { "authors": [ "Cosmin Paduraru" ], "title": "Planning with approximate and learned models of markov decision processes", "venue": null, "year": 2007 }, { "authors": [ "Tom Le Paine", "Cosmin Paduraru", "Andrea Michi", "Caglar Gulcehre", "Konrad Zolna", "Alexander Novikov", "Ziyu Wang", "Nando de Freitas" ], "title": "Hyperparameter selection for offline reinforcement learning", "venue": null, "year": 2007 }, { "authors": [ "Doina Precup" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "Computer Science Department Faculty Publication Series, pp", "year": 2000 }, { "authors": [ "Aniruddh Raghu", "Omer Gottesman", "Yao Liu", "Matthieu Komorowski", "Aldo Faisal", "Finale Doshi-Velez", "Emma Brunskill" ], "title": "Behaviour policy estimation in off-policy policy evaluation: Calibration matters", "venue": "arXiv preprint arXiv:1807.01066,", "year": 2018 }, { "authors": [ "Aravind Rajeswaran", "Vikash Kumar", "Abhishek Gupta", "Giulia Vezzani", "John Schulman", "Emanuel Todorov", "Sergey Levine" ], "title": "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations", "venue": "arXiv preprint arXiv:1709.10087,", "year": 2017 }, { "authors": [ "Martin Riedmiller", "Jan Peters", "Stefan Schaal" ], "title": "Evaluation of policy gradient methods and variants on the cart-pole benchmark", "venue": "IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning,", "year": 2007 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Noah Y Siegel", "Jost Tobias Springenberg", "Felix Berkenkamp", "Abbas Abdolmaleki", "Michael Neunert", "Thomas Lampe", "Roland Hafner", "Martin Riedmiller" ], "title": "Keep doing what worked: Behavioral modelling priors for offline reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Peter Stone", "Richard S Sutton" ], "title": "Scaling reinforcement learning toward robocup soccer", "venue": "In Icml,", "year": 2001 }, { "authors": [ "Richard S Sutton", "Hamid Reza Maei", "Doina Precup", "Shalabh Bhatnagar", "David Silver", "Csaba Szepesvári", "Eric Wiewiora" ], "title": "Fast gradient-descent methods for temporal-difference learning with linear function 
approximation", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Richard S Sutton", "A Rupam Mahmood", "Martha White" ], "title": "An emphatic approach to the problem of off-policy temporal-difference learning", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Counterfactual risk minimization: Learning from logged bandit feedback", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Adith Swaminathan", "Akshay Krishnamurthy", "Alekh Agarwal", "Miro Dudik", "John Langford", "Damien Jose", "Imed Zitouni" ], "title": "Off-policy evaluation for slate recommendation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Devinder Thapa", "In-Sung Jung", "Gi-Nam Wang" ], "title": "Agent based decision support system using reinforcement learning under emergency circumstances", "venue": "In International Conference on Natural Computation,", "year": 2005 }, { "authors": [ "Georgios Theocharous", "Philip S Thomas", "Mohammad Ghavamzadeh" ], "title": "Personalized ad recommendation systems for life-time value optimization with guarantees", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Philip Thomas", "Emma Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Philip S Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High-confidence off-policy evaluation", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Cameron Voloshin", "Hoang M Le", "Nan Jiang", "Yisong Yue" ], "title": "Empirical study of off-policy policy evaluation for reinforcement learning", "venue": null, "year": 1911 }, { "authors": [ "Yu-Xiang Wang", "Alekh Agarwal", "Miroslav Dudık" ], "title": "Optimal and adaptive off-policy evaluation in contextual bandits", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Ziyu Wang", "Alexander Novikov", "Konrad Żołna", "Jost Tobias Springenberg", "Scott Reed", "Bobak Shahriari", "Noah Siegel", "Josh Merel", "Caglar Gulcehre", "Nicolas Heess", "Nando de Freitas" ], "title": "Critic regularized regression", "venue": null, "year": 2006 }, { "authors": [ "Junfeng Wen", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Batch stationary distribution estimation", "venue": "arXiv preprint arXiv:2003.00722,", "year": 2020 }, { "authors": [ "Yuan Xie", "Boyi Liu", "Qiang Liu", "Zhaoran Wang", "Yuan Zhou", "Jian Peng" ], "title": "Off-policy evaluation and 
learning from logged bandit feedback: Error reduction via surrogate policy", "venue": "arXiv preprint arXiv:1808.00232,", "year": 2018 }, { "authors": [ "Mengjiao Yang", "Ofir Nachum", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Off-policy evaluation via the regularized lagrangian", "venue": "arXiv preprint arXiv:2007.03438,", "year": 2020 }, { "authors": [ "Michael R Zhang", "Thomas Paine", "Ofir Nachum", "Cosmin Paduraru", "George Tucker", "ziyu wang", "Mohammad Norouzi" ], "title": "Autoregressive dynamics models for offline policy evaluation and optimization", "venue": "In International Conference on Learning Representations,", "year": 2021 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction, such as in robotics (Kober et al., 2013), board games and video games (Tesauro, 1995; Mnih et al., 2013; Vinyals et al., 2019), and recommender systems (Aggarwal et al., 2016). However, this sort of active online interaction is often impractical for real-world problems, where active data collection can be costly (Li et al., 2010), dangerous (Hauskrecht & Fraser, 2000; Kendall et al., 2019), or time consuming (Gu et al., 2017). Batch (or offline) reinforcement learning, has been studied extensively in domains such as healthcare (Thapa et al., 2005; Raghu et al., 2018), recommender systems (Dudík et al., 2014; Theocharous et al., 2015; Swaminathan et al., 2017), education (Mandel et al., 2014), and robotics (Kalashnikov et al., 2018). A major challenge with such methods is the off-policy evaluation (OPE) problem, where one must evaluate the expected performance of policies solely from offline data. This is critical for several reasons, including providing high-confidence guarantees prior to deployment (Thomas et al., 2015), and performing policy improvement and model selection (Bottou et al., 2013; Doroudi et al., 2017).\nThe goal of this paper is to provide a standardized benchmark for evaluating OPE methods. Although considerable theoretical (Thomas & Brunskill, 2016; Swaminathan & Joachims, 2015; Jiang & Li, 2015; Wang et al., 2017; Yang et al., 2020) and practical progress (Gilotte et al., 2018; Nie et al., 2019; Kalashnikov et al., 2018) on OPE algorithms has been made in a range of different domains, there are few broadly accepted evaluation tasks that combine complex, high-dimensional problems ∗Equally major contributors. †Policies and evaluation code are available at https://github.com/google-research/deep_ ope. See Section 5 for links to modelling code.\ncommonly explored by modern deep reinforcement learning algorithms (Bellemare et al., 2013; Brockman et al., 2016) with standardized evaluation protocols and metrics. Our goal is to provide a set of tasks with a range of difficulty, excercise a variety of design properties, and provide policies with different behavioral patterns in order to establish a standardized framework for comparing OPE algorithms. We put particular emphasis on large datasets, long-horizon tasks, and task complexity to facilitate the development of scalable algorithms that can solve high-dimensional problems.\nOur primary contribution is the Deep Off-Policy Evaluation (DOPE) benchmark. DOPE is designed to measure the performance of OPE methods by 1) evaluating on challenging control tasks with properties known to be difficult for OPE methods, but which occur in real-world scenarios, 2) evaluating across a range of policies with different values, to directly measure performance on policy evaluation, ranking and selection, and 3) evaluating in ideal and adversarial settings in terms of dataset coverage and support. These factors are independent of task difficulty, but are known to have a large impact on OPE performance. To achieve 1, we selected tasks on a set of design principles outlined in Section 3.1. To achieve 2, for each task we include 10 to 96 policies for evaluation and devise an evaluation protocol that measures policy evaluation, ranking, and selection as outlined in Section 3.2. 
To achieve 3, we provide two domains with differing dataset coverage and support properties, described in Section 4. Finally, to enable an easy-to-use research platform, we provide the datasets, target policies, evaluation API, and the recorded results of state-of-the-art algorithms (presented in Section 5) as open-source." }, { "heading": "2 BACKGROUND", "text": "We briefly review the off-policy evaluation (OPE) problem setting. We consider Markov decision processes (MDPs), defined by a tuple $(S, A, T, R, \rho_0, \gamma)$, with state space $S$, action space $A$, transition distribution $T(s'|s, a)$, initial state distribution $\rho_0(s)$, reward function $R(s, a)$ and discount factor $\gamma \in (0, 1]$. In reinforcement learning, we are typically concerned with optimizing or estimating the performance of a policy $\pi(a|s)$. The performance of a policy is commonly measured by the policy value $V^\pi$, defined as the expected sum of discounted rewards:
$$V^\pi := \mathbb{E}_{s_0 \sim \rho_0,\, s_{1:\infty},\, a_{0:\infty} \sim \pi}\Big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\Big]. \quad (1)$$
If we have access to state and action samples collected from a policy $\pi$, then we can use the sample mean of observed returns to estimate the value function above. However, in off-policy evaluation we are typically interested in estimating the value of a policy when the data is collected from a separate behavior policy $\pi_B(a|s)$. This setting can arise, for example, when data is being generated online from another process, or in the purely offline case when we have a historical dataset.

In this work we consider the latter, purely offline setting. The typical setup for this problem formulation is that we are provided with a discount $\gamma$, a dataset of trajectories collected from a behavior policy $D = \{(s_0, a_0, r_0, s_1, \dots)\}$, and optionally the action probabilities for the behavior policy $\pi_B(a_t|s_t)$. In many practical applications, logging action propensities is not possible, for example when the behavior policy is a mix of ML and hard-coded business logic. For this reason, we focus on the setting without propensities to encourage future work on behavior-agnostic OPE methods. For the methods that require propensities, we estimate the propensities with behavior cloning.

The objective can take multiple flavors, as shown in Fig. 1. A common task in OPE is to estimate the performance, or value, of a policy $\pi$ (which may not be the same as $\pi_B$) so that the estimated value is as close as possible to $V^\pi$ under a metric such as MSE or absolute error. A second task is to perform policy selection, where the goal is to select the best policy or set of policies out of a group of candidates. This setup corresponds to how OPE is commonly used in practice, which is to find the best-performing strategy out of a pool when online evaluation is too expensive to be feasible.
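To ground the notation, here is a minimal sketch of the on-policy Monte Carlo estimate of $V^\pi$ in Eq.(1) from sampled episodes (our illustration; we assume a standard Gym-style `env` interface and a `policy` callable mapping observations to actions):

```python
import numpy as np

def mc_policy_value(env, policy, gamma=0.995, num_episodes=1000):
    """On-policy Monte Carlo estimate of V^pi (Eq. 1): mean discounted return."""
    returns = []
    for _ in range(num_episodes):
        obs, done, ret, disc = env.reset(), False, 0.0, 1.0
        while not done:
            obs, reward, done, info = env.step(policy(obs))
            ret += disc * reward
            disc *= gamma
        returns.append(ret)
    return np.mean(returns)
```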
}, { "heading": "3.1 TASK PROPERTIES", "text": "We describe our motivating properties for selecting tasks for the benchmark as follows:\nHigh Dimensional Spaces (H) High-dimensionality is a key-feature in many real-world domains where it is difficult to perform feature engineering, such as in robotics, autonomous driving, and more. In these problems, it becomes challenging to accurately estimate quantities such as the value function without the use of high-capacity models such a neural networks and large datasets with wide state coverage. Our benchmark contains complex continuous-space tasks which exercise these challenges.\nLong Time-Horizon (L) Long time horizon tasks are known to present difficult challenges for OPE algorithms. Some algorithms have difficulty doing credit assignment for these tasks. This can be made worse as the state dimension or action dimension increases.\nSparse Rewards (R) Sparse reward tasks increase the difficulty of credit assignment and add exploration challenges, which may interact with data coverage in the offline setting. We include a range robotics and navigation tasks which are difficult to solve due to reward sparsity.\nTemporally extended control (T) The ability to make decisions hierarchically is major challenge in many reinforcement learning applications. We include two navigation tasks which require high-level planning in addition to low-level control in order to simulate the difficulty in such problems.\n3.2 EVALUATION PROTOCOL\nThe goal of DOPE to provide metrics for policy ranking, evaluation and selection. Many existing OPE methods have only been evaluated on point estimates of value such as MSE, but policy selection is an important, practical use-case of OPE. In order to explicitly measure the quality of using OPE for policy selection, we provide a set of policies with varying value, and devise two metrics that measure how well OPE methods can rank policies.\nFor each task we include a dataset of logged experiencesD, and a set of policies {π1, π2, ..., πN} with varying values. For each policy, OPE algorithms must use D to produce an estimate of the policy’s value. For evaluation of these\nestimates, we provide \"ground truth values\" {V π1 , V π2 , ..., V πN } that are computed by running the policy forM ≥ 1000 episodes, where the exact value ofM is given by the number of episodes needed to lower the error bar on the ground truth values to 0.666. The estimated values are then compared to these ground truth values using three different metrics encompassing both policy evaluation and selection (illustrated in Figure 2; see Appendix A.1 for mathematical definitions).\nAbsolute Error This metric measures estimate accuracy instead of its usefulness for ranking. Error is the most commonly used metric to assess performance of OPE algorithms. We opted to use absolute error instead of MSE to be robust to outliers.\nRegret@k This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. It is computed by identifying the top-k policies according to the estimated returns. Regret@k is the difference between the actual expected return of the best policy in the entire set, and the actual value of the best policy in the top-k set.\nRank correlation This metric directly measures how well estimated values rank policies, by computing the correlation between ordinal rankings according by the OPE estimates and ordinal rankings according to the ground truth values." 
}, { "heading": "4 DOMAINS", "text": "DOPE contains two domains designed to provide a more comprehensive picture of how well OPE methods perform in different settings. These two domains are constructed using two benchmarks previously proposed for offline reinforcement learning: RL Unplugged (Gulcehre et al., 2020) and D4RL (Fu et al., 2020), and reflect the challenges found within them.\nThe DOPE RL Unplugged domain is constrained in two important ways: 1) the data is always generated using online RL training, ensuring there is adequate coverage of the state-action space, and 2) the policies are generated by applying offline RL algorithms to the same dataset we use for evaluation, ensuring that the behavior policy and evaluation policies induce similar state-action distributions. Using it, we hope to understand how OPE methods work as task complexity increases from simple Cartpole tasks to controlling a Humanoid body while controlling for ideal data.\nOn the other hand, the DOPE D4RL domain has: 1) data from various sources (including random exploration, human teleoperation, and RL-trained policies with limited exploration), which results in varying levels of coverage of the state-action space, and 2) policies that are generated using online RL algorithms, making it less likely that the behavior and evaluation policies share similar induced state-action distributions. Both of these result in distribution shift which is known to be challenging for OPE methods, even in simple tasks. So, using it we hope to measure how well OPE methods work in more practical data settings." }, { "heading": "4.1 DOPE RL UNPLUGGED", "text": "DeepMind Control Suite (Tassa et al., 2018) is a set of control tasks implemented in MuJoCo (Todorov et al., 2012). We consider the subset included in RL Unplugged. This subset includes tasks that cover a range of difficulties. From Cartpole swingup, a simple task with a single degree of freedom, to Humanoid run which involves control of a complex bodies with 21 degrees of freedom. All tasks use the default feature representation of the system state, including proprioceptive information such as joint positions and velocity, and additional sensor information and target position where appropriate. The observation dimension ranges from 5 to 67.\nDatasets and policies We train four offline RL algorithms (D4PG (Barth-Maron et al., 2018), ABM (Siegel et al., 2020), CRR (Wang et al., 2020) and behavior cloning), varying their hyperparameters. For each algorithm-task-hyperparameter combination, we train an agent with 3 random seeds on the DM Control Suite dataset from RL Unplugged and record policy snapshots at exponentially increasing intervals (after 25k learner steps, 50k, 100K, 200K, etc). Following Gulcehre et al. (2020), we consider a deterministic policy for D4PG and stochastic policies for BC, ABM and CRR. The datasets are taken from the RL Unplugged benchmark, where they were created by training multiple (online) RL agents and collecting both successful and unsuccessful episodes throughout training. All offline RL algorithms are implemented using the Acme framework (Hoffman et al., 2020)." }, { "heading": "4.2 DOPE D4RL", "text": "Gym-MuJoCo tasks. Gym-MuJoCo consists of several continuous control tasks implemented within the MuJoCo simulator (Todorov et al., 2012) and provided in the OpenAI Gym (Brockman et al., 2016) benchmark for online RL. We include the HalfCheetah, Hopper, Walker2D, and Ant tasks. 
We include this domain primarily for comparison with past works, as a vast array of popular RL methods have been evaluated and developed on these tasks (Schulman et al., 2015; Lillicrap et al., 2015; Schulman et al., 2017; Fujimoto et al., 2018; Haarnoja et al., 2018).

Gym-MuJoCo datasets and policies. For each task, in order to explore the effect of varying data distributions, we include 5 datasets originally proposed by Fu et al. (2020). Three correspond to different performance levels of the agent: "random", "medium", and "expert". We additionally include a mixture of the medium and expert datasets, labeled "medium-expert", and data collected from a replay buffer until the policy reaches the medium level of performance, labeled "medium-replay". For policies, we selected 11 policies collected from evenly-spaced snapshots of training a Soft Actor-Critic agent (Haarnoja et al., 2018), which cover a range of performance between random and expert.

Maze2D and AntMaze tasks. Maze2D and AntMaze are two maze navigation tasks originally proposed in D4RL (Fu et al., 2020). The domain consists of 3 mazes ranging from easy to hard ("umaze", "medium", "large") and two morphologies: a 2D ball in Maze2D and the "Ant" robot of the Gym benchmark in AntMaze. For Maze2D, we provide a less challenging reward computed based on the distance to a fixed goal. For the AntMaze environment, reward is given only upon reaching the fixed goal.

Maze2D and AntMaze datasets and policies. The datasets for both morphologies consist of undirected data navigating randomly to different goal locations. The datasets for Maze2D are collected by using a high-level planner to command waypoints to a low-level PID controller in order to reach randomly selected goals. The dataset in AntMaze is generated using the same high-level planner, but the low-level controller is replaced with a goal-conditioned policy trained to reach arbitrary waypoints. Both of these datasets are generated from non-Markovian policies, as the high-level controller maintains a history of waypoints reached in order to construct a plan to the goal. We provide policies for all environments except "antmaze-large" by taking training snapshots obtained while running the DAPG algorithm (Rajeswaran et al., 2017). Because obtaining high-performing policies for "antmaze-large" was challenging, we instead used imitation learning on a large amount of expert data to generate evaluation policies. This expert data is obtained by collecting additional trajectories that reach the goal using a high-level waypoint planner in conjunction with a low-level goal-conditioned policy (the same method used to generate the dataset; see Sec. 5 of Fu et al. (2020)).

Adroit tasks. The Adroit domain is a realistic simulation based on the Shadow Hand robot, first proposed by Rajeswaran et al. (2017). There are 4 tasks in this domain: opening a door ("door"), pen twirling ("pen"), moving a ball to a target location ("relocate"), and hitting a nail with a hammer ("hammer"). These tasks all have sparse rewards and are difficult to learn without demonstrations.

Adroit datasets and policies. We include 3 datasets for each task. The "human" dataset consists of a small number of human demonstrations of the task. The "expert" dataset consists of data collected from an expert trained via DAPG (Rajeswaran et al., 2017). Finally, the "cloned" dataset contains a mixture of human demonstrations and data collected from an imitation learning algorithm trained on the demonstrations.
For policies, we include 11 policies collected from snapshots while running the DAPG algorithm, which range from random to expert performance." }, { "heading": "5 BASELINES AND RESULTS", "text": "The goal of our evaluation is two-fold. First, we wish to measure the performance of a variety of existing algorithms to provide baselines and reference numbers for future research. Second, we wish to identify shortcomings in these approaches to reveal promising directions for future research." }, { "heading": "5.1 BASELINES", "text": "We selected six methods to evaluate, which cover a variety of approaches that have been explored for the OPE problem.

Fitted Q-Evaluation (FQE) As in Le et al. (2019), we train a neural network to estimate the value of the evaluation policy $\pi$ by bootstrapping from $Q(s', \pi(s'))$. We tried two different implementations, one from Kostrikov & Nachum (2020)3 and another from Paine et al. (2020), labeled FQE-L2 and FQE-D respectively to reflect different choices in loss function and parameterization.

Model-Based (MB) Similar to Paduraru (2007), we train dynamics and reward models on transitions from the offline dataset $D$. Our models are deep neural networks trained to maximize the log likelihood of the next state and reward given the current state and action, similar to models from successful model-based RL algorithms (Chua et al., 2018; Janner et al., 2019). We follow the setup detailed in Zhang et al. (2021). We include both feed-forward and autoregressive models, labeled MB-FF and MB-AR respectively. To evaluate a policy, we compute the return using simulated trajectories generated by the policy under the learned dynamics model.

Importance Sampling (IS) We perform importance sampling with a learned behavior policy. We use the implementation from Kostrikov & Nachum (2020)3, which uses self-normalized (also known as weighted) step-wise importance sampling (Precup, 2000). Since the behavior policy is not known explicitly, we learn an estimate of it via a max-likelihood objective over the dataset $D$, as advocated by Xie et al. (2018) and Hanna et al. (2019). In order to be able to compute log-probabilities when the target policy is deterministic, we add artificial Gaussian noise with standard deviation 0.01 for all deterministic target policies.

3Code available at https://github.com/google-research/google-research/tree/master/policy_eval.
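A minimal sketch of the self-normalized step-wise importance sampling estimator described above follows (our illustration, not the benchmark implementation; the handling of variable-length trajectories by zero-padding is our convention):

```python
import numpy as np

def snips_value(trajectories, pi_logp, beh_logp, gamma=0.995):
    """Self-normalized step-wise importance sampling (Precup, 2000).
    `trajectories`: list of [(s, a, r), ...]; `pi_logp`/`beh_logp` return
    log pi(a|s) under the target and (estimated) behavior policies."""
    T = max(len(traj) for traj in trajectories)
    weights = np.zeros((len(trajectories), T))   # cumulative ratios w_{i,t}
    rewards = np.zeros((len(trajectories), T))
    for i, traj in enumerate(trajectories):
        logw = 0.0
        for t, (s, a, r) in enumerate(traj):
            logw += pi_logp(s, a) - beh_logp(s, a)
            weights[i, t], rewards[i, t] = np.exp(logw), r
    # Normalize the weights across trajectories at every step, then sum
    # the discounted, weighted rewards over time.
    norm = weights.sum(axis=0, keepdims=True) + 1e-12
    return ((weights / norm) * rewards * gamma ** np.arange(T)).sum()
```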
Our implementation is based on the same network and hyperparameters for the OPE setting as in Wen et al. (2020). We further tune the hyperparameters, including the regularization parameter λ, learning rates αθ and αv, and the number of iterations, on the Cartpole swingup task using the ground-truth policy value, and then fix them for all other tasks." }, { "heading": "5.2 RESULTS", "text": "To facilitate aggregate metrics and comparisons between tasks and between DOPE RL Unplugged and DOPE D4RL, we normalize the returns and estimated returns to range between 0 and 1. For each set of policies we compute the worst value $V_{worst} = \min\{V^{\pi_1}, V^{\pi_2}, ..., V^{\pi_N}\}$ and the best value $V_{best} = \max\{V^{\pi_1}, V^{\pi_2}, ..., V^{\pi_N}\}$, and normalize the returns and estimated returns according to $x' = (x - V_{worst})/(V_{best} - V_{worst})$. We present results averaged across DOPE RL Unplugged in Fig. 4, and results for DOPE D4RL in Fig. 5. Overall, no evaluated algorithm attains near-oracle performance under any metric (absolute error, regret, or rank correlation). Because the dataset is finite, we do not expect that achieving oracle performance is even possible. Nevertheless, based on recent progress on this benchmark (e.g., Zhang et al. (2021)), we hypothesize that the benchmark has room for improvement, making it suitable for driving further improvements on OPE methods and facilitating the development of OPE algorithms that can provide reliable estimates on the types of high-dimensional problems that we consider.\nWhile all algorithms achieve sub-optimal performance, some perform better than others. We find that on the DOPE RL Unplugged tasks, model-based (MB-AR, MB-FF) and direct value-based methods (FQE-D, FQE-L2) significantly outperform importance sampling methods (VPM, DICE, IS) across all metrics. This is somewhat surprising, as DICE and VPM have shown promising results in other settings. We hypothesize that this is due to the relationship between the behavior data and evaluation policies, which is different from standard OPE settings. Recall that in DOPE RL Unplugged the behavior data is collected from an online RL algorithm and the evaluation policies are learned via offline RL from the behavior data. In our experience, all methods work better when the behavior policy is a noisy/perturbed version of the evaluation policy. Moreover, MB and FQE-based methods may implicitly benefit from the architectural and optimization advancements made in policy optimization settings, which focus on similar environments and where these methods are more popular than importance sampling approaches. Note that within the MB and FQE methods, design details can create a significant difference in performance. For example, model architecture (MB-AR vs MB-FF) and implementation differences (FQE-D vs FQE-L2) show differing performance on certain tasks.\nOn DOPE D4RL, direct value-based methods still do well, with FQE-L2 performing best on the Absolute Error and Regret@1 metrics. However, there are cases where other methods outperform FQE. Notably, IS and DR outperform FQE-L2 under the rank correlation metric. As expected, there is a clear performance gap between DOPE RL Unplugged and DOPE D4RL. While both domains have challenging tasks, algorithms perform better under the more ideal conditions of DOPE RL Unplugged than under the challenging conditions of DOPE D4RL (0.69 vs 0.25 rank correlation, respectively).\nIn Fig. A.2 we show the rank correlation for each task in DOPE RL Unplugged.
Most tasks follow the overall trends, but we will highlight a few exceptions. 1) Importance sampling is among the best methods for the humanoid run task, significantly outperforming direct value-based methods. 2) While MB-AR and FQE-D are similar overall, there are a few tasks where the difference is large; for example, FQE-D outperforms MB-AR on finger turn hard and manipulator insert ball, whereas MB-AR outperforms FQE-D on cartpole swingup, fish swim, humanoid run, and manipulator insert peg. We show the scatter plots for MB-AR and FQE-D on these tasks in Fig. 7, which highlight different failure modes: when MB-AR performs worse, it assigns similar values to all policies; when FQE-D performs worse, it severely over-estimates the values of poor policies.\nWe present more detailed results, separated by task, in Appendix A.2. Note in particular how in Table A.2.2, which shows the regret@1 metric for different D4RL tasks, the particular choice of dataset for the Gym-MuJoCo, Adroit, and AntMaze domains causes a significant difference in the performance of OPE methods. This indicates the importance of evaluating multiple distinct datasets with different data distribution properties (e.g., more narrow datasets, such as expert data, vs. broader datasets, such as random data), as no tested method is reliably robust to the effects of dataset variation.\nHigh-dimensional tasks requiring temporally extended control were also challenging, as highlighted by the performance on the AntMaze domain. No algorithm was able to achieve a good absolute error value on such tasks, and importance sampling was the only method able to achieve a correlation consistently above zero, suggesting that these more complex tasks are a particularly important area for future methods to focus on." }, { "heading": "6 RELATED WORK", "text": "Off-policy evaluation (OPE) has been studied extensively across a range of different domains, from healthcare (Thapa et al., 2005; Raghu et al., 2018; Nie et al., 2019), to recommender systems (Li et al., 2010; Dudík et al., 2014; Theocharous et al., 2015), and robotics (Kalashnikov et al., 2018). While a full survey of OPE methods is outside the scope of this article, broadly speaking we can categorize OPE methods into groups based on the use of importance sampling (Precup, 2000), value functions (Sutton et al., 2009; Migliavacca et al., 2010; Sutton et al., 2016; Yang et al., 2020), and learned transition models (Paduraru, 2007), though a number of methods combine two or more of these components (Jiang & Li, 2015; Thomas & Brunskill, 2016; Munos et al., 2016). A significant body of work in OPE is also concerned with providing statistical guarantees (Thomas et al., 2015). Our focus instead is on empirical evaluation – while theoretical analysis is likely to be a critical part of future OPE research, combining such analysis with empirical demonstration on broadly accepted and standardized benchmarks is likely to facilitate progress toward practically useful algorithms.\nCurrent evaluation of OPE methods is based around several metrics, including error in predicting the true return of the evaluated policy (Voloshin et al., 2019), correlation between the evaluation output and actual returns (Irpan et al., 2019), and ranking and model selection metrics (Doroudi et al., 2017).
As there is no single accepted metric used by the entire community, we provide a set of candidate metrics along with our benchmark, with a detailed justification in Section 5. Our work is closely related to Paine et al. (2020), which studies OPE in a similar setting; however, in our work we present a benchmark for the community and compare a range of OPE methods. Outside of OPE, standardized benchmark suites have led to considerable standardization and progress in RL (Stone & Sutton, 2001; Dutech et al., 2005; Riedmiller et al., 2007). The Arcade Learning Environment (ALE) (Bellemare et al., 2013) and OpenAI Gym (Brockman et al., 2016) have been widely used to compare online RL algorithms to good effect. More recently, Gulcehre et al. (2020); Fu et al. (2020) proposed benchmark tasks for offline RL. Our benchmark is based on the tasks and environments described in these two benchmarks, which we augment with a set of standardized policies for evaluation, results for a number of existing OPE methods, and standardized evaluation metrics and protocols. Voloshin et al. (2019) have recently proposed benchmarking for OPE methods on a variety of tasks ranging from tabular problems to image-based tasks in Atari. Our work differs in several key aspects. Voloshin et al. (2019) is composed entirely of discrete-action tasks, whereas our benchmark focuses on continuous-action tasks. Voloshin et al. (2019) assumes full support for the evaluation policy under the behavior policy data, whereas we designed our datasets and policies to ensure that different cases of dataset and policy distributions could be studied. Finally, all evaluations in Voloshin et al. (2019) are performed using the MSE metric, and they do not provide standardized datasets. In contrast, we provide a variety of policies for each problem, which enables one to evaluate metrics such as ranking for policy selection, and a wide range of standardized datasets for reproducibility." }, { "heading": "7 CONCLUSION", "text": "We have presented the Deep Off-Policy Evaluation (DOPE) benchmark, which aims to provide a platform for studying policy evaluation and selection across a wide range of challenging tasks and datasets. In contrast to prior benchmarks, DOPE provides multiple datasets and policies, allowing researchers to study how data distributions affect performance and to evaluate a wide variety of metrics, including those that are relevant for offline policy selection. In comparing existing OPE methods, we find that no existing algorithms consistently perform well across all of the tasks, which further reinforces the importance of standardized and challenging OPE benchmarks. Moreover, algorithms that perform poorly under one metric, such as absolute error, may perform better on other metrics, such as correlation, which provides insight into what algorithms to use depending on the use case (e.g., policy evaluation vs. policy selection).\nWe believe that OPE is an exciting area for future research, as it allows RL agents to learn from large and abundant datasets in domains where online RL methods are otherwise infeasible. We hope that our benchmark will enable further progress in this field, though important evaluation challenges remain. As the key benefit of OPE is the ability to utilize real-world datasets, a promising direction for future evaluation efforts is to devise effective ways to use such data, where a key challenge is to develop evaluation protocols that are both reproducible and accessible.
This could help pave the way towards developing intelligent decision-making agents that can leverage vast banks of logged information to solve important real-world problems." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 METRICS", "text": "The metrics we use in our paper are defined as follows:\nAbsolute Error We evaluate policies using absolute error in order to be robust to outliers. The absolute error is defined as the difference between the value and estimated value of a policy:\n$\text{AbsErr} = |V^{\pi} - \hat{V}^{\pi}|$ (2)\nwhere $V^{\pi}$ is the true value of the policy, and $\hat{V}^{\pi}$ is the estimated value of the policy.\nRegret@k Regret@k is the difference between the value of the best policy in the entire set and the value of the best policy in the top-k set (where the top-k set is chosen by estimated values). It can be defined as:\n$\text{Regret@}k = \max_{i \in 1:N} V^{\pi_i} - \max_{j \in \text{topk}(1:N)} V^{\pi_j}$ (3)\nwhere $\text{topk}(1:N)$ denotes the indices of the top $k$ policies as measured by the estimated values $\hat{V}^{\pi}$.\nRank correlation Rank correlation (also Spearman's ρ) measures the correlation between the ordinal rankings of the value estimates and the true values. It can be written as:\n$\text{RankCorr} = \frac{\mathrm{Cov}(V^{\pi}_{1:N}, \hat{V}^{\pi}_{1:N})}{\sigma(V^{\pi}_{1:N})\,\sigma(\hat{V}^{\pi}_{1:N})}$ (4)" }, { "heading": "A.2 DETAILED RESULTS", "text": "Detailed results figures and tables are presented here. We show results by task in both tabular and chart form, as well as scatter plots which compare the estimated returns against the ground-truth returns for every policy." }, { "heading": "A.2.1 CHART RESULTS", "text": "First we show the normalized results for each algorithm and task." }, { "heading": "A.2.2 TABULAR RESULTS", "text": "Next, we present the results for each task and algorithm in tabular form, with means and standard deviations reported across 3 seeds.\nHalfcheetah Halfcheetah Halfcheetah Halfcheetah Halfcheetah expert medium medium-expert medium-replay random\nA bs\n.E rr or IS 1404±152 1217±123 1400±146 1409±154 1405±155VPM 945±164 1374±153 1427±111 1384±148 1411±154 Best DICE 944±161 1382±130 1078±132 1440±158 1446±156 Doubly Robust 1025±95 1222±134 1015±103 1001±129 949±126 FQE (L2) 1031±95 1211±130 1014±101 1003±132 938±125\nAntmaze Antmaze Antmaze Antmaze Antmaze large-diverse large-play medium-diverse medium-play umaze\nA bs\n.E rr or IS 0.62±0.01 0.85±0.00 0.55±0.01 0.81±0.00 0.62±0.04VPM 0.02±0.02 0.26±0.24 0.07±0.05 0.11±0.06 0.12±0.03 Best DICE 5.55±0.36 19.62±1.28 2.42±1.56 19.47±2.15 14.97±1.93 Doubly Robust 0.99±0.01 1.59±0.01 0.61±0.03 1.47±0.01 0.87±0.04 FQE (L2) 0.53±0.01 0.78±0.00 0.29±0.01 0.71±0.01 0.39±0.03\nAntmaze Door Door Door Hammer umaze-diverse cloned expert human cloned\nA bs\n.E rr or IS 0.14±0.02 891±188 648±122 870±173 7403±1126VPM 0.12±0.03 1040±188 879±182 862±163 7459±1114 Best DICE 0.17±0.04 697±79 856±134 1108±199 4169±839 Doubly Robust 0.11±0.02 424±73 1353±218 379±65 6101±679 FQE (L2) 0.11±0.03 438±81 1343±84 389±60 5415±558\nHammer Hammer Maze2d Maze2d Maze2d expert human large medium umaze\nA bs\n.E rr or IS 3052±608 7352±1118 45.61±10.43 61.29±7.78 50.20±9.16VPM 7312±1117 7105±1107 44.10±10.69 60.30±8.37 62.81±8.40 Best DICE 3963±758 5677±936 42.46±9.66 58.97±9.57 21.95±4.69 Doubly Robust 3485±590 5768±751 22.94±6.82 23.64±4.96 76.93±4.42 FQE (L2) 2950±728 6000±612 24.31±6.56 35.11±6.33 79.67±4.93\nPen Pen Pen Relocate Relocate cloned expert human cloned expert\nA bs\n.E rr or IS 1707±128 4547±222 3926±128 632±215
2731±147VPM 2324±129 2325±136 1569±215 586±135 620±214 Best DICE 1454±219 2963±279 4193±244 1347±485 1095±221 Doubly Robust 1323±98 2013±564 2846±200 412±124 1193±350 FQE (L2) 1232±105 1057±281 2872±170 439±125 1351±393\nRelocate Ant Ant Ant Ant human expert medium medium-expert medium-replay\nA bs\n.E rr or IS 638±217 605±104 594±104 604±102 603±101VPM 806±166 607±108 570±109 604±106 612±105 Best DICE 4526±474 558±108 495±90 471±100 583±110 Doubly Robust 606±116 584±114 345±66 326±66 421±72 FQE (L2) 593±113 583±122 345±64 319±67 410±79\nAnt Hopper Hopper Hopper Walker2d random expert medium random expert\nA bs\n.E rr or IS 606±103 106±29 405±48 412±45 405±62VPM 570±99 442±43 433±44 438±44 367±68 Best DICE 530±92 259±54 215±41 122±16 437±60 Doubly Robust 404±106 426±99 307±85 289±50 519±179 FQE (L2) 398±111 282±76 283±73 261±42 453±142\nWalker2d Walker2d Walker2d Walker2d Median medium medium-expert medium-replay random\nA bs\n.E rr or IS 428±60 436±62 427±60 430±61 603.82VPM 426±60 425±61 424±64 440±58 585.53 Best DICE 273±31 322±60 374±51 419±57 530.43 Doubly Robust 368±74 217±46 296±54 347±74 411.99 FQE (L2) 350±79 233±42 313±73 354±73 398.37\nHalfcheetah Halfcheetah Halfcheetah Halfcheetah Door expert medium-expert medium-replay random cloned\nR an\nk C\nor r. Best DICE −0.44±0.30 −0.08±0.35 −0.15±0.41 −0.70±0.22 0.18±0.31\nVPM 0.18±0.35 −0.47±0.29 −0.07±0.36 0.27±0.36 −0.29±0.36 FQE (L2) 0.78±0.15 0.62±0.27 0.26±0.37 −0.11±0.41 0.55±0.27 IS 0.01±0.35 −0.06±0.37 0.59±0.26 −0.24±0.36 0.66±0.22 Doubly Robust 0.77±0.17 0.62±0.27 0.32±0.37 −0.02±0.38 0.60±0.28\nDoor Hammer Hammer Maze2d Maze2d expert cloned expert large medium\nR an\nk C\nor r. Best DICE −0.06±0.32 0.35±0.38 −0.42±0.31 0.56±0.21 −0.64±0.23\nVPM 0.65±0.23 −0.77±0.22 0.39±0.31 −0.26±0.33 −0.05±0.39 FQE (L2) 0.89±0.09 −0.15±0.33 0.29±0.34 0.30±0.36 0.16±0.38 IS 0.76±0.17 0.58±0.27 0.64±0.24 0.63±0.19 0.44±0.25 Doubly Robust 0.76±0.13 −0.70±0.20 0.49±0.31 0.31±0.36 0.41±0.35\nPen Relocate Ant Ant Ant expert expert expert medium medium-expert\nR an\nk C\nor r. Best DICE −0.53±0.30 −0.27±0.34 −0.13±0.37 −0.36±0.28 −0.33±0.40\nVPM 0.08±0.33 0.39±0.31 −0.42±0.38 −0.20±0.31 −0.28±0.28 FQE (L2) −0.01±0.33 −0.57±0.28 −0.13±0.32 0.65±0.25 0.37±0.35 IS −0.45±0.31 0.52±0.23 0.14±0.41 −0.17±0.32 −0.21±0.35 Doubly Robust 0.52±0.28 −0.40±0.24 −0.28±0.32 0.66±0.26 0.35±0.35\nAnt Ant Hopper Hopper Hopper medium-replay random expert medium random\nR an\nk C\nor r. Best DICE −0.24±0.39 −0.21±0.35 −0.08±0.32 0.19±0.33 −0.13±0.39\nVPM −0.26±0.29 0.24±0.31 0.21±0.32 0.13±0.37 −0.46±0.20 FQE (L2) 0.57±0.28 0.04±0.33 −0.33±0.30 −0.29±0.33 −0.11±0.36 IS 0.07±0.39 0.26±0.34 0.37±0.27 −0.55±0.26 0.23±0.34 Doubly Robust 0.45±0.32 0.01±0.33 −0.41±0.27 −0.31±0.34 −0.19±0.36\nWalker2d Walker2d Walker2d Walker2d Walker2d expert medium medium-expert medium-replay random\nR an\nk C\nor r. Best DICE −0.37±0.27 0.12±0.38 −0.34±0.34 0.55±0.23 −0.19±0.36\nVPM 0.17±0.32 0.44±0.21 0.49±0.37 −0.52±0.25 −0.42±0.34 FQE (L2) 0.35±0.33 −0.09±0.36 0.25±0.32 −0.19±0.36 0.21±0.31 IS 0.22±0.37 −0.25±0.35 0.24±0.33 0.65±0.24 −0.05±0.38 Doubly Robust 0.26±0.34 0.02±0.37 0.19±0.33 −0.37±0.39 0.16±0.29\nMedian\nR an\nk C\nor r. 
Best DICE −0.19\nVPM −0.05 FQE (L2) 0.21 IS 0.23 Doubly Robust 0.26\nHalfcheetah Halfcheetah Halfcheetah Halfcheetah Halfcheetah expert medium medium-expert medium-replay random\nR eg\nre t@ 1 Best DICE 0.32±0.40 0.82±0.29 0.38±0.37 0.30±0.07 0.81±0.30 VPM 0.14±0.09 0.33±0.19 0.80±0.34 0.25±0.09 0.12±0.07 Doubly Robust 0.11±0.08 0.37±0.15 0.14±0.07 0.33±0.18 0.31±0.10 FQE (L2) 0.12±0.07 0.38±0.13 0.14±0.07 0.36±0.16 0.37±0.08 IS 0.15±0.08 0.05±0.05 0.73±0.42 0.13±0.10 0.31±0.11\nAntmaze Antmaze Antmaze Antmaze Antmaze large-diverse large-play medium-diverse medium-play umaze\nR eg\nre t@ 1 Best DICE 0.54±0.34 0.96±0.13 0.04±0.11 0.09±0.10 0.69±0.39 VPM 0.88±0.27 0.45±0.30 0.14±0.10 0.03±0.08 0.62±0.32 Doubly Robust 0.83±0.30 0.93±0.21 0.05±0.07 0.17±0.31 0.42±0.36 FQE (L2) 0.93±0.25 1.00±0.03 0.16±0.10 0.05±0.19 0.41±0.35 IS 0.39±0.26 0.71±0.20 0.14±0.09 0.18±0.06 0.86±0.06\nAntmaze Door Door Door Hammer umaze-diverse cloned expert human cloned\nR eg\nre t@ 1 Best DICE 0.42±0.28 0.65±0.45 0.37±0.27 0.10±0.27 0.67±0.48 VPM 0.63±0.32 0.81±0.33 0.03±0.03 0.69±0.24 0.72±0.39 Doubly Robust 0.79±0.14 0.11±0.08 0.05±0.07 0.05±0.09 0.78±0.38 FQE (L2) 0.64±0.37 0.11±0.06 0.03±0.03 0.05±0.08 0.36±0.39 IS 0.22±0.36 0.02±0.07 0.01±0.04 0.45±0.40 0.03±0.15\nHammer Hammer Maze2d Maze2d Maze2d expert human large medium umaze\nR eg\nre t@ 1 Best DICE 0.24±0.34 0.04±0.08 0.15±0.08 0.44±0.05 0.03±0.07 VPM 0.04±0.07 0.18±0.29 0.66±0.10 0.24±0.24 0.06±0.12 Doubly Robust 0.09±0.09 0.46±0.23 0.21±0.16 0.27±0.14 0.03±0.07 FQE (L2) 0.05±0.04 0.46±0.23 0.20±0.14 0.31±0.14 0.03±0.07 IS 0.01±0.04 0.19±0.30 0.16±0.23 0.15±0.15 0.02±0.12\nPen Pen Pen Relocate Relocate cloned expert human cloned expert\nR eg\nre t@ 1 Best DICE 0.12±0.08 0.33±0.20 0.04±0.09 0.96±0.18 0.97±0.07 VPM 0.36±0.18 0.25±0.13 0.28±0.12 0.11±0.29 0.76±0.23 Doubly Robust 0.13±0.06 0.05±0.07 0.09±0.08 0.18±0.27 0.98±0.08 FQE (L2) 0.12±0.07 0.11±0.14 0.07±0.05 0.29±0.42 1.00±0.06 IS 0.14±0.09 0.31±0.10 0.17±0.15 0.63±0.41 0.18±0.14\nRelocate Ant Ant Ant Ant human expert medium medium-expert medium-replay\nR eg\nre t@ 1 Best DICE 0.97±0.11 0.62±0.15 0.43±0.10 0.60±0.16 0.64±0.13 VPM 0.77±0.18 0.88±0.22 0.40±0.21 0.32±0.24 0.72±0.43 Doubly Robust 0.17±0.15 0.43±0.22 0.12±0.18 0.37±0.13 0.05±0.09 FQE (L2) 0.17±0.14 0.43±0.22 0.12±0.18 0.36±0.14 0.05±0.09 IS 0.63±0.41 0.47±0.32 0.61±0.18 0.46±0.18 0.16±0.23\nAnt Hopper Hopper Hopper Walker2d random expert medium random expert\nR eg\nre t@ 1 Best DICE 0.50±0.29 0.20±0.08 0.18±0.19 0.30±0.15 0.35±0.36 VPM 0.15±0.24 0.13±0.10 0.10±0.14 0.26±0.10 0.09±0.19 Doubly Robust 0.28±0.15 0.34±0.35 0.32±0.32 0.41±0.17 0.06±0.07 FQE (L2) 0.28±0.15 0.41±0.20 0.32±0.32 0.36±0.22 0.06±0.07 IS 0.56±0.22 0.06±0.03 0.38±0.28 0.05±0.05 0.43±0.26\nWalker2d Walker2d Walker2d Walker2d Median medium medium-expert medium-replay random\nR eg\nre t@ 1 Best DICE 0.27±0.43 0.78±0.27 0.18±0.12 0.39±0.33 0.38 VPM 0.08±0.06 0.24±0.42 0.46±0.31 0.88±0.20 0.28 Doubly Robust 0.25±0.09 0.30±0.12 0.68±0.23 0.15±0.20 0.25 FQE (L2) 0.31±0.10 0.22±0.14 0.24±0.20 0.15±0.21 0.24 IS 0.70±0.39 0.13±0.07 0.02±0.05 0.74±0.33 0.18" }, { "heading": "A.2.3 SCATTER PLOTS", "text": "Finally, we present scatter plots plotting the true returns of each policy against the estimated returns. Each point on the plot represents one evaluated policy." } ]
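For reference, a minimal NumPy sketch of the three metrics defined in Appendix A.1, together with the return normalization of Sec. 5.2; this simple rank-correlation version assumes no ties among the values (Spearman's ρ with average ranks would otherwise be needed).

import numpy as np

def normalize(x, v_worst, v_best):
    # Sec. 5.2: map returns/estimates to [0, 1] within one policy set
    return (x - v_worst) / (v_best - v_worst)

def abs_error(v_true, v_est):
    # Eq. (2): per-policy absolute error |V^pi - V^pi_hat|
    return np.abs(np.asarray(v_true) - np.asarray(v_est))

def regret_at_k(v_true, v_est, k):
    # Eq. (3): gap between the best policy overall and the best policy
    # among the top-k policies ranked by the estimated values
    v_true, v_est = np.asarray(v_true), np.asarray(v_est)
    topk = np.argsort(v_est)[::-1][:k]
    return v_true.max() - v_true[topk].max()

def rank_correlation(v_true, v_est):
    # Eq. (4): Spearman's rho = Pearson correlation of the ordinal ranks
    def ranks(x):
        r = np.empty(len(x))
        r[np.argsort(x)] = np.arange(len(x))
        return r
    return np.corrcoef(ranks(np.asarray(v_true)), ranks(np.asarray(v_est)))[0, 1]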
2021
null
SP:1037f94ce6eae4a42ea7913c76007f5f3c26aeaf
[ "This paper proposes Triple-Search (TRIPS), a differentiable framework of jointly searching for network architecture, quantization precision, and accelerator parameters. To address the dilemma between exploding training memory and biased search, the proposed framework leverages heterogeneous sampling where soft Gumbel Softmax is used for weight update and hard Gumbel Softmax is used for probabilities \\beta. To integrate accelerator search, hard Gumbel Softmax is used on hardware design choices and the overall hardware cost is used for penalization. Experiments are conducted on the FPGA platform for CIFAR and ImageNet dataset to show the superiority of TRIPS over NAS-only methods." ]
The record-breaking performance and prohibitive complexity of deep neural networks (DNNs) have ignited a substantial need for customized DNN accelerators, which have the potential to boost DNN acceleration efficiency by orders-of-magnitude. While it has been recognized that maximizing DNNs’ acceleration efficiency requires a joint design/search for three different yet highly coupled aspects, including the networks, the adopted precision, and their accelerators, the challenges associated with such a joint search have not yet been fully discussed and addressed. First, to jointly search for a network and its precision via differentiable search, there exists a dilemma of whether to explode the memory consumption or achieve sub-optimal designs. Second, a generic and differentiable joint search of the networks and their accelerators is non-trivial due to (1) the discrete nature of the accelerator space and (2) the difficulty of obtaining operation-wise hardware cost penalties, because some accelerator parameters are determined by the whole network. To this end, we propose a Triple-Search (TRIPS) framework to address the aforementioned challenges towards jointly searching for the network structure, precision, and accelerator in a differentiable manner, to efficiently and effectively explore the huge joint search space. Our TRIPS addresses the first challenge above via a heterogeneous sampling strategy to achieve unbiased search with constant memory consumption, and tackles the latter one using a novel co-search pipeline that integrates a generic differentiable accelerator search engine. Extensive experiments and ablation studies validate that both TRIPS-generated networks and accelerators consistently outperform state-of-the-art (SOTA) designs (including co-search/exploration techniques, hardware-aware NAS methods, and DNN accelerators), in terms of search time, task accuracy, and accelerator efficiency. All codes will be released upon acceptance.
[]
[ { "authors": [ "Mohamed S Abdelfattah", "Łukasz Dudziak", "Thomas Chau", "Royson Lee", "Hyeji Kim", "Nicholas D Lane" ], "title": "Best of both worlds: Automl codesign of a cnn and its hardware accelerator", "venue": null, "year": 2002 }, { "authors": [ "Rajeev Balasubramonian", "Andrew B. Kahng", "Naveen Muralimanohar", "Ali Shafiee", "Vaishnav Srinivas" ], "title": "Cacti 7: New tools for interconnect exploration in innovative off-chip memories", "venue": "ACM Trans. Archit. Code Optim.,", "year": 2017 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Han Cai", "Chuang Gan", "Tianzhe Wang", "Zhekai Zhang", "Song Han" ], "title": "Once-for-all: Train one network and specialize it for efficient deployment", "venue": null, "year": 1908 }, { "authors": [ "Zhaowei Cai", "Nuno Vasconcelos" ], "title": "Rethinking differentiable search for mixed-precision neural networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Deming Chen", "Jason Cong", "Yiping Fan", "Guoling Han", "Wei Jiang", "Zhiru Zhang" ], "title": "xpilot: A platform-based behavioral synthesis system", "venue": "SRC TechCon,", "year": 2005 }, { "authors": [ "Deming Chen", "Jason Cong", "Yiping Fan", "Lu Wan" ], "title": "Lopass: A low-power architectural synthesis system for FPGAs with interconnect estimation and optimization", "venue": "IEEE Transactions on Very Large Scale Integration (VLSI) Systems,", "year": 2009 }, { "authors": [ "Y. Chen", "T. Krishna", "J. Emer", "V. Sze" ], "title": "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks", "venue": "JSSC 2017,", "year": 2017 }, { "authors": [ "Yu-Hsin Chen", "Joel Emer", "Vivienne Sze" ], "title": "Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks", "venue": "ACM SIGARCH Computer Architecture News,", "year": 2016 }, { "authors": [ "Yukang Chen", "Gaofeng Meng", "Qian Zhang", "Xinbang Zhang", "Liangchen Song", "Shiming Xiang", "Chunhong Pan" ], "title": "Joint neural architecture search and quantization", "venue": "arXiv preprint arXiv:1811.09426,", "year": 2018 }, { "authors": [ "Kanghyun Choi", "Deokki Hong", "Hojae Yoon", "Joonsang Yu", "Youngsok Kim", "Jinho Lee" ], "title": "Dance: Differentiable accelerator/network co-exploration", "venue": "arXiv preprint arXiv:2009.06237,", "year": 2020 }, { "authors": [ "Z. Du", "R. Fasthuber", "T. Chen", "P. Ienne", "L. Li", "T. Luo", "X. Feng", "Y. Chen", "O. 
Temam" ], "title": "Shidiannao: Shifting vision processing closer to the sensor", "venue": "In ACM SIGARCH Computer Architecture News,", "year": 2015 }, { "authors": [ "Ahmed Taha Elthakeb", "Prannoy Pilligundla", "Fatemeh Mireshghallah", "Amir Yazdanbakhsh", "Hadi Esmaeilzadeh" ], "title": "Releq: A reinforcement learning approach for automatic deep quantization of neural networks", "venue": "IEEE Micro,", "year": 2020 }, { "authors": [ "Mingyu Gao", "Jing Pu", "Xuan Yang", "Mark Horowitz", "Christos Kozyrakis" ], "title": "Tetris: Scalable and efficient neural network acceleration with 3d memory", "venue": "In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems,", "year": 2017 }, { "authors": [ "Chengyue Gong", "Zixuan Jiang", "Dilin Wang", "Yibo Lin", "Qiang Liu", "David Z Pan" ], "title": "Mixed precision neural architecture search for energy efficient deep learning", "venue": "In ICCAD,", "year": 2019 }, { "authors": [ "Yijin Guan", "Hao Liang", "Ningyi Xu", "Wenqiang Wang", "Shaoshuai Shi", "Xi Chen", "Guangyu Sun", "Wei Zhang", "Jason Cong" ], "title": "FP-DNN: An automated framework for mapping deep neural networks onto FPGAs with RTL-HLS hybrid templates", "venue": "IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM),", "year": 2017 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Hai Victor Habi", "Roy H Jennings", "Arnon Netzer" ], "title": "Hmq: Hardware friendly mixed precision quantization block for cnns", "venue": "arXiv preprint arXiv:2007.09952,", "year": 2020 }, { "authors": [ "Weijun Hong", "Guilin Li", "Weinan Zhang", "Ruiming Tang", "Yunhe Wang", "Zhenguo Li", "Yong Yu" ], "title": "Dropnas: Grouped operation dropout for differentiable architecture search", "venue": "In International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan" ], "title": "Searching for mobilenetv3", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Shoukang Hu", "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Jianping Shi", "Xunying Liu", "Dahua Lin" ], "title": "Dsnas: Direct neural architecture search without parameter retraining", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yibo Hu", "Xiang Wu", "Ran He" ], "title": "Tf-nas: Rethinking three search freedoms of latency-constrained differentiable neural architecture search", "venue": "arXiv preprint arXiv:2008.05314,", "year": 2020 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Weiwen Jiang", "Qiuwen Lou", "Zheyu Yan", "Lei Yang", "Jingtong Hu", "X Sharon Hu", "Yiyu Shi" ], "title": "Device-circuit-architecture co-exploration for computing-in-memory neural accelerators", "venue": "IEEE Transactions on Computers,", "year": 2020 }, { "authors": [ "Weiwen Jiang", "Lei Yang", "Edwin H-M Sha", "Qingfeng Zhuge", 
"Shouzhen Gu", "Sakyasingha Dasgupta", "Yiyu Shi", "Jingtong Hu" ], "title": "Hardware/software co-exploration of neural architectures", "venue": "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,", "year": 2020 }, { "authors": [ "Qing Jin", "Linjie Yang", "Zhenyu Liao" ], "title": "Adabits: Neural network quantization with adaptive bitwidths", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Guilin Li", "Xing Zhang", "Zitong Wang", "Zhenguo Li", "Tong Zhang" ], "title": "Stacnas: Towards stable and consistent differentiable neural architecture", "venue": null, "year": 1909 }, { "authors": [ "Yuhong Li", "Cong Hao", "Xiaofan Zhang", "Xinheng Liu", "Yao Chen", "Jinjun Xiong", "Wen-mei Hwu", "Deming Chen" ], "title": "Edd: Efficient differentiable dnn architecture and implementation co-search for embedded ai solutions", "venue": null, "year": 2005 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "A. Parashar", "P. Raina", "Y.S. Shao", "Y. Chen", "V.A. Ying", "A. Mukkara", "R. Venkatesan", "B. Khailany", "S.W. Keckler", "J. Emer" ], "title": "Timeloop: A systematic approach to dnn accelerator evaluation", "venue": "IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS),", "year": 2019 }, { "authors": [ "Jiantao Qiu", "Jie Wang", "Song Yao", "Kaiyuan Guo", "Boxun Li", "Erjin Zhou", "Jincheng Yu", "Tianqi Tang", "Ningyi Xu", "Sen Song" ], "title": "Going deeper with embedded fpga platform for convolutional neural network", "venue": "In Proceedings of the 2016 ACM/SIGDA International Symposium on FieldProgrammable Gate Arrays,", "year": 2016 }, { "authors": [ "Yuxian Qiu", "Jingwen Leng", "Cong Guo", "Quan Chen", "Chao Li", "Minyi Guo", "Yuhao Zhu" ], "title": "Adversarial defense through network profiling based path extraction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kyle Rupnow", "Yun Liang", "Yinan Li", "Dongbo Min", "Minh Do", "Deming Chen" ], "title": "High level synthesis of stereo matching: Productivity, performance, and software constraints", "venue": "In 2011 International Conference on Field-Programmable Technology,", "year": 2011 }, { "authors": [ "Y.S. Shao", "B. Reagen", "G. Wei", "D. 
Brooks" ], "title": "Aladdin: A pre-rtl, power-performance accelerator simulator enabling large design space exploration of customized architectures", "venue": "In 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA),", "year": 2014 }, { "authors": [ "Yongming Shen", "Michael Ferdman", "Peter Milder" ], "title": "Maximizing cnn accelerator efficiency through resource partitioning", "venue": "In Proceedings of the 44th Annual International Symposium on Computer Architecture,", "year": 2017 }, { "authors": [ "Dimitrios Stamoulis", "Ruizhou Ding", "Di Wang", "Dimitrios Lymberopoulos", "Bodhi Priyantha", "Jie Liu", "Diana Marculescu" ], "title": "Single-path nas: Designing hardware-efficient convnets in less than 4 hours", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint arXiv:1905.11946,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Mark Sandler", "Andrew Howard", "Quoc V Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Yunjie Tian", "Chang Liu", "Lingxi Xie", "Jianbin Jiao", "Qixiang Ye" ], "title": "Discretization-aware architecture search", "venue": "arXiv preprint arXiv:2007.03154,", "year": 2020 }, { "authors": [ "Rangharajan Venkatesan", "Yakun Sophia Shao", "Miaorong Wang", "Jason Clemons", "Steve Dai", "Matthew Fojtik", "Ben Keller", "Alicia Klinefelter", "Nathaniel Pinckney", "Priyanka Raina" ], "title": "MAGNet: A Modular Accelerator Generator for Neural Networks", "venue": "In Proceedings of the International Conference on Computer-Aided Design (ICCAD),", "year": 2019 }, { "authors": [ "Alvin Wan", "Xiaoliang Dai", "Peizhao Zhang", "Zijian He", "Yuandong Tian", "Saining Xie", "Bichen Wu", "Matthew Yu", "Tao Xu", "Kan Chen" ], "title": "Fbnetv2: Differentiable neural architecture search for spatial and channel dimensions", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Junsong Wang", "Qiuwen Lou", "Xiaofan Zhang", "Chao Zhu", "Yonghua Lin", "Deming Chen" ], "title": "Design flow of accelerating hybrid extremely low bit-width neural network in embedded FPGA", "venue": "In 2018 28th International Conference on Field Programmable Logic and Applications (FPL),", "year": 2018 }, { "authors": [ "Kuan Wang", "Zhijian Liu", "Yujun Lin", "Ji Lin", "Song Han" ], "title": "Haq: Hardware-aware automated quantization with mixed precision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Tianzhe Wang", "Kuan Wang", "Han Cai", "Ji Lin", "Zhijian Liu", "Hanrui Wang", "Yujun Lin", "Song Han" ], "title": "Apq: Joint search for network architecture, pruning and quantization policy", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Ying Wang", "Jie Xu", "Yinhe Han", "Huawei Li", "Xiaowei Li" ], "title": "Deepburning: Automatic generation of fpga-based learning accelerators for the neural network family", "venue": "DAC ’16,", "year": 2016 }, { "authors": [ "Yulong Wang", "Hang Su", "Bo Zhang", "Xiaolin Hu" ], "title": 
"Interpret neural networks by identifying critical data routing paths", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Bichen Wu", "Yanghan Wang", "Peizhao Zhang", "Yuandong Tian", "Peter Vajda", "Kurt Keutzer" ], "title": "Mixed precision quantization of convnets via differentiable neural architecture search", "venue": "arXiv preprint arXiv:1812.00090,", "year": 2018 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Y.N. Wu", "J.S. Emer", "V. Sze" ], "title": "Accelergy: An architecture-level energy estimation methodology for accelerator designs", "venue": "IEEE/ACM International Conference on Computer-Aided Design (ICCAD),", "year": 2019 }, { "authors": [ "Qingcheng Xiao", "Yun Liang", "Liqiang Lu", "Shengen Yan", "Yu-Wing Tai" ], "title": "Exploring heterogeneous algorithms for accelerating deep convolutional neural networks on fpgas", "venue": "In Proceedings of the 54th Annual Design Automation Conference", "year": 2017 }, { "authors": [ "Lei Yang", "Zheyu Yan", "Meng Li", "Hyoukjun Kwon", "Liangzhen Lai", "Tushar Krishna", "Vikas Chandra", "Weiwen Jiang", "Yiyu Shi" ], "title": "Co-exploration of neural architectures and heterogeneous asic accelerator designs targeting", "venue": null, "year": 2002 }, { "authors": [ "Xuan Yang", "Jing Pu", "Blaine Burton Rister", "Nikhil Bhagdikar", "Stephen Richardson", "Shahar Kvatinsky", "Jonathan Ragan-Kelley", "Ardavan Pedram", "Mark Horowitz" ], "title": "A systematic approach to blocking convolutional neural networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Jiahui Yu", "Pengchong Jin", "Hanxiao Liu", "Gabriel Bender", "Pieter-Jan Kindermans", "Mingxing Tan", "Thomas Huang", "Xiaodan Song", "Ruoming Pang", "Quoc Le" ], "title": "Bignas: Scaling up neural architecture search with big single-stage models", "venue": "arXiv preprint arXiv:2003.11142,", "year": 2020 }, { "authors": [ "Chen Zhang", "Peng Li", "Guangyu Sun", "Yijin Guan", "Bingjun Xiao", "Jason Cong" ], "title": "Optimizing fpga-based accelerator design for deep convolutional neural networks", "venue": "In Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA", "year": 2015 }, { "authors": [ "Chen Zhang", "Guangyu Sun", "Zhenman Fang", "Peipei Zhou", "Peichen Pan", "Jason Cong" ], "title": "Caffeine: Towards uniformed representation and acceleration for deep convolutional neural networks", "venue": "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,", "year": 2018 }, { "authors": [ "Xiaofan Zhang", "Junsong Wang", "Chao Zhu", "Yonghua Lin", "Jinjun Xiong", "Wen-mei Hwu", "Deming Chen" ], "title": "Dnnbuilder: An automated tool for building high-performance dnn hardware accelerators for fpgas", "venue": "In Proceedings of the International Conference on Computer-Aided Design, ICCAD ’18,", "year": 2018 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "arXiv preprint arXiv:1606.06160,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The powerful performance and prohibitive complexity of deep neural networks (DNNs) have fueled a tremendous demand for efficient DNN accelerators which could boost DNN acceleration efficiency by orders-of-magnitude (Chen et al., 2016). In response, extensive research efforts have been devoted to developing DNN accelerators. Early works decouple the design of efficient DNN algorithms and their accelerators. On the algorithms level, pruning, quantization, or neural architecture search (NAS) are adopted to trim down the model complexity; On the hardware level, various FPGA-/ASIC-based accelerators have been developed to customize the micro-architectures (e.g., processing elements dimension, memory sizes, and network-on-chip design) and algorithm-to-hardware mapping methods (e.g., loop tiling strategies and loop orders) in order to optimize the acceleration efficiency for a given DNN. Later, hardware-aware NAS (HA-NAS) has been developed to further improve DNNs’ acceleration efficiency for different applications (Tan et al., 2019).\nMore recently, it has been recognized that (1) optimal DNN accelerators require a joint consideration/search for all the following different yet coupled aspects, including DNNs’ network structure, the adopted precision, and their accelerators’ micro-architecture and mapping methods, and (2) merely exploring a subset of these aspects will lead to sub-optimal designs in terms of hardware efficiency or task accuracy. For example, the optimal accelerators for networks with different structures (e.g., width, depth, and kernel size) can be very different; while the optimal networks and their bitwidths\nfor different accelerators can differ a lot (Wu et al., 2019). However, the direction of jointly designing or searching for all the three aspects has only been slightly touched on previously. For example, (Chen et al., 2018; Gong et al., 2019; Wang et al., 2020) proposed to jointly search for the structure and precision of DNNs for a fixed target hardware; (Abdelfattah et al., 2020; Yang et al., 2020; Jiang et al., 2020a;b) made the first attempt to jointly search for the networks and their accelerators, yet either their network or accelerator choices are limited due to the prohibitive time cost required by their adopted reinforcement learning (RL) based methods; and EDD (Li et al., 2020) contributed a pioneering effort towards this direction by formulating a differentiable joint search framework, which however only consider one single accelerator parameter (i.e., parallel factor) and more importantly, has not yet fully solved the challenges of such joint search.\nAlthough differentiable search is one of the most promising ways in terms of search efficiency to explore the huge joint search space as discussed in Sec. 4.2, plethora of challenges exist to achieve an effective generic joint search for the aforementioned three aspects. First, Challenge 1: to jointly search for a network and its precision via differentiable search, there exists a dilemma whether to activate all the paths during search. On one hand, the required memory consumption can easily explode and thus constrain the search’s scalability to more complex tasks if all paths are activated; on the other hand, partially activating a subset of the paths can lead to a sequential training of different precision on the same weights, which might result in inaccurate accuracy ranking among different precision as discussed in (Jin et al., 2020). 
Second, Challenge 2: the accelerators’ parameters are not differentiable, and it is non-trivial to derive the operation-wise hardware-cost penalty needed to perform differentiable search (in consideration of search efficiency). This is because the optimal accelerator is often determined by the whole network instead of one specific operation/layer, due to the fact that some accelerator parameters (e.g., the loop order) need to be optimized for the whole network.\nIn this paper, we aim to address the aforementioned challenges towards a scalable, generic joint search for the network, precision, and accelerator. Specifically, we make the following contributions:\n• We propose a Triple-Search (TRIPS) framework to jointly search for the network, precision, and accelerator in a differentiable manner to efficiently explore the huge joint search space, which cannot be afforded by previous RL-based methods due to their prohibitive search cost. TRIPS identifies and tackles the aforementioned challenges towards a scalable, generic joint search of the three for maximizing both the accuracy and acceleration efficiency.\n• We develop a heterogeneous sampling strategy for simultaneously updating the weights and network structures to (1) avoid the need to sequentially train different precision choices and (2) achieve unbiased search with constant memory consumption, i.e., solve the above Challenge 1. In addition, we develop a novel co-search pipeline that integrates a differentiable hardware search engine to address the above Challenge 2.\n• Extensive experiments and ablation studies validate the effectiveness of our proposed TRIPS framework in terms of the resulting search time, task accuracy, and accelerator efficiency, when benchmarked over state-of-the-art (SOTA) co-search/exploration techniques, HA-NAS methods, and DNN accelerators. Furthermore, we visualize the accelerators searched by TRIPS to provide insights towards efficient DNN accelerator design in the Appendix." }, { "heading": "2 RELATED WORKS", "text": "Hardware-aware NAS. Hardware-aware NAS has been proposed to automate the design of efficient DNNs. Early works (Tan et al., 2019; Howard et al., 2019; Tan & Le, 2019) utilize RL-based NAS, which requires a massive search time/cost, while recent works (Wu et al., 2019; Wan et al., 2020; Cai et al., 2018; Stamoulis et al., 2019) explore the design space in a differentiable way (Liu et al., 2018) with much improved search efficiency. Along another direction, one-shot NAS methods (Cai et al., 2019; Guo et al., 2020; Yu et al., 2020) pretrain the supernet and directly evaluate the performances of the sub-networks in a weight-sharing manner as a proxy of their independently trained performances, at the cost of a longer pretraining time. In addition, NAS has been adopted to search for quantization strategies (Wang et al., 2019; Wu et al., 2018; Cai & Vasconcelos, 2020; Elthakeb et al., 2020) for trimming down the complexity of a given DNN. However, these works leave the hardware design space unexplored, which is a crucial enabler for DNNs’ acceleration efficiency, and thus they can lead to sub-optimal solutions.\nDNN accelerators. Motivated by customized accelerators’ large potential gains, SOTA accelerators (Du et al., 2015; Chen et al., 2017) innovate micro-architectures and algorithm-to-hardware mapping methods to optimize the acceleration efficiency, given a DNN and the hardware specifications.
However, it is non-trivial to design an optimal accelerator, as it requires cross-disciplinary knowledge in algorithm, micro-architecture, and circuit design. SOTA accelerator design relies either on experts’ manual design, which is very time-consuming, or on design flows (Chen et al., 2005; 2009; Rupnow et al., 2011) and DNN accelerator design automation tools (Wang et al., 2016; Zhang et al., 2018a; Guan et al., 2017; Venkatesan et al., 2019; Wang et al., 2018a; Gao et al., 2017). As these merely explore the accelerator design space, they can result in sub-optimal solutions as compared to SOTA co-search/exploration methods and our TRIPS framework.\nCo-exploration/search techniques. Pioneering efforts have been made towards jointly searching for DNNs and their accelerators to some extent. For joint searching of DNNs and their precision, (Chen et al., 2018; Gong et al., 2019; Wang et al., 2020) adopt either differentiable or evolutionary algorithms, yet without exploring their hardware accelerators. For joint searching of DNNs and their accelerators, (Abdelfattah et al., 2020; Yang et al., 2020; Jiang et al., 2020a;b) conduct RL-based search for the networks and some accelerator parameters/templates, where they strictly constrain the search space of the network or accelerator to achieve a practical RL search time, limiting their scalability and achievable efficiency. (Lin et al.) is another pioneering work, which co-designs the network and accelerator in a sequential manner based on the fact that the accelerator’s design cycle is longer than the network’s. EDD (Li et al., 2020) extends differentiable NAS to search for layer-wise precision and the accelerators’ parallel factor, and is most relevant to our TRIPS. However, EDD has not yet fully solved the joint search challenges: first, it does not discuss or address the potentially explosive memory consumption of such a joint search; second, EDD’s accelerator search space only includes the parallel factor, which is strictly tied to their accelerator template and cannot generalize to common accelerator parameters such as the tiling strategies.\nBuilding upon prior art, our TRIPS targets a scalable, generic joint search framework to optimally search for the network, its precision, and the adopted accelerator in a differentiable manner for improving efficiency." }, { "heading": "3 THE PROPOSED TECHNIQUES", "text": "In this section, we describe our proposed techniques for enabling TRIPS: Sec. 3.1 provides TRIPS’s formulation, Sec. 3.2 and Sec. 3.3 introduce TRIPS’s enablers that address the key challenges of scalable generic joint search for networks, precision, and accelerators, and Sec. 3.4 unifies the enablers to build a comprehensive co-search framework." }, { "heading": "3.1 TRIPS: FORMULATION", "text": "Fig. 1 shows an overview of TRIPS, which jointly searches for the networks (e.g., kernel size, channel expansion, and group number), precision (e.g., 4-/6-/8-/12-/16-bit), and the accelerators (e.g., PE array type, buffer size, and tiling strategies of each memory hierarchy) in a differentiable manner.\nTRIPS targets a scalable yet generic joint search framework, which we formulate as a bi-level optimization problem:\n$\min_{\alpha,\beta} \; L_{val}(\omega^*, net(\alpha), prec(\beta)) + \lambda L_{cost}(hw(\gamma^*), net(\alpha), prec(\beta))$ (1)\ns.t. $\omega^* = \arg\min_{\omega} L_{train}(\omega, net(\alpha), prec(\beta))$ (2)\ns.t. $\gamma^* = \arg\min_{\gamma} L_{cost}(hw(\gamma), net(\alpha), prec(\beta))$ (3)\nwhere $\alpha$, $\beta$, and $\gamma$ are the continuous variables parameterizing the probability of different choices for the network operators, precision bitwidths, and accelerator parameters as in (Liu et al., 2018), $\omega$ denotes the supernet weights, $L_{train}$, $L_{val}$, and $L_{cost}$ are the training loss, validation loss, and hardware-cost loss, respectively, and $net(\alpha)$, $prec(\beta)$, and $hw(\gamma)$ denote the network, precision, and accelerator characterized by $\alpha$, $\beta$, and $\gamma$, respectively." }, { "heading": "3.2 TRIPS ENABLER 1: HETEROGENEOUS SAMPLING FOR PRECISION SEARCH", "text": "As discussed in Sec. 1, there exists a dilemma (i.e., memory consumption explosion or biased search) of whether to activate all the paths during precision search, for addressing which we have developed a heterogeneous sampling strategy. Next, we first use real experiments to illustrate the joint search dilemma, and then introduce our heterogeneous sampling, which effectively addresses those challenges.\nActivating all choices - memory explosion and entangled correlation among choices. During precision search, activating all the precision choices as in (Wu et al., 2018; Gong et al., 2019) can easily explode the memory consumption, especially when the precision is co-searched with the network structures. Although composite convolutions (Cai & Vasconcelos, 2020) for mixed-precision search can mitigate this memory explosion during search by shrinking the required computation, the large memory consumption issue would still exist during training when updating the precision parameters, i.e., β in Eq. (1). For example, as shown in Fig. 2 (a), the measured GPU memory consumption of co-searching the network and precision on ImageNet grows linearly with the number of precision choices if all precision choices are activated during search. In addition, the entangled correlation (e.g., co-adaptation (Hong et al., 2020), correlation (Li et al., 2019), and cooperation (Tian et al., 2020)) among different precision choices leads to a large gap between the supernet during search and the final derived network, thus failing the joint search.\nActivating only a subset of choices - biased search. For addressing the above issues of memory explosion and correlation among choices, one natural choice is to adopt hard Gumbel Softmax to reduce the memory consumption, which, however, can lead to a biased search and thus poor performance. Specifically, activating only a subset of the precision choices implies a sequential training of different precisions, which can lead to an inaccurate performance ranking. This is because sequential training means that different precision choices are applied on top of the same weights and activations. As a result, different precision choices can interfere with each other, and different training orders would lead to different results. For better understanding, we next show two concrete experiments.\nCo-search network and precision using hard Gumbel Softmax: Fig. 2 (b) shows the resulting precision probability evolution when co-searching the network and precision on CIFAR-100 using hard Gumbel Softmax (activating two precision choices) without imposing any hardware-cost constraints, in which case the desired precision choice should be the highest precision. However, as shown in Fig. 2 (b), the block co-searched using hard Gumbel Softmax collapses to the lowest precision (i.e., the highest probability towards the end of the search is the lowest precision choice, 4-bit), indicating an ineffective search direction. Note that the fluctuation in the probability of different precision choices is caused by the intermittent activation of the block due to the hard Gumbel Softmax sampling.
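To make the two sampling modes concrete, the following minimal PyTorch sketch contrasts soft and hard Gumbel Softmax sampling over one layer's precision logits (the tensor names are illustrative, not our released code):

import torch
import torch.nn.functional as F

beta = torch.zeros(5, requires_grad=True)  # logits over 5 precision choices

# Soft sampling: all precision branches stay active, weighted by the
# relaxed probabilities -- unbiased, but memory grows with the #choices.
soft_probs = F.gumbel_softmax(beta, tau=1.0, hard=False)

# Hard sampling: a one-hot sample, so only one branch is executed in the
# forward pass; gradients still flow via the straight-through estimator.
hard_probs = F.gumbel_softmax(beta, tau=1.0, hard=True)
active = int(hard_probs.argmax())  # the single precision branch to run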
Sequential training of a fixed network with multiple precision choices: As observed in (Jin et al., 2020), when training a fixed network with multiple precision choices, either ascending or descending the precision incurs inferior convergence and thus a chaotic accuracy ranking among the different precision choices. For example, as shown in Tab. 1, we compare the accuracy of a fixed network (all blocks adopt the k3e1 (kernel size 3 and channel expansion 1) structure in (Wu et al., 2019)) under different precision choices when trained with different precision schedules, and find that only jointly training all the precision choices can maintain a ranking consistent with that of independently trained ones, while sequential training leads to both inferior accuracy and ranking.\nProposed solution - Heterogeneous sampling. To tackle both aspects of the aforementioned dilemma, we propose a heterogeneous sampling strategy as formulated in Eq. (4), where $\bar{W}^l$ / $\bar{A}^l$ are the composite weights / activations of the $l$-th layer as in (Cai & Vasconcelos, 2020), i.e., the weighted sums of the weights / activations under different precision choices, e.g., $W_j^l$ is the weights quantized to the $j$-th precision among the total $J$ options of the $l$-th layer. In our heterogeneous sampling, for updating the weights in Eq. (2), we jointly update the weights under all the precision choices weighted by their corresponding soft Gumbel Softmax $GS(\beta_j^l)$, where $\beta_j^l$ parameterizes the probability of the $j$-th precision in the $l$-th layer, and the gradients can be estimated by STE (Zhou et al., 2016) as $\partial L_{train}/\partial A^l \approx \partial L_{train}/\partial \bar{A}^l$, so that no extra intermediate feature maps need to be stored in memory during the backward pass. For updating β, we adopt hard Gumbel Softmax (Jang et al., 2016) with one-hot outputs $GS_{hard}(\beta_j^l)$ to save memory and computation while reducing the correlation among precision choices. In the same co-search setting as Fig. 2 (b), all the blocks searched using our proposed heterogeneous sampling converge to the highest precision choice towards the end of the search, as shown in Fig. 2 (c).\n$A^{l+1} = \bar{W}^l * \bar{A}^l = \sum_{j=1}^{J} \bar{\beta}_j^l W_j^l * \sum_{j=1}^{J} \bar{\beta}_j^l A_j^l$, where $\bar{\beta}_j^l = \begin{cases} GS(\beta_j^l) & \text{if updating weights} \\ GS_{hard}(\beta_j^l) & \text{if updating } \beta \end{cases}$ (4)" },
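For illustration, a minimal sketch of the composite computation in Eq. (4), assuming a fake-quantization helper quantize(x, n) (e.g., DoReFa-style (Zhou et al., 2016)); the helper and argument names are hypothetical:

import torch.nn.functional as F

def hetero_composite(weight, act, beta, quantize,
                     bits=(4, 6, 8, 12, 16), update_beta=False, tau=1.0):
    # soft Gumbel Softmax when updating weights (Eq. (2)),
    # hard one-hot Gumbel Softmax when updating beta (Eq. (1))
    b = F.gumbel_softmax(beta, tau=tau, hard=update_beta)
    w_bar = sum(b[j] * quantize(weight, n) for j, n in enumerate(bits))
    a_bar = sum(b[j] * quantize(act, n) for j, n in enumerate(bits))
    return w_bar, a_bar  # then A^{l+1} = conv(a_bar, w_bar)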
{ "heading": "3.3 TRIPS ENABLER 2: DIFFERENTIABLE ACCELERATOR SEARCH ENGINE", "text": "Motivation. Although EDD (Li et al., 2020) also co-searches the accelerator with the network, its search space is limited to the parallel factor within its template, which can be analytically fused into its theoretical computational cost, whereas this is not always applicable to other naturally non-differentiable accelerator design parameters such as tiling strategies. A more general and efficient search engine is needed towards generic differentiable accelerator search.\nSearch algorithm. We propose a differentiable search engine to efficiently search for the optimal accelerator (including the micro-architectures and mapping methods), given a DNN model and its precision, based on the single-path sampling discussed in Sec. 3.1. We solve Eq. (3) in a differentiable way:\n$\gamma^* = \arg\min_{\gamma} \sum_{m=1}^{M} GS_{hard}(\gamma_m)\, L_{cost}(hw(\{GS_{hard}(\gamma_m)\}), net(\{O_{fw}^l\}), prec(\{B_{fw}^l\}))$ (5)\nwhere $M$ is the number of accelerator design parameters. Given the network $net(\{O_{fw}^l\})$ and precision $prec(\{B_{fw}^l\})$, where $O_{fw}^l$ and $B_{fw}^l$ are the only operator and precision activated during forward, as discussed in Sec. 3.4, our search engine utilizes hard Gumbel Softmax $GS_{hard}$ sampling on each design parameter $\gamma_m$ to build an accelerator $hw(\{GS_{hard}(\gamma_m)\})$, and penalizes each sampled accelerator parameter with the overall hardware-cost $L_{cost}$ through relaxation in a gradient manner.\nHardware template. We adopt a unified template for both the FPGA and ASIC accelerators, which is a parameterized chunk-based pipeline micro-architecture inspired by (Shen et al., 2017). In particular, the hardware/micro-architecture template comprises multiple sub-accelerators (i.e., chunks) and executes DNNs in a pipeline fashion. Each chunk is assigned multiple, but not necessarily consecutive, layers, which are executed sequentially within the chunk. Similar to Eyeriss, each chunk consists of several levels of buffers/memories (e.g., on-chip buffer and local register files) and processing elements (PEs) to facilitate data reuse and parallelism, with searchable design knobs such as PE interconnections (i.e., network-on-chip), allocated buffer sizes, MAC operations’ scheduling and tiling (i.e., dataflows), and so on (see more details in Appendix B).\nDiscussion about the general applicability. Our search approach is general and applicable to different hardware architectures, since we do not hold any prior assumptions about the adopted hardware architecture. Specifically, for any target hardware architecture, including TPU-like, GEMM, or other accelerators, our search approach can be directly applied once given (1) a simulator to estimate the hardware cost and (2) a set of user-defined searchable design knobs abstracted from the target hardware architecture." },
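For illustration, a minimal sketch of one update of Eq. (5), assuming a user-provided (non-differentiable) cost_model simulator that returns a scalar hardware cost (e.g., latency or EDP) for the sampled design knobs; the knob sizes and names are hypothetical:

import torch
import torch.nn.functional as F

knob_sizes = [3, 8, 6]  # e.g., PE-array types, buffer sizes, tiling options
gammas = [torch.zeros(n, requires_grad=True) for n in knob_sizes]
opt = torch.optim.Adam(gammas, lr=0.01)

def accelerator_step(cost_model, tau=1.0):
    # one-hot straight-through sample per design knob
    onehots = [F.gumbel_softmax(g, tau=tau, hard=True) for g in gammas]
    choices = [int(h.argmax()) for h in onehots]
    cost = cost_model(choices)  # a plain float, treated as a constant
    # penalize every sampled choice by the overall cost; gradients flow
    # into the gammas through the straight-through hard samples
    loss = sum(h.max() for h in onehots) * cost
    opt.zero_grad(); loss.backward(); opt.step()
    return cost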
To tackle the three aforementioned challenges, TRIPS integrates a novel co-search pipeline which can be illustrated using the co-search for α as follows and is similarly applicable to co-search for β . Note that here we define path to be one of the parallelled candidate operators between the layer input and layer output within one searchable layer, which can be viewed as a coarse-grained (layer-wise) version of the path definition in (Wang et al., 2018b; Qiu et al., 2019).\nSingle-path forward: For updating both α (see Eq. (6)) and β during forward, TRIPS adopts hard Gumbel Softmax sampling (Hu et al., 2020a), i.e., only the choice with the highest probability will be activated to narrow the gap between the search and evaluation thanks to the single-path property of hard Gumbel Softmax sampling. In Eq. (6), Al and Al+1 denote the feature maps of the l-th and (l + 1)-th layer, respectively, N is the total number of operator choices, Oli is the i-th operator in the l-th layer parameterized by αli, and O l fw is the only operator activated during forward.\nMulti-path backward: For updating both α (see Eq. (7)) and β during backward, TRIPS activates multiple paths to calculate the gradients of α and β through Gumbel Softmax relaxation in order to balance the search efficiency and stability motivated by (Cai et al., 2018; Hu et al., 2020b). For\nexample, αli’s gradients are calculated using Eq. (7), where K is the number of activated choices with the top K Gumbel Softmax probability. Similar to (Cai et al., 2018), K ∈ (1, N) in TRIPS to control the computational cost.\nHardware-cost penalty: The network search in Eq. (1) is performed in a layer/block-wise manner as in (Liu et al., 2018), thus requiring layer/block-wise hardware-cost penalty which is determined by the layer/block-to-accelerator mapping method and the corresponding layer/block execution cost on the optimal accelerator hw(γ∗). The optimal mapping method of an accelerator is yet determined by the whole network. To handle this gap, we derive the layer/block-wise hardware-cost assuming that the single-path network derived from the current forward would be the final derived network, as this single-path network has a higher if not the highest probability to be finally derived. In Eq. (8), 1 is an indicator denoting whether αli (i.e., the i-th operator in the l-th layer) is activated during forward." }, { "heading": "4 EXPERIMENT RESULTS", "text": "" }, { "heading": "4.1 EXPERIMENT SETUP", "text": "Software settings. Search space and hyper-params. We adopt the same search space in (Wu et al., 2019) for the ImageNet experiments and disable the first two down sampling operations for the CIFAR-10/100 experiments. We use [4, 6, 8, 12, 16] as candidate precision choices and one block shares the same precision of weights and activations for more hardware friendly implementation. We activate two paths during backward, i.e., K = 2 in Eq. (7), for search efficiency. For Lcost in Eq. ( 3), we use latency for FPGA as the target metric is Frame-Per-Second (FPS), and Energy-Delay-Product (EDP) for ASIC. Detailed search and training settings are elaborated in Appendix A.\nBaselines. 
We mainly benchmark over four kinds of SOTA baselines: (1) the most relevant baseline EDD (Li et al., 2020) which co-searches networks, precision, and accelerators, (2) SOTA methods co-exploring networks and accelerators including HS-Co-Opt (Jiang et al., 2020b), NASAIC (Yang et al., 2020), and BSW (Abdelfattah et al., 2020), (3) SOTA methods co-searching the networks and precision including APQ (Wang et al., 2020) and MP-NAS (Gong et al., 2019), and (4) hardwareaware NAS with uniform precision, including FBNet (Wu et al., 2019), ProxylessNAS (Cai et al., 2018), Single-Path NAS (Stamoulis et al., 2019), and EfficientNet-B0 (Tan & Le, 2019).\nHardware settings. To evaluate the generated network and accelerator designs, for FPGA cases, we adopt the standard Vivado HLS (Xilinx Inc., a) design flow, on the target Xilinx ZC706 development board (Xilinx Inc., b), which has a total 900 DSP48s (Digital Signal Processor) and 19.1Mb BRAM (Block RAM). For ASIC implementations, we use the SOTA energy estimation tool Timeloop (Parashar et al., 2019) and Accelergy, (Wu et al., 2019), to validate our generated design’s performance, with CACTI7 (Balasubramonian et al., 2017) and Aladdin (Shao et al., 2014) at a 32nm CMOS technology as unit energy and timing cost plugins. Details about the accelerator search space are discussed in Appendix B." }, { "heading": "4.2 BENCHMARK SEARCH EFFICIENCY", "text": "To evaluate the superiority of TRIPS in terms of search efficiency, we compare the search space size and search time of TRIPS with both RL-based co-search works and one-shot NAS methods using the reported data from the baselines’ original papers as shown in Tab. 2. We can see that TRIPS consistently require notably less search time while handling the largest joint search space on all the considered tasks. In particular, compared with the one-shot NAS methods (Guo et al., 2020; Cai\net al., 2019) which can be potentially extended to co-search frameworks while suffering from a large pretraining cost, TRIPS achieves 3.6× ∼ 30× less search time on ImageNet, while being end-to-end, justifying our choice of differentiable co-search.\n4.3 BENCHMARK OVER SOTA METHODS\nCo-exploration of networks, precision, and accelerators. We benchmark our TRIPS framework with SOTA efficient DNN solutions on ImageNet and FPGA-based accelerators under the 512 DSP limits in Fig. 3 following (Abdelfattah et al., 2020). Specifically, we provide four searched results of our TRIPS framework; we use the reported results for EDD, and search for the optimal accelerator in our accelerator space for APQ, MP-NAS, and SOTA hardware-aware NAS methods; for EfficientNet-B0, we apply the SOTA mixed precision strategy searched by (Habi et al., 2020) for a fair comparison and the ProxylessNAS8bit is reported by APQ (Wang et al., 2020); and the other baselines are all quantized to 8-bit for hardware measurement and the accuracies are from the original papers with-\nout considering the quantization effect. We can observe from Fig. 3 that (1) the searched networks by our TRIPS framework consistently push forward the frontier of accuracy-FPS trade-offs, (2) compared with the most relevant baseline EDD, we achieve a +1.3% higher accuracy with a 1.59× FPS. The effectiveness of TRIPS over various SOTA methods that represent most of the existing co-design directions verifies the necessity and effectiveness of co-searching all the three aspects towards efficient DNN accelerators.\nCo-exploration of networks and accelerators. 
Software-Hardware co-design is a significant property of our TRIPS framework, so we further benchmark it with both searched precision and fixedprecision over SOTA network/accelerator co-search works for a fair comparison.\nCo-search on FPGA. We benchmark with HSCo-Opt (Jiang et al., 2020b) and BSW (Abdelfattah et al., 2020) on ZC706 under the same DSP limits as the baselines on CIFAR10/100/ImageNet. Note that all our baselines here adopt a 16-bit fixed-point design, so we provide TRIPS with fixed 16-bit in addition to the one with searched precision for a fair comparison. From Fig. 4, we can see that (1) on both\nCIFAR-10/100 dataset, TRIPS with fixed 16-bit consistently achieves a better accuracy (up to 10.91% and 5.15%, respectively) and higher FPS (up to 2.21× and 2.15×, respectively) under the same DSP constraint, and (2) when co-searching the precision, our method can more aggressively push forward the FPS improvement (up to 6.79× and 6.54×, respectively on CIFAR-10/100), implying the importance of the co-exploration of the precision dimension in addition to network and accelerator co-explorations. Specifically, TRIPS with searched precision achieves a +5.96% higher accuracy and 4.4× FPS on ImageNet over (Jiang et al., 2020b).\nCo-search on ASIC. We benchmark with NASAIC, the first exploration towards network / accelerator co-search targeting ASIC accelerators, with both the co-search results and their reported sequential optimization/hardware aware optimization results (Yang et al., 2020) on CIFAR10 in Tab. 3. We can observe that compared with both co-search, sequential optimization, and hardware-aware optimization methods for\nexploring the ASIC design space, our TRIPS framework consistently achieves notably improved trade-offs between accuracy and energy delay product (EDP), which is energy multiplied with latency. In particular, we achieve a +0.17% ∼ +1.81% higher accuracy with a 371.56 ∼ 756.88× reduction in EDP. In the baseline implementations, most of the area is occupied by the support for heterogeneous functionalities, which leads to severe under utilization when executing one task, thus contributing to the surprisingly higher area and energy consumption.\nWe further benchmark TRIPS over other two co-search baselines targeting ASIC accelerators, i.e., NHAS (Lin et al.) and DANCE (Choi et al., 2020). In particular, we fix the precision of TRIPS to be 4-bit and 16-bit to fairly compare with (1) NHAS which adopts 4-bit and (2) DANCE which adopts 16-bit, respectively. As shown in Tab. 4, TRIPS achieves a 0.96%/3.5% higher accuracy and a 20.9%/64.9% reduction in latency together with a 6.3%/22.3% reduction in area consumption, as compared with NHAS and DANCE, respectively, verifying the superiority of our TRIPS." }, { "heading": "4.4 ABLATION STUDIES ABOUT TRIPS", "text": "Scalability under the same DSP. In Fig. 5, we show the pareto frontier of our TRIPS framework under the same DSP constraint with different accuracy and FPS trade-offs on CIFAR-100 to show our TRIPS can handle and is scalable with a large range of DNN solutions.\nEffectiveness of heterogeneous sampling. In addition to the example and analysis in Sec. 3.2, we further benchmark with the baseline that adopts the same sampling strategy for updating both the weights and precision. We integrate the baseline’s sampling strategy into our TRIPS framework (K = 2 for all the experiments), termed as TRIPS w/o h-sampling, and show\nthe trade-offs between the achieved accuracy and FPS in Fig. 5. 
We find that it tends to select lower precision choices, which harms the overall accuracy; its performance is consistently inferior to that of TRIPS with heterogeneous sampling, due to its inaccurate estimation of the ranking among different precision choices.\nComparison with sequential optimization. Due to the great flexibility on both the software and hardware sides, a natural baseline is to search the network and precision based on theoretical efficiency metrics (e.g., bit operations) and then search for the best matched accelerator given the searched network and precision from the first search. We benchmark over results from such a design flow in Fig. 5 on CIFAR-100 and observe that TRIPS consistently outperforms the sequential optimization baseline, e.g., a 1.95% higher accuracy with 1.75× FPS, indicating the poor correlation between theoretical efficiency and real-device efficiency measurements.\nMore ablation studies about the accelerator search engine and visualizations of the searched network, precision, and accelerator can be found in Appendix C and D, respectively." }, { "heading": "5 CONCLUSION", "text": "We propose a Triple-Search (TRIPS) framework to jointly search for the network structure, precision, and accelerator in a differentiable manner. Our TRIPS framework adopts a heterogeneous sampling strategy and a novel co-search pipeline that integrates a generic differentiable accelerator search engine to achieve unbiased search with constant memory consumption. Extensive experiments validate the superiority of TRIPS over SOTA designs in terms of accuracy and efficiency." }, { "heading": "A TRAINING AND SEARCH SETTING", "text": "Search settings. For searching on the CIFAR-10/100 datasets, we use half of the dataset for updating the supernet weights ω and the other half for updating the network and precision parameters α and β. We search for 90 epochs with an initial Gumbel Softmax temperature of 5 decayed by a factor of 0.975 every epoch. For searching on ImageNet, we randomly sample 100 classes as a proxy search dataset and use 80% of the data for updating ω and the other 20% for updating α and β. We pretrain the supernet for 30 epochs without updating the network architecture and precision, then search for 90 epochs with an initial temperature of 5 decayed by 0.956 every epoch, following (Wu et al., 2019). For both CIFAR-10/100 and ImageNet, we use an initial learning rate of 0.1 with an annealing cosine learning rate schedule.\nTraining settings. For CIFAR-10/100, we train the derived network for 600 epochs with a 0.1 initial learning rate and an annealing cosine learning rate schedule on a single NVIDIA RTX-2080Ti GPU, following (Liu et al., 2018). For ImageNet, we adopt a 0.05 initial learning rate and an annealing cosine learning rate schedule for 150 epochs with four NVIDIA Tesla V100 GPUs." }, { "heading": "B ACCELERATOR SEARCH SPACE", "text": "To flexibly maintain the balance between hardware resource consumption and throughput across different generated networks, we employ the chunk-wise pipelined accelerator architecture inspired by (Shen et al., 2017; Zhang et al., 2018b). To enable automatic hardware optimization and explore as much of the performance frontier as possible, we further free up the hardware configurations during the co-optimization. Adopted from (Chen et al., 2017; Zhang et al., 2015; Yang et al., 2016), these configurations, as illustrated in Fig. 1,
cover 1) parallel processing element (PE) settings: the number and inter-connections of PEs, 2) buffer management: the allocation of lower memory levels between inputs, weights, and outputs, 3) the tiling and scheduling of MAC (Multiply-and-Accumulate) computations, and 4) layer allocation: the way each layer is assigned to the corresponding pipeline stage (sub-accelerator). All the above configurations are formatted and maintained through vectors of options to be compatible with the optimization formulation in Sec. 3. Taking AlexNet as an example workload, the total accelerator space size can reach up to 10^5 for each sub-accelerator, and the space grows exponentially larger as the number of sub-accelerators (pipeline chunks) increases." }, { "heading": "C ABLATION STUDIES ABOUT THE ACCELERATOR SEARCH ENGINE", "text": "The proposed accelerator search engine is one key enabler of our TRIPS framework. To evaluate its efficacy, we compare the accelerator efficiency of the TRIPS generated accelerators with SOTA accelerators under the same datasets and models. For FPGA-based accelerators, we consider three representative ones, including (Qiu et al., 2016; Xiao et al., 2017; Zhang et al., 2018b), on two DNN models (AlexNet and VGG16) on ImageNet. For a fair comparison, when using our own engine to generate optimal accelerators, we adopt the same precision and FPGA resources as the baselines. The results in Tab. 5 show that the TRIPS generated accelerators outperform both SOTA expert-designed and tool-generated accelerators under the same dataset, DNNs, and FPGA resources. For example, the TRIPS generated accelerators achieve up to a 2.16× increase in throughput on VGG16. The consistently better performance of our auto-generated accelerators validates the effectiveness of our accelerator search engine in navigating the large and discrete design space of DNN accelerators to search for optimal DNN accelerators.\nD VISUALIZATION OF SEARCHED NETWORK, PRECISION, AND ACCELERATOR.\nFig. 6 visualizes the searched network, precision, and accelerator, which achieve a 72.2% top-1 accuracy on ImageNet and 110 FPS on the ZC706 FPGA. We summarize the insights below.\nInsights for the searched network. We find that wide-shallow networks better favor real-device efficiency on the ZC706 FPGA while achieving a similar accuracy. We conjecture the reason is that wider networks offer more opportunities for feature/channel-wise parallelism when the batch size equals one, leading to higher resource utilization and thus overall higher throughput.\nInsights for the searched accelerator of TRIPS. The whole network is partitioned into multiple pipeline chunks to prioritize the resulting throughput, with each color representing one chunk. Deterministically, two blocks with different quantization schemes will not be put into one chunk due to hardware constraints. Adopting the design in (Shen et al., 2017), there is no dependency of computation among blocks, so the pipeline chunks can take non-consecutive layers and better group the layers with similar dimensions together. Generally, as observed in Fig. 6, the chunks which take mostly the early blocks of the network favor spatially tiling the feature map height and width as it offers more parallelism, while the later chunks tend to choose architectures that tile channels as, after down-sampling, the parallelism opportunity is more prominent along channel dimensions." } ]
2020
null
SP:d850572819200f79545616fc92e789ce958b30d4
[ "This paper deals with continual learning. Specifically, given a stream of tasks we want to maximise performance across all tasks. Typically neural networks suffer from catastrophic forgetting which results in worse performance on tasks seen earlier in training. There are many proposed solutions to this problem. One specific set of approaches are \"memory based\" algorithms. Here we store some training examples in memory from the tasks seen thus far. These are then mixed in with new training data so as to encourage the model to not forget past tasks. " ]
Continual learning often assumes knowledge of (strict) task boundaries and identities for the instances in a data stream—i.e., a “task-aware” setting. However, in practice it is rarely the case that practitioners can expose task information to the model, which calls for “task-free” continual learning methods. Recent attempts towards task-free continual learning focus on developing memory-construction and replay strategies such that model performance over previously seen instances is best retained. In this paper, looking from a complementary angle, we propose to “edit” memory examples to allow the model to better retain past performance when memory is replayed. Such memory editing is achieved by making gradient updates to memory examples so that they are more likely to be “forgotten” by the model as it views new instances in the future. Experiments on five benchmark datasets show our proposed method can be seamlessly combined with baselines to significantly improve performance and achieve state-of-the-art results. 1
[]
[ { "authors": [ "Tameem Adel", "Han Zhao", "Richard E. Turner" ], "title": "Continual learning with adaptive weights (claw)", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Rahaf Aljundi", "Klaas Kelchtermans", "Tinne Tuytelaars" ], "title": "Task-free continual learning", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Rahaf Aljundi", "Lucas Caccia", "Eugene Belilovsky", "Massimo Caccia", "Min Lin", "Laurent Charlin", "Tinne Tuytelaars" ], "title": "Online continual learning with maximally interfered retrieval", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Rahaf Aljundi", "Min Lin", "Baptiste Goujaud", "Yoshua Bengio" ], "title": "Gradient based sample selection for online continual learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Massimo Caccia", "P. Rodrı́guez", "O. Ostapenko", "Fabrice Normandin", "Min Lin", "L. Caccia", "Issam H. Laradji", "I. Rish", "Alexande Lacoste", "D. Vázquez", "Laurent Charlin" ], "title": "Online fast adaptation and knowledge accumulation: a new approach to continual learning", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-GEM", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Marcus Rohrbach", "Mohamed Elhoseiny", "Thalaiyasingam Ajanthan", "Puneet K. Dokania", "Philip H.S. Torr", "Marc’Aurelio Ranzato" ], "title": "On tiny episodic memories in continual learning. 2019b", "venue": null, "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Albert Gordo", "Puneet Kumar Dokania", "Philip H.S. Torr", "David Lopez-Paz" ], "title": "Using hindsight to anchor past knowledge in continual learning", "venue": null, "year": 2002 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Sayna Ebrahimi", "Mohamed Elhoseiny", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Uncertainty-guided continual learning with bayesian neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ian J Goodfellow", "Mehdi Mirza", "Da Xiao", "Aaron Courville", "Yoshua Bengio" ], "title": "An empirical investigation of catastrophic forgetting in gradient-based neural networks", "venue": "arXiv preprint arXiv:1312.6211,", "year": 2013 }, { "authors": [ "J. Harrison", "Apoorva Sharma", "Chelsea Finn", "M. Pavone" ], "title": "Continuous meta-learning without tasks", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Xu He", "Jakub Sygnowski", "Alexandre Galashov", "Andrei A. 
Rusu", "Yee Whye Teh", "Razvan Pascanu" ], "title": "Task agnostic continual learning via meta learning", "venue": null, "year": 1906 }, { "authors": [ "Yen-Chang Hsu", "Yen-Cheng Liu", "Anita Ramasamy", "Zsolt Kira" ], "title": "Re-evaluating continual learning scenarios: A categorization and case for strong baselines", "venue": "arXiv preprint arXiv:1810.12488,", "year": 2018 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "Christopher JC Burges" ], "title": "The mnist database of handwritten digits", "venue": "URL http://yann. lecun. com/exdb/mnist,", "year": 1998 }, { "authors": [ "Soochan Lee", "Junsoo Ha", "Dongsu Zhang", "Gunhee Kim" ], "title": "A neural dirichlet process mixture model for task-free continual learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xilai Li", "Yingbo Zhou", "Tianfu Wu", "Richard Socher", "Caiming Xiong" ], "title": "Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting", "venue": null, "year": 1904 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "James L. McClelland", "Bruce L. McNaughton", "Randall C. O’Reilly" ], "title": "Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory", "venue": "Psychological review,", "year": 1995 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Cuong V. Nguyen", "Yingzhen Li", "Thang D. Bui", "Richard E. 
Turner" ], "title": "Variational continual learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Amal Rannen", "Rahaf Aljundi", "Matthew B Blaschko", "Tinne Tuytelaars" ], "title": "Encoder based lifelong learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Dushyant Rao", "Francesco Visin", "Andrei Rusu", "Razvan Pascanu", "Yee Whye Teh", "Raia Hadsell" ], "title": "Continual unsupervised representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Roger Ratcliff" ], "title": "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions", "venue": "Psychological review,", "year": 1990 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander I Kolesnikov", "Georg Sperl", "Christoph H. Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Anthony V. Robins" ], "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "venue": "Connect. Sci.,", "year": 1995 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy P. Lillicrap", "Greg Wayne" ], "title": "Experience replay for continual learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Joan Serrà", "Didac Suris", "Marius Miron", "Alexandros Karatzoglou" ], "title": "Overcoming catastrophic forgetting with hard attention to the task", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": null, "year": 1985 }, { "authors": [ "JMLR. org", "2017. Chen Zeno", "Itay Golan", "Elad Hoffer", "Daniel Soudry" ], "title": "Task agnostic continual learning", "venue": null, "year": 2017 }, { "authors": [ "Chaudhry" ], "title": "2020) learns an pseudo “anchor” example per task per class in addition to the replay memory by maximizing its estimated forgetting, and tries to fix model outputs on the anchors at training. However, unlike GMED, they estimate forgetting with loss increase on examples when the model train for a pass on the replay memory (and thus forgetting is estimated with “hindsight”). The approach is not task-free", "venue": "Hindsight Anchor Learning", "year": 2020 }, { "authors": [ "Aljundi" ], "title": "Maximally Interfering Retrieval", "venue": null, "year": 2019 }, { "authors": [ "Ebrahimi" ], "title": "We perform a grid search over all combinations of α and β and select the one with the best validation performance on the first three tasks", "venue": "We select α from [0.01,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Accumulating past knowledge and adapting to evolving environments are one of the key traits in human intelligence (McClelland et al., 1995). While contemporary deep neural networks have achieved impressive results in a range of machine learning tasks Goodfellow et al. (2015), they haven’t yet manifested the ability of continually learning over evolving data streams (Ratcliff, 1990). These models suffer from catastrophic forgetting (McCloskey & Cohen, 1989; Robins, 1995) when trained in an online fashion—i.e., performance drops over previously seen examples during the sequential learning process. To this end, continual learning (CL) methods are developed to alleviate catastrophic forgetting issue when models are trained on non-stationary data streams (Goodfellow et al., 2013).\nMost existing work on continual learning assume that, when models are trained on a stream of tasks sequentially, the task specifications such as task boundaries or identities are exposed to the models. These task-aware CL methods make explicit use of task specifications to avoid catastrophic forgetting issue, including consolidating important parameters on previous tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018), distilling knowledge from previous tasks (Li & Hoiem, 2017; Rannen et al., 2017), or separating task-specific model parameters (Rusu et al., 2016; Serrà et al., 2018). However, in practice, it is more likely that the data instances comes in a sequential, non-stationary fashion without task identity or boundary—a setting that is commonly termed as task-free continual learning (Aljundi et al., 2018). To tackle this setting, recent attempts on task-free CL methods have been made (Aljundi et al., 2018; Zeno et al., 2018; Lee et al., 2020). These efforts revolve around regularization and model expansion based approaches, which rely on inferring task boundaries or identities (Aljundi et al., 2018; Lee et al., 2020) and perform online paramater importance estimation (Zeno et al., 2018), to consolidate or separate model parameters.\nIn another line of efforts, memory-based CL methods have achieved strong results in task-free setting Aljundi et al. (2019b). These methods store a small set of previously seen instances in a fix-sized memory, and utilize them for replay (Robins, 1995; Rolnick et al., 2019) or regularization (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a). The core problem in memory-based CL methods is how to manage the memory instances (e.g., which to replace with new instances) and replay them given a restricted computation budget, so that the model performance can be maximally preserved or\n1Code has been uploaded in the supplementary materials and will be published.\nenhanced. Prior work developing these methods have tried to either identify: 1) what instances to include in memory from a data stream (Aljundi et al., 2019b; Rebuffi et al., 2017; Chaudhry et al., 2019b); and 2) which instances in memory need to be replayed at what training step (Aljundi et al., 2019a).\nIn this paper, we provide a new approach to solving the memory management problem in task-free continual learning by studying how to make gradient updates on stored memory examples. We develop a novel memory editing algorithm which complements existing memory-replay methods and data-sampling strategies for memory management (updates). The challenge is to propose a plausible and sound optimization objective of editing. 
We employ the same intuition as previous studies (Toneva et al., 2019; Chaudhry et al., 2020; Aljundi et al., 2019a): examples that are likely to be forgotten should be prioritized. Our proposed method, named Gradient-based Memory EDiting (GMED), edits examples stored in the memory with gradient-based updates so that they are more likely to be forgotten. Specifically, we estimate the “forgetting” of a stored example by its loss increase over the upcoming online model update. Finally, we perform gradient ascent on stored examples so that they are more likely to be forgotten.\nExperiments show that our algorithm consistently outperforms baselines on five benchmark datasets under various memory sizes. Our ablation study shows the proposed editing mechanism outperforms alternative editing strategies such as random editing. We demonstrate that the proposed algorithm is general enough to be used with other strong (more recent) memory-based CL methods to further enhance performance, thus allowing for improvements on many benchmark datasets." }, { "heading": "2 RELATED WORKS", "text": "Task-aware Continual Learning. Most continual learning algorithms are studied under “task-aware” settings, where the model visits a sequence of clearly separated “tasks”. A great portion of algorithms make explicit use of task boundaries (Kirkpatrick et al., 2017; Rusu et al., 2016; Lopez-Paz & Ranzato, 2017), by learning separate parameters for each task, or by discouraging changes of parameters that are important to old tasks. Existing continual learning algorithms can be summarized into three categories: regularization-based, architecture-based, and data-based approaches. Regularization-based approaches (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018; Adel et al., 2020) discourage the change of parameters that are important to previous data. Model expansion-based approaches (Rusu et al., 2016; Serrà et al., 2018; Li et al., 2019) allow expansion of the model architecture to separate parameters for previous and current data. Data-based approaches (Robins, 1995; Shin et al., 2017; Lopez-Paz & Ranzato, 2017) replay or constrain model updates with real or synthetic examples.\nTask-free Continual Learning. Recently, task-free continual learning (Aljundi et al., 2018) has drawn increasing interest, where we do not assume knowledge about task boundaries. To the best of our knowledge, only a handful of regularization-based (Zeno et al., 2018; Aljundi et al., 2018), model-expansion-based (Lee et al., 2020), generative-replay-based (Rao et al., 2019), and continual meta-learning and meta-continual learning (He et al., 2019; Caccia et al., 2020; Harrison et al., 2020) approaches are applicable in the task-free CL setting. Meanwhile, most memory-based continual learning algorithms are applicable to the task-free setting (Aljundi et al., 2019a;b). Memory-based CL algorithms such as Experience Replay (ER) (Robins, 1995) store a subset of examples in a fixed-size replay memory and utilize them later in training to alleviate forgetting. Recent research has studied online strategies to improve the performance gain when examples get replayed along two dimensions: which examples to store, and which examples to replay. For example, in terms of deciding which examples to store, Gradient based Sample Selection (GSS) (Aljundi et al., 2019b) proposes to store the most diverse examples. 
In terms of deciding which examples to replay, Maximally Interfering Retrieval (MIR) (Aljundi et al., 2019a) selects examples with the largest estimated forgetting. In particular, a task-aware approach, Hindsight Anchor Learning (HAL) (Chaudhry et al., 2020), shares the same assumption that forgettable examples should be prioritized. However, HAL only applies to task-aware settings and requires extra memory storage to keep track of the learned anchors. Figure 1 shows a categorization of memory-based task-free continual learning." }, { "heading": "3 PRELIMINARIES", "text": "In this section we first present the problem formulation of task-free continual learning and then introduce preliminaries on memory-based continual learning methods." }, { "heading": "3.1 PROBLEM FORMULATION", "text": "In task-free continual learning, we consider a (potentially infinite) stream of data examples D with a non-stationary data distribution, i.e., the data distribution P(x, y) changes over time. At each time step t, the model receives a single or a mini-batch of labeled examples (x_t, y_t) from the data stream D. For simplicity, here we assume that an example (x_t, y_t) from D is generated by first sampling a latent “task” z ∼ P(z; t), followed by sampling a data example from a joint data distribution P(x, y|z; t) that is conditioned on the task z, i.e., (x_t, y_t) ∼ P(x, y|z; t). Here P(z; t) is non-i.i.d. and time-dependent. Similarly, P(x, y|z; t) also changes over time. The goal of task-free online continual learning is to seek a classification model f(x; θ), parameterized by θ, over new examples (x, y) from the data stream D that minimizes a predefined loss ℓ(x, y; θ) while not increasing the loss on previously seen examples. This capability is evaluated by testing the model over a test set of all visited tasks." }, { "heading": "3.2 MEMORY-BASED CL METHODS", "text": "Briefly, memory-based CL algorithms maintain a fixed-size replay memory M which is used to store (a subset of) previously seen examples (x_t, y_t) from the stream D. When the memory is full, the algorithm needs to either identify a memory example (x, y) to be replaced by the new example, or discard the new example it just received. Following the same setup as previous memory-based CL methods, our experiments use reservoir sampling (Vitter, 1985) to determine how the memory is updated with new examples received from the stream D. Every time the model receives a new example, it draws an integer j between 0 and N uniformly at random, where N is the number of examples visited so far. If j < |M| (i.e., the memory size or budget), it replaces the example at the j-th position in the memory with the new example; otherwise, the newly received example is discarded. Reservoir sampling ensures that at each time step each visited example is kept with an equal probability |M|/N.\nAt each time step t, the algorithm also needs to determine the memory examples to be used for replay. Similar to previous methods, we randomly sample one or a mini-batch of examples (x, y) from the memory M. As an alternative replay strategy, MIR (Aljundi et al., 2019a) identifies a subset of memory examples based on a predefined optimization objective (i.e., perform one step of training on (x, y)), and then replays the selected examples." }, { "heading": "4 GRADIENT BASED MEMORY EDITING", "text": "We propose Gradient based Memory Editing (GMED), a novel algorithm for updating stored memory examples in an online fashion. We state our hypothesis about which examples should be stored in Sec. 4.1. 
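As a concrete reference for the memory update just described in Sec. 3.2, reservoir sampling can be sketched in a few lines of Python; the function name reservoir_update and the list-based memory are our own illustrative choices, not the paper's released code.

```python
import random

def reservoir_update(x, y, memory, n_seen, mem_size):
    """Reservoir sampling (Vitter, 1985): after n_seen stream examples,
    every visited example remains in memory with probability mem_size / n_seen."""
    if len(memory) < mem_size:
        memory.append((x, y))          # memory not yet full: always store
    else:
        j = random.randrange(n_seen)   # uniform integer in [0, n_seen)
        if j < mem_size:
            memory[j] = (x, y)         # replace; otherwise discard the new example
```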
We then formulate an online optimization objective for example editing in Sec. 4.2. In Sec. 4.3, we introduce algorithmic details of GMED and its integration with MIR." }, { "heading": "4.1 HYPOTHESIS FOR MEMORY EDITING", "text": "As there is no prior knowledge about the forthcoming examples in a data stream D, previous task-free CL methods usually impose (implicit) assumptions regarding what kinds of examples may improve the model's test performance after replaying these examples. For example, GSS (Aljundi et al., 2019b) assumes that the diversity of memory examples contributes to the model performance; MIR (Aljundi et al., 2019a) and HAL (Chaudhry et al., 2020) assume that replaying examples that are likely to be “forgotten” can benefit the performance. An empirical study by Toneva et al. (2019) also shows that there are constantly forgotten examples, which benefit overall performance when they get replayed compared to other examples.\nOur work is based on a similar hypothesis: replaying examples that are likely to be forgotten by the current model helps retain its test performance. Specifically, suppose we train the model on D until a time step T; the “forgetting” measurement of an example (x, y) for the model at time t, denoted by d_{t:T}(x, y), is defined as the “loss increase” at time T compared to that at time t, as follows:\nd_{t:T}(x, y) = ℓ(x, y; θ_T) − ℓ(x, y; θ_t), (1)\nwhere θ_t denotes the model parameters at the current time step t. A larger d_{t:T}(x, y) indicates that the example suffers more forgetting at the end of training. The hypothesis is formally stated as follows.\nHypothesis 1. Given a budget of C examples to replay at time t, in order to minimize the loss over new examples from a (latent) task k (e.g., in the test set), an ideal strategy is to replay the most forgettable examples, denoted as S_k, selected from the training examples of the (latent) task k, denoted as D_k.\nFollowing Hypothesis 1, the set of examples S_k can be obtained by solving the following optimization problem:\nS_k = argmax_{S_k ⊆ D_k, |S_k| = C} ∑_{(x,y) ∈ S_k} d_{t:T}(x, y), (2)\nwhere D_k denotes the training examples of the latent task k. Unfortunately, the sample selection problem in Eq. 2 cannot be solved in an online setting, even when we know task identities, because the objective function in Eq. 2 can only be evaluated at a distant future time step T. Therefore, approximations are necessary to enable online optimization." }, { "heading": "4.2 ONLINE OPTIMIZATION FOR MEMORY EDITING", "text": "We propose an optimization problem that is tractable in an online fashion and shares the same goal of making memory examples used for replay more likely to be forgotten. We modify Eq. 2 so that we (1) relax the constraint S_k ⊆ D_k and edit individual examples stored in memory instead of selecting the most forgettable examples from D_k to store, and (2) estimate the forgetting measure in an online fashion.\nFormally, suppose that the model is at the t-th time step and that (x, y) is an example from a certain task k. In order to retain the performance on test examples from the same task k, we propose the following objective:\nx* = argmax_x d_{t:t+1}(x, y) − β ℓ(x, y; θ_t), (3)\nwhere d(·) is the forgetting defined in Eq. 1 and ℓ(x, y; θ_t) is the loss of the example (x, y) at the current time step t. β is a trade-off hyperparameter deciding the regularization strength. The optimal (x*, y) has the same label y as the original example (x, y).\nSpecifically, we discuss two main differences of Eq. 3 compared to Eq. 2
in Hypothesis 1.\nEstimating Forgetting Online. We maximize the forgetting of the example (x, y) in the upcoming update (time step t+1, denoted as d_{t:t+1}(x, y)), instead of the forgetting when the model gets evaluated, i.e., d_{t:T}(x, y). The former can be evaluated online efficiently without any overhead on the replay memory.\nRelaxed Constraints. Eq. 3 does not constrain (x, y) ∈ D_k. It allows (x*, y) to be an arbitrary example in the input space without being a real example from D_k; the sample selection problem is posed as an optimization problem in the continuous space. Nevertheless, the editing is made conservatively, so that the edited example is still likely to be an example from the original latent task. In practice, (x, y) is initialized as different input examples, and we perform only one or a few gradient updates each time it is drawn from memory for replay. We also add a regularization term β ℓ(x, y; θ_t) to discourage the loss increase on the example.\nThe objective in Eq. 3 is differentiable with respect to x, allowing us to update x with gradient ascent. In the rest of this section, we introduce algorithmic details of GMED.\n4.3 THE GMED ALGORITHM\nAlgorithm 1: ER with Memory Editing\nInput: learning rate τ, edit stride α, regularization strength β, model parameters θ\nReceives: stream example (x^(D), y^(D)); Initialize: replay memory M\nfor t = 1 to T do\n  (x, y) ∼ M\n  ℓ_before ← loss(x, y, θ_t); ℓ_stream ← loss(x^(D), y^(D), θ_t)\n  // update model parameters with stream examples, discarded later\n  θ'_t ← SGD(ℓ_stream, θ_t, τ)\n  // evaluate forgetting of memory examples\n  ℓ_after ← loss(x, y, θ'_t); d ← ℓ_after − ℓ_before\n  // edit memory examples\n  x' ← x + α ∇_x(d − β ℓ_before)\n  ℓ ← loss((x', y) ∪ (x^(D), y^(D)), θ_t)\n  θ_{t+1} ← SGD(ℓ, θ_t, τ)\n  replace (x, y) with (x', y) in M; reservoir_update(x^(D), y^(D), M)\nend\nWe start by introducing the algorithm of building GMED upon the ER (Experience Replay) idea. It introduces an additional “editing” step before replaying examples drawn from the memory.\nWe assume that at time step t the model receives a stream example (x^(D), y^(D)) from the training stream D and randomly draws a memory example (x, y) from the memory M. We first compute the forgetting (i.e., loss increase) on the memory example (x, y) when the model performs one gradient update on the parameters with the stream example (x^(D), y^(D)):\nθ'_t = θ_t − ∇_θ ℓ(x^(D), y^(D); θ_t); (4)\nd_{t:t+1}(x, y) = ℓ(x, y; θ'_t) − ℓ(x, y; θ_t), (5)\nwhere θ_t and θ'_t are the model parameters before and after the gradient update, respectively. Figure 2(a) visualizes the steps to compute forgetting.\nFollowing the optimization objective proposed in Eq. 3, we perform a gradient update on x to increase its forgetting, while using a regularization term to discourage the loss increase on the example:\nx' = x + α ∇_x[d_{t:t+1}(x, y) − β ℓ(x, y; θ_t)], (6)\nwhere α is a hyperparameter for the stride of the update. Figure 2(b) visualizes the editing step.\nThe algorithm then discards the updated parameters θ'_t and updates the model parameters θ_t with the edited memory example (x', y) and the stream example (x^(D), y^(D)), in a similar way to ER:\nθ_{t+1} = θ_t − ∇_θ ℓ({(x', y), (x^(D), y^(D))}; θ_t). (7)\nWe replace the original example in the memory with the edited example. In this way, we continuously edit examples stored in the memory alongside training. Algorithm 1 summarizes the proposed ER+GMED algorithm.\nGMED is studied from a complementary direction compared to most prior approaches. 
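To make Eqs. (4)–(6) concrete, a minimal PyTorch-style sketch of one editing step follows; the function name gmed_edit, the deep-copied "virtual" model, and the plain one-step SGD update are our illustrative assumptions, not the authors' released implementation.

```python
import copy
import torch

def gmed_edit(model, loss_fn, x_mem, y_mem, x_stream, y_stream,
              alpha=0.1, beta=0.01, lr=0.05):
    # Loss on the memory example under the current parameters theta_t.
    x = x_mem.clone().requires_grad_(True)
    loss_before = loss_fn(model(x), y_mem)

    # "Virtual" one-step SGD update on the incoming stream example (Eq. 4);
    # the updated copy is used only to estimate forgetting and then discarded.
    virtual = copy.deepcopy(model)
    stream_loss = loss_fn(virtual(x_stream), y_stream)
    grads = torch.autograd.grad(stream_loss, list(virtual.parameters()))
    with torch.no_grad():
        for p, g in zip(virtual.parameters(), grads):
            p -= lr * g

    # Estimated forgetting d_{t:t+1} (Eq. 5), the regularized objective,
    # and one gradient-ascent step on the input itself (Eq. 6).
    loss_after = loss_fn(virtual(x), y_mem)
    objective = (loss_after - loss_before) - beta * loss_before
    grad_x, = torch.autograd.grad(objective, x)
    return (x + alpha * grad_x).detach()
```

Note that only the memory contents change here; the replay procedure itself is untouched.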
Therefore, we can combine GMED with existing memory-based CL algorithms without much effort. We illustrate the point by proposing a hybrid approach of GMED and MIR. We include the details of the algorithm in the Appendix." }, { "heading": "5 EXPERIMENTS", "text": "We compare the performance of GMED against state-of-the-art CL algorithms on five benchmark datasets. We introduce our experimental setup and discuss our results on comparisons with baselines and performance analysis." }, { "heading": "5.1 DATASETS", "text": "We consider six public CL datasets in our experiments.\nSplit / Permuted / Rotated MNIST are constructed from the MNIST (LeCun et al., 1998) dataset of handwritten digit classification. Split MNIST (Goodfellow et al., 2013) partitions the dataset into 5 disjoint subsets by their labels as different tasks. The goal is to classify over all 10 digits when the training ends. Permuted MNIST (Goodfellow et al., 2013) applies a fixed random pixel permutation to the MNIST dataset as different tasks. The dataset consists of 10 tasks. The models classify over 10 digits without knowing the permutation applied. Rotated MNIST (Lopez-Paz & Ranzato, 2017) applies a fixed image rotation between 0 to 180 degree to the MNIST dataset. Similarly, the goal is to classify over 10 digits without knowing the rotation applied. Following Aljundi et al. (2019a), for MNIST experiments, each task consists of 1,000 training examples.\nSplit CIFAR-10 and Split CIFAR- 100 (Zenke et al., 2017) are constructed by splitting the CIFAR10 or CIFAR-100 (Krizhevsky, 2009) of image classification into 5 or 20 disjoint subsets by their labels. The model classifies over all 10 or 100 classes when the training ends.\nSplit mini-ImageNet (Aljundi et al., 2019a) splits the mini-ImageNet (Deng et al., 2009; Vinyals et al., 2016) image classification dataset into 20 disjoint subsets by their labels. Similarly, the models classify over all 100 classes.\nWe do not provide information about task identities or boundaries to the model at both training and test time. Under the taxonomy of the prior literature (van de Ven & Tolias, 2019), our Split MNIST, Split CIFAR-10, and Split mini-ImageNet experiments are under the class-incremental setup, while Permuted MNIST and Rotated MNIST experiments are under the domain-incremental setup." }, { "heading": "5.2 COMPARED METHODS", "text": "For our methods, we report the performance of ER + GMED and MIR + GMED, where we build GMED upon ER or MIR as introduced in section 4.3. We compare with several task-free memory based continual learning methods: Experience Replay (ER) (Robins, 1995; Rolnick et al., 2019), Averaged Gradient Episodic Memory (AGEM) (Chaudhry et al., 2019a), Gradient based Sample Selection (GSS) (Aljundi et al., 2019b), Maximally Interfering Retrieval (MIR) (Aljundi et al., 2019a). We also compare with Bayesian Graident Descent (BGD) (Zeno et al., 2018) and Neural Dirichlet Process Mixture Model (CN-DPM) (Lee et al., 2020), which are regularization and model-expansion based approaches respectively. We also include Graident Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) and Hindsight Anchor Learning (HAL) (Chaudhry et al., 2020), which are task-aware methods. We also reports the results of simple fine tuning, which performs online updates on model parameters without applying continual learning algorithms; and iid Online, where we randomly shuffle the data stream, so that the model visits an i.i.d. 
stream of examples; and also iid Offline, where we further allow multiple pass over the dataset. See appendix for detailed descriptions of compared methods and their implementation details.\nBy default, we set the size of replay memory as 10,000 for split CIFAR-100 and split mini-ImageNet, and 500 for all other datasets. We also report performance under various memory sizes. Following Chaudhry et al. (2019a), we tune the hyperparameters with only the training and validation set of first three tasks. We mostly follow the training setup of Aljundi et al. (2019a). For three MNIST datasets, we use a MLP classifier with 2 hidden layers with 400 hidden units each. For Split CIFAR-10, Split CIFAR-100 and Split mini-ImageNet datasets, we use a ResNet-18 classifier. See Appendix for the details of the hyperparameter tuning and model achitectures.\nMethods / Datasets SplitMNIST Permuted MNIST\nRotated MNIST\nSplit CIFAR-10 Split CIFAR-100 Split mini-ImageNet\nFine tuning 18.80 ± 0.6 66.34 ± 2.6 41.24 ± 1.5 18.49 ± 0.2 3.06 ± 0.2 2.84 ± 0.4 iid online 85.99 ± 0.3 73.58 ± 1.5 81.30 ± 1.3 62.23 ± 1.5 18.13 ± 0.8 17.53 ± 1.6 AGEM (Chaudhry et al., 2019a) 29.02 ± 5.3 72.17 ± 1.5 50.77 ± 1.9 18.49 ± 0.6 2.40 ± 0.2 2.92 ± 0.3 GEM (Lopez-Paz & Ranzato, 2017) 87.18 ± 1.3 78.23 ± 1.2 76.49 ± 0.8 20.05 ± 1.4 8.75 ± 0.4 11.27 ± 3.4 GSS-Greedy (Aljundi et al., 2019b) 84.16 ± 2.6 77.43 ± 1.4 73.66 ± 1.1 28.02 ± 1.3 19.53 ± 1.3 16.19 ± 0.7 BGD (Zeno et al., 2018) 13.54 ± 5.1 19.38 ± 3.0 77.94 ± 0.9 18.23 ± 0.5 3.11 ± 0.2 24.71 ± 0.8 HAL (Chaudhry et al., 2020) 77.92 ± 4.2 77.55 ± 4.2 78.48 ± 1.5 32.06 ± 1.5 21.11 ± 1.4 21.18 ± 2.1 ER (Robins, 1995) 80.96 ± 2.3 79.69 ± 1.0 76.95 ± 1.7 33.34 ± 1.5 20.65 ± 1.3 26.00 ± 1.0 MIR (Aljundi et al., 2019a) 84.88 ± 1.7 79.96 ± 1.3 78.30 ± 1.0 34.47 ± 2.0 20.18 ± 1.7 25.01 ± 1.3 ER + GMED 82.68∗∗ ± 2.1 79.70 ± 1.1 77.89∗ ± 0.9 35.01∗ ± 1.5 20.87 ± 1.4 27.79∗∗ ± 0.7 MIR + GMED 87.86∗∗ ± 1.1 80.11∗ ± 1.2 79.16∗∗ ± 0.9 35.54± 1.9 21.49∗ ± 0.6 26.29∗ ± 1.2 iid offline (upper bound) 93.87 ± 0.5 87.40 ± 1.1 91.38 ± 0.7 75.17 ± 0.7 41.45 ± 0.9 36.54 ± 1.4\nTable 1: Mean and standard deviation of final accuracy(%) in 10 runs. For Split mini-ImageNet and Split CIFAR-100 datasets, we set the memory size to 10,000 examples; we use 500 for other datasets. ∗ and ∗∗ indicate significant improvement over the counterparts without GMED with p-values less than 0.05 and 10−3 respectively in single-tailed paired t-tests.\n100 200 500 1000 Memory size\n70\n75\n80\n85\nAc cu\nra cy\n(% )\nER GMED-ER\n(a) Split MNIST\n100 200 500 1000 Memory size\n65\n70\n75\n80\nAc cu\nra cy\n(% )\nER GMED-ER\n(b) Rotated MNIST\n100 200 500 1000 Memory size\n20\n30\n40\nAc cu\nra cy\n(% )\nER GMED-ER\n(c) Split CIFAR-10\n1000 5000 10000 20000 Memory size\n10\n20\n30\nAc cu\nra cy\n(% )\nER GMED-ER\n(d) Split mini-ImgNet\nFigure 3: Performance of ER and GMED-ER in different memory sizes. For mini-ImageNet dataset, we use memory sizes of 1,000, 5,000, 10,000, and 20,000 examples; for other datasets, we use 100, 200, 500, and 1,000." }, { "heading": "5.3 RESULTS AND PERFORMANCE ANALYSIS", "text": "We report the final accuracy achieved by different methods in Table 1. We summarize the following findings.\nOverall Performance. From the results, we see MIR + GMED achieves best performance among memory based continual learning algorithms. The improvement of GMED differs by datasets. 
ER+GMED clearly improves over ER by an absolute margin of 1.72%, 1.67%, and 1.79% accuracy respectively in Split MNIST, Split CIFAR-10, and Split mini-ImageNet datasets. On Rotated MNIST and Split CIFAR-100 the improvement is less significant, with an absolute accuracy improvement of 0.94% and 0.22%, while we do not see a meaningful improvement on Permuted MNIST dataset. MIR+GMED improves performance on all the datasets. The improvement is clear on Split MNIST, Split CIFAR-10, Split CIFAR-100, and Split mini-ImageNet with an absolute accuracy improvement of 2.98%, 1.07%, 1.31%.\nPerformance Under Various Memory Sizes. Figure 3 shows the performance under various memory sizes. We see in Split MNIST, Rotated MNIST, Split CIFAR-10 and Split mini-ImageNet, the improvement of ER+GMED over ER is consistent under various memory sizes.\nComparison with Random Editing. As an ablation study, we show the result of a random editing baseline in Table 2. The baseline edits memory examples to a random direction with a fix stride, tuned in a similar way as GMED. We see GMED outperforms random editing in all cases. We further notice ER+Random Edit outperforms ER on split CIFAR-10 and CIFAR-100. We conjecture the reason is that the random editing alleviates the overfitting to memory examples.\nComparison with Model Expansion Approach (CN-DPM). Non-memory based continual learning approaches introduce extra overhead in storing model parameters — for example, CN-DPM introduces extra overhead by employing a generative model component and a short-term memory (STM) in addition to the classifier. Following Hsu et al. (2018), we set the memory size for GMED so that two methods introduces the same amount of the overhead. Table 3 show the results of ER, ER+GMED and the reported results of CN-DPM. We see ER+GMED outperforms CN-DPM." }, { "heading": "5.4 CASE STUDY AND DISCUSSION", "text": "Visualization of Edited Examples. Figure 5 visualize the editing on memory examples. We show examples from first two task (0/1, 2/3) in the Split MNIST dataset. The first and second rows show the original and edited examples, noted as xbefore and xafter. The third row shows the difference between two ∆x = xafter − xbefore. We see no significant visual differences between original and edited examples. However, by looking at the difference ∆x, we see there are examples whose contours get exaggerated, e.g., examples 1 and 12, and some get blurred, e.g., examples 2, 3, 5, and 6. Intuitively, to make an ambiguous example more forgettable, the editing should exaggerate its features; while to make a typical example more forgettable, the editing should blur its features. Our visualizations align with the intuition above: examples 1 and 12 are not typically written digits, while examples like 2, 3, 5, and 6 are typical.\nVisualization of Editing Directions. In Figure 4, we show the t-SNE Maaten & Hinton (2008) visualization of the editing vector ∆x = xafter−xbefore for examples from first 2 tasks in Split MNIST. We see the editing vectors cluster by the labels of the examples. It implies the editing performed is correlated with the labels and is clearly not random." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose Gradient based Memory Editing for task-free continual learning. The approach estimates forgetting of stored examples online and edit them so that they are more likely to be forgotten in upcoming updates. 
Experiments on benchmark datasets show our method can be combined with existing approaches to significantly improve over baselines on several benchmark\ndatasets. Our analysis further show the method is robust under various memory sizes, and outperforms alternative editing methods." }, { "heading": "A DETAILS OF COMPARED METHODS", "text": "We included detailed descriptions, and implementation details of some selected baselines in this section.\n• Experience Replay (ER) Robins (1995); Rolnick et al. (2019) stores examples in a fix-sized memory for future replay. We use reservoir sampling to decide which examples to store and replace. Following prior works Aljundi et al. (2018; 2019a); Chaudhry et al. (2020), at each time step we draw the same number of examples as the batch size from the memory to replay, which are both set to 10. The algorithm applies to the task-free scenario.\n• Gradient Episodic Memory (GEM) Lopez-Paz & Ranzato (2017) also stores examples in a memory. Before each model parameter update, GEM project gradients of model parameters so that the update does not incur loss increase on any previous task. The approach is not task-free.\n• Averaged Gradient Episodic Memory (AGEM) Chaudhry et al. (2019a) prevents the average loss increase on a randomly drawn subsets of examples from the memory. We draw 256 examples to compute the regularization at each iteration. The approach is task-free.\n• Bayesian Gradient Descent (BGD) Zeno et al. (2018) is a regularization-based continual learning algorithm. It adjust learning rate for parameters by estimating their certainty, which notes for their importance to previous data. The approach is task-free.\n• Gradient based Sample Selection (GSS) Aljundi et al. (2019b) builds upon ER by encouraging the diversity of stored examples. We use GSS-Greedy, which is the best performing variant in the paper. The approach is task-free.\n• Hindsight Anchor Learning (HAL) Chaudhry et al. (2020) learns an pseudo “anchor” example per task per class in addition to the replay memory by maximizing its estimated forgetting, and tries to fix model outputs on the anchors at training. However, unlike GMED, they estimate forgetting with loss increase on examples when the model train for a pass on the replay memory (and thus forgetting is estimated with “hindsight”). The approach is not task-free.\n• Maximally Interfering Retrieval (MIR) Aljundi et al. (2019a) improves ER by selecting top forgettable examples from the memory for replay. Following the official implementation, we evaluate forgetting on a candidate set of 25 examples for mini-ImageNet dataset, and 50 examples for others. While the approach is task-free, the official implementation filter out memory examples that belong to the same task as the current data stream, which assumes knowledge about tasks boundaries. We remove this operation to adapt the method to the task-free setup. Therefore, our results are not directly comparable to the official results.\n• Neural Dirichlet Process Model for Continual Learning (CN-DPM) Lee et al. (2020) is a task-free model-expansion based continual learning algorithm. We report the official results in the paper. In the comparison study between ER/ER+GMED with CN-DPM, for the base model in ER/ER+GMED, we use the full expanded model in CN-DPM (i.e., the model architecture when the training ends in CN-DPM). We use the same optimizer and the learning rate as CN-DPM in this set of experiments.\nB HYPERPARAMETER SETUP\nWe use a batch size of 10 throughout the experiment. 
We use the SGD optimizer with a learning rate of 0.05 for the MNIST datasets, 0.1 for Split CIFAR-10 and Split mini-ImageNet, and 0.03 for Split CIFAR-100. Following Aljundi et al. (2019a), we perform three model parameter updates for each example we visit in the Split mini-ImageNet dataset, and one update for the other datasets.
GMED introduces two hyperparameters: the stride of the editing α and the regularization strength β. As we assume no access to the full data stream in the online learning setup, we cannot select hyperparameters according to validation performance after training on the full stream. Therefore, following Chaudhry et al. (2019a), we tune the hyperparameters using only the training and validation sets of the first three tasks. The models are trained until convergence before they proceed to the next task. Following Ebrahimi et al. (2020), the tasks used for hyperparameter search are included when reporting final accuracy. We perform a grid search over all combinations of α and β and select the one with the best validation performance on the first three tasks. We select α from $[0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0]$ and β from $[0, 10^{-3}, 10^{-2}, 10^{-1}, 1]$. We tune hyperparameters for ER+GMED and apply the same hyperparameters to MIR+GMED. Table 4 shows the optimal hyperparameters selected for each dataset.
C IMPLEMENTATION DETAILS OF THE HYBRID METHOD OF GMED AND MIR
As mentioned in Sec. 4.3, GMED can be built on top of MIR to further improve performance. At each time step, MIR retrieves the most forgettable examples from the memory, with forgetting defined as in Eq. 1. We do not edit the selected examples directly; instead, we draw an additional random mini-batch from the memory to apply editing. The motivation is that the examples drawn by MIR are already the most forgettable ones; if we edited them directly, we would fall into a loop of making already-forgettable examples ever more forgettable, which is not desired." } ]
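To make the editing step concrete, the following is a minimal PyTorch sketch of one GMED update. It is an illustration only: it assumes forgetting is estimated as the loss increase on a memory example after a one-step virtual (lookahead) update on the incoming stream batch, mirroring the description in Appendix C, and the function name `gmed_edit` and its argument layout are hypothetical rather than taken from the official implementation.

```python
import copy
import torch

def gmed_edit(model, loss_fn, mem_x, mem_y, stream_x, stream_y,
              alpha=0.1, beta=0.01, lr=0.05):
    """One GMED editing step (illustrative sketch, not the exact Eq. 1).

    Forgetting of a memory example is approximated as the loss increase it
    suffers after a one-step lookahead SGD update on the incoming stream
    batch; the example is then edited by gradient ascent on this estimate,
    with stride alpha and a regularizer (strength beta) that discourages
    simply inflating the current loss.
    """
    mem_x = mem_x.clone().requires_grad_(True)

    # Loss on the memory examples under the current parameters.
    loss_before = loss_fn(model(mem_x), mem_y)

    # Virtual one-step SGD update of a copied model on the stream batch.
    lookahead = copy.deepcopy(model)
    stream_loss = loss_fn(lookahead(stream_x), stream_y)
    grads = torch.autograd.grad(stream_loss, list(lookahead.parameters()))
    with torch.no_grad():
        for p, g in zip(lookahead.parameters(), grads):
            p -= lr * g

    # Loss on the memory examples after the virtual update.
    loss_after = loss_fn(lookahead(mem_x), mem_y)

    # Estimated forgetting, regularized; ascend its gradient w.r.t. the inputs.
    objective = (loss_after - loss_before) - beta * loss_before
    grad_x, = torch.autograd.grad(objective, mem_x)
    return (mem_x + alpha * grad_x).detach()
```

In practice the stride α and regularization strength β would be set by the grid search described in Appendix B above.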
2020
null
SP:a692e1e43991839e08a02e9122757224e1582cfd
[ "Given one image, the paper first generates different views which are controlled by differentiable parameter \\alpha, and then minimizes the additional \"conditional variance\" term~(expectation of these views' squared differences). Therefore, the paper encourages representations of the same image remain similar under the augmentation. A testing strategy is further proposed by voting features with different augmentations. Results demonstrate the effectiveness." ]
We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a training objective for contrastive learning that uses a novel regularizer to control how the representation changes under transformation. We show that representations trained with this objective perform better on downstream tasks and are more robust to the introduction of nuisance transformations at test time. Second, we propose a change to how test time representations are generated by introducing a feature averaging approach that combines encodings from multiple transformations of the original input, finding that this leads to across the board performance gains. Finally, we introduce the novel Spirograph dataset to explore our ideas in the context of a differentiable generative process with multiple downstream tasks, showing that our techniques for learning invariance are highly beneficial.
[ { "affiliations": [], "name": "Adam Foster" }, { "affiliations": [], "name": "Rattana Pukdee" }, { "affiliations": [], "name": "Tom Rainforth" } ]
[ { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ben Barrett", "Alexander Camuto", "Matthew Willetts", "Tom Rainforth" ], "title": "Certifiably robust variational autoencoders", "venue": "arXiv preprint arXiv:2102.07559,", "year": 2021 }, { "authors": [ "Olivier Catoni" ], "title": "Pac-bayesian supervised classification: the thermodynamics of statistical learning", "venue": "arXiv preprint arXiv:0712.0248,", "year": 2007 }, { "authors": [ "Ken Chatfield", "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Return of the devil in the details: Delving deep into convolutional nets", "venue": "arXiv preprint arXiv:1405.3531,", "year": 2014 }, { "authors": [ "Shuxiao Chen", "Edgar Dobriban", "Jane H Lee" ], "title": "Invariance reduces variance: Understanding data augmentation in deep learning and beyond", "venue": null, "year": 1907 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big selfsupervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Moustapha Cisse", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "arXiv preprint arXiv:1704.08847,", "year": 2017 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Harris Drucker", "Yann Le Cun" ], "title": "Improving generalization performance using double backpropagation", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Wenchao Du", "Hu Chen", "Hongyu Yang" ], "title": "Learning invariant representation for unsupervised image restoration", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Benjamin Graham" ], "title": "Fractional max-pooling", "venue": "arXiv preprint arXiv:1412.6071,", "year": 2014 }, { "authors": [ "Karol Gregor", "Frederic Besse", "Danilo Jimenez Rezende", "Ivo Danihelka", "Daan Wierstra" ], "title": "Towards conceptual compression", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein GANs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": 
"Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "Olivier J Hénaff", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Ayush Jaiswal", "Daniel Moyer", "Greg Ver Steeg", "Wael AbdAlmageed", "Premkumar Natarajan" ], "title": "Invariant representations through adversarial forgetting", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "William B Johnson", "Joram Lindenstrauss" ], "title": "Extensions of lipschitz mappings into a hilbert space", "venue": "Contemporary mathematics,", "year": 1984 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Clare Lyle", "Mark van der Wilk", "Marta Kwiatkowska", "Yarin Gal", "Benjamin Bloem-Reddy" ], "title": "On the benefits of invariance in neural networks", "venue": "arXiv preprint arXiv:2005.00178,", "year": 2020 }, { "authors": [ "Sherjil Ozair", "Corey Lynch", "Yoshua Bengio", "Aaron Van den Oord", "Sergey Levine", "Pierre Sermanet" ], "title": "Wasserstein dependency measure for representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron van den Oord", "Alexander A Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 1905 }, { "authors": [ "Tom Rainforth", "Rob Cornish", "Hongseok Yang", "Andrew Warrington", "Frank Wood" ], "title": "On nesting Monte Carlo estimators", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Salah Rifai", "Yann N Dauphin", "Pascal Vincent", "Yoshua Bengio", "Xavier Muller" ], "title": "The manifold tangent classifier", "venue": "Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Patrice Y Simard", "Yann A LeCun", "John S Denker", "Bernard Victorri" ], "title": "Transformation invariance in pattern recognitiontangent distance and tangent propagation", "venue": "In Neural networks: tricks of the trade,", "year": 1998 }, { "authors": [ "Jure Sokolić", "Raja Giryes", "Guillermo Sapiro", 
"Miguel RD Rodrigues" ], "title": "Robust large margin deep neural networks", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Nathaniel Thomas", "Tess Smidt", "Steven Kearnes", "Lusann Yang", "Li Li", "Kai Kohlhoff", "Patrick Riley" ], "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds", "venue": "arXiv preprint arXiv:1802.08219,", "year": 2018 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Laurens van der Maaten", "Eric Postma", "Jaap van den Herik" ], "title": "Dimensionality reduction: a comparative", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Tongzhou Wang", "Phillip Isola" ], "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "venue": "arXiv preprint arXiv:2005.10242,", "year": 2020 }, { "authors": [ "Jim Winkens", "Rudy Bunel", "Abhijit Guha Roy", "Robert Stanforth", "Vivek Natarajan", "Joseph R Ledsam", "Patricia MacWilliams", "Pushmeet Kohli", "Alan Karthikesalingam", "Simon Kohl" ], "title": "Contrastive training for improved out-of-distribution detection", "venue": "arXiv preprint arXiv:2007.05566,", "year": 2020 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Tian Xu", "Jiayu Zhan", "Oliver GB Garrod", "Philip HS Torr", "Song-Chun Zhu", "Robin AA Ince", "Philippe G Schyns" ], "title": "Deeper interpretability of deep networks", "venue": "arXiv preprint arXiv:1811.07807,", "year": 2018 }, { "authors": [ "Donggeun Yoo", "Sunggyun Park", "Joon-Young Lee", "In So Kweon" ], "title": "Multi-scale pyramid pooling for deep convolutional representation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2015 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Lyle" ], "title": "PAC-Bayesian approaches to analyzing the role of group invariance in generaliza", "venue": null, "year": 2020 }, { "authors": [ "Bachman" ], "title": "Recent work on contrastive learning, initiated by the development of Contrastive Predictive Coding (van den Oord et al., 2018; Hénaff et al., 2019), has progressively moved the transformations to a more central position in understanding and improving these approaches", "venue": null, "year": 2019 }, { 
"authors": [ "Tian" ], "title": "2019) obtained multiple views of images using Lab colour decomposition. In SimCLR (Chen et al., 2020a;b), the approach of applying multiple data augmentations (including flip and blur, as well as crops, colour jitter and random grayscale) and using an InfoNCE objective was simplified and streamlined, and the central role of the aug", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020c) were able to improve their contrastive learning approach and achieve excellent performance on downstream detection and segmentation tasks. Tian et al. (2020) studied what the best range of transformations for contrastive learning is. The authors found that there is a ‘sweet spot", "venue": null, "year": 2020 }, { "authors": [ "favourable. Winkens" ], "title": "2020) showed that contrastive methods can be successfully applied to out-of-distribution detection. We note that for tasks such as out-of-distribution detection, transformation covariance may be a more relevant property than invariance. D.2 GRADIENT REGULARIZATION TO ENFORCE LIPSCHITZ CONSTRAINTS Constraining a neural network to be Lipschitz continuous bounds how quickly its output can change", "venue": null, "year": 2020 }, { "authors": [ "Ozair" ], "title": "supervized learning, a small Lipschitz constant has been shown to lead to better generalization (Sokolić et al., 2017) and improved adversarial robustness (Cisse et al., 2017", "venue": "Tsuzuku et al.,", "year": 2018 }, { "authors": [ "Chatfield" ], "title": "down-scale representations in convolutional neural networks (CNNs) as part of a single forward pass through the network with a single input", "venue": "For example,", "year": 2012 }, { "authors": [ "Chen" ], "title": "where the matrices operate on the three colour channels of x and in parallel over all spatial dimensions. Each operation is followed by pointwise clipping of pixel values to the range", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020a) the transformation was performed in the preprocessing step, and we add the gradient penalty term in addition to the original loss in Chen et al. (2020a)", "venue": null, "year": 2020 }, { "authors": [ "xK). F" ], "title": "CONTRASTIVE LEARNING Similar to Chen et al. (2020a), we use the transformed", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "MoCo parameters K = 2048 and m = 0.99 for ResNet18 and K = 4096,m = 0.99 for ResNet50. We did not conduct extensive hyperparameter sweeps, but we did investigate larger values of K which did not lead to improved performance on CIFAR-100. (In particular, the original settings K = 65536,m", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning meaningful representations of data is a central endeavour in artificial intelligence. Such representations should retain important information about the original input whilst using fewer bits to store it (van der Maaten et al., 2009; Gregor et al., 2016). Semantically meaningful representations may discard a great deal of information about the input, whilst capturing what is relevant. Knowing what to discard, as well as what to keep, is key to obtaining powerful representations.\nBy defining transformations that are believed a priori to distort the original without altering semantic features of interest, we can learn representations that are (approximately) invariant to these transformations (Hadsell et al., 2006). Such representations may be more efficient and more generalizable than lossless encodings. Whilst less effective for reconstruction, these representations are useful in many downstream tasks that relate only to the semantic features of the input. Representation invariance is also a critically important task in of itself: it can lead to improved robustness and remove noise (Du et al., 2020), afford fairness in downstream predictions (Jaiswal et al., 2020), and enhance interpretability (Xu et al., 2018).\nContrastive learning is a recent and highly successful self-supervized approach to representation learning that has achieved state-of-the-art performance in tasks that rely on semantic features, rather than exact reconstruction (van den Oord et al., 2018; Hjelm et al., 2018; Bachman et al., 2019; He et al., 2019). These methods learn to match two different transformations of the same object in representation space, distinguishing them from contrasts that are representations of other objects.\nThe objective functions used for contrastive learning encourage representations to remain similar under transformation, whilst simultaneously requiring different inputs to be well spread out in representation space (Wang & Isola, 2020). As such, the choice of transformations is key to their success (Chen et al., 2020a). Typical choices include random cropping and colour distortion.\nHowever, representations are compared using a similarity function that can be maximized even for representations that are far apart, meaning that the invariance learned is relatively weak. Unfor∗Equal contribution\ntunately, directly changing the similarity measure hampers the algorithm (Wu et al., 2018; Chen et al., 2020a). We therefore investigate methods to improve contrastive representations by explicitly encouraging stronger invariance to the set of transformations, without changing the core selfsupervized objective; we look to extract more information about how representations are changing with respect to transformation, and use this to direct the encoder towards greater invariance.\nTo this end, we first develop a gradient regularization term that, when included in the training loss, forces the encoder to learn a representation function that varies slowly with continuous transformations. This can be seen as constraining the encoder to be approximately transformation invariant. We demonstrate empirically that while the parameters of the transformation can be recovered from standard contrastive learning representations using just linear regression, this is no longer the case when our regularization is used. 
Moreover, our representations perform better on downstream tasks and are robust to the introduction of nuisance transformations at test time.\nTest representations are conventionally produced using untransformed inputs (Hjelm et al., 2018; Kolesnikov et al., 2019), but this fails to combine information from different transformations and views of the object, or to emulate settings in which transformation noise cannot simply be removed at test time. Our second key proposal is to instead create test time representations by feature averaging over multiple, differently transformed, inputs to address these concerns and to more directly impose invariance. We show theoretically that this leads to improved performance under linear evaluation protocols, further confirming this result empirically.\nWe evaluate our approaches first on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), using transformations appropriate to natural images and evaluating on a downstream classification task. To validate that our ideas transfer to other settings, and to use our gradient regularizer within a fully differentiable generative process, we further introduce a new synthetic dataset called Spirograph. This provides a greater variety of downstream regression tasks, and allows us to explore the interplay between nuisance transformations and generative factors of interest. We confirm that using our regularizer during training and our feature averaging at test time both improve performance in terms of transformation invariance, downstream tasks, and robustness to train–test distributional shift.\nIn summary, the contributions of this paper are as follows:\n• We derive a novel contrastive learning objective that leads to more invariant representations. • We propose test time feature averaging to enforce further invariance. • We introduce the Spirograph dataset. • We show empirically that our approaches lead to more invariant representations and achieve\nstate-of-the-art performance for existing downstream task benchmarks." }, { "heading": "2 PROBABILISTIC FORMULATION OF CONTRASTIVE LEARNING", "text": "The goal of unsupervized representation learning is to encode high-dimensional data, such as images, retaining information that may be pertinent to downstream tasks and discarding information that is not. To formalize this, we consider a data distribution p(x) on X and an encoder fθ : X → Z which is a parametrized function mapping from data space to representation space.\nContrastive learning is a self-supervized approach to representation learning that learns to make representations of differently transformed versions of the same input more similar than representations of other inputs. Of central importance is the set of transformations, also called augmentations (Chen et al., 2020a) or views (Tian et al., 2019), used to distort the data input x. In the common application of computer vision, it is typical to include resized cropping; brightness, contrast, saturation and hue distortion; greyscale conversion; and horizontal flipping. We will later introduce the Spirograph dataset which uses quite different transformations. In general, transformations are assumed to change the input only cosmetically, so all semantic features such as the class label are preserved; the set of transformations indicates changes which can be safely ignored by the encoder.\nFormally, we consider a transformation set T ⊆ {t : X → X} and a probability distribution p(t) on this set. 
A representation $z$ of $x$ is obtained by applying a random transformation $t$ to $x$ and then encoding the result using $f_\theta$. Therefore, we do not have one representation of $x$, but an implicit distribution $p(z|x)$. A sample of $p(z|x)$ is obtained by sampling $t \sim p(t)$ and setting $z = f_\theta(t(x))$.
If the encoder is to discard irrelevant information, we would expect different encodings of $x$ formed with different transformations $t$ to be close in representation space. Altering the transformation should not lead to big changes in the representations of the same input. In other words, the distribution $p(z|x)$ should place most probability mass in a small region. However, this does not provide a sufficient training signal for the encoder $f_\theta$, as it fails to penalize trivial solutions in which all $x$ are mapped to the same $z$. To preserve meaningful information about the input $x$ whilst discarding purely cosmetic features, we should require $p(z|x)$ to be focused around a single $z$ whilst simultaneously requiring the representations of different inputs not to be close. That is, the marginal $p(z) = \mathbb{E}_{p(x)}[p(z|x)]$ should distribute probability mass over representation space. This intuition is directly reflected in contrastive learning. Most state-of-the-art contrastive learning methods utilize the InfoNCE objective (van den Oord et al., 2018), or close variants of it (Chen et al., 2020a). InfoNCE uses a batch $x_1, \ldots, x_K$ of inputs, from which we form pairs of representations $(z_1, z'_1), \ldots, (z_K, z'_K)$ by applying two random transformations to each input, followed by the encoder $f_\theta$. In probabilistic language,
$$x_i \sim p(x) \quad \text{for } i = 1, \ldots, K \quad (1)$$
$$z_i, z'_i \sim p(z|x = x_i) \ \text{conditionally independently given } x_i, \ \text{for } i = 1, \ldots, K, \quad (2)$$
such that $z_i, z'_i = f_\theta(t(x_i)), f_\theta(t'(x_i))$ for i.i.d. transformations $t, t' \sim p(t)$. Given a learnable similarity score $s_\phi : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$, contrastive learning methods minimize the following loss:
$$\mathcal{L}(\theta, \phi) = -\frac{1}{K} \sum_{i=1}^{K} s_\phi(z_i, z'_i) + \frac{1}{K} \sum_{i=1}^{K} \log \sum_{j=1}^{K} \exp\left[ s_\phi(z_i, z'_j) \right]. \quad (3)$$
Written in this way, we see that the loss will be minimized when $s_\phi(z_i, z'_i)$ is large, but $s_\phi(z_i, z'_j)$ is small for $i \neq j$. In other words, InfoNCE makes the two samples $z_i, z'_i$ of $p(z|x = x_i)$ similar, whilst making samples $z_i, z'_j$ of $p(z)$ dissimilar. This can also be understood through the lens of mutual information; for more details see Appendix A.
In practice, the similarity measure used generally takes the form (Chen et al., 2020a)
$$s_\phi(z, z') = \frac{g_\phi(z)^\top g_\phi(z')}{\tau \, \|g_\phi(z)\|_2 \, \|g_\phi(z')\|_2} \quad (4)$$
where $g_\phi$ is a small neural network and $\tau$ is a temperature hyperparameter. If the encoder $f_\theta$ is perfectly invariant to the transformations, then $z_i = z'_i$ and $s_\phi(z_i, z'_i)$ will be maximal. However, there are many ways to maximize the InfoNCE objective without encouraging strong invariance in the encoder.¹ In this paper, we show how we can learn stronger invariances, above and beyond what is learned through the above approach, and that this benefits downstream task performance.
¹This is because the function $g_\phi$ is not an injection, so we may have $g_\phi(z) = g_\phi(z')$ but $z \neq z'$. Johnson & Lindenstrauss (1984) give conditions under which a projection of this form will preserve approximate distances; in particular, the required projection dimension is much larger than the typical value 128." }, { "heading": "3 INVARIANCE BY GRADIENT REGULARIZATION", "text": "Contrastive learning with InfoNCE can gently encourage invariance by maximizing $s_\phi(z, z')$, but does not provide a strong signal to ensure this invariance. Our first core contribution is to show how we can use gradient methods to directly regulate how the representation changes with the transformation and thus ensure the desired invariance. 
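Before deriving the regularizer, it is helpful to pin down the base objective in code. The following is a minimal PyTorch sketch of the InfoNCE loss of equation (3) with the similarity score of equation (4); it is an illustrative reference implementation, not the authors' exact code, and assumes `z` and `z_prime` are the two batches of encodings described above.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z, z_prime, g_phi, tau=0.5):
    """InfoNCE loss of equation (3) with the similarity score of equation (4).

    z, z_prime: (K, d) encodings of two random transformations of the same
    batch of inputs; g_phi: the small projection network; tau: temperature.
    """
    h = F.normalize(g_phi(z), dim=1)          # g_phi(z) / ||g_phi(z)||_2
    h_prime = F.normalize(g_phi(z_prime), dim=1)
    sim = h @ h_prime.t() / tau               # sim[i, j] = s_phi(z_i, z'_j)
    # -(1/K) sum_i s(z_i, z'_i) + (1/K) sum_i log sum_j exp(s(z_i, z'_j))
    return -sim.diagonal().mean() + torch.logsumexp(sim, dim=1).mean()
```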
The key underlying idea is to differentiate the representation with respect to the transformation, and then encourage this gradient to be small so that the representation changes slowly as the transformation is varied.
To formalize this, we begin by looking more closely at the transformations $\mathcal{T}$ which are used to define the distribution $p(z|x)$. Many transformations, such as brightness adjustment, are controlled by a transformation parameter. We can include these parameters in our set-up by writing the transformation $t$ as a map from both input space $\mathcal{X}$ and transformation parameter space $\mathcal{U}$, i.e. $t : \mathcal{X} \times \mathcal{U} \to \mathcal{X}$. In this formulation, we sample a random transformation parameter $u \sim p(u)$, a distribution on $\mathcal{U}$. A sample from $p(z|x)$ is then obtained by taking $z = f_\theta(t(x, u))$, with $t$ now regarded as a fixed function.
The advantage of this change of perspective is that it opens up additional ways to learn stronger invariance of the encoder. In particular, it may make sense to consider the gradient $\nabla_u z$, which describes the rate of change of $z$ with respect to the transformation. This only makes sense for some transformation parameters: we can differentiate with respect to the brightness scaling but not with respect to a horizontal flip.
To separate out differentiable and non-differentiable parameters we write $u = (\alpha, \beta)$, where $\alpha$ are the parameters for which it makes sense to consider the derivative $\nabla_\alpha z$. Intuitively, this gradient should be small to ensure that representations change only slowly as the transformation parameter $\alpha$ is varied. For clarity of exposition, and for implementation practicalities, it is important to consider gradients of a scalar function, so we introduce an arbitrary direction vector $e \in \mathcal{Z}$ and define
$$F(\alpha, \beta, x, e) = \frac{e \cdot f_\theta(t(x, \alpha, \beta))}{\|f_\theta(t(x, \alpha, \beta))\|_2} \quad (5)$$
so that $F : \mathcal{A} \times \mathcal{B} \times \mathcal{X} \times \mathcal{Z} \to \mathbb{R}$ calculates the scalar projection of the normalized representation $z / \|z\|_2$ in the $e$ direction. To encourage an encoder that is invariant to changes in $\alpha$, we would like to minimize the expected conditional variance of $F$ with respect to $\alpha$:
$$V = \mathbb{E}_{p(x)p(\beta)p(e)}\left[ \mathrm{Var}_{p(\alpha)}[F(\alpha, \beta, x, e) \mid x, \beta, e] \right], \quad (6)$$
where we have exploited independence to write $p(x, \beta, e) = p(x)p(\beta)p(e)$. Defining $V$ requires a distribution for $e$ to be specified. For this, we make the components of $e$ independent Rademacher random variables; justification for this choice is included in Appendix B.
A naive estimator of $V$ can be formed using a direct nested Monte Carlo estimator (Rainforth et al., 2018) of sample variances, which, including Bessel's correction, is given by
$$V \approx \frac{1}{K} \sum_{i=1}^{K} \left[ \frac{1}{L-1} \sum_{j=1}^{L} F(\alpha_{ij}, \beta_i, x_i, e_i)^2 - \frac{1}{L(L-1)} \left( \sum_{k=1}^{L} F(\alpha_{ik}, \beta_i, x_i, e_i) \right)^2 \right] \quad (7)$$
where $x_i, \beta_i, e_i \sim p(x)p(\beta)p(e)$ and $\alpha_{ij} \sim p(\alpha)$. However, this estimator requires $LK$ forward passes through the encoder $f_\theta$ to evaluate. 
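For reference, the naive estimator of equation (7) can be sketched as follows; `f_theta`, `t`, `sample_alpha` and `sample_beta` are assumed callables standing in for the encoder, the transformation, and the parameter distributions, and the L × K encoder passes visible in the loops are precisely the cost that the gradient-based alternative below avoids.

```python
import torch

def conditional_variance_nmc(f_theta, t, xs, sample_alpha, sample_beta, L=8):
    """Naive nested Monte Carlo estimator of V, following equation (7).

    For each input x_i we draw one beta_i and one Rademacher direction e_i,
    then L transformation parameters alpha_ij, and compute the
    Bessel-corrected sample variance of the projections F(alpha_ij, ...).
    """
    total = 0.0
    for x in xs:
        beta = sample_beta()
        zs = torch.stack([
            f_theta(t(x.unsqueeze(0), sample_alpha(), beta)).squeeze(0)
            for _ in range(L)
        ])                                        # (L, d) representations
        zs = zs / zs.norm(dim=1, keepdim=True)    # normalize as in eq. (5)
        # Rademacher direction e with independent +/-1 components.
        e = torch.randint(0, 2, (zs.shape[1],), dtype=zs.dtype) * 2 - 1
        projections = zs @ e                      # F(alpha_ij, beta_i, x_i, e_i)
        total = total + projections.var(unbiased=True)
    return total / len(xs)
```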
As an alternative to this computationally prohibitive approach, we consider a first-order approximation² to $F$,
$$F(\alpha', \beta, x, e) - F(\alpha, \beta, x, e) = \nabla_\alpha F(\alpha, \beta, x, e) \cdot (\alpha' - \alpha) + o(\|\alpha' - \alpha\|), \quad (8)$$
and the following alternative form for the conditional variance (see Appendix B for a derivation):
$$\mathrm{Var}_{p(\alpha)}[F(\alpha, \beta, x, e) \mid x, \beta, e] = \tfrac{1}{2} \mathbb{E}_{p(\alpha)p(\alpha')}\left[ (F(\alpha, \beta, x, e) - F(\alpha', \beta, x, e))^2 \mid x, \beta, e \right]. \quad (9)$$
Combining these two ideas, we have
$$V = \mathbb{E}_{p(x)p(\beta)p(e)}\left[ \tfrac{1}{2} \mathbb{E}_{p(\alpha)p(\alpha')}\left[ (F(\alpha, \beta, x, e) - F(\alpha', \beta, x, e))^2 \mid x, \beta, e \right] \right] \quad (10)$$
$$\approx \mathbb{E}_{p(x)p(\beta)p(e)}\left[ \tfrac{1}{2} \mathbb{E}_{p(\alpha)p(\alpha')}\left[ (\nabla_\alpha F(\alpha, \beta, x, e) \cdot (\alpha' - \alpha))^2 \mid x, \beta, e \right] \right]. \quad (11)$$
Here we have an approximation of the conditional variance $V$ that uses gradient information. Including this as a regularizer within contrastive learning will encourage the encoder to reduce the magnitude of the conditional variance $V$, forcing the representation to change slowly as the transformation is varied and thus inducing approximate invariance to the transformations.
An unbiased estimator of equation 11 using a batch $x_1, \ldots, x_K$ is
$$\hat{V}_{\text{regularizer}} = \frac{1}{K} \sum_{i=1}^{K} \frac{1}{2L} \sum_{j=1}^{L} \left[ \nabla_\alpha F(\alpha_i, \beta_i, x_i, e_i) \cdot (\alpha'_{ij} - \alpha_i) \right]^2 \quad (12)$$
where $x_i, \alpha_i, \beta_i, e_i \sim p(x)p(\alpha)p(\beta)p(e)$ and $\alpha'_{ij} \sim p(\alpha)$. We can cheaply use a large number of samples for $\alpha'$ without having to take any additional forward passes through the encoder: we only require $K$ evaluations of $F$. Our final loss function is
$$\mathcal{L}(\theta, \phi) = -\frac{1}{K} \sum_{i=1}^{K} s_\phi(z_i, z'_i) + \frac{1}{K} \sum_{i=1}^{K} \log \sum_{j=1}^{K} \exp\left[ s_\phi(z_i, z'_j) \right] + \frac{\lambda}{LK} \sum_{i=1}^{K} \sum_{j=1}^{L} \left[ \nabla_\alpha F(\alpha_i, \beta_i, x_i, e_i) \cdot (\alpha'_{ij} - \alpha_i) \right]^2 \quad (13)$$
²We use the notation $a(x) = o(b(x))$ to mean $a(x)/b(x) \to 0$ as $x \to 0$.
where $\lambda$ is a hyperparameter controlling the regularization strength. This loss does not require us to encode a larger number of differently transformed inputs. Instead, it uses the gradient at $(x, \alpha, \beta, e)$ to control properties of the encoder in a neighbourhood of $\alpha$. This can effectively reduce the representation gradient along the directions corresponding to many different transformations. This, in turn, creates an encoder that is approximately invariant to the transformations." }, { "heading": "4 BETTER TEST TIME REPRESENTATIONS WITH FEATURE AVERAGING", "text": "At test time, standard practice (Hjelm et al., 2018; Kolesnikov et al., 2019) dictates that test representations be produced by applying the encoder to untransformed inputs (possibly using a central crop). It may be beneficial, however, to aggregate information from differently transformed versions of inputs to enforce invariance more directly, particularly when our previously introduced gradient regularization can only be applied to a subset of the transformation parameters. Furthermore, in real-world applications, it may not be possible to remove nuisance transformations at test time or, as in our Spirograph dataset, there may not be a unique 'untransformed' version of $x$.
To this end, we propose combining representations from different transformations using feature averaging. This approach, akin to ensembling, does not directly use one encoding from the network $f_\theta$ as a representation for an input $x$. Instead, we sample transformation parameters $\alpha_1, \ldots, \alpha_M \sim p(\alpha)$ and $\beta_1, \ldots, \beta_M \sim p(\beta)$ independently, and average the encodings of these differently transformed versions of $x$ to give a single feature averaged representation
$$z^{(M)}(x) = \frac{1}{M} \sum_{m=1}^{M} f_\theta(t(x, \alpha_m, \beta_m)). \quad (14)$$
Using $z^{(M)}$ aggregates information about $x$ by averaging over a range of possible transformations, thereby directly encouraging invariance. 
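A minimal PyTorch sketch of both proposed mechanisms follows: a per-example version of the regularizer in equation (12), and the feature averaged representation of equation (14). The helper names and the per-example (rather than batched) structure are simplifications for clarity; in practice the penalty would be computed over the whole batch and added to the InfoNCE loss with weight λ as in equation (13).

```python
import torch

def gradient_penalty(f_theta, t, x, alpha, beta, sample_alpha, L=16):
    """Per-example estimator of the regularizer in equation (12).

    A single encoder pass yields grad_alpha F via autograd; the L resampled
    differences alpha' - alpha are then essentially free.
    """
    alpha = alpha.detach().clone().requires_grad_(True)
    z = f_theta(t(x.unsqueeze(0), alpha, beta)).squeeze(0)
    z = z / z.norm()
    e = torch.randint(0, 2, z.shape, dtype=z.dtype) * 2 - 1   # Rademacher e
    F_val = (e * z).sum()                                     # F(alpha, beta, x, e)
    # create_graph=True so the penalty itself can be backpropagated through.
    grad_alpha, = torch.autograd.grad(F_val, alpha, create_graph=True)
    deltas = torch.stack([sample_alpha() - alpha.detach() for _ in range(L)])
    return 0.5 * ((deltas @ grad_alpha) ** 2).mean()

def feature_average(f_theta, t, x, sample_alpha, sample_beta, M=8):
    """Test time feature averaged representation z^(M) of equation (14)."""
    with torch.no_grad():
        return torch.stack([
            f_theta(t(x, sample_alpha(), sample_beta())) for _ in range(M)
        ]).mean(dim=0)
```

Note that `feature_average` is a test time tool only; during training, only the penalty term is added to the loss.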
Indeed, the resulting representation has lower conditional variance than the single-sample alternative, since
$$\mathrm{Var}_{p(\alpha_{1:M})p(\beta_{1:M})}\left[ e \cdot z^{(M)}(x) \,\middle|\, x, e \right] = \frac{1}{M} \mathrm{Var}_{p(\alpha_1)p(\beta_1)}\left[ e \cdot z^{(1)}(x) \,\middle|\, x, e \right]. \quad (15)$$
Further, unlike gradient regularization, this approach takes account of all transformations, including those which we cannot differentiate with respect to (e.g. left–right flip). It therefore forms a natural test time counterpart to our training methodology to promote invariance.
We do not recommend using feature averaged representations during training. During training, we need a training signal to recognize similar and dissimilar representations, and feature averaging will weaken this training signal. Furthermore, the computational cost of additional encoder passes is modest when used once at test time, but more significant when used at every training iteration.
As a test time tool, though, feature averaging is powerful. In Theorem 1 below, we show that for certain downstream tasks the feature averaged representation will always perform better than the single-sample transformed alternative. The proof is presented in Appendix C. Theorem 1. Consider evaluation on a downstream task by fitting a linear classification model with softmax loss or a linear regression model with square error loss, with representations as features. For a fixed classifier or regressor and $M' \geq M$ we have
$$\mathbb{E}_{p(x,y)p(\alpha_{1:M'})p(\beta_{1:M'})}\left[ \ell\left(z^{(M')}, y\right) \right] \leq \mathbb{E}_{p(x,y)p(\alpha_{1:M})p(\beta_{1:M})}\left[ \ell\left(z^{(M)}, y\right) \right]. \quad (16)$$
Empirically we find that, using the same encoder and the same linear classification model, feature averaging can outperform evaluation using untransformed inputs. That is, even when it is possible to remove the transformations at test time, it is beneficial to retain them and use feature averaging." }, { "heading": "5 RELATED WORK", "text": "Contrastive learning (van den Oord et al., 2018; Hénaff et al., 2019) has progressively refined the role of transformations in learning representations, with Bachman et al. (2019) applying repeated data augmentation and Tian et al. (2019) using Lab colour decomposition to define powerful self-supervized tasks. The range of transformations has progressively increased (Chen et al., 2020a;b), whilst changing transformations can markedly improve performance (Chen et al., 2020c). Recent work has attempted to further understand and refine the role of transformations (Tian et al., 2020).
The idea of differentiating with respect to transformation parameters dates back to the tangent propagation algorithm (Simard et al., 1998; Rifai et al., 2011). Using the notation of this paper, tangent propagation penalizes the norm of the gradient of a neural network evaluated at $\alpha = 0$, encouraging local transformation invariance near the original input. In our work, we target the conditional variance (Equation 6), leading to gradient evaluations across the $\alpha$ parameter space with random $\alpha \sim p(\alpha)$ and a regularizer that is not a gradient norm (Equation 12). Our gradient regularization approach also connects to work on gradient regularization for Lipschitz constraints. A small Lipschitz constant has been shown to lead to better generalization (Sokolić et al., 2017) and improved adversarial robustness (Cisse et al., 2017; Tsuzuku et al., 2018; Barrett et al., 2021). Previous work focuses on constraining the mapping $x \mapsto z$ to have a small Lipschitz constant, which is beneficial for adversarial robustness. In our work we focus on the influence of $\alpha$ on $z$, which gives rise to transformation robustness. 
Appendix D provides a more comprehensive discussion of related work." }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 DATASETS AND SET-UP", "text": "The methods proposed in this paper learn representations that discard some information whilst retaining what is relevant. To explore this idea more deeply, we construct a dataset from a generative process controlled by both generative factors of interest and nuisance transformations. Representations should be able to recover the factors of interest whilst being approximately invariant to transformation. To aid direct evaluation of this, we introduce a new dataset, which we refer to as the Spirograph dataset. Its samples are created using four generative factors and six nuisance transformation parameters. Figure 1 shows two sets of four samples with the generative factors fixed in each set. Every Spirograph sample is based on a hypotrochoid, one of a parametric family of curves that describe the path traced out by a point on one circle rolling around inside another. This generative process is fully differentiable in the parameters, meaning that our gradient regularization can be applied to every transformation. We define four downstream tasks for this dataset, each corresponding to the recovery of one of the four generative factors of interest using linear regression. The final dataset consists of 100k training and 20k test images of size 32 × 32. For full details of this dataset, see Appendix E.
As well as the Spirograph dataset, we apply our ideas to CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). We base our contrastive learning set-up on SimCLR (Chen et al., 2020a). To use our gradient regularization, we adapt colour distortion (brightness, contrast, saturation and hue adjustment) as a fully differentiable transformation, giving a four-dimensional α; we also included random cropping and flipping, but we did not apply gradient regularization to these. We used ResNet50 (He et al., 2016) encoders for CIFAR and ResNet18 for Spirograph, and regularization parameters λ = 0.1 for CIFAR and λ = 0.01 for Spirograph. For comprehensive details of our set-up and additional plots, see Appendix F. For an open source implementation of our methods, see https://github.com/ae-foster/invclr." }, { "heading": "6.2 GRADIENT REGULARIZATION LEADS TO STRONGLY INVARIANT REPRESENTATIONS", "text": "We first show that our gradient penalty successfully learns representations that are more invariant to transformation than standard contrastive learning. First, we estimate the conditional variance of the representation that was the starting point for motivating our approach, i.e. Equation 6, using the slower but more exact nested Monte Carlo estimator of Equation 7 to evaluate it. In Figure 2 we see that the gradient penalty strikingly reduces the conditional variance on CIFAR-10 compared to standard contrastive learning.
As an additional measure of representation invariance, we fit a linear regression model that predicts α from z, for which higher loss indicates a greater degree of invariance. We also compute a reference loss: the loss that would be obtained when predicting α using only a constant. In Table 1, we see that, unlike standard contrastive learning, after training with gradient regularization the linear regression model cannot predict α from z any better than using a constant prediction. 
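This invariance probe can be sketched as follows, under assumptions we flag explicitly: a held-out split, an ordinary least-squares regressor, and mean squared error as the loss (the exact protocol details are not restated here).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def invariance_probe(Z, A):
    """Invariance check behind Table 1 (sketch; split and metric assumed).

    Z: (N, d) representations of transformed inputs; A: (N, a) the alpha
    values used to generate them. A probe loss no better than the
    constant-prediction reference indicates the encoder has discarded alpha.
    """
    n = len(Z) // 2
    probe = LinearRegression().fit(Z[:n], A[:n])
    probe_loss = np.mean((probe.predict(Z[n:]) - A[n:]) ** 2)
    # Best constant predictor: the training-set mean of alpha.
    reference_loss = np.mean((A[n:] - A[:n].mean(axis=0)) ** 2)
    return probe_loss, reference_loss
```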
The probe loss in Table 1 is actually higher than the reference value because the former is obtained by training a regressor for a finite number of steps, whilst the latter is a theoretical optimum. Similar results for the other datasets are in Appendix F." }, { "heading": "6.3 GRADIENT REGULARIZATION FOR DOWNSTREAM TASKS AND TEST TIME DATASET SHIFT", "text": "We now show that these more invariant representations perform better on downstream tasks. For CIFAR, we produce representations for each element of the training and test sets (by applying the encoder $f_\theta$ to untransformed inputs). We then fit a linear classifier on the training set, using different fractions of the class labels. This allows us to assess our representations at different levels of supervision. We use the entire test set to evaluate each of these classifiers. In Figures 3(a) and (b), we see that the test accuracy improves across the board with gradient regularization.
For Spirograph, we take a similar approach to evaluation: we create representations for the training and test sets and fit linear regression models, with representations as features, for each of the four downstream tasks. In Figure 3(c), we see the test loss on each task with the baseline scaled to 1. Here we see huge improvements across all tasks, presumably due to the ability to apply gradient regularization to all transformations (unlike for CIFAR).
We further study the effect of transformation at test time, showing that gradient penalized representations can be more robust to shifts in the transformation distribution. For CIFAR-10, we apply colour distortion transformations at test time with different levels of variance. By focusing on colour distortion at test time, we isolate the transformations that the gradient regularization targeted. In Figure 4(a) we see that when the test time distribution is shifted to have higher variance than the training regime, our gradient penalized representations perform better than using contrastive learning alone. For Spirograph, we investigate both changing the mean of the transformation distribution, moving the entire test distribution away from the training regime, and increasing the variance of transformations to add noise. Results are shown in Figures 4(b) and (c). In 4(c) in particular, we see that gradient regularized representations are robust to a greater level of distortion at test time." }, { "heading": "6.4 FEATURE AVERAGING FURTHER IMPROVES PERFORMANCE", "text": "We now assess the impact of feature averaging on test time performance. For CIFAR, we apply feature averaging using all transformations, including random cropping and flipping, and compare with the standard protocol of using untransformed inputs to form the test representations. Figures 5(a) and (b) show that feature averaging leads to significant improvements. This adds to the result of Theorem 1, which implies that test loss decreases as M is increased. In Figure 5(c), we see that feature averaging has an equally beneficial impact on Spirograph. It is interesting to note that in both cases there is still significant residual benefit from gradient regularization, even with a large value of M." }, { "heading": "6.5 OUR METHODS COMPARE FAVOURABLY WITH OTHER PUBLISHED BASELINES", "text": "Our primary aim was to show that both gradient regularization and feature averaging lead to improvements compared to baselines that are in other respects identical. 
Our methods are applicable to almost any base contrastive learning approach, and we would expect them to deliver improvements across this range of different base methods. In Table 2, we present published baselines on CIFAR datasets, along with the results that we obtain using our gradient regularization and feature averaging with SimCLR as the base method. This is the default base method that we recommend, and the one used in our previous experiments. Interestingly, the best ResNet50 encoder from our experiments achieves an accuracy of 94.9% on CIFAR-10, which outperforms the next best published result from the contrastive learning literature by almost 1%, and 75.1% on CIFAR-100, an almost 5% improvement over a significantly larger encoder architecture. As such, our results provide performance that is state-of-the-art for contrastive learning on these benchmarks. In fact, our performance improvements almost entirely close the gap to the state-of-the-art performance for fully supervized training with the same architecture on CIFAR-10 (95.1%, Chen et al. (2020a)).
To demonstrate that our ideas generalize to other contrastive learning base methods, we apply them to MoCo v2 (Chen et al., 2020c). Table 3 shows that, whilst MoCo v2 itself does not perform as well as SimCLR on CIFAR-100, the addition of gradient regularization and feature averaging still leads to significant improvements in its performance. Table 3 further illustrates that both gradient regularization and feature averaging contribute to the performance improvements offered by our approach, and that our techniques generalize across different encoder architectures." }, { "heading": "6.6 HYPERPARAMETER SENSITIVITY", "text": "As a further ablation study, we investigated the sensitivity of our method to changes in the gradient regularization hyperparameter λ (as defined in Equation 13). In Figure 6(a) we see that, as expected, the conditional variance of representations decreases as λ is increased. The downstream task performance (Figure 6(b)) similarly improves as we increase λ, reaching an optimum around $\lambda = 10^{-3}$, before beginning to degrade due to over-regularization. We see that a wide range of values of λ deliver good performance and the method is not overly sensitive to careful tuning of λ." }, { "heading": "7 CONCLUSION", "text": "Viewing contrastive representation learning through the lens of representation invariance to transformation, we derived a gradient regularizer that controls how quickly representations can change with transformation, and proposed feature averaging at test time to pull in information from multiple transformations. These approaches led to representations that performed better on downstream tasks. Therefore, our work provides evidence that invariance is highly relevant to the success of contrastive learning methods, and that there is scope to further improve upon these methods by using invariance as a guiding principle." }, { "heading": "ACKNOWLEDGMENTS", "text": "AF gratefully acknowledges funding from EPSRC grant no. EP/N509711/1. AF would also like to thank Benjamin Bloem-Reddy for helpful discussions about theoretical aspects of this work." }, { "heading": "A MUTUAL INFORMATION", "text": "In Section 2, we saw that the InfoNCE objective (Equation 3) fulfills the need to make $p(z|x)$ tightly focused on a single point whilst simultaneously requiring $p(z)$ to be well spread out over representation space. 
In this appendix, we show that this same general principle connects to mutual information maximization.
To establish the connection to mutual information, we take the differential entropy as our measure of 'spread'. Recall the differential entropy of a random variable $w$ is
$$H[p(w)] := \mathbb{E}_{p(w)}[-\log p(w)]. \quad (17)$$
We then translate our intuition, namely to make $p(z|x)$ tightly focused on a single point whilst requiring $p(z)$ to be well spread out over representation space, into requiring $\mathbb{E}_{p(x)}[H[p(z|x)]]$ to be minimized whilst $H[p(z)]$ is simultaneously maximized. This suggests the following loss function
$$\mathcal{L}_{\text{entropy}} = \mathbb{E}_{p(x)}[H[p(z|x)]] - H[p(z)] = -I(x; z), \quad (18)$$
which is the (negative) mutual information between $x$ and $z$. Note that in this formulation, it is the distribution $p(z|x)$ as much as the InfoMax principle which determines how this loss will behave. Finally, there is a clear connection between the InfoNCE loss and mutual information: specifically, the InfoNCE loss is, in expectation and up to an additive constant, a lower bound on $I(x; z)$ (van den Oord et al., 2018; Poole et al., 2019)." }, { "heading": "B METHOD", "text": "" }, { "heading": "B.1 AN ALTERNATIVE VARIANCE FORMULA", "text": "We present a derivation of our alternative formula for the variance (dropping the conditioning from the notation for conciseness):
$$\begin{aligned} \tfrac{1}{2}\mathbb{E}_{p(\alpha)p(\alpha')}\left[ (F(\alpha,\beta,x,e) - F(\alpha',\beta,x,e))^2 \right] &= \tfrac{1}{2}\mathbb{E}_{p(\alpha)p(\alpha')}\left[ \left( F(\alpha,\beta,x,e) - \mathbb{E}_\alpha[F(\alpha,\beta,x,e)] + \mathbb{E}_\alpha[F(\alpha,\beta,x,e)] - F(\alpha',\beta,x,e) \right)^2 \right] \\ &= \tfrac{1}{2}\mathbb{E}_{p(\alpha)p(\alpha')}\left[ (F(\alpha,\beta,x,e) - \mathbb{E}_\alpha[F(\alpha,\beta,x,e)])^2 + (\mathbb{E}_\alpha[F(\alpha,\beta,x,e)] - F(\alpha',\beta,x,e))^2 \right] \\ &\quad + \mathbb{E}_{p(\alpha)p(\alpha')}\left[ (F(\alpha,\beta,x,e) - \mathbb{E}_\alpha[F(\alpha,\beta,x,e)])(\mathbb{E}_\alpha[F(\alpha,\beta,x,e)] - F(\alpha',\beta,x,e)) \right] \\ &= \tfrac{1}{2}\mathbb{E}_{p(\alpha)p(\alpha')}\left[ (F(\alpha,\beta,x,e) - \mathbb{E}_\alpha[F(\alpha,\beta,x,e)])^2 + (F(\alpha',\beta,x,e) - \mathbb{E}_\alpha[F(\alpha,\beta,x,e)])^2 \right] \\ &= \mathrm{Var}_{p(\alpha)}\left[ F(\alpha,\beta,x,e) \right], \end{aligned}$$
where the cross term vanishes because $F(\alpha,\beta,x,e) - \mathbb{E}_\alpha[F(\alpha,\beta,x,e)]$ has zero mean and is independent of the second factor, since $\alpha$ and $\alpha'$ are independent." }, { "heading": "B.2 MOTIVATING THE RADEMACHER DISTRIBUTION", "text": "We are interested in the conditional variance of $z$ with respect to $\alpha$, but as $z$ is a vector valued random variable we properly need to consider the conditional covariance matrix $\Sigma = \mathrm{Cov}_\alpha(z|x,\beta)$. We henceforth consider $x, \beta$ to be fixed. To reduce conditional variance in all directions, it makes sense to reduce the trace $\mathrm{Tr}\,\Sigma$. Due to computational limitations, we cannot directly estimate this trace at each iteration; instead we must estimate $\mathrm{Var}(e \cdot z) = e^\top \Sigma e$. However, by carefully selecting the distribution for $e$ we can effectively target the trace of the covariance matrix by taking the expectation over $e$. Specifically, suppose that the components of $e$ are independent Rademacher random variables ($\pm 1$ with equal probability). Then
$$\mathbb{E}_{p(e)}\left[ e^\top \Sigma e \right] = \mathbb{E}_{p(e)}\left[ \sum_{ij} e_i \Sigma_{ij} e_j \right] = \sum_{ij} \Sigma_{ij} \mathbb{E}_{p(e)}[e_i e_j] = \sum_{ij} \Sigma_{ij} \delta_{ij} = \mathrm{Tr}\,\Sigma. \quad (19)$$" }, { "heading": "C THEORY", "text": "We present the proof of Theorem 1, which is restated for convenience.
Theorem 1. Consider evaluation on a downstream task by fitting a linear classification model with softmax loss or a linear regression model with square error loss, with representations as features. For a fixed classifier or regressor and $M' \geq M$ we have
$$\mathbb{E}_{p(x,y)p(\alpha_{1:M'})p(\beta_{1:M'})}\left[ \ell\left(z^{(M')}, y\right) \right] \leq \mathbb{E}_{p(x,y)p(\alpha_{1:M})p(\beta_{1:M})}\left[ \ell\left(z^{(M)}, y\right) \right]. \quad (16)$$
Proof. We have the softmax loss
$$\ell(z, y) = -w_y^\top z + \log \sum_j \exp\left( w_j^\top z \right) \quad (20)$$
or the square error loss
$$\ell(z, y) = \left| y - w^\top z \right|^2. \quad (21)$$
We first show that both loss functions considered are convex in the argument $z$. To show this, we fix $0 \leq p = 1 - q \leq 1$. 
For the softmax loss, we have
$$\begin{aligned} \ell(pz_1 + qz_2, y) &= -w_y^\top(pz_1 + qz_2) + \log \sum_j \exp\left( w_j^\top (pz_1 + qz_2) \right) \\ &= -p\, w_y^\top z_1 - q\, w_y^\top z_2 + \log \sum_j \exp\left( w_j^\top z_1 \right)^p \exp\left( w_j^\top z_2 \right)^q \\ &\leq -p\, w_y^\top z_1 - q\, w_y^\top z_2 + \log \left[ \left( \sum_j \exp\left( w_j^\top z_1 \right) \right)^{\!p} \left( \sum_j \exp\left( w_j^\top z_2 \right) \right)^{\!q} \right] \quad \text{by Hölder's Inequality} \\ &= -p\, w_y^\top z_1 - q\, w_y^\top z_2 + p \log \sum_j \exp\left( w_j^\top z_1 \right) + q \log \sum_j \exp\left( w_j^\top z_2 \right) \\ &= p\,\ell(z_1, y) + q\,\ell(z_2, y), \end{aligned}$$
and for the square error loss we have
$$\begin{aligned} \ell(pz_1 + qz_2, y) &= |y - w^\top(pz_1 + qz_2)|^2 \\ &= |p(y - w^\top z_1) + q(y - w^\top z_2)|^2 \\ &= p|y - w^\top z_1|^2 + q|y - w^\top z_2|^2 + (p^2 - p)\left| w^\top z_1 - w^\top z_2 \right|^2 \\ &\leq p|y - w^\top z_1|^2 + q|y - w^\top z_2|^2 \quad \text{since } p^2 - p \leq 0 \\ &= p\,\ell(z_1, y) + q\,\ell(z_2, y). \end{aligned}$$
For the inequality in the Theorem, we consider drawing $M' \geq M$ samples and randomly choosing an $M$-subset. Let $S$ represent this subset and let $z_S^{(M)}$ represent the feature averaged representation that uses the subset $S$. We have
$$\begin{aligned} \mathbb{E}_{p(x,y)p(\alpha_{1:M})p(\beta_{1:M})}\left[ \ell\left(z^{(M)}, y\right) \right] &= \mathbb{E}_{p(x,y)p(\alpha_{1:M'})p(\beta_{1:M'})p(S)}\left[ \ell\left(z_S^{(M)}, y\right) \right] \\ &\geq \mathbb{E}_{p(x,y)p(\alpha_{1:M'})p(\beta_{1:M'})}\left[ \ell\left( \mathbb{E}_{p(S)}\left[ z_S^{(M)} \right], y \right) \right] \quad \text{by Jensen's Inequality} \\ &= \mathbb{E}_{p(x,y)p(\alpha_{1:M'})p(\beta_{1:M'})}\left[ \ell\left(z^{(M')}, y\right) \right], \end{aligned}$$
which completes the proof.
We provide an informal discussion of other theoretical results that relate to our work. Lyle et al. (2020) explored PAC-Bayesian approaches to analyzing the role of group invariance in the generalization of supervized neural network models. The central bound, based on Catoni (2007), is given in Theorem 1 of Lyle et al. (2020) and depends on the empirical risk $\hat{R}_\ell(Q, D_n)$ and the term $KL(Q\|P)$, which represents the PAC-Bayesian KL divergence between distributions on hypothesis space. Theorem 7 of Lyle et al. (2020) shows $KL(Q^\circ\|P^\circ) \leq KL(Q\|P)$, where $Q^\circ$ and $P^\circ$ are formed by symmetrization, such as feature averaging over the group of transformations. In our context, although the transformations do not form a group, we could still consider a symmetrization operation with feature averaging. If the symmetrization does not affect the empirical risk, then Theorem 9 of Lyle et al. (2020) would apply to our setting and we would be able to obtain a tighter generalization bound for our suggested approach of feature averaging." }, { "heading": "D RELATED WORK", "text": "" }, { "heading": "D.1 THE ROLE OF TRANSFORMATIONS IN CONTRASTIVE LEARNING", "text": "Recent work on contrastive learning, initiated by the development of Contrastive Predictive Coding (van den Oord et al., 2018; Hénaff et al., 2019), has progressively moved the transformations to a more central position in understanding and improving these approaches. In Bachman et al. (2019), multiple views of a context are extracted; on images this utilizes repeated data augmentation such as random resized crops, random colour jitter, and random conversion to grayscale, and the model is trained to maximize information between these views using an InfoNCE style objective. Other approaches are possible; for instance, Tian et al. (2019) obtained multiple views of images using Lab colour decomposition. In SimCLR (Chen et al., 2020a;b), the approach of applying multiple data augmentations (including flip and blur, as well as crops, colour jitter and random grayscale) and using an InfoNCE objective was simplified and streamlined, and the central role of the augmentations was emphasized. By changing the set of transformation operations used, Chen et al. (2020c) were able to improve their contrastive learning approach and achieve excellent performance on downstream detection and segmentation tasks. Tian et al. 
(2020) studied what the best range of transformations for contrastive learning is. The authors found that there is a ‘sweet spot’ in the strength of transformations applied in contrastive learning, with transformations that are too strong or weak being less favourable. Winkens et al. (2020) showed that contrastive methods can be successfully applied to out-of-distribution detection. We note that for tasks such as out-of-distribution detection, transformation covariance may be a more relevant property than invariance." }, { "heading": "D.2 GRADIENT REGULARIZATION TO ENFORCE LIPSCHITZ CONSTRAINTS", "text": "Constraining a neural network to be Lipschitz continuous bounds how quickly its output can change as the input changes. In supervized learning, a small Lipschitz constant has been shown to lead to better generalization (Sokolić et al., 2017) and improved adversarial robustness (Cisse et al., 2017; Tsuzuku et al., 2018). One practical method for constraining the Lipschitz constant is gradient regularization (Drucker & Le Cun, 1992; Gulrajani et al., 2017). Lipschitz constraints have also been applied in a self-supervized context: in Ozair et al. (2019), the authors used a Wasserstein dependency measure in a contrastive learning setting by using gradient penalization to ensure that the function x,x′ 7→ sφ(fθ(x), fθ(x′)) is 1-Lipschitz. Our work uses a gradient regularizer to control how quickly representations can change, but unlike existing work we focus on how representations change with α as x is fixed, instead of how they change with x." }, { "heading": "D.3 GROUP INVARIANT NEURAL NETWORKS", "text": "A large body of recent work has focused on designing neural network architectures that are perfectly invariant, or equivariant, to a set of transformations T in the case when T forms a group. Cohen & Welling (2016) showed how convolutional neural networks can be generalized to have equivariance to arbitrary group transformations applied to their inputs. This can apply, for instance, to rotation groups on the sphere (Cohen et al., 2018), rotation and translation groups on point clouds (Thomas et al., 2018), and permutation groups on sets (Zaheer et al., 2017). Transformations that form a group cannot remove information from the input (because they must be invertible) and can be composed in any order. This means that the more general transformations considered in our work cannot form a group—they cannot be composed (repeated decreasing of brightness to zero is not allowed) nor inverted (crops are not invertible). We have therefore considered methods that improve invariance under much more general transformations." }, { "heading": "D.4 FEATURE AVERAGING AND POOLING", "text": "The concepts of sum-, max- and mean-pooling have a rich history in deep learning (Krizhevsky et al., 2012; Graham, 2014). For example, pooling can be used to down-scale representations in convolutional neural networks (CNNs) as part of a single forward pass through the network with a single input. In our work, however, we apply feature averaging, or mean-pooling, using multiple, differently transformed versions of the same input. This is more similar to Chatfield et al. (2014), who considered pooling or stacking augmented inputs as part of a CNN, and Yoo et al. (2015) who proposed a multi-scale pyramid pooling approach. Unlike these works, we apply pooling in an unsupervized contrastive representation learning context. 
Our feature averaging occurs on the final representations, rather than in a pyramid, and not on intermediate layers of the network. We also use the transformation distribution that is used to define the self-supervized task itself. Other work has explored theoretical aspects of feature averaging (Chen et al., 2019; Lyle et al., 2020) in the supervized learning setting, showing conditions on the invariance properties of the underlying data distribution that can be exploited to obtain improved generalization using feature averaging. For a detailed discussion of Lyle et al. (2020) and its connections with our own work, see Section C." }, { "heading": "E SPIROGRAPH DATASET", "text": "We propose a new dataset that allows the separation of generative factors of interest from nuisance transformation factors and that is formed from a fully differentiable generative process. A standalone implementation of this dataset can be found at https://github.com/rattaoup/spirograph. Our dataset is inspired by the beautiful spirograph patterns some of us drew as children, which are mathematically hypotrochoids given by the following equations

$$ x = (m - h)\cos(t) + h\cos\left(\frac{(m - h)t}{b}\right) \tag{36} $$

$$ y = (m - h)\sin(t) - h\sin\left(\frac{(m - h)t}{b}\right) \tag{37} $$

Figure 7(a) shows an example. To create an image dataset from such curves, we choose 40 equally spaced points t_i with t_1 = 0 and t_40 = 2π, giving a sequence of points (x_1, y_1), ..., (x_40, y_40) on the chosen hypotrochoid. For smoothing parameter σ, the pixel intensity at a point (u, v) is given by

$$ i(u, v) = \frac{1}{40} \sum_{i=1}^{40} \exp\left(\frac{-(u - x_i)^2 - (v - y_i)^2}{\sigma}\right). \tag{38} $$

For a grid of pixels, the intensity values are normalized so that the maximum intensity is equal to 1. Figure 7(b) shows the pixel intensity with σ = 0.5. Finally, for a foreground colour with RGB values (f_r, f_g, f_b) and background colour (b_r, b_g, b_b), the final RGB value at a point (u, v) is

$$ c(u, v) = i(u, v)\,(f_r, f_g, f_b)^\top + (1 - i(u, v))\,(b_r, b_g, b_b)^\top \tag{39} $$

The final coloured sample image is shown in Figure 7(c).

The Spirograph sample is fully specified by the parameters m, b, h, σ, f_r, f_g, f_b, b_r, b_g, b_b. In our experiments, we treat m, b, σ, f_r as parameters of interest. We treat h and the remaining colour parameters as nuisance parameters. That is, we take x = (m, b, σ, f_r) and α = (h, f_g, f_b, b_r, b_g, b_b), and the transformation t(x, α) is the full generative process described above. There are no additional parameters β for this dataset. Figure 1 shows two sets of four samples from the Spirograph dataset; in each set the generative factors of interest are fixed and the nuisance parameters are varied. In general for the Spirograph dataset, the distinction between generative factors of interest and nuisance parameters can be changed to attempt to learn different aspects of the data. The transformation t is fully differentiable, meaning that we can apply gradient penalization to all the nuisance parameters of the generative process. In our experiments, we took the following distributions to sample random values of the parameters: m ∼ U(2, 5), b ∼ U(0.1, 1.1), h ∼ U(0.5, 2.5), σ ∼ U(0.25, 1), f_r, f_g, f_b ∼ U(0.4, 1), b_r, b_g, b_b ∼ U(0, 0.6). We synthesized 100,000 training images and 20,000 test images with dimension 32 × 32."
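As a companion to the generative process above, the following is a minimal NumPy sketch of equations 36–39. The grid extent and parameter values are our own illustrative choices; the standalone repository linked above is the reference implementation.

```python
import numpy as np

def spirograph(m, b, h, sigma, fg, bg, size=32, extent=4.0):
    # 40 equally spaced points on the hypotrochoid (equations 36-37).
    t = np.linspace(0.0, 2.0 * np.pi, 40)
    xs = (m - h) * np.cos(t) + h * np.cos((m - h) * t / b)
    ys = (m - h) * np.sin(t) - h * np.sin((m - h) * t / b)

    # Pixel intensity: smoothed bumps around the curve points (equation 38).
    u, v = np.meshgrid(np.linspace(-extent, extent, size), np.linspace(-extent, extent, size))
    i = np.mean(np.exp((-(u[..., None] - xs) ** 2 - (v[..., None] - ys) ** 2) / sigma), axis=-1)
    i = i / i.max()  # normalize so the maximum intensity equals 1

    # Blend foreground and background colours by intensity (equation 39).
    i = i[..., None]
    return i * np.asarray(fg) + (1.0 - i) * np.asarray(bg)

img = spirograph(m=3.0, b=0.5, h=1.5, sigma=0.5, fg=(0.9, 0.5, 0.6), bg=(0.1, 0.2, 0.3))
print(img.shape, img.min() >= 0.0, img.max() <= 1.0)  # (32, 32, 3) RGB image in [0, 1]
```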
}, { "heading": "F.1 DIFFERENTIABLE COLOUR DISTORTION", "text": "We want to improve the representations learned from contrastive methods by explicitly encouraging stronger invariance to the set of transformations. Our method is to restrict gradients of the representations with respect to certain transformations. Ensuring that the transformations are practically differentiable within PyTorch (Paszke et al., 2019) required a thorough study of the transformations. The subset of transformations we apply gradient regularization to includes colour distortions which are conventionally treated as a part of data preprocessing. Rewriting this as a differentiable module within the computational graph allows us to practically compute the gradient regularizer of equation 11. We will consider adjusting brightness, contrast, saturation, hue of an image. In fact, most of these transformations are simply linear transformations of the original image. First, the brightness adjustment is simply defined as xbrt = xαbrt (40) when αbrt is a scale factor. If we write x = r,g,b, for the three colour channels of x, then greyscale conversion of x is given by xgs = 0.299r + 0.587g + 0.114b. (41) Adjusting the saturation of x is a linear combination of x and xgs, the greyscale version of x\nxsat = xαsat + xgs(1− αsat) (42)\nwhen αsat is a scale factor. Adjusting the contrast of x is a linear combination of x and mean(xgs), which the mean over all spatial dimensions of xgs. With a scaling parameter αcon we have\nxcon = xαcon + mean(xgs)(1− αcon). (43)\nWe utilize a linear approximation for hue adjustment. We perform hue adjustment by converting to the YIQ colour space, and then applying rotation on the IQ components. The transformation between RGB and YIQ colour space is given by the following linear transformation(\nY I Q\n) = ( 0.299 0.587 0.114 0.5959 −0.2746 −0.3213 0.2115 −0.5227 0.3112 )( R G B ) = TY IQ ( R G B ) (44)\nNote that the Y component is exactly the greyscale version xgs defined above. We transform YIQ back to RGB by (\nR G B\n) = ( 1 0.956 0.619 1 −0.272 −0.647 1 −1.106 1.703 )( Y I Q ) = TRGB ( Y I Q ) (45)\nIn YIQ format, we can adjust hue of an image by θ = 2παhue by multiplying with a rotation matrix\nRθ = ( 1 0 0 0 cos θ − sin θ 0 sin θ cos θ ) (46)\nTherefore, our hue adjustment is given by\nxhue = TRGBRαhueTY IQx (47)\nwhere the matrices operate on the three colour channels of x and in parallel over all spatial dimensions. Each operation is followed by pointwise clipping of pixel values to the range [0, 1]." }, { "heading": "F.2 SET-UP", "text": "Our set-up is quite similar to the setup in Chen et al. (2020a) with two main differences: we treat colour distortions as a differentiable module while in Chen et al. (2020a) the transformation was performed in the preprocessing step, and we add the gradient penalty term in addition to the original loss in Chen et al. (2020a)." }, { "heading": "F.2.1 TRANSFORMATIONS", "text": "First, for a batch x1, ...,xK of inputs, we form a pair of (x1,x′1), ..., (xK ,x ′ K) by applying two random transformations: random resized crop and random horizontal flip for each input. We then apply our differentiable colour distortion function which is composed of random colour jitter with probability p = 0.8 and random greyscale with probability p = 0.2. (Colour jitter is the composition of adjusting brightness, adjusting contrast, adjusting saturation, adjusting hue in this order.) 
 }, { "heading": "F.2 SET-UP", "text": "Our set-up is quite similar to that of Chen et al. (2020a), with two main differences: we treat colour distortions as a differentiable module, whereas in Chen et al. (2020a) the transformation was performed in the preprocessing step, and we add the gradient penalty term to the original loss of Chen et al. (2020a)." }, { "heading": "F.2.1 TRANSFORMATIONS", "text": "First, for a batch x_1, ..., x_K of inputs, we form pairs (x_1, x′_1), ..., (x_K, x′_K) by applying two random transformations: random resized crop and random horizontal flip for each input. We then apply our differentiable colour distortion function, which is composed of random colour jitter with probability p = 0.8 and random greyscale with probability p = 0.2. (Colour jitter is the composition of adjusting brightness, contrast, saturation and hue, in this order.) We sample α, the parameter that controls how strong the adjustment is for each image, from the following distributions: brightness, contrast and saturation adjustment parameters from U(0.6, 1.4), and the hue adjustment parameter from U(−0.1, 0.1). We call the resultant pairs (x_1, x′_1), ..., (x_K, x′_K)." }, { "heading": "F.2.2 CONTRASTIVE LEARNING", "text": "Similar to Chen et al. (2020a), we use the transformed pairs (x_1, x′_1), ..., (x_K, x′_K) as input to an encoder to learn a pair of representations (z_1, z′_1), ..., (z_K, z′_K). The final loss function that we use for training is equation 13. Table 4 shows all hyperparameters that were used for training. The small neural network g_φ is an MLP with two layers, each consisting of a fully connected linear map, ReLU activation and batch normalization. We use the LARS optimizer (You et al., 2017) and apply cosine annealing (Loshchilov & Hutter, 2016) to the learning rate." }, { "heading": "F.2.3 GRADIENT REGULARIZATION", "text": "In this part, we explain our setup for calculating the gradient penalty as in equation 12. We sample a random vector e with independent Rademacher components, independently for each sample in the batch. We generate L samples of α for each element of the batch to compute the regularizer. Finally, we clip the penalty from above to prevent instability at the onset of training. In practice, this meant the gradient regularization was not enforced for about the first epoch of training. Table 5 shows the hyperparameters used within the gradient penalty calculation."
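The following is a minimal sketch of this procedure; the exact estimator of equation 12 in the code base may differ, and the encoder, transformation and α distribution here are illustrative stand-ins.

```python
import torch

def gradient_penalty(encoder, transform, x, sample_alpha, L=2, clip_max=1.0):
    # Hutchinson-style estimate: project z onto a Rademacher vector e (one per
    # example) and differentiate the projection w.r.t. the parameters alpha.
    total = 0.0
    for _ in range(L):
        alpha = sample_alpha(x.shape[0]).requires_grad_(True)
        z = encoder(transform(x, alpha))
        e = torch.randint(0, 2, z.shape, dtype=torch.float32) * 2 - 1  # Rademacher components
        (g,) = torch.autograd.grad((z * e).sum(), alpha, create_graph=True)
        total = total + (g ** 2).sum(dim=1)
    # Clip the penalty from above to avoid instability at the onset of training.
    return (total / L).clamp(max=clip_max).mean()

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 16))
transform = lambda x, a: (x * a.view(-1, 1, 1, 1)).clamp(0, 1)  # brightness-style t(x, alpha)
sample_alpha = lambda n: torch.rand(n, 1) * 0.8 + 0.6           # alpha ~ U(0.6, 1.4)

x = torch.rand(4, 3, 8, 8)
print(gradient_penalty(encoder, transform, x, sample_alpha).item())
```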
 }, { "heading": "F.2.4 EVALUATION", "text": "We use our representations as features in linear classification and regression tasks. We train these linear models with L-BFGS, with hyperparameters as shown in Table 6, on the training set and evaluate performance on the test set." }, { "heading": "F.2.5 MOCO V2", "text": "To empirically demonstrate that our ideas transfer to alternative base contrastive learning methods, we applied both gradient regularization and feature averaging to the MoCo v2 (Chen et al., 2020c) base set-up. We also explored two different ResNet (He et al., 2016) architectures. We closely followed the MoCo v2 implementation at https://github.com/facebookresearch/moco. As for SimCLR, we adapted the transformations to be a differentiable module. We also made adaptations for CIFAR-100 in an identical way as in our previous experiments. As in MoCo v2, we removed batch normalization in the projection head g_φ; we used SGD optimization with learning rate 0.06 for a batch size of 512, and used the MoCo parameters K = 2048 and m = 0.99 for ResNet18 and K = 4096, m = 0.99 for ResNet50. We did not conduct extensive hyperparameter sweeps, but we did investigate larger values of K, which did not lead to improved performance on CIFAR-100. (In particular, the original settings K = 65536, m = 0.999 appeared to perform less well on this dataset.) Other hyperparameters and settings were identical to Chen et al. (2020c). We did 3 independent runs with a ResNet18 and 2 runs with a ResNet50. We conducted linear classification evaluation with fixed representations in exactly the same way as for our other experiments. Feature averaging results used M = 40." }, { "heading": "F.2.6 COMPUTATIONAL COST", "text": "We found that gradient regularization increased the total time to train encoders by a factor of at most 2. For feature averaging at test time with a fixed dataset, the computation of features z^{(M)} is an O(M) operation, whilst the training and testing of the linear classifier is O(1). Training time remained by far the larger cost in all experiments, by orders of magnitude." }, { "heading": "F.3 ADDITIONAL EXPERIMENTAL RESULTS", "text": "" }, { "heading": "F.3.1 COMPARISON WITH ENSEMBLING", "text": "Feature averaging is an approach that bears much similarity with ensembling. To experimentally compare these two approaches, we applied both to encoders trained on CIFAR-10. To provide a suitable comparison with feature averaging using z^{(M)}, we first trained a linear classifier p(y|z) on an M-fold augmented dataset of representations, with a standard cross-entropy loss, using L-BFGS optimization and the same weight decay as for feature averaging. For CIFAR-10, which has a training set of size 50000, the feature averaging classifier was trained using 50000 averaged representations, whereas the ensembling classifier was trained with 50000M examples using data augmentation. At test time, we averaged prediction probabilities using M different representations of each test image. Specifically, if p(y|z) is the classifier trained by the aforementioned procedure and α_1, ..., α_M ∼ p(α), β_1, ..., β_M ∼ p(β) are independent transformation parameters, the probability of assigning label y to input x is given by

$$ p_{\mathrm{ensemble}}(y|x) = \frac{1}{M} \sum_{m=1}^{M} p\big(y \mid f_\theta(t(x, \alpha_m, \beta_m))\big). \tag{48} $$

The results outlined in Figure 8 show that ensembling gives very similar performance to feature averaging in terms of accuracy, but is significantly worse in terms of loss. We can understand this result intuitively because ensembling includes probabilities from every transformed version of the input (including where the classifier is uncertain or incorrect), whereas feature averaging combines transformations in representation space and uses only one forward pass of the classifier. More formally, the difference in test loss makes sense in light of Theorem 1. Figure 9 shows additional results obtained using representations trained with standard SimCLR on CIFAR-10. We see the same pattern—a similar test accuracy but worse test loss when using augmentation ensembling."
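A sketch contrasting the two prediction rules (equation 48 versus classification of the feature-averaged representation), with toy stand-ins for the encoder and classifier:

```python
import torch

def ensemble_predict(classifier, encoder, transform, x, M):
    # Equation 48: average the class probabilities over M transformed views.
    probs = [classifier(encoder(transform(x))).softmax(dim=-1) for _ in range(M)]
    return torch.stack(probs).mean(dim=0)

def feature_average_predict(classifier, encoder, transform, x, M):
    # Feature averaging: average representations first, then one classifier pass.
    z = torch.stack([encoder(transform(x)) for _ in range(M)]).mean(dim=0)
    return classifier(z).softmax(dim=-1)

torch.manual_seed(0)
encoder, classifier = torch.nn.Linear(8, 4), torch.nn.Linear(4, 3)
transform = lambda x: x + 0.1 * torch.randn_like(x)
x = torch.randn(5, 8)
print(ensemble_predict(classifier, encoder, transform, x, M=10).shape)         # (5, 3)
print(feature_average_predict(classifier, encoder, transform, x, M=10).shape)  # (5, 3)
```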
 }, { "heading": "F.3.2 GRADIENT REGULARIZATION LEADS TO STRONGLY INVARIANT REPRESENTATIONS", "text": "We first show that our gradient penalty successfully learns representations that have greater invariance to transformation than their counterparts generated by standard contrastive learning. We consider two metrics: the conditional variance targeted directly by the gradient regularizer, and the loss when z is used to predict α with linear regression. Table 7 and Figure 10 are the equivalents of Table 1 and Figure 2 for CIFAR-100, showing the conditional variance and the regression loss for predicting α, respectively. In Table 8 we present the same results for Spirograph. We see very similar results to CIFAR-10 in both cases—the gradient penalty dramatically reduces conditional variances, and prediction of α by linear regression gives a loss that is better than a constant prediction only for standard contrastive representations." }, { "heading": "F.3.3 GRADIENT REGULARIZED REPRESENTATIONS PERFORM BETTER ON DOWNSTREAM TASKS AND ARE ROBUST TO TEST TIME TRANSFORMATION", "text": "For downstream performance on Spirograph, we evaluate encoders trained with and without gradient regularization on the task of predicting the generative parameters of interest. In our set-up, we use 100,000 train images and 20,000 test images and train the encoders on the training set for 50 epochs. For evaluation, we train a linear regressor on the representations from the encoders to predict the actual generative parameters. Settings for the linear regressor are shown in Table 6. To accompany the main results in Figure 3(c), we include the exact values used in this figure in Table 9.

We now turn to our experiments used to investigate robustness—we investigate scenarios in which we change the distribution of the transformation parameters α at test time, but use encoders that were trained with the original distribution. We investigate this on both the CIFAR and Spirograph datasets.

For CIFAR, we chose to vary the distribution of the parameters for colour distortions at test time. We can write the distributions of the brightness, saturation and contrast parameters as U(1 − 0.8S, 1 + 0.8S) and the distribution of hue as U(−0.2S, 0.2S), where S is a parameter controlling the strength of the distortion. In the original setup, we have S = 0.5. By varying the value of S used at test time, we can increase the variance of the nuisance transformations, including stronger transformations than those that were present when the encoders were trained. This is visualized in Figure 11. Figure 14 is a companion plot for Figure 4(a) applied to CIFAR-100. We see broadly similar trends—our representations outperform those from standard contrastive learning across a range of test time distributions.

For robustness on Spirograph, recall α = (h, f_g, f_b, b_r, b_g, b_b), where h ∼ U(0.5, 2.5), f_g, f_b ∼ U(0.4, 1), b_r, b_g, b_b ∼ U(0, 0.6). We chose to vary the distribution of the background colour (b_r, b_g, b_b) and the distribution of h, which is a structure-related transformation parameter. We consider two approaches: mean shifting, where we shift the uniform distribution by S, for example U(a, b) → U(a + S, b + S); and changing variance, where we increase the range of the uniform distribution by 2S, for example U(a, b) → U(a − S, b + S). We compare the performance of the trained encoders at epoch 50 on predicting the generative parameters (m, b, σ, f_r), and we use the same settings for the linear regressors as in Table 6.

Figure 12 is a visualization of the effect of varying h from 0.5 to 2.0 while other parameters are kept constant. Figure 13 shows the effect of varying the background colour of an image by adding S = 0.15, 0.30, 0.45 to each of the background RGB channels.

For varying the distribution of h, we consider shifting the mean of h ∼ U(0.5, 2.5) by S = ±0.1, ±0.3, ±0.5 and increasing the variance of h by S = 0.1, 0.3, 0.5. For the distribution of the background colour (b_r, b_g, b_b), we consider shifting the distribution by S = 0.1, 0.2, 0.3, 0.4 and increasing the variance by the same amounts. We note that (b_r, b_g, b_b) controls the background colour of an image, so we are varying the 3 distributions at the same time. Since the foreground colour has the distribution f_r, f_g, f_b ∼ U(0.4, 1), shifting the distribution of (b_r, b_g, b_b) toward (f_r, f_g, f_b) makes the background and foreground colours more similar. For example, with S = 0.4, when we apply a mean shift we change the distribution of (b_r, b_g, b_b) to b_r, b_g, b_b ∼ U(0.4, 1), and when we increase the variance the distribution becomes b_r, b_g, b_b ∼ U(0, 1)." } ]
2021
CONTRASTIVE REPRESENTATION LEARNING
SP:a24603a5dbc07070aeba98e1206511799111bec6
[ "This paper studies the potential bias in deep semi-supervised anomaly detection. The bias is evaluated in terms of TPR rate given a fixed FPR rate. It uses the anomaly scores output by unsupervised anomaly detectors as a benchmark to examine the relative scoring bias in deep semi-supervised anomaly detectors. It further studies the finite sample rate for this type of scoring bias. This type of bias is verified using some synthetic and real-world datasets. The empirical results also show the potential impact of this bias on several anomaly detectors." ]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data. Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples. However, the labeled data often does not align with the target distribution and introduces harmful bias to the trained model. In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection. We formally state the anomaly detection problem as a supervised learning task, and focus on the anomaly detector’s recall at a given false positive rate as the main performance metric. Given two different anomaly score functions, we formally define their difference in performance as the relative scoring bias of the anomaly detectors. Along this line, our work provides two key contributions. We establish the first finite sample rates for estimating the relative scoring bias for deep anomaly detection, and empirically validate our theoretical results on both synthetic and real-world datasets. We also provide extensive empirical study on how a biased training anomaly set affects the anomaly score function and therefore the detection performance on different anomaly classes. Our study demonstrates scenarios in which the biased anomaly set can be useful or problematic, and provides a solid benchmark for future research.
[]
[ { "authors": [ "Charu C Aggarwal", "Saket Sathe" ], "title": "Theoretical foundations and algorithms for outlier ensembles", "venue": "Acm Sigkdd Explorations Newsletter,", "year": 2015 }, { "authors": [ "Varun Chandola", "Arindam Banerjee", "Vipin Kumar" ], "title": "Anomaly detection: A survey", "venue": "ACM Comput. Surv.,", "year": 2009 }, { "authors": [ "Tal Daniel", "Thanard Kurutach", "Aviv Tamar" ], "title": "Deep variational semi-supervised novelty detection", "venue": "arXiv preprint arXiv:1911.04971,", "year": 2019 }, { "authors": [ "Andrew Emmott", "Shubhomoy Das", "Thomas Dietterich", "Alan Fern", "Weng-Keen Wong" ], "title": "A metaanalysis of the anomaly detection problem", "venue": "arXiv preprint arXiv:1503.01158,", "year": 2015 }, { "authors": [ "Izhak Golan", "Ran El-Yaniv" ], "title": "Deep anomaly detection using geometric transformations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Steve Hanneke", "Samory Kpotufe" ], "title": "On the value of target data in transfer learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "Proceedings of the International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yoshinao Ishii", "Satoshi Koide", "Keiichiro Hayakawa" ], "title": "L0-norm constrained autoencoders for unsupervised outlier detection", "venue": "In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2020 }, { "authors": [ "Michael J Kearns", "Umesh Virkumar Vazirani", "Umesh Vazirani" ], "title": "An introduction to computational learning theory", "venue": "MIT press,", "year": 1994 }, { "authors": [ "Samory Kpotufe", "Guillaume Martinet" ], "title": "Marginal singularity, and the benefits of labels in covariate-shift", "venue": "arXiv preprint arXiv:1803.01833,", "year": 2018 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zhijing Li", "Zhujun Xiao", "Bolun Wang", "Ben Y. 
Zhao", "Haitao Zheng" ], "title": "Scaling deep learning models for spectrum anomaly detection", "venue": "In Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing,", "year": 2019 }, { "authors": [ "Kun Liu", "Huadong Ma" ], "title": "Exploring background-bias for anomaly detection in surveillance videos", "venue": "In Proceedings of the 27th ACM International Conference on Multimedia,", "year": 2019 }, { "authors": [ "Si Liu", "Risheek Garrepalli", "Thomas Dietterich", "Alan Fern", "Dan Hendrycks" ], "title": "Open category detection with PAC guarantees", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "P. Massart" ], "title": "The tight constant in the dvoretzky-kiefer-wolfowitz inequality", "venue": "Ann. Probab.,", "year": 1990 }, { "authors": [ "Guansong Pang", "Chunhua Shen", "Anton van den Hengel" ], "title": "Deep anomaly detection with deviation networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Emanuel Parzen" ], "title": "Quantile functions, convergence in quantile, and extreme value distribution theory", "venue": "Technical report, Texas A & M University,", "year": 1980 }, { "authors": [ "Marco AF Pimentel", "David A Clifton", "Lei Clifton", "Lionel Tarassenko" ], "title": "A review of novelty detection", "venue": "Signal Processing,", "year": 2014 }, { "authors": [ "Shebuti Rayana", "Wen Zhong", "Leman Akoglu" ], "title": "Sequential ensemble learning for outlier detection: A bias-variance perspective", "venue": "In Proceedings of the IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "Lukas Ruff", "Robert A. Vandermeulen", "Nico Görnitz", "Lucas Deecke", "Shoaib A. Siddiqui", "Alexander Binder", "Emmanuel Müller", "Marius Kloft" ], "title": "Deep one-class classification", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Lukas Ruff", "Robert A. Vandermeulen", "Billy Joe Franks", "Klaus-Robert Müller", "Marius Kloft" ], "title": "Rethinking assumptions in deep anomaly detection", "venue": "arXiv preprint arXiv:2006.00339,", "year": 2020 }, { "authors": [ "Lukas Ruff", "Robert A. Vandermeulen", "Nico Görnitz", "Alexander Binder", "Emmanuel Müller", "KlausRobert Müller", "Marius Kloft" ], "title": "Deep semi-supervised anomaly detection", "venue": "In Proc. 
of ICLR,", "year": 2020 }, { "authors": [ "Mayu Sakurada", "Takehisa Yairi" ], "title": "Anomaly detection using autoencoders with nonlinear dimensionality reduction", "venue": "In Proceedings of the 2nd Workshop on Machine Learning for Sensory Data Analysis,", "year": 2014 }, { "authors": [ "Bernhard Schölkopf", "Robert Williamson", "Alex Smola", "John Shawe-Taylor", "John Platt" ], "title": "Support vector method for novelty detection", "venue": "In Proceedings of the 12th International Conference on Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "Md Amran Siddiqui", "Alan Fern", "Thomas G Dietterich", "Shubhomoy Das" ], "title": "Finite sample complexity of rare pattern anomaly detection", "venue": null, "year": 2016 }, { "authors": [ "Ashwin Srinivasan" ], "title": "StatLog (Landsat Satellite) Data Set", "venue": "https://archive.ics.uci.edu/ ml/datasets/Statlog+(Landsat+Satellite),", "year": 1993 }, { "authors": [ "Alexander Tong", "Roozbah Yousefzadeh", "Guy Wolf", "Smita Krishnaswamy" ], "title": "Fixing bias in reconstruction-based anomaly detection with lipschitz discriminators", "venue": null, "year": 1905 }, { "authors": [ "Leslie G Valiant" ], "title": "A theory of the learnable", "venue": "Communications of the ACM,", "year": 1984 }, { "authors": [ "Siqi Wang", "Yijie Zeng", "Xinwang Liu", "En Zhu", "Jianping Yin", "Chuanfu Xu", "Marius Kloft" ], "title": "Effective end-to-end unsupervised outlier detection via inlier priority of discriminative network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zirui Wang", "Zihang Dai", "Barnabás Póczos", "Jaime Carbonell" ], "title": "Characterizing and avoiding negative transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sen Wu", "Hongyang R Zhang", "Christopher Ré" ], "title": "Understanding and improving information transfer in multi-task learning", "venue": "arXiv preprint arXiv:2005.00944,", "year": 2020 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Yuki Yamanaka", "Tomoharu Iwata", "Hiroshi Takahashi", "Masanori Yamada", "Sekitoshi Kanai" ], "title": "Autoencoding binary classifiers for supervised anomaly detection", "venue": null, "year": 1903 }, { "authors": [ "Chong Zhou", "Randy C. Paffenroth" ], "title": "Anomaly detection with robust deep autoencoders", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [], "title": "PROOF OF THEOREM 3 Proof of Theorem 3. Our proof builds upon and extends the analysis framework of Liu et al", "venue": null, "year": 2018 }, { "authors": [ "Li" ], "title": "The dataset includes a large set (100K", "venue": null, "year": 2019 }, { "authors": [ "Ruff" ], "title": "2020b): For the anomaly class, we consider a loss function which takes the form", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Anomaly detection (Chandola et al., 2009; Pimentel et al., 2014) trains a formal model to identify unexpected or anomalous instances in incoming data, whose behaviors differ from normal instances. It is particularly useful for detecting problematic events such as digital fraud, structural defects, and system malfunctions. Building accurate anomaly detection models is a well-known challenge in machine learning, due to the scarcity of labeled anomaly data. The classical and most common approach is to train anomaly detection models using only normal data1, i.e., first train a model using a corpus of normal data to capture normal behaviors, then configure the model to flag instances with large deviations as anomalies. Researchers have also developed deep learning methods to better capture the complex structure in the data (Ruff et al. (2018); Wang et al. (2019a); Zhou & Paffenroth (2017)). Following the terminology introduced by Chandola et al. (2009), we refer to these models as semi-supervised anomaly detection.\nRecently, a new line of anomaly detection models proposes to leverage available labeled anomalies during model training, i.e., train an anomaly detection model using both normal data and additional labeled anomaly samples as they become available (Ruff et al. (2020b); Yamanaka et al. (2019); Ruff et al. (2020a); Hendrycks et al. (2019a)). Existing works show that these new models achieve considerable performance improvements beyond the models trained using only normal data. We hereby refer to these models as deep supervised2 anomaly detection (Chandola et al., 2009).\nWhen exploring these models, we found that when the labeled anomalies (used to train the model) do not align with the target distribution, they could introduce harmful bias to the trained model. Specifically, when comparing the performance of a supervised anomaly detector to its semi-supervised\n1Existing literature has used different terms to describe this type of models: some using semi-supervised anomaly detection (Chandola et al., 2009) and others using unsupervised anomaly detection (Ruff et al., 2018).\n2Some works termed these models as semi-supervised anomaly detection (Ruff et al., 2020b; Yamanaka et al., 2019; Ruff et al., 2020a; Hendrycks et al., 2019a) while others termed them as supervised anomaly detection (Chandola et al., 2009).\nversion, the performance difference varies significantly across test anomaly data, some better and some worse. That is, using labeled anomalies during model training does not always improve model performance; instead, it may introduce large variance (or bias) in anomaly detection outcomes.\nIn this paper, we aim to understand the effect of a biased training set on deep anomaly detection models. We formally state the anomaly detection problem, focusing on the anomaly detector’s recall at a given false positive rate as the main performance metric. We factor the contribution of the labeled anomalies by the detector’s anomaly scoring function, and show that different types of labeled anomalies produce different anomaly scoring functions. Next, given any two different anomaly scoring functions, we formally define their difference in performance as the relative scoring bias of the anomaly detectors. 
Our novel notion of scoring bias for anomaly detection aligns with the notion of bias in the classical supervised learning setting, with the key difference being the different performance metric—we target recall at a given false positive rate, the metric used by real-world anomaly detection tasks (Li et al., 2019; Liu et al., 2018).\nAlong this line, we establish the first finite sample rates for estimating the relative scoring bias for deep anomaly detection. We empirically validate our assumptions and theoretical results on both synthetic and three real-world datasets (Fashion-MNIST, Statlog (Landsat Satellite), and Cellular Spectrum Misuse (Li et al., 2019)).\nFurthermore, we provide an empirical study on how a biased training anomaly set affects the anomaly score function and therefore the resulting detection performance. We consider the above three real-world datasets and six deep-learning based anomaly detection models. Our study demonstrates scenarios in which the biased anomaly set can be useful or problematic, and provides a solid benchmark for future research.\nIn this paper, we introduce a formal analysis on the effect of a biased training set on deep anomaly detection. Our main contributions are the following:\n• We discover the issue of large performance variance in deep anomaly detectors, caused by the use of the biased anomaly set as training data. • We model the effect of biased training as relative scoring bias, and establish the first finite sample rates for estimating the relative scoring bias of the trained models. • We conduct empirical experiments to verify and characterize the impact of the relative scoring bias on six popular anomaly detection models, and three real-world datasets.\nTo the best of our knowledge, our work is the first to formally study the effect of a biased anomaly training set on deep anomaly detection. Our results show both significant positive and negative impacts of these biases, and suggest that model trainers must treat anomalies with additional care. We believe this leads to new opportunities for improving deep anomaly detectors and deserves more attention from the research community." }, { "heading": "2 RELATED WORK", "text": "Anomaly Detection Models. While the literature on anomaly detection models is extensive, the most relevant to our work are deep learning based models. Following the terminology used by Chandola et al. (2009), we consider two types of models:\n• Semi-supervised anomaly detection refers to models trained on only normal data, e.g., Ruff et al. (2018); Sakurada & Yairi (2014); Zhou & Paffenroth (2017); • Supervised anomaly detection refers to models trained on normal data and a small set of labeled anomalies, e.g., Pang et al. (2019); Daniel et al. (2019); Yamanaka et al. (2019); Ruff et al. (2020a;b).\nOne can also categorize models by their architecture: hypersphere (Ruff et al., 2018; 2020a;b) and autoencoder (or reconstruction) based models (Zhou & Paffenroth, 2017; Yamanaka et al., 2019).\nAnother line of recent work proposes to use synthetic or auxiliary anomalies to train anomaly detection models (Golan & El-Yaniv (2018); Hendrycks et al. (2019c); Lee et al. (2018); Hendrycks et al. (2019b)), “forcing” the model to learn a more compact representation of the normal data. 
While the existing work has shown empirically that the choice of abnormal data in training can help detect some unseen abnormal distributions, it does not offer any theoretical explanation for the phe-\nnomenon, nor does it consider the counter-cases when additional abnormal data in training hurt the detection performance.\nBias in Anomaly Detection. To the best of our knowledge, we are the first to identify the presence of bias caused by an additional labeled anomaly set in deep anomaly detection models, especially when there exists a mismatch between the anomalies present in training and those encountered in testing (as shown in Section 5). Existing work has explored the presence of bias in semi-supervised anomaly detection models when there exists defective normal data in training, like outliers and simple-to-reconstruct examples (Tong et al., 2019), or examples with background noise (Liu & Ma, 2019). There is also literature on the bias-variance tradeoff for ensembles of semi-supervised anomaly detection models (Aggarwal & Sathe, 2015; Rayana et al., 2016). But little or no work has been done on the bias of anomaly detection in the supervised setting (i.e., models trained on both normal data and some labeled anomalies). Finally, another line of work in transfer learning has identified the value of additional labeled data in training (Kpotufe & Martinet, 2018; Hanneke & Kpotufe, 2019) and the performance bias on target data by transferring knowledge from a less related source (Wang et al., 2019b; Wu et al., 2020). Yet most work only considered the cases of classification models.\nPAC guarantees for Anomaly Detection. Despite significant progress on developing theoretical guarantees for classification tasks (Valiant (1984); Kearns et al. (1994)), little has been done for anomaly detection tasks. Siddiqui et al. (2016) first establishes a PAC framework for anomaly detection models using the notion of pattern space; however, it is hard to apply such pattern spaces to deep learning models with complex latent spaces. Liu et al. (2018) proposes a model-agnostic approach to provide the PAC guarantee for anomaly detection performance, by analyzing the convergence for the cumulative distribution of anomaly scores. We follow the basic setting from this line of work to address the convergence of the relative scoring bias. In contrast to prior work, our proof relies on a novel adaption of the key theoretical tool from Massart (1990), which allows us to extend our theory to characterize the notion of scoring bias as defined in Section 3.2." }, { "heading": "3 PROBLEM FORMULATION", "text": "We now formally state the anomaly detection problem. Consider a model class Θ for anomaly detection, and a (labeled) training set D sampled from a mixture distribution D over the normal and anomalous instances. In the context of anomaly detection, a model θ maps each input instance x to a continuous output, which corresponds to anomaly score sθ(x). The model further uses a threshold τθ on the score function to produce a binary label for input x.\nFor a given threshold value τθ, we can define the False Positive Rate (FPR) of the model θ on the input data distribution as FPR(sθ, τθ) = P [sθ(x) > τθ | y = 0], and the True Positive Rate (TPR, a.k.a. Recall) as TPR(sθ, τθ) = P [sθ(x) > τθ | y = 1]. 
The FPR and TPR are competing objectives—therefore, a key challenge for anomaly detection algorithms is to identify a configuration of the score–threshold pair (s_θ, τ_θ) that strikes a balance between the two performance metrics. W.l.o.g.³, in this paper we focus on the following scenario, where the objective is to maximize TPR subject to achieving a target FPR. Formally, let q be the target FPR; we define the optimal anomaly detector as⁴

$$ (s_\theta^*, \tau_\theta^*) \in \arg\max_{(s_\theta, \tau_\theta):\, \theta \in \Theta} \mathrm{TPR}(s_\theta, \tau_\theta) \quad \text{s.t.} \quad \mathrm{FPR}(s_\theta, \tau_\theta) \le q \tag{3.1} $$

³Our results can be easily extended to the setting where the goal is to minimize FPR subject to a given TPR. ⁴This formulation aligns with many contemporary works in deep anomaly detection. For example, Li et al. (2019) show that in real-world anomaly detection problems, it is desirable to detect anomalies with a prefixed low false alarm rate; Liu et al. (2018) formulate anomaly detection in a similar way, where the goal is to minimize FPR for a fixed TPR." }, { "heading": "3.1 A GENERAL ANOMALY DETECTION FRAMEWORK", "text": "Note that the performance metric (namely TPR) in Problem 3.1 is a statistic that depends on the entire predictive distribution, and cannot be easily evaluated on any single data point. Therefore, rather than directly solving Problem 3.1, practical anomaly detection algorithms (such as OCSVM (Schölkopf et al., 1999), Deep SAD (Ruff et al., 2020b), etc.) often rely on a two-stage process: (1) learning the score function s_θ from training data via a surrogate loss, and (2) given s_θ from the previous step, computing the threshold function τ_θ on the training data. Formally, given a model class Θ, a training set D, a loss function ℓ, and a target FPR q, a two-staged anomaly detection algorithm outputs

$$ \hat{s}_\theta \in \arg\min_{s_\theta:\, \theta \in \Theta} \ell(s_\theta, D), \qquad \hat{\tau}_\theta \in \arg\max_{\tau_\theta:\, \theta \in \Theta} \mathrm{TPR}(\hat{s}_\theta, \tau_\theta) \ \ \text{s.t.}\ \ \mathrm{FPR}(\hat{s}_\theta, \tau_\theta) \le q \tag{3.2} $$

Note that the first part of Equation 3.2 amounts to solving a supervised learning problem. Here, the loss function ℓ could be instantiated as latent-space-based losses (e.g., Deep SAD), margin-based losses (e.g., OCSVM), or reconstruction-based losses (e.g., ABC (Yamanaka et al., 2019)); therefore, many contemporary anomaly detection models fall into this framework. To set the threshold τ̂_θ, we consider using the distribution of the anomaly scores ŝ_θ(·) on a labeled validation set D^val ∼ D. Let D^val := D^val_0 ∪ D^val_a, where D^val_0 and D^val_a denote the subsets of normal and abnormal data in D^val. Denote the empirical CDFs of the anomaly scores assigned to x in D^val_0 and D^val_a by F̂_0 and F̂_a, respectively. Then, given a target FPR value q, following a similar argument as Liu et al. (2018), one can compute the threshold as τ̂_θ = max{u ∈ ℝ : F̂_0(u) ≤ q}. The steps for solving the second part of Equation 3.2 are summarized in Algorithm 1.

Algorithm 1: Computing the anomaly detection threshold for Problem 3.2
Data: A validation dataset D^val and a scoring function s(·).
Result: A score threshold achieving a target FPR and the corresponding recall on D^val.
1. Get the anomaly score s(x) for each x in D^val.
2. Compute the empirical CDFs F̂_0(x) and F̂_a(x) of the anomaly scores for x in D^val_0 and D^val_a.
3. Output the detection threshold τ̂ = max{u ∈ ℝ : F̂_0(u) ≤ q}.
4. Output the TPR (recall) on D^val_a as r̂ = 1 − F̂_a(τ̂)."
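A minimal NumPy sketch of Algorithm 1 follows. We take higher scores to indicate anomalies, so the target FPR q corresponds to thresholding at the empirical (1 − q)-quantile of the normal validation scores; variable names and the stand-in Gaussian scores are ours.

```python
import numpy as np

def threshold_and_recall(scores_normal, scores_abnormal, q=0.05):
    # Line 3: pick the threshold so that a fraction q of normal scores exceed it.
    tau = np.quantile(np.asarray(scores_normal), 1.0 - q)
    # Line 4: recall on the abnormal validation scores, i.e. 1 - Fa_hat(tau).
    recall = float(np.mean(np.asarray(scores_abnormal) > tau))
    return tau, recall

rng = np.random.default_rng(0)
tau, recall = threshold_and_recall(rng.normal(0, 1, 5000), rng.normal(2, 1, 500))
print(tau, recall)  # tau near 1.64; recall near P(N(2, 1) > 1.64) ~ 0.64
```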
}, { "heading": "3.2 SCORING BIAS", "text": "Given a model class Θ and a training set D, we define the scoring bias of a detector (ŝθ, τ̂θ) to be the difference in TPR between (ŝθ, τ̂θ) and (s∗θ, τ∗θ ):\nbias(ŝθ, τ̂θ) := arg max (sθ,τθ):θ∈Θ TPR(sθ, τθ)− TPR(ŝθ, τ̂θ) (3.3)\nWe call (ŝθ, τ̂θ) a biased detector if bias(ŝθ, τ̂θ) > 0. In practice, due to biased training distribution, and the fact that the two-stage process in Equation 3.2 is not directly optimizing TPR, the resulting anomaly detectors are often biased by construction. Therefore, one practically significant performance measure is the relative bias, defined as the difference in TPR between two anomaly detectors, subject to the constraints in Equation 3.2. It captures the relative strength of two algorithms in detecting anomalies, and therefore is an important indicator for model evaluation and model selection. Formally, given two arbitrary anomaly score functions s, s′ and the corresponding threshold function τ, τ ′ obtained from Algorithm 1, we define the relative scoring bias between s and s′ as:\nξ(s, s′) := bias(s, τ)− bias(s′, τ ′) = TPR(s′, τ ′)− TPR(s, τ) (3.4) Note that when s′ = s∗θ , the relative scoring bias (equation 3.4) reduces to the scoring bias (equation 3.3). We further define the empirical relative scoring bias between s and s′ as\nξ̂(s, s′) := T̂PR(s′, τ ′)− T̂PR(s, τ) (3.5) where T̂PR(s, τ) = 1n ∑n j=1 1s(xj)>τ ;yj=1 denotes the TPR (recall) estimated on a finite validation set of size n. In the following sections, we will investigate both the theoretical properties and the empirical behavior of the empirical relative scoring bias for contemporary anomaly detectors." }, { "heading": "4 FINITE SAMPLE ANALYSIS FOR EMPIRICAL RELATIVE SCORING BIAS", "text": "In this section, we show how one can estimate the relative scoring bias (Equation 3.4) given any two scoring functions s, s′ learned in Section 3.1. As an example, s could be a scoring function induced\nby a semi-unsupervised anomaly detector trained on normal data only, and s′ could be a scoring function induced by a supervised anomaly detector trained on biased anomaly set. In the following, we provide a finite sample analysis of the convergence rate of the empirical relative scoring bias, and validate our theoretical analysis via a case study." }, { "heading": "4.1 FINITE SAMPLE GUARANTEE", "text": "Notations. Concretely, we assume that when determining the threshold in Line 3 of Algorithm 1, both scoring functions s, s′ are evaluated on the unbiased (marginal) empirical distribution of the normal data. Furthermore, the empirical TPR in Line 4 are estimated on the unbiased empirical distribution of the abnormal data. Let {si := s(xi) | xi, yi = 0}n0i=1 denote a set of anomaly scores evaluated by s(·) on n0 i.i.d. random normal data points. Following the notation in Section 3.1, we use F0(t) := P [s(x) ≤ t | y = 0] to denote the CDF of s(x), and use F̂0(t) := 1n0 ∑n0 i=1 1si≤t;yi=0 to denote the corresponding empirical CDF. For n1 i.i.d. samples {sj := s(xj) | xj , yj = 1}n1j=0 with CDF Fa(t) := P [s(x) ≤ t | y = 1], the corresponding emprical CDF is F̂a(t) := 1n1 ∑n1 j=1 1sj≤t;yj=1. Similarly, we denote the CDF and emiprical CDF for {s′i | yi = 0} n0 i=0 as F ′ 0(t) and F̂ ′0(t), and for {s′j | yj = 1} n1 j=0 as as F ′ a(t) and F̂ ′a(t), respectively.\nInfinite sample case. In the limit of infinite data (both normal and abnormal), F̂0, F̂a, F̂ ′0, F̂ ′a will converge to the true CDFs (cf. 
Skorokhod’s representation theorem and Theorem 2A of Parzen (1980)), and hence the empirical relative scoring bias will also converge. The following Proposition establishes a connection between the CDFs and the scoring bias. Proposition 1. Given two scoring functions s, s′ and a target FPR q, the relative scoring bias is ξ(s, s′) = Fa(F−10 (q))− F ′a(F ′0 −1(q)).\nHere, F−1(·) is the quantile function. The proof of Proposition 1 follows from the fact that for corresponding choice of τ, τ ′ in Algorithm 1, TPR(s, τ) = 1 − Fa(F−10 (q)), and TPR(s′, τ ′) = 1− F ′a(F ′0\n−1(q)). Next, a direct corollary of the above result shows that, for the special cases where both the scores for normal and abnormal data are Gaussian distributed, one can directly compute the relative scoring bias. The proof is listed in Appendix A. Corollary 2. Let q be a fixed target FPR. Given two scoring functions s, s′, assume that s(x) | (y = 0) ∼ N (µ0, σ0), s(x) | (y = 1) ∼ N (µa, σa), s′(x) | (y = 0) ∼ N (µ′0, σ′0), s′(x) | (y = 1) ∼ N (µ′a, σ′a). Then, the relative scoring bias\nξ(s, s′) = Φ ( σ0Φ−1(q)\nσa + µ0 − µa σa\n) − Φ ( σ′0Φ−1(q)\nσ′a + µ ′ 0 − µ′a σ′a ) where Φ denotes the CDF of the standard Gaussian.\nFinite sample case. In practical scenarios, when comparing the performance of two scoring functions s and s′, we would only have access to finite samples from the validation set, and hence it is crucial to bound the estimation error due to insufficient samples. We now establish a finite sample guarantee for the estimating the relative scoring bias. Our result extends the analysis of Liu et al. (2018), where we follow the convention to assume that the anomaly data amounts to an α fraction of the mixture distribution. The validation set contains a mixture of n = n0 + n1 i.i.d. samples, with n0 normal samples and n1 abnormal samples where n1n = α. The following result shows that under mild assumptions of the continuity of the CDFs and quantile functions Fa, F ′a, F −1 0 , F ′ 0 −1, the sample complexity for achieving |ξ̂ − ξ| ≤ : Theorem 3. Assume that Fa, F ′a, F −1 0 , F ′ 0 −1 are Lipschitz continuous with Lipschitz constant `a, ` ′ a, ` − 0 , ` ′ 0 −, respectively. Let α be the fraction of abnormal data among n i.i.d. samples from the mixture distribution. Then, w.p. at least 1− δ, with\nn ≥ 8 2 ·\n( log 2\n1− √ 1− δ · ( 2− α α )2 + log 2 δ · 11− α (( `a\n`−0 )2 + ( `′a `′0 − )2)) the empirical relative scoring bias satisfies |ξ̂ − ξ| ≤ .\nWe defer the proof of Theorem 3 to Appendix B. Similarly with the open category alien detection setting as discussed in Liu et al. (2018), the sample complexity for estimating the relative scoring bias n grows as O ( 1 α2 2 log 1 δ ) . Note the analysis of our bound involves a novel two-step process which first bounds the estimation of the threshold for the given FPR, and then leverages the Lipschitz continuity condition to derive the final bound." }, { "heading": "4.2 CASE STUDY", "text": "We conduct a case study to validate our main results above, by training anomaly detection models using a synthetic dataset (Liu et al., 2018) and three real-world datasets. We consider six anomaly detection models listed in Table 1, and they lead to similar results. For brevity, we show results when using Deep SVDD (Ruff et al., 2018) as the baseline model (i.e. trained on normal data only) and Deep SAD (Ruff et al., 2020b) as the semi-supervised model trained on normal and some abnormal data. Later in Appendix D, we include results of other model pairs, including Deep SVDD vs. 
 }, { "heading": "4.2 CASE STUDY", "text": "We conduct a case study to validate our main results above, by training anomaly detection models on a synthetic dataset (Liu et al., 2018) and three real-world datasets. We consider the six anomaly detection models listed in Table 1; they lead to similar results. For brevity, we show results when using Deep SVDD (Ruff et al., 2018) as the baseline model (i.e., trained on normal data only) and Deep SAD (Ruff et al., 2020b) as the supervised model trained on normal and some abnormal data. Later, in Appendix D, we include results for other model pairs, including Deep SVDD vs. Hypersphere Classifier (HSC) (Ruff et al., 2020a), Autoencoder (AE) vs. Semi-supervised Autoencoder (SAE)⁵, and AE vs. ABC (Yamanaka et al., 2019). ⁵We design SAE by forcing the reconstruction errors to be maximized for additional labeled anomalies encountered in training the autoencoder.

Our synthetic dataset. Similar to Liu et al. (2018), we generate our synthetic dataset by sampling data from a mixture distribution S, which w.p. 1 − α draws from the normal data distribution S_0 and w.p. α from the abnormal data distribution S_a. Data points in S_0 are sampled randomly from a 9-dimensional Gaussian distribution, where each dimension is independently distributed as N(0, 1). Data points in S_a are sampled from another 9-dimensional distribution, which w.p. 0.4 has 3 dimensions (uniformly chosen at random) distributed as N(1.6, 0.8), w.p. 0.6 has 4 dimensions (uniformly chosen at random) distributed as N(1.6, 0.8), and has the remaining dimensions distributed as N(0, 1). This ensures meaningful feature relevance, point difficulty and variation for the abnormal data distribution, as discussed in Emmott et al. (2015).
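A sketch of this synthetic mixture follows; reading the second argument of N(1.6, 0.8) as the standard deviation is our assumption.

```python
import numpy as np

def sample_synthetic(n, alpha, d=9, seed=0):
    rng = np.random.default_rng(seed)
    y = (rng.random(n) < alpha).astype(int)          # 1 = abnormal, w.p. alpha
    X = rng.normal(0.0, 1.0, size=(n, d))            # S0: standard Gaussian dimensions
    for i in np.flatnonzero(y):
        k = 3 if rng.random() < 0.4 else 4           # 3 dims w.p. 0.4, else 4 dims
        dims = rng.choice(d, size=k, replace=False)  # chosen uniformly at random
        X[i, dims] = rng.normal(1.6, 0.8, size=k)    # shifted abnormal dimensions
    return X, y

X, y = sample_synthetic(n=11_000, alpha=0.1)
print(X.shape, y.mean())  # roughly 10% abnormal points
```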
We obtain two score functions s and s′ by training Deep SVDD and Deep SAD, respectively, on samples from the synthetic dataset (10K data points from S_0, 1K from S_a). We configure the training, validation and test sets so that there is no data overlap among them; thus the training procedure does not affect the sample complexity for estimating the relative scoring bias. To set the anomaly threshold, we fix the target FPR to be 0.05, and vary the number of normal data points in the validation set n from {100, 1K, 10K}. We then test the score function and threshold on a fixed test dataset with a large number (20K) of normal data points and α × 20K abnormal data points. We vary α over {0.01, 0.05, 0.1, 0.2}.

Real-world datasets. We consider three real-world datasets targeting disjoint subjects: Fashion-MNIST (Xiao et al., 2017) is a collection of images of fashion objects, where we choose some objects as normal and the rest as abnormal; StatLog (Srinivasan, 1993) is a collection of satellite images of various soil types; and Cellular Spectrum Misuse (Li et al., 2019) is a real-world anomaly dataset on cellular spectrum usage, including normal usage and usage under four types of attacks. Detailed descriptions of these datasets and the training configurations are listed in Appendix C. As above, we obtain s, s′, and the anomaly threshold (at a target FPR of 0.05) from these datasets, and test the score function and threshold on their corresponding test datasets and different α values.

Distribution of anomaly scores. We first study the distribution of anomaly scores. Figure 1 is a sample plot of score distributions on the test set with α = 0.1. We plot the scores for normal and abnormal test data separately, for both scoring functions (derived from the Deep SVDD and Deep SAD models, respectively). We make two key observations. First, all the distribution curves follow a rough bell shape. Second and more importantly, while the abnormal score distribution closely mimics the normal score distribution under the model trained on normal data only, it deviates largely from the normal score distribution after training with labeled anomalies (i.e., similar mean but much higher variance). This confirms that training with labeled anomalies does introduce additional bias in anomaly scores.

We also examine the anomaly score distributions for models trained on the real-world anomaly detection datasets, including Fashion-MNIST and Cellular Spectrum Misuse. While the score distributions are less close to Gaussian, we do observe the same trend, where normal and abnormal score distributions become significantly different after training with labeled anomalies. The results are shown in Figures 7 and 8 in Appendix D.

Convergence of the relative scoring bias (ξ̂) and FPR. Next we examine the convergence of the empirical FPR obtained from s′ (the supervised model) and the empirical relative scoring bias ξ̂ (computed as the difference of the empirical TPRs according to Equation 3.5) obtained from s (the semi-supervised model trained on normal data only) and s′ (the supervised model trained with a biased anomaly set). Here we present the convergence results in Figure 2 for the synthetic dataset, in terms of the quantile distribution of ξ̂ between Deep SVDD (semi-supervised) and Deep SAD (supervised) and the quantile distribution of Deep SAD's FPR. Results for other models and the three real-world datasets are in Appendix D, and show consistent trends.

In line with our theoretical results, we observe a consistent trend of convergence in FPR and ξ̂ as the sample size goes up. More specifically, as n goes up, the FPR converges to the prefixed value of 0.05, and ξ̂ also converges to a certain level. We also examine the rate of convergence w.r.t. n. Section 4.1 shows that the n required for estimating ξ̂ grows in the same order as \frac{1}{\alpha^2 \varepsilon^2} \log\frac{1}{\delta}. That is, the estimation error decreases at the rate of 1/\sqrt{n}; furthermore, as α increases, the n required for estimating ξ̂ decreases. This can be seen from Figure 2 (top figure), where at n = 10000 the variation of ξ̂ at α = 0.2 is 50% less than that at α = 0.01."
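In the spirit of Figure 2, the following small simulation uses Gaussian stand-in scores of our own choosing to show the spread of ξ̂ shrinking roughly like 1/√n, and shrinking faster for larger α.

```python
import numpy as np

rng = np.random.default_rng(0)
q, trials = 0.05, 200

def xi_hat(n, alpha):
    n1 = max(int(alpha * n), 1)
    n0 = n - n1
    def recall(mu, sigma):
        tau = np.quantile(rng.normal(0, 1, n0), 1.0 - q)  # threshold on normal scores
        return np.mean(rng.normal(mu, sigma, n1) > tau)
    return recall(2.0, 1.2) - recall(1.5, 1.0)            # stand-ins for s' and s

for alpha in (0.01, 0.2):
    for n in (100, 1_000, 10_000):
        est = np.array([xi_hat(n, alpha) for _ in range(trials)])
        print(alpha, n, round(est.mean(), 3), round(est.std(), 3))  # std falls roughly as 1/sqrt(n)
```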
 }, { "heading": "5 IMPACT OF SCORING BIAS ON ANOMALY DETECTION PERFORMANCE", "text": "We perform empirical experiments to study the end-to-end impact of relative scoring bias on deep anomaly detection models. Our goal is to understand the type and severity of performance variations introduced by different anomaly training sets.

Experiment setup. We consider the six deep anomaly detection models previously listed in Table 1, and three real-world datasets: Fashion-MNIST, Statlog (Landsat Satellite) and Cellular Spectrum Misuse. For each dataset, we build the normal data by choosing a single class (e.g., top in Fashion-MNIST, normal in Cellular Spectrum Misuse), and treat the other classes as the abnormal classes. Note that Cellular Spectrum Misuse is a real-world anomaly dataset where the abnormal classes are attacks against today's cellular networks (Li et al., 2019). From those abnormal classes, we pick a single class as the abnormal training data, and the rest as the abnormal test data, on which we test separately. Given the data, we train θ_0 := (s_{θ_0}, τ_{θ_0}), a semi-supervised anomaly detector using normal training data, and θ_s := (s_{θ_s}, τ_{θ_s}), a supervised anomaly detector using both normal and abnormal training data (with a 10:1 normal vs. anomaly ratio). We follow the original paper of each model to implement the model and its training. For each trained model, we configure the anomaly score threshold to reach a target false positive rate (FPR) of 0.05. We then test these trained models against the various abnormal test classes, and record the recall (TPR) value for each abnormal test class. We repeat the above by selecting different abnormal training data. Detailed descriptions of these datasets and training configurations are listed in Appendix C.

We evaluate the potential bias introduced by different abnormal training data by comparing the model recall (TPR) of both θ_0 and θ_s against different abnormal test data. We define the bias to be upward (↑) if TPR(θ_s) > TPR(θ_0), and downward (↓) if TPR(θ_s) < TPR(θ_0). We group our experiments into three scenarios: (1) when the abnormal training data is visually similar to the normal training data; (2) when the abnormal training data is visually dissimilar to the normal training data; and (3) when the abnormal training data is a weighted combination of (1) and (2). Here we compute visual similarity as the L2 distance. The similarity results are listed in Appendix E.
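One simple way to realize this similarity measure (our reading: the L2 distance between per-class mean images) is sketched below with random stand-in data.

```python
import numpy as np

def class_l2_distance(class_a, class_b):
    # Smaller distance between the class mean images = more visually similar.
    return float(np.linalg.norm(class_a.mean(axis=0) - class_b.mean(axis=0)))

rng = np.random.default_rng(0)
top = rng.random((100, 28 * 28))
shirt = np.clip(top + rng.normal(0, 0.05, top.shape), 0, 1)  # visually close to top
sneaker = rng.random((100, 28 * 28))                         # unrelated class
print(class_l2_distance(top, shirt), class_l2_distance(top, sneaker))
```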
Finally, the same applies to another example of Cellular Spectrum Misuse in Figure 3(d), where the abnormal training class is WB-los, which is quite different from the normal data. In this case, we observe little change to the model recall.\nScenario 3: Mixed abnormal training data. We run three configurations of group training on Fashion-MNIST (normal: top; abnormal: shirt & sneaker) by varying the weights of the two abnormal classes in training (0.5/0.5, 0.9/0.1, 0.1/0.9). The detailed results for each weight configuration are listed in Appendix E. Overall, the use of group training does improve the model performance. However, under all three weight configurations, we observe a consistent pattern of downward bias for one abnormal test class (trouser) and upward bias for most other abnormal classes. Note that trouser is relatively more dissimilar to both training abnormal classes.\nSummary of observations. Our empirical study shows that a biased (anomaly) training set can significantly impact deep anomaly detection, especially regarding whether the use of labeled anomalies in training helps detect unseen anomalies. When the labeled anomalies are similar to the normal instances, the trained model will likely face large performance degradation on unseen anomalies different from the labeled anomalies, but improvement on those similar to the labeled anomalies. Yet when the labeled anomalies are dissimilar to the normal instances, the supervised model is more useful than its semi-supervised version. Such difference in model behavior is likely because different types of abnormal training data affect the training distribution (and thus the scoring function) differently. In particular, when the labeled anomalies are similar to the normal data, they lead to large changes to the scoring function and affect the detection of unseen anomalies “unevenly”. Overall, our results suggest that model trainers must treat labeled anomalies with care." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "To the best of our knowledge, our work provides the first formal analysis of how a biased anomaly training set affects deep anomaly detection. We define and formulate its impact on an anomaly detector’s recall (or TPR) as the relative scoring bias of the detector when compared to its semi-supervised baseline trained on only normal data. We then establish the first finite-sample rates for estimating the relative scoring bias for supervised anomaly detection, and empirically validate our theoretical results on both synthetic and real-world datasets. We also empirically study how such relative scoring bias translates into variance in detector performance against different unseen anomalies, and demonstrate scenarios in which the biased anomaly set can be useful or harmful. Our work exposes a new challenge in training deep anomaly detection models, especially when labeled abnormal data becomes available. An open question is how to construct an unbiased anomaly detector, even when having access to the true anomaly distribution. As future work, we plan to develop new training procedures that can leverage labeled anomalies to exploit upward scoring bias while avoiding downward scoring bias." }, { "heading": "A PROOF OF COROLLARY 2", "text": "Proof of Corollary 2. Assuming the score functions are Gaussian distributed, we denote F0(s) as Φ((s − µ0)/σ0), F̃0(s) as Φ((s − µ̃0)/σ̃0), Fa(s) as Φ((s − µa)/σa), and F̃a(s) as Φ((s − µ̃a)/σ̃a).\nTherefore, we have ∆0 = |(σ0 Φ⁻¹(q) + µ0) − (σ̃0 Φ⁻¹(q) + µ̃0)|.
Thus,\nξN := r̃ − r = Fa(F0⁻¹(q)) − F̃a(F̃0⁻¹(q)) = Φ((σ0 Φ⁻¹(q) + µ0 − µa)/σa) − Φ((σ̃0 Φ⁻¹(q) + µ̃0 − µ̃a)/σ̃a)." }, { "heading": "B PROOF OF THEOREM 3", "text": "Proof of Theorem 3. Our proof builds upon and extends the analysis framework of Liu et al. (2018), which relies on one key result from Massart (1990):\nP[√n sup_x |F̂(x) − F(x)| > λ] ≤ 2 exp(−2λ²). (B.1)\nHere, F̂(x) is the empirical CDF calculated from n samples. Given a fixed threshold function, Liu et al. (2018) showed that it requires\nn > (1/(2ε₁²)) log(2/(1 − √(1 − δ))) · ((2 − α)/α)² (B.2)\nexamples in order to guarantee |F̂a(x) − Fa(x)| ≤ ε₁ with probability at least 1 − δ (recall that α denotes the fraction of abnormal data among the n samples).\nHere we note that our proof relies on a novel adaptation of Equation B.1, which allows us to extend our analysis to the convergence of quantile functions.\nTo achieve this goal, we further assume Lipschitz continuity of the CDFs/quantile functions:\n|Fa(x) − Fa(x′)| ≤ ℓa |x − x′|, (B.3)\n|F′a(x) − F′a(x′)| ≤ ℓ′a |x − x′|, (B.4)\n|F0⁻¹(x) − F0⁻¹(x′)| ≤ ℓ0⁻ |x − x′|, (B.5)\n|F′0⁻¹(x) − F′0⁻¹(x′)| ≤ ℓ′0⁻ |x − x′|. (B.6)\nCombining inequality B.5 with equation B.1 (applied to the n0 normal samples), we obtain\nP[sup_q |F̂0⁻¹(q) − F0⁻¹(q)| ≥ λ/√n0] ≤ P[sup_q |F0(F̂0⁻¹(q)) − F0(F0⁻¹(q))| ≥ λ/(√n0 ℓ0⁻)]. (B.7)\nLetting q = F̂0(x), equation B.7 becomes\nP[|F0(F̂0⁻¹(F̂0(x))) − F0(F0⁻¹(F̂0(x)))| ≥ λ/(√n0 ℓ0⁻)] = P[|F0(x) − F̂0(x)| ≥ λ/(√n0 ℓ0⁻)] ≤ 2 exp(−2 (λ/ℓ0⁻)²). (B.8)\nTherefore, in order for P[sup_q |F̂0⁻¹(q) − F0⁻¹(q)| ≥ ε₂] ≤ δ to hold, it suffices to set\nn0 = n(1 − α) > ((ℓ0⁻)²/(2ε₂²)) log(2/δ). (B.9)\nFurthermore, combining equation B.1, equation B.2, and equation B.3, we get\nF̂a(τn) ≤ Fa(τ) + (τ − τn) ℓa + ε₁. (B.10)\nSubstituting τn = F̂0⁻¹(q) and τ = F0⁻¹(q) in the above inequality, and setting ε₁ = ε/4 and ε₂ = ε/(4ℓa), we get\n|F̂a(F̂0⁻¹(q)) − Fa(F0⁻¹(q))| ≤ ε/2. (B.11)\nSimilarly, we repeat the same procedure for s′ and get\n|F̂′a(F̂′0⁻¹(q)) − F′a(F′0⁻¹(q))| ≤ ε/2 (B.12)\nwith probability at least 1 − δ.\nTherefore, with\nn ≥ (8/ε²) · (log(2/(1 − √(1 − δ))) · ((2 − α)/α)² + log(2/δ) · (1/(1 − α)) · ((ℓa ℓ0⁻)² + (ℓ′a ℓ′0⁻)²))\nexamples we can get |ξ̂ − ξ| ≤ ε with probability at least 1 − δ." }, { "heading": "C THREE REAL-WORLD DATASETS AND TRAINING CONFIGURATIONS", "text": "Fashion-MNIST. This dataset (Xiao et al., 2017) is a collection of 70K grayscale images of fashion objects (a training set of 60K examples and a test set of 10K examples), evenly divided into 10 classes (7000 images per class). Each image is 28 pixels in height and 28 pixels in width, and each pixel value is an integer between 0 and 255. The 10 classes are denoted as top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, boot.\nTo train the anomaly detection models, we pick one class as the normal training class, another class as the abnormal training class, and the rest as the abnormal testing classes. We use the full training set of the normal class (6K), and a random 10% of the training set of the abnormal training class (600) to train the deep anomaly detection models. We use the test data of the normal class (1K) to configure the anomaly scoring thresholds to meet a 5% false positive rate (FPR). We then test the models on the full data of each abnormal testing class, as well as on the untrained fraction of the abnormal training class.\nStatLog (Landsat Satellite). This dataset (Srinivasan, 1993) is a collection of 6,435 NASA satellite images, each of 82 × 100 pixels, valued between 0 and 255.
The six labeled classes are denoted as red soil, cotton crop, grey soil, damp grey soil, soil with vegetation stubble, and very damp grey soil. Unless specified otherwise, we follow the same procedure to train the models. The normal training data includes 80% of the data of the designated class, and the abnormal training data is 10% of the normal training data in size. Due to the limited amount of data, we use the full normal data to configure the anomaly scoring thresholds to meet a 5% FPR. We then test the models on the full data of each abnormal testing class, as well as the untrained fraction of the abnormal training class.\nCellular Spectrum Misuse. This real-world anomaly dataset measures cellular spectrum usage under both normal scenarios and in the presence of misuses (or attacks) (Li et al., 2019). We obtained the dataset from the authors of Li et al. (2019). The dataset includes a large set (100K instances) of real cellular spectrum measurements in the form of spectrograms (the time-frequency pattern of the received signal). Each spectrogram (instance) is a 125×128 matrix, representing the signal measured over 125 time steps and 128 frequency subcarriers. The dataset includes five classes: normal (normal usage in the absence of misuse) and four misuse classes: WB-los (wideband attack w/o blockage), WB-nlos (wideband attack w/ blockage), NB-10ms (narrowband attack) and NB-5ms (narrowband attack with a different signal). The sample size is 60K for normal and 10K for each abnormal class. To train the models, we randomly sample 20K instances from normal and 2K instances from one abnormal class, and configure the anomaly score thresholds to meet a 5% FPR." }, { "heading": "D ADDITIONAL EXPERIMENT RESULTS OF SECTION 4.2", "text": "Anomaly score distributions for models trained on the synthetic dataset. Figures 4–6 plot the anomaly score distributions for semi-supervised (left figure) and supervised (right figure) models, trained on the synthetic dataset (α = 0.1, n = 10000), estimated by kernel density estimation.\nAnomaly score distributions for models trained on real-world datasets. Figures 7 and 8 plot the anomaly score distributions for semi-supervised and supervised models, when trained on Fashion-MNIST and Cellular Spectrum Misuse, respectively.\nConvergence of relative scoring bias ξ̂ and FPR on the synthetic dataset. We plot in Figures 9–11 the additional results for (Deep SVDD vs. HSC), (AE vs. SAE), and (AE vs. ABC). Experiment settings are described in Section 4.2. Overall, they show a consistent trend of convergence.\nConvergence of relative scoring bias ξ̂ and FPR on Cellular Spectrum Misuse. We plot the results in Figures 13 and 14 below.\nFigure 13: The quantile distribution of (top) relative scoring bias ξ̂ and (bottom) FPR, computed on the test set over 1000 runs, for Deep SVDD and HSC trained on Cellular Spectrum Misuse.\nFigure 14: The quantile distribution of (top) relative scoring bias ξ̂ and (bottom) FPR, computed on the test set over 1000 runs, for AE and SAE trained on Cellular Spectrum Misuse.\nConvergence of relative scoring bias ξ̂ and FPR on Fashion-MNIST. Figure 16 plots the convergence of ξ̂ and FPR for Deep SVDD vs. Deep SAD models trained on Fashion-MNIST. The results for other model combinations are consistent and thus omitted. Here we set the normal class as top and the abnormal class as shirt, configure the sample size for the training set as 3K and for the test set as 2K, and vary the sample size n of the validation set over 100, 200, 500, and 1K.
Overall, the plots show a consistent trend of convergence.\nConvergence of relative scoring bias ξ̂ and FPR on StatLog. Figure 17 plots ξ̂ and FPR of Deep SVDD vs. Deep SAD trained on the StatLog dataset. The results for other model combinations are consistent and thus omitted. To maintain a reasonable sample size, we set the normal class to be a combination of grey soil, damp grey soil and very damp grey soil, and the abnormal class to be a combination of red soil and cotton crop. The sample size is 1.2K for the training set and 1K for the test set. We vary the sample size n of the validation set over 100, 200, 500, and 1K. Overall, the plots show a consistent trend of convergence." }, { "heading": "E ADDITIONAL RESULTS OF SECTION 5", "text": "Scenario 1. Here the training normal set is visually similar to the training abnormal set. We include the detailed recall results (mean/std over 100 runs) of all six models and three real-life datasets in Tables 2–4. Across the six models, the two semi-supervised models (trained on normal data) are Deep SVDD and AE; the rest are supervised models trained on both normal data and the specified abnormal training data.\nIn each table, we report the model recall on all abnormal test classes. These abnormal test classes are sorted by decreasing similarity to the abnormal training class (measured by L2; a small value means visually similar). Also, ↑ indicates that the supervised model has a higher recall than the semi-supervised model; ↓ indicates the other direction. Overall we observe both upward and downward bias across the test abnormal classes, and the direction depends on the test abnormal class’ similarity to the training abnormal class.\nWe also observe that when using the reconstruction-based models (AE, SAE, ABC), the performance on StatLog is much worse than with the hypersphere-based models. This result is in fact consistent with what has been reported in the literature—Ishii et al. (2020) report a similarly low performance of a reconstruction-based model on StatLog, which was trained on normal data (cf. Table 4 of Ishii et al. (2020)). We consider this a result potentially arising from the specific latent representations learned by the reconstruction-based models, and leave improvement of these reconstruction models to future work.\nIt is worth highlighting that although these reconstruction-based models demonstrate inferior performance on StatLog when training only on normal data, adding an abnormal training set under Scenario 1 does produce a behavior similar to that of the hypersphere-based models. When training the SAE model with (biased) anomaly data, we handle the exploding loss issue in a similar way to Ruff et al. (2020b): for the anomaly class, we consider a loss function that takes the form of 1/(reconstruction error); a minimal sketch is given after the Scenario 2 discussion below. We found that this design of loss function converges easily in practice, with few loss explosion issues on reconstruction-based models.\nScenario 2. We consider Scenario 2, where the training normal set is visually dissimilar to the training abnormal set. The detailed TPR results of all six models and three real-life datasets are in Tables 5–7. As before, in each table the abnormal test classes are sorted by decreasing similarity to the abnormal training class, ↑ indicates that the supervised model has a higher recall than the semi-supervised model, and ↓ indicates the other direction. Different from Scenario 1, here we observe mostly upward changes.
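As promised above, here is a minimal sketch of the inverted reconstruction loss used for labeled anomalies when training SAE (our own illustration in the spirit of Ruff et al. (2020b); eps is an added numerical stabilizer that is not part of the paper):

```python
# Sketch of the SAE training loss: reconstruction error for normal samples,
# inverted reconstruction error (1/error) for labeled anomalies.
import numpy as np

def sae_loss(x, x_hat, is_anomaly, eps=1e-6):
    """Mean loss over a batch; is_anomaly is a boolean array per sample."""
    axes = tuple(range(1, x.ndim))                 # average over non-batch axes
    err = np.mean((x - x_hat) ** 2, axis=axes)     # per-sample reconstruction error
    # eps (our own addition) guards against division by zero for perfect
    # reconstructions of anomalous samples.
    return float(np.where(is_anomaly, 1.0 / (err + eps), err).mean())
```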
Again we observe poorer performance of AE, SAE, and ABC on StatLog compared to the hypersphere-based models.\nScenario 3. We run three configurations of grouped abnormal training on Fashion-MNIST (training normal: top; training abnormal: shirt & sneaker) by varying the weights of the two abnormal classes in training (0.5/0.5, 0.9/0.1, 0.1/0.9). Again, ↑ indicates that the supervised model has a higher recall than the semi-supervised model; ↓ indicates the other direction. Under these settings, we observe downward bias (↓) for one abnormal test class (trouser) and upward bias for most other classes." } ]
2020
null
SP:cf6c9061542bf9c43a968faa574ce03ad71a859a
[ "The authors present an approach for testing calibration in conditional probability estimation models. They build on a line of work in the kernel estimation literature assessing whether the conditional distributions are well calibrated (i.e. P(Y | f(X)) = f(X), where f is some predictive model). They develop an MMD kernel estimator and expand on practical choices of kernels that are computationally tractable. They then derive an asymptotic null distribution for calibrated models, enabling control over the error rate when labeling a model uncalibrated. A few simulation studies are done with neural networks to show the applicability of the method." ]
Most supervised machine learning tasks are subject to irreducible prediction errors. Probabilistic predictive models address this limitation by providing probability distributions that represent a belief over plausible targets, rather than point estimates. Such models can be a valuable tool in decision-making under uncertainty, provided that the model output is meaningful and interpretable. Calibrated models guarantee that the probabilistic predictions are neither over- nor under-confident. In the machine learning literature, different measures and statistical tests have been proposed and studied for evaluating the calibration of classification models. For regression problems, however, research has been focused on a weaker condition of calibration based on predicted quantiles for real-valued targets. In this paper, we propose the first framework that unifies calibration evaluation and tests for general probabilistic predictive models. It applies to any such model, including classification and regression models of arbitrary dimension. Furthermore, the framework generalizes existing measures and provides a more intuitive reformulation of a recently proposed framework for calibration in multi-class classification. In particular, we reformulate and generalize the kernel calibration error, its estimators, and hypothesis tests using scalar-valued kernels, and evaluate the calibration of real-valued regression problems.1
[ { "affiliations": [], "name": "David Widmann" }, { "affiliations": [], "name": "Fredrik Lindsten" } ]
[ { "authors": [ "M.A. Arcones", "E. Giné" ], "title": "On the bootstrap of U and V statistics", "venue": "The Annals of Statistics,", "year": 1992 }, { "authors": [ "C. Berg", "J.P.R. Christensen", "P. Ressel" ], "title": "Harmonic Analysis on Semigroups", "venue": null, "year": 1984 }, { "authors": [ "J. Bröcker", "L.A. Smith" ], "title": "Increasing the reliability of reliability diagrams", "venue": "Weather and Forecasting,", "year": 2007 }, { "authors": [ "Jochen Bröcker" ], "title": "Reliability, sufficiency, and the decomposition of proper scores", "venue": "Quarterly Journal of the Royal Meteorological Society,", "year": 2009 }, { "authors": [ "Y. Chen", "T.T. Georgiou", "A. Tannenbaum" ], "title": "Optimal transport for Gaussian mixture models", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Y. Chen", "J. Ye", "J. Li" ], "title": "Aggregated Wasserstein distance and state registration for hidden Markov models", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "K. Chwialkowski", "A. Ramdas", "D. Sejdinovic", "A. Gretton" ], "title": "Fast two-sample testing with analytic representations of probability measures", "venue": "In Proceedings of the 28th International Conference on Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "M.H. DeGroot", "S.E. Fienberg" ], "title": "The comparison and evaluation of forecasters", "venue": "The Statistician,", "year": 1983 }, { "authors": [ "C. Deledalle", "S. Parameswaran", "T.Q. Nguyen" ], "title": "Image denoising with generalized Gaussian mixture model patch priors", "venue": "SIAM Journal on Imaging Sciences,", "year": 2018 }, { "authors": [ "J. Delon", "A. Desolneux" ], "title": "A Wasserstein-type distance in the space of Gaussian mixture models", "venue": "SIAM Journal on Imaging Sciences,", "year": 2020 }, { "authors": [ "R.M. Dudley" ], "title": "Real analysis and probability", "venue": "Wadsworth & Brooks/Cole Pub. Co, Pacific Grove, Calif,", "year": 1989 }, { "authors": [ "M. Fasiolo", "S.N. Wood", "M. Zaffran", "R. Nedellec", "Y. Goude" ], "title": "Fast calibrated additive quantile regression", "venue": "Journal of the American Statistical Association,", "year": 2020 }, { "authors": [ "J.H. Friedman" ], "title": "A tree-structured approach to nonparametric multiple regression", "venue": "In Lecture Notes in Mathematics,", "year": 1979 }, { "authors": [ "J.H. Friedman" ], "title": "Multivariate adaptive regression splines", "venue": "The Annals of Statistics,", "year": 1991 }, { "authors": [ "J.H. Friedman", "E. Grosse", "W. Stuetzle" ], "title": "Multidimensional additive spline approximation", "venue": "SIAM Journal on Scientific and Statistical Computing,", "year": 1983 }, { "authors": [ "K. Fukumizu", "F.R. Bach", "M.I. Jordan" ], "title": "Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "K. Fukumizu", "A. Gretton", "X. Sun", "B. Schölkopf" ], "title": "Kernel measures of conditional dependence", "venue": "In Advances in Neural Information Processing Systems", "year": 2008 }, { "authors": [ "K. Fukumizu", "L. Song", "A. Gretton" ], "title": "Kernel Bayes’ rule: Bayesian inference with positive definite kernels", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "A. Genevay", "M. Cuturi", "G. Peyré", "F.R. 
Bach" ], "title": "Stochastic optimization for large-scale optimal transport", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "X. Glorot", "Y. Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "E. Gómez", "M.A. Gómez-Viilegas", "J.M. Marín" ], "title": "A multivariate generalization of the power exponential family of distributions", "venue": "Communications in Statistics - Theory and Methods,", "year": 1998 }, { "authors": [ "E. Gómez-Sánchez-Manzano", "M.A. Gómez-Villegas", "J.M. Marín" ], "title": "Multivariate exponential power distributions as mixtures of normal distributions with Bayesian applications", "venue": "Communications in Statistics - Theory and Methods,", "year": 2008 }, { "authors": [ "A. Gretton", "K. Borgwardt", "M. Rasch", "B. Schölkopf", "A.J. Smola" ], "title": "A kernel method for the two-sample-problem", "venue": "In Advances in Neural Information Processing Systems", "year": 2007 }, { "authors": [ "A. Gretton", "K. Fukumizu", "Z. Harchaoui", "B.K. Sriperumbudur" ], "title": "A fast, consistent kernel twosample test", "venue": "In Advances in Neural Information Processing Systems", "year": 2009 }, { "authors": [ "C. Guo", "G. Pleiss", "Y. Sun", "K.Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "F.K. Gustafsson", "M. Danelljan", "T.B. Schön" ], "title": "Evaluating scalable Bayesian deep learning methods for robust computer vision", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2020 }, { "authors": [ "Y.H.S. Ho", "S.M.S. Lee" ], "title": "Calibrated interpolated confidence intervals for population quantiles", "venue": null, "year": 2005 }, { "authors": [ "W. Hoeffding" ], "title": "A class of statistics with asymptotically normal distribution", "venue": "The Annals of Mathematical Statistics,", "year": 1948 }, { "authors": [ "H. Hotelling" ], "title": "The generalization of student’s ratio", "venue": "The Annals of Mathematical Statistics,", "year": 1931 }, { "authors": [ "M. Innes" ], "title": "Flux: Elegant machine learning with Julia", "venue": "Journal of Open Source Software,", "year": 2018 }, { "authors": [ "M. Innes", "E. Saba", "K. Fischer", "D. Gandhi", "M.C. Rudilosso", "N.M. Joy", "T. Karmali", "A. Pal", "V. Shah" ], "title": "Fashionable modelling with Flux, 2018", "venue": null, "year": 2018 }, { "authors": [ "W. Jitkrittum", "Z. Szabó", "K.P. Chwialkowski", "A. Gretton" ], "title": "Interpretable distribution features with maximum testing power", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "N.L. Johnson", "S. Kotz", "N. Balakrishnan" ], "title": "Continuous univariate distributions: Vol", "venue": null, "year": 1994 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR (Poster),", "year": 2015 }, { "authors": [ "V. Kuleshov", "N. Fenner", "S. Ermon" ], "title": "Accurate uncertainties for deep learning using calibrated regression", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "M. Kull", "T. Silva Filho", "P. 
Flach" ], "title": "Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers", "venue": "In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "M. Kull", "M. Perello Nieto", "M. Kängsepp", "T. Silva Filho", "H. Song", "P. Flach" ], "title": "Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with Dirichlet calibration", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "A. Kumar", "S. Sarawagi", "U. Jain" ], "title": "Trainable calibration measures for neural networks from kernel mean embeddings", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "A.M. Mathai", "S.B. Provost" ], "title": "Quadratic forms in random variables: Theory and applications, volume 126", "venue": null, "year": 1992 }, { "authors": [ "C.A. Micchelli", "M. Pontil" ], "title": "On learning vector-valued functions", "venue": "Neural Computation,", "year": 2005 }, { "authors": [ "A. Müller" ], "title": "Integral probability metrics and their generating classes of functions", "venue": "Advances in Applied Probability,", "year": 1997 }, { "authors": [ "A.H. Murphy", "R.L. Winkler" ], "title": "Reliability of subjective probability forecasts of precipitation and temperature", "venue": "Applied Statistics,", "year": 1977 }, { "authors": [ "M.P. Naeini", "G. Cooper", "M. Hauskrecht" ], "title": "Obtaining well calibrated probabilities using Bayesian binning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "J. Park", "K. Muandet" ], "title": "A measure-theoretic approach to kernel conditional mean embeddings", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "G. Peyré", "M. Cuturi" ], "title": "Computational optimal transport", "venue": "Foundations and Trends in Machine Learning,", "year": 2019 }, { "authors": [ "J. Platt" ], "title": "Probabilities for SV Machines, pp. 61–73", "venue": null, "year": 2000 }, { "authors": [ "Y. Ren", "J. Zhu", "J. Li", "Y. Luo" ], "title": "Conditional generative moment-matching networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "M. Rueda", "S. Martínez-Puertas", "H. Martínez-Puertas", "A. Arcos" ], "title": "Calibration methods for estimating quantiles", "venue": "Metrika,", "year": 2006 }, { "authors": [ "R.J. Serfling (ed" ], "title": "Approximation Theorems of Mathematical Statistics", "venue": null, "year": 1980 }, { "authors": [ "H. Song", "T. Diethe", "M. Kull", "P. Flach" ], "title": "Distribution calibration for regression", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "L. Song", "J. Huang", "A.J. Smola", "K. Fukumizu" ], "title": "Hilbert space embeddings of conditional distributions with applications to dynamical systems", "venue": "In Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "L. Song", "K. Fukumizu", "A. Gretton" ], "title": "Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models", "venue": "IEEE Signal Processing Magazine,", "year": 2013 }, { "authors": [ "B.K. Sriperumbudur", "K. Fukumizu", "A. Gretton", "B. Schölkopf", "G.R.G. 
Lanckriet" ], "title": "On integral probability metrics, φ-divergences and binary classification", "venue": null, "year": 2009 }, { "authors": [ "B.K. Sriperumbudur", "K. Fukumizu", "G.R.G. Lanckriet" ], "title": "Universality, characteristic kernels and RKHS embedding of measures", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "B.K. Sriperumbudur", "K. Fukumizu", "A. Gretton", "B. Schölkopf", "G.R.G. Lanckriet" ], "title": "On the empirical estimation of integral probability metrics", "venue": "Electronic Journal of Statistics,", "year": 2012 }, { "authors": [ "Z. Szabó", "B.K. Sriperumbudur" ], "title": "Characteristic and universal tensor product kernels", "venue": "Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "M. Taillardat", "O. Mestre", "M. Zamo", "P. Naveau" ], "title": "Calibrated ensemble forecasts using quantile regression forests and ensemble model output statistics", "venue": "Monthly Weather Review,", "year": 2016 }, { "authors": [ "J. Vaicenavicius", "D. Widmann", "C. Andersson", "F. Lindsten", "J. Roll", "T.B. Schön" ], "title": "Evaluating model calibration in classification", "venue": "In Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "A.W. van der Vaart" ], "title": "Asymptotic Statistics", "venue": null, "year": 1998 }, { "authors": [ "C. Villani" ], "title": "Optimal Transport", "venue": null, "year": 2009 }, { "authors": [ "D. Widmann", "F. Lindsten", "D. Zachariah" ], "title": "Calibration tests in multi-class classification: A unifying framework", "venue": "In Proceedings of the 32th International Conference on Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "S.J. Yakowitz", "J.D. Spragins" ], "title": "On the identifiability of finite mixtures", "venue": "The Annals of Mathematical Statistics,", "year": 1968 }, { "authors": [ "Bianca Zadrozny" ], "title": "Reducing multiclass to binary by coupling probability estimates", "venue": "In Advances in Neural Information Processing Systems", "year": 2002 }, { "authors": [ "W. Zaremba", "A. Gretton", "M. Blaschko" ], "title": "B-test: A non-parametric, low variance kernel two-sample test", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Chwialkowski" ], "title": "UCMEk,J = 0 almost surely if and only if the two distributions are equal. C.2 ESTIMATION Again we assume (PX1", "venue": null, "year": 2015 }, { "authors": [ "P × P → R", "kY : Y × Y" ], "title": "R are kernels on the spaces of predicted distributions and targets, respectively. As discussed in Section 3.1, if kernel k is characteristic, then the kernel calibration error KCEk of model P is zero if and only if P is calibrated", "venue": null, "year": 2018 }, { "authors": [ "Widmann" ], "title": "Their formulation of the calibration error", "venue": null, "year": 2019 }, { "authors": [ "− vPX" ], "title": "2019) for matrix-valued kernels. As a concrete example, Widmann et al. (2019) used a matrix-valued kernel of the form (p", "venue": "(−γ‖p−", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "We consider the general problem of modelling the relationship between a featureX and a target Y in a probabilistic setting, i.e., we focus on models that approximate the conditional probability distribution P(Y |X) of target Y for given feature X . The use of probabilistic models that output a probability distribution instead of a point estimate demands guarantees on the predictions beyond accuracy, enabling meaningful and interpretable predicted uncertainties. One such statistical guarantee is calibration, which has been studied extensively in metereological and statistical literature (DeGroot & Fienberg, 1983; Murphy & Winkler, 1977).\nA calibrated model ensures that almost every prediction matches the conditional distribution of targets given this prediction. Loosely speaking, in a classification setting a predicted distribution of the model is called calibrated (or reliable), if the empirically observed frequencies of the different classes match the predictions in the long run, if the same class probabilities would be predicted repeatedly. A classical example is a weather forecaster who predicts each day if it is going to rain on the next day. If she predicts rain with probability 60% for a long series of days, her forecasting model is calibrated for predictions of 60% if it actually rains on 60% of these days.\nIf this property holds for almost every probability distribution that the model outputs, then the model is considered to be calibrated. Calibration is an appealing property of a probabilistic model since it\n1The source code of the experiments is available at https://github.com/devmotion/ Calibration_ICLR2021.\nprovides safety guarantees on the predicted distributions even in the common case when the model does not predict the true distributions P(Y |X). Calibration, however, does not guarantee accuracy (or refinement)—a model that always predicts the marginal probabilities of each class is calibrated but probably inaccurate and of limited use. On the other hand, accuracy does not imply calibration either since the predictions of an accurate model can be too over-confident and hence miscalibrated, as observed, e.g., for deep neural networks (Guo et al., 2017).\nIn the field of machine learning, calibration has been studied mainly for classification problems (Bröcker, 2009; Guo et al., 2017; Kull et al., 2017; 2019; Kumar et al., 2018; Platt, 2000; Vaicenavicius et al., 2019; Widmann et al., 2019; Zadrozny, 2002) and for quantiles and confidence intervals of models for regression problems with real-valued targets (Fasiolo et al., 2020; Ho & Lee, 2005; Kuleshov et al., 2018; Rueda et al., 2006; Taillardat et al., 2016). In our work, however, we do not restrict ourselves to these problem settings but instead consider calibration for arbitrary predictive models. Thus, we generalize the common notion of calibration as:\nDefinition 1. Consider a model PX := P (Y |X) of a conditional probability distribution P(Y |X). Then model P is said to be calibrated if and only if\nP(Y |PX) = PX almost surely. (1)\nIf P is a classification model, Definition 1 coincides with the notion of (multi-class) calibration by Bröcker (2009); Kull et al. (2019); Vaicenavicius et al. (2019). Alternatively, in classification some authors (Guo et al., 2017; Kumar et al., 2018; Naeini et al., 2015) study the strictly weaker property of confidence calibration (Kull et al., 2019), which only requires\nP (Y = arg maxPX |maxPX) = maxPX almost surely. 
(2)\nThis notion of calibration corresponds to calibration according to Definition 1 for a reduced problem with binary targets Ỹ := 1(Y = arg max PX) and Bernoulli distributions P̃X := Ber(max PX) as probabilistic models.\nFor real-valued targets, Definition 1 coincides with the so-called distribution-level calibration by Song et al. (2019). Distribution-level calibration implies that the predicted quantiles are calibrated, i.e., the outcomes for all real-valued predictions of the, e.g., 75% quantile are actually below the predicted quantile with 75% probability (Song et al., 2019, Theorem 1). Conversely, although quantile-based calibration is a common approach for real-valued regression problems (Fasiolo et al., 2020; Ho & Lee, 2005; Kuleshov et al., 2018; Rueda et al., 2006; Taillardat et al., 2016), it provides weaker guarantees on the predictions. For instance, the linear regression model in Fig. 1 empirically shows quantiles that appear close to being calibrated despite being uncalibrated according to Definition 1.\nFigure 1 also raises the question of how to assess calibration for general target spaces in the sense of Definition 1, without having to rely on visual inspection. In classification, measures of calibration such as the commonly used expected calibration error (ECE) (Guo et al., 2017; Kull et al., 2019; Naeini et al., 2015; Vaicenavicius et al., 2019) and the maximum calibration error (MCE) (Naeini et al., 2015) try to capture the average and maximal discrepancy between the distributions on the left hand side and the right hand side of Eq. (1) or Eq. (2), respectively. These measures can be generalized to other target spaces (see Definition B.1), but unfortunately estimating these calibration errors from observations of features and corresponding targets is problematic. Typically, the predictions are different for (almost) all observations, and hence estimation of the conditional probability P (Y |PX), which is needed in the estimation of ECE and MCE, is challenging even for low-dimensional target spaces and usually leads to biased and inconsistent estimators (Vaicenavicius et al., 2019).\nKernel-based calibration errors such as the maximum mean calibration error (MMCE) (Kumar et al., 2018) and the kernel calibration error (KCE) (Widmann et al., 2019) for confidence and multi-class calibration, respectively, can be estimated without first estimating the conditional probability and hence avoid this issue. They are defined as the expected value of a weighted sum of the differences of the left and right hand side of Eq. (1) for each class, where the weights are given as a function of the predictions (of all classes) and chosen such that the calibration error is maximized. A reformulation with matrix-valued kernels (Widmann et al., 2019) yields unbiased and differentiable estimators without explicit dependence on P(Y |PX), which simplifies the estimation and makes it possible to explicitly account for calibration in the training objective (Kumar et al., 2018). Additionally, the kernel-based framework allows the derivation of reliable statistical hypothesis tests for calibration in multi-class classification (Widmann et al., 2019).\nHowever, both the construction as a weighted difference of the class-wise distributions in Eq. (1) and the reformulation with matrix-valued kernels require finite target spaces and hence cannot be applied to regression problems.
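To make the weaker quantile-based notion discussed above concrete, the following minimal sketch (our own illustration, not code from the paper) computes the empirical coverage of the predicted quantiles of a Gaussian model; a quantile-calibrated model yields coverage(τ) ≈ τ, which is the property that the model in Fig. 1 approximately satisfies despite being uncalibrated in the sense of Definition 1:

```python
# Sketch: empirical check of quantile calibration for a Gaussian predictive
# model P_x = N(mean(x), std(x)^2), via probability integral transform values.
import numpy as np
from scipy.stats import norm

def quantile_coverage(pred_mean, pred_std, y, taus):
    """Fraction of targets below the predicted tau-quantile, for each tau."""
    pit = norm.cdf(y, loc=pred_mean, scale=pred_std)  # predicted CDF at targets
    return {float(t): float(np.mean(pit <= t)) for t in taus}

# For a quantile-calibrated model, quantile_coverage(..., taus)[t] is close to t.
```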
To be able to deal with general target spaces, we present a new and more general framework of calibration errors without these limitations.\nOur framework can be used to reason about and test for calibration of any probabilistic predictive model. As explained above, this is in stark contrast with existing methods that are restricted to simple output distributions, such as classification and scalar-valued regression problems. A key contribution of this paper is a new framework that is applicable to multivariate regression, as well as situations when the output is of a different (e.g., discrete ordinal) or more complex (e.g., graph-structured) type, with clear practical implications.\nWithin this framework a KCE for general target spaces is obtained. We want to highlight that for multi-class classification problems its formulation is more intuitive and simpler to use than the measure proposed by Widmann et al. (2019) based on matrix-valued kernels. To ease the application of the KCE we derive several estimators of the KCE with subquadratic sample complexity and their asymptotic properties in tests for calibrated models, which improve on existing estimators and tests in the two-sample test literature by exploiting the special structure of the calibration framework. Using the proposed framework, we numerically evaluate the calibration of neural network models and ensembles of such models." }, { "heading": "2 CALIBRATION ERROR: A GENERAL FRAMEWORK", "text": "In classification, the distributions on the left and right hand side of Eq. (1) can be interpreted as vectors in the probability simplex. Hence ultimately the distance measure for ECE and MCE (see Definition B.1) can be chosen as a distance measure of real-valued vectors. The total variation, Euclidean, and squared Euclidean distances are common choices (Guo et al., 2017; Kull et al., 2019; Vaicenavicius et al., 2019). However, in a general setting measuring the discrepancy between P(Y |PX) and PX cannot necessarily be reduced to measuring distances between vectors. The conditional distribution P(Y |PX) can be arbitrarily complex, even if the predicted distributions are restricted to a simple class of distributions that can be represented as real-valued vectors. Hence in general we have to resort to dedicated distance measures of probability distributions.\nAdditionally, the estimation of conditional distributions P(Y |PX) is challenging, even more so than in the restricted case of classification, since in general these distributions can be arbitrarily complex. To circumvent this problem, we propose to use the following construction: We define a random variable ZX ∼ PX obtained from the predictive model and study the discrepancy between the joint distributions of the two pairs of random variables (PX , Y ) and (PX , ZX), respectively, instead of the discrepancy between the conditional distributions P(Y |PX) and PX . Since\n(PX , Y ) =d (PX , ZX) (equality in distribution) if and only if P(Y |PX) = PX almost surely,\nmodel P is calibrated if and only if the distributions of (PX , Y ) and (PX , ZX) are equal.\nThe random variable pairs (PX , Y ) and (PX , ZX) take values in the product space P × Y, where P is the space of predicted distributions PX and Y is the space of targets Y . For instance, in classification, P could be the probability simplex and Y the set of all class labels, whereas in the case of Gaussian predictive models for scalar targets, P could be the space of normal distributions and Y be R.
The study of the joint distributions of (PX , Y ) and (PX , ZX) motivates the definition of a generally applicable calibration error as an integral probability metric (Müller, 1997; Sriperumbudur et al., 2009; 2012) between these distributions. In contrast to common f-divergences such as the Kullback-Leibler divergence, integral probability metrics do not require that one distribution is absolutely continuous with respect to the other, which cannot be guaranteed in general.\nDefinition 2. Let Y denote the space of targets Y , and P the space of predicted distributions PX . We define the calibration error with respect to a space of functions F of the form f : P × Y → R as\nCEF := sup_{f ∈ F} |E_{PX,Y} f(PX , Y ) − E_{PX,ZX} f(PX , ZX)|. (3)\nBy construction, if model P is calibrated, then CEF = 0 regardless of the choice of F. However, the converse statement is not true for arbitrary function spaces F. From the theory of integral probability metrics (see, e.g., Müller, 1997; Sriperumbudur et al., 2009; 2012), we know that for certain choices of F the calibration error in Eq. (3) is a well-known metric on the product space P × Y, which implies that CEF = 0 if and only if model P is calibrated. Prominent examples include the maximum mean discrepancy2 (MMD) (Gretton et al., 2007), the total variation distance, the Kantorovich distance, and the Dudley metric (Dudley, 1989, p. 310).\nAs pointed out above, Definition 2 is a generalization of the definition for multi-class classification proposed by Widmann et al. (2019)—which is based on vector-valued functions and only applicable to finite target spaces—to any probabilistic predictive model. In Appendix E we show this explicitly and discuss the special case of classification problems in more detail. Previous results (Widmann et al., 2019) imply that in classification MMCE and, for common distance measures d(·, ·) such as the total variation and squared Euclidean distance, ECEd and MCEd are special cases of CEF. In Appendix G we show that our framework also covers natural extensions of ECEd and MCEd to countably infinite discrete target spaces, which to our knowledge have not been studied before and occur, e.g., in Poisson regression.\nThe literature on integral probability metrics suggests that we can resort to estimating CEF from i.i.d. samples from the distributions of (PX , Y ) and (PX , ZX). For the MMD, the Kantorovich distance, and the Dudley metric, tractable strongly consistent empirical estimators exist (Sriperumbudur et al., 2012). Here the empirical estimator for the MMD is particularly appealing since compared with the other estimators “it is computationally cheaper, the empirical estimate converges at a faster rate to the population value, and the rate of convergence is independent of the dimension d of the space (for S = Rd)” (Sriperumbudur et al. (2012)).\nOur specific design of (PX , ZX) can be exploited to improve on these estimators. If E_{Zx∼Px} f(Px, Zx) can be evaluated analytically for a fixed prediction Px, then CEF can be estimated empirically with reduced variance by marginalizing out ZX . Otherwise E_{Zx∼Px} f(Px, Zx) has to be estimated, but in contrast to the common estimators of the integral probability metrics discussed above, the artificial construction of ZX allows us to approximate it by numerical integration methods such as (quasi) Monte Carlo integration or quadrature rules with arbitrarily small error and variance. Monte Carlo integration preserves statistical properties of the estimators such as unbiasedness and consistency.\n2As we discuss in Section 3, the MMD is a metric if and only if the employed kernel is characteristic.
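To make the construction above concrete, the following minimal sketch (our own illustration for univariate Gaussian predictions; not code from the paper) builds samples from the two joint distributions that the calibration error compares:

```python
# Sketch: samples of (P_X, Y) (observed) and (P_X, Z_X) (artificial), for
# univariate Gaussian predictions p_i = (m_i, s_i) and observed targets y_i.
import numpy as np

rng = np.random.default_rng(0)

def joint_samples(means, stds, y):
    """Return the observed pairs (p_i, y_i) and artificial pairs (p_i, z_i)."""
    z = rng.normal(means, stds)  # one draw z_i ~ N(m_i, s_i^2) per prediction;
    # averaging over several independent draws of z approximates marginalizing
    # out Z_X and reduces the variance of downstream estimates.
    preds = list(zip(means, stds))
    return list(zip(preds, y)), list(zip(preds, z))

# The model is calibrated iff both samples follow the same distribution, which
# is exactly what CE_F (and, below, the kernel calibration error) quantifies.
```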
" }, { "heading": "3 KERNEL CALIBRATION ERROR", "text": "For the remaining parts of the paper we focus on the MMD formulation of CEF due to the appealing properties of the common empirical estimator mentioned above. We derive calibration-specific analogues of results for the MMD that exploit the special structure of the distribution of (PX , ZX) to improve on existing estimators and tests in the MMD literature. To the best of our knowledge these variance-reduced estimators and tests have not been discussed in the MMD literature.\nLet k : (P × Y) × (P × Y) → R be a measurable kernel with corresponding reproducing kernel Hilbert space (RKHS) H, and assume that\nE_{PX,Y} k^{1/2}((PX , Y ), (PX , Y )) < ∞ and E_{PX,ZX} k^{1/2}((PX , ZX), (PX , ZX)) < ∞.\nWe discuss how such kernels can be constructed in a generic way in Section 3.1 below.\nDefinition 3. Let Fk denote the unit ball in H, i.e., Fk := {f ∈ H : ‖f‖H ≤ 1}. Then the kernel calibration error (KCE) with respect to kernel k is defined as\nKCEk := CEFk = sup_{f ∈ Fk} |E_{PX,Y} f(PX , Y ) − E_{PX,ZX} f(PX , ZX)|.\nAs known from the MMD literature, a more explicit formulation can be given for the squared kernel calibration error SKCEk := KCEk² (see Lemma B.2). A similar explicit expression for SKCEk was obtained by Widmann et al. (2019) for the special case of classification problems. However, their expression relies on Y being finite and is based on matrix-valued kernels over the finite-dimensional probability simplex P. A key difference to the expression in Lemma B.2 is that we instead propose to use real-valued kernels defined on the product space of predictions and targets. This construction is applicable to arbitrary target spaces and does not require Y to be finite." }, { "heading": "3.1 CHOICE OF KERNEL", "text": "The construction of the product space P × Y suggests the use of tensor product kernels k = kP ⊗ kY, where kP : P × P → R and kY : Y × Y → R are kernels on the spaces of predicted distributions and targets, respectively.3\nBy definition, so-called characteristic kernels guarantee that KCE = 0 if and only if the distributions of (PX , Y ) and (PX , ZX) are equal (Fukumizu et al., 2004; 2008). Many common kernels such as the Gaussian and Laplacian kernel on Rd are characteristic (Fukumizu et al., 2008).4 Szabó & Sriperumbudur (2018, Theorem 4) showed that a tensor product kernel kP ⊗ kY is characteristic if kP and kY are characteristic, continuous, bounded, and translation-invariant kernels on Rd, but the implication does not hold for general characteristic kernels (Szabó & Sriperumbudur, 2018, Example 1). For calibration evaluation, however, it is sufficient to be able to distinguish between the conditional distributions P(Y |PX) and P(ZX |PX) = PX . Therefore, in contrast to the regular MMD setting, it is sufficient that kernel kY is characteristic and kernel kP is non-zero almost surely to guarantee that KCE = 0 if and only if model P is calibrated. This suggests constructing kernels on general spaces of predicted distributions as\nkP(p, p′) = exp(−λ dP(p, p′)^ν), (4)\nwhere dP(·, ·) is a metric on P and ν, λ > 0 are kernel hyperparameters.
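A minimal sketch of this construction (our own illustration assuming univariate Gaussian predictions p = (m, s); not the authors' Julia implementation): we take dP to be the 2-Wasserstein distance between normal distributions, which has a simple closed form and, as discussed next, is a valid choice for elliptically contoured distributions. lam and nu denote the hyperparameters λ and ν of Eq. (4).

```python
# Sketch of the tensor product kernel k = k_P ⊗ k_Y on P × Y for univariate
# Gaussian predictions, with k_P as in Eq. (4) and a Gaussian kernel k_Y.
import numpy as np

def w2_gauss(p, q):
    """2-Wasserstein distance between N(m, s^2) and N(m2, s2^2)."""
    (m, s), (m2, s2) = p, q
    return np.sqrt((m - m2) ** 2 + (s - s2) ** 2)

def k_p(p, q, lam=1.0, nu=1.0):
    """k_P(p, p') = exp(-lam * d_P(p, p')^nu), cf. Eq. (4)."""
    return np.exp(-lam * w2_gauss(p, q) ** nu)

def k_y(y, y2):
    """Gaussian kernel k_Y(y, y') = exp(-(y - y')^2 / 2) on the target space."""
    return np.exp(-0.5 * (y - y2) ** 2)

def kernel(p, y, q, y2):
    """Tensor product kernel k((p, y), (p', y')) = k_P(p, p') * k_Y(y, y')."""
    return k_p(p, q) * k_y(y, y2)
```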
The Wasserstein distance is a widely used metric for distributions from optimal transport theory that allows lifting a ground metric on the target space and possesses many important properties (see, e.g., Peyré & Cuturi, 2019, Chapter 2.4). In general, however, it does not lead to valid kernels kP, apart from the notable exception of elliptically contoured distributions such as normal and Laplace distributions (Peyré & Cuturi, 2019, Chapter 8.3).\n3As mentioned above, our framework rephrases and generalizes the construction used by Widmann et al. (2019). The matrix-valued kernels that they employ can be recovered by setting kP to a Laplacian kernel on the probability simplex and kY(y, y′) = δ_{y,y′}.\n4For a general discussion about characteristic kernels and their relation to universal kernels we refer to the paper by Sriperumbudur et al. (2011).\nIn machine learning, common probabilistic predictive models output parameters of distributions such as the mean and variance of normal distributions. Naturally these parameterizations give rise to injective mappings φ : P → Rd that can be used to define a Hilbertian metric\ndP(p, p′) = ‖φ(p) − φ(p′)‖2.\nFor such metrics, kP in Eq. (4) is a valid kernel for all λ > 0 and ν ∈ (0, 2] (Berg et al., 1984, Corollary 3.3.3, Proposition 3.2.7). In Appendix D.3 we show that for many mixture models, and hence model ensembles, Hilbertian metrics between model components can be lifted to Hilbertian metrics between mixture models. This construction is a generalization of the Wasserstein-like distance for Gaussian mixture models proposed by Chen et al. (2019; 2020); Delon & Desolneux (2020)." }, { "heading": "3.2 ESTIMATION", "text": "Let (X1, Y1), . . . , (Xn, Yn) be a data set of features and targets which are i.i.d. according to the law of (X, Y ). Moreover, for notational brevity, for (p, y), (p′, y′) ∈ P × Y we let\nh((p, y), (p′, y′)) := k((p, y), (p′, y′)) − E_{Z∼p} k((p, Z), (p′, y′)) − E_{Z′∼p′} k((p, y), (p′, Z′)) + E_{Z∼p, Z′∼p′} k((p, Z), (p′, Z′)).\nNote that in contrast to the regular MMD we marginalize out Z and Z′. Similar to the MMD, there exist consistent estimators of the SKCE, both biased and unbiased.\nLemma 1. The plug-in estimator of SKCEk is non-negatively biased. It is given by\nŜKCEk = n^{-2} ∑_{i,j=1}^{n} h((PXi , Yi), (PXj , Yj)).\nInspired by the block tests for the regular MMD (Zaremba et al., 2013), we define the following class of unbiased estimators. Note that in contrast to ŜKCEk they do not include terms of the form h((PXi , Yi), (PXi , Yi)).\nLemma 2. The block estimator of SKCEk with block size B ∈ {2, . . . , n}, given by\nŜKCEk,B := ⌊n/B⌋^{-1} ∑_{b=1}^{⌊n/B⌋} (B(B − 1)/2)^{-1} ∑_{(b−1)B < i < j ≤ bB} h((PXi , Yi), (PXj , Yj)),\nis an unbiased estimator of SKCEk.\nThe extremal estimator with B = n is a so-called U-statistic of SKCEk (Hoeffding, 1948; van der Vaart, 1998), and hence it is the minimum variance unbiased estimator. All presented estimators are consistent, i.e., they converge to SKCEk almost surely as the number n of data points goes to infinity. The sample complexity of ŜKCEk and ŜKCEk,B is O(n²) and O(Bn), respectively.
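A sketch of h and the unbiased block estimator of Lemma 2 (our own illustration, not the authors' implementation), specialized to univariate Gaussian predictions p = (m, s) and the tensor product kernel of the previous sketch with lam = nu = 1; for this choice the expectations over Z ∼ p are Gaussian convolutions with closed forms (the formulas below are our own derivation under these assumptions), so no sampling of Z is needed:

```python
# Sketch: h((p, y), (p', y')) and the block estimator SKCE_hat_{k,B}.
import numpy as np

def k_p(p, q):  # exp(-W2(p, q)), as in the previous sketch
    (m, s), (m2, s2) = p, q
    return np.exp(-np.sqrt((m - m2) ** 2 + (s - s2) ** 2))

def e_ky(p, y):
    """E_{Z ~ N(m, s^2)} exp(-(Z - y)^2 / 2) in closed form."""
    m, s = p
    v = 1.0 + s ** 2
    return np.exp(-0.5 * (m - y) ** 2 / v) / np.sqrt(v)

def e_kzz(p, q):
    """E exp(-(Z - Z')^2 / 2) for independent Z ~ p and Z' ~ q."""
    (m, s), (m2, s2) = p, q
    v = 1.0 + s ** 2 + s2 ** 2
    return np.exp(-0.5 * (m - m2) ** 2 / v) / np.sqrt(v)

def h(p, y, q, y2):
    """h with Z and Z' marginalized out analytically."""
    ky = np.exp(-0.5 * (y - y2) ** 2) - e_ky(p, y2) - e_ky(q, y) + e_kzz(p, q)
    return k_p(p, q) * ky

def skce_block(preds, ys, B=2):
    """Unbiased block estimator of SKCE_k with block size B (Lemma 2)."""
    blocks = []
    for b in range(len(ys) // B):
        idx = range(b * B, (b + 1) * B)
        vals = [h(preds[i], ys[i], preds[j], ys[j])
                for i in idx for j in idx if i < j]
        blocks.append(float(np.mean(vals)))
    return float(np.mean(blocks)), blocks  # estimate and per-block statistics
```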
" }, { "heading": "3.3 CALIBRATION TESTS", "text": "A fundamental issue with calibration errors in general, including ECE, is that their empirical estimates do not provide an answer to the question of whether a model is actually calibrated. Even if the measure is guaranteed to be zero if and only if the model is calibrated, usually the estimates of calibrated models are non-zero due to randomness in the data and (possibly) the estimation procedure. In classification, statistical hypothesis tests of the null hypothesis\nH0: model P is calibrated,\nso-called calibration tests, have been proposed as a tool for checking rigorously if P is calibrated (Bröcker & Smith, 2007; Vaicenavicius et al., 2019; Widmann et al., 2019). For multi-class classification, Widmann et al. (2019) suggested calibration tests based on the asymptotic distributions of estimators of the previously formulated KCE. Although for finite data sets the asymptotic distributions are only approximations of the actual distributions of these estimators, in their experiments with 10 classes the resulting p-value approximations seemed reliable, whereas p-values obtained by so-called consistency resampling (Bröcker & Smith, 2007; Vaicenavicius et al., 2019) underestimated the p-value and hence rejected the null hypothesis too often (Widmann et al., 2019).\nFor fixed block sizes B, √⌊n/B⌋ (ŜKCEk,B − SKCEk) →d N(0, σB²) as n → ∞, and, under H0, n ŜKCEk,n →d ∑_{i=1}^{∞} λi (Zi − 1) as n → ∞, where the Zi are independent χ²(1)-distributed random variables. See Appendix B for details and definitions of the involved constants. From these results one can derive calibration tests that extend and generalize the existing tests for classification problems, as explained in Remarks B.1 and B.2. Our formulation also illustrates the close connection of these tests to different two-sample tests (Gretton et al., 2007; Zaremba et al., 2013)." }, { "heading": "4 ALTERNATIVE APPROACHES", "text": "For two-sample tests, Chwialkowski et al. (2015) suggested the use of the so-called unnormalized mean embedding (UME) to overcome the quadratic sample complexity of the minimum variance unbiased estimator and its intractable asymptotic distribution. As we show in Appendix C, there exists an analogous measure of calibration, termed unnormalized calibration mean embedding (UCME), with a corresponding calibration mean embedding (CME) test.\nAs an alternative to our construction based on the joint distributions of (PX , Y ) and (PX , ZX), one could try to directly compare the conditional distributions P(Y |PX) and P(ZX |PX) = PX . For instance, Ren et al. (2016) proposed the conditional MMD based on the so-called conditional kernel mean embedding (Song et al., 2009; 2013). However, as noted by Park & Muandet (2020), its common definition as an operator between two RKHSs is based on very restrictive assumptions, which are violated in many situations (see, e.g., Fukumizu et al., 2013, Footnote 4) and typically require regularized estimates. Hence, even theoretically, often the conditional MMD is “not an exact measure of discrepancy between conditional distributions” (Park & Muandet (2020)). In contrast, the maximum conditional mean discrepancy (MCMD) proposed in a concurrent work by Park & Muandet (2020) is a random variable derived from much weaker measure-theoretical assumptions. The MCMD provides a local discrepancy conditional on random predictions whereas the KCE is a global real-valued summary of these local discrepancies.5
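Returning to the tests of Section 3.3, a minimal sketch of the calibration test based on the asymptotic normality of the block estimator (our own illustration, in the spirit of Remark B.1 and the B-test of Zaremba et al. (2013)): under H0 the per-block statistics, e.g., the blocks returned by skce_block above, are i.i.d. with mean zero, so a one-sided z-test on their average yields an asymptotic p-value.

```python
# Sketch: asymptotic calibration test from the per-block statistics of the
# block estimator (cf. Remark B.1). Reject H0 ("model is calibrated") for
# small p-values.
import numpy as np
from scipy.stats import norm

def block_test_pvalue(blocks):
    b = np.asarray(blocks, dtype=float)
    z = np.sqrt(len(b)) * b.mean() / b.std(ddof=1)  # standardized statistic
    return float(norm.sf(z))  # one-sided: P(N(0, 1) > z)

# Usage: _, blocks = skce_block(preds, ys, B=2); p = block_test_pvalue(blocks)
```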
" }, { "heading": "5 EXPERIMENTS", "text": "In our experiments we evaluate the computational efficiency and empirical properties of the proposed calibration error estimators and calibration tests on both calibrated and uncalibrated models. By means of a classic regression problem from the statistics literature, we demonstrate that the estimators and tests can be used for the evaluation of calibration of neural network models and ensembles of such models. This section contains only a high-level overview of these experiments to conserve space; all experimental details are provided in Appendix A." }, { "heading": "5.1 EMPIRICAL PROPERTIES AND COMPUTATIONAL EFFICIENCY", "text": "We evaluate error, variance, and computation time of calibration error estimators for calibrated and uncalibrated Gaussian predictive models in synthetic regression problems. The results empirically confirm the consistency of the estimators and the computational efficiency of the estimator with block size B = 2, which, however, comes at the cost of increased error and variance.\nAdditionally, we evaluate empirical test errors of calibration tests at a fixed significance level α = 0.05. The evaluations, visualized in Fig. 2 for models with ten-dimensional targets, demonstrate empirically that the percentage of incorrect rejections of H0 converges to the set significance level as the number of samples increases. Moreover, the results highlight the computational burden of the calibration test that estimates quantiles of the intractable asymptotic distribution of n ŜKCEk,n by bootstrapping.\n5In our calibration setting, the MCMD is almost surely equal to sup_{f ∈ FY} |E_{Y|PX}(f(Y )|PX) − E_{ZX|PX}(f(ZX)|PX)|, where FY := {f : Y → R : ‖f‖_{HY} ≤ 1} for an RKHS HY with kernel kY : Y × Y → R. If kernel kY is characteristic, MCMD = 0 almost surely if and only if model P is calibrated (Park & Muandet, 2020, Theorem 3.7). Although the definition of MCMD only requires a kernel kY on the target space, a kernel kP on the space of predictions has to be specified for the evaluation of its regularized estimates.\nAs expected, due to the larger variance of ŜKCEk,2 the test with fixed block size B = 2 shows a decreased test power despite being computationally much more efficient." }, { "heading": "5.2 FRIEDMAN 1 REGRESSION PROBLEM", "text": "The Friedman 1 regression problem (Friedman, 1979; 1991; Friedman et al., 1983) is a classic non-linear regression problem with ten-dimensional features and real-valued targets with Gaussian noise. We train a Gaussian predictive model, whose mean is modelled by a shallow neural network and whose variance is a single scalar parameter (consistent with the data-generating model), ten times with different initial parameters. Figure 3 shows estimates of the mean squared error (MSE), the average negative log-likelihood (NLL), SKCEk, and a p-value approximation for these models and their ensemble on the training and a separate test data set. All estimates indicate consistently that the models are overfit after 1500 training iterations. The estimations of SKCEk and the p-values allow us to focus on calibration specifically, whereas MSE indicates accuracy only and NLL, as any proper scoring rule (Bröcker, 2009), provides a summary of calibration and accuracy. The estimation of SKCEk in addition to NLL could serve as another source of information for early stopping and model selection." }, { "heading": "6 CONCLUSION", "text": "We presented a framework of calibration estimators and tests for any probabilistic model that captures both classification and regression problems of arbitrary dimension as well as other predictive models.
We successfully applied it to measure calibration of (ensembles of) neural network models.\nOur framework highlights connections of calibration to two-sample tests and optimal transport theory, which we expect to be fruitful for future research. For instance, the power of calibration tests could be improved by heuristics and theoretical results about suitable kernel choices or hyperparameters (cf. Jitkrittum et al., 2016). It would also be interesting to investigate alternatives to KCE captured by our framework, e.g., by exploiting recent advances in optimal transport theory (cf. Genevay et al., 2016).\nSince the presented estimators of SKCEk are differentiable, we imagine that our framework could be helpful for improving calibration of predictive models, during training (cf. Kumar et al., 2018) or post-hoc. Currently, many calibration methods (see, e.g., Guo et al., 2017; Kull et al., 2019; Song et al., 2019) are based on optimizing the log-likelihood since it is a strictly proper scoring rule and thus encourages both accurate and reliable predictions. However, as for any proper scoring rule, “Per se, it is impossible to say how the score will rank unreliable forecast schemes [. . .]. The lack of reliability of one forecast scheme might be outbalanced by the lack of resolution of the other” (Bröcker (2009)). In other words, if one does not use a calibration method such as temperature scaling (Guo et al., 2017) that keeps accuracy invariant6, it is unclear if the resulting model is trading off calibration for accuracy when using log-likelihood for re-calibration. Thus flexible calibration methods might hypothetically benefit from using the presented calibration error estimators." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank the reviewers for all the constructive feedback on our paper. This research is financially supported by the Swedish Research Council via the projects Learning of Large-Scale Probabilistic Dynamical Models (contract number: 2016-04278), Counterfactual Prediction Methods for Heterogeneous Populations (contract number: 2018-05040), and Handling Uncertainty in Machine Learning Systems (contract number: 2020-04122), by the Swedish Foundation for Strategic Research via the project Probabilistic Modeling and Inference for Machine Learning (contract number: ICA16-0015), by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by ELLIIT." }, { "heading": "A EXPERIMENTS", "text": "The source code of the experiments and instructions for reproducing the results are available at https://github.com/devmotion/Calibration_ICLR2021. Additional material such as automatically generated HTML output and Jupyter notebooks is available at https://devmotion.github.io/Calibration_ICLR2021/." }, { "heading": "A.1 ORDINARY LEAST SQUARES", "text": "We consider a regression problem with scalar feature X and scalar target Y with input-dependent Gaussian noise that is inspired by a problem from Gustafsson et al. (2020). Feature X is distributed uniformly at random in [−1, 1], and target Y is distributed according to\nY ∼ sin(πX) + ε |1 + X|,\nwhere ε ∼ N(0, 0.15^2). We train a linear regression model P with homoscedastic variance using ordinary least squares and a data set of 100 i.i.d. pairs of feature X and target Y (see Fig. 4).\nA validation data set of n = 50 i.i.d. pairs of X and Y is used to evaluate the empirical cumulative probability\nn^{-1} ∑_{i=1}^{n} 1_{[0,τ]}( P(Y ≤ Y_i | X = X_i) )\nof model P for quantile levels τ ∈ [0, 1].
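The quantity above is straightforward to compute. The following sketch fits the OLS model on synthetic data as described and evaluates the empirical cumulative probability on a validation set; it is a plain NumPy/SciPy illustration with our own variable names (the authors' reference code is in Julia), and the maximum likelihood residual variance is our choice of variance estimate.

```python
# A minimal sketch (not the authors' Julia code): empirical cumulative
# probability of an OLS model with homoscedastic Gaussian noise, as in A.1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Training data from the synthetic problem above.
x = rng.uniform(-1.0, 1.0, size=100)
y = np.sin(np.pi * x) + rng.normal(0.0, 0.15, size=100) * np.abs(1.0 + x)

# Ordinary least squares fit; sigma is the maximum likelihood estimate of
# the (homoscedastic) noise standard deviation, i.e. the RMS residual.
A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
sigma = np.sqrt(np.mean((y - A @ coef) ** 2))

# Validation data and empirical cumulative probability for quantile levels tau.
xv = rng.uniform(-1.0, 1.0, size=50)
yv = np.sin(np.pi * xv) + rng.normal(0.0, 0.15, size=50) * np.abs(1.0 + xv)
pit = norm.cdf(yv, loc=coef[0] + coef[1] * xv, scale=sigma)  # P(Y <= Y_i | X = X_i)

taus = np.linspace(0.0, 1.0, 101)
empirical = np.mean(pit[None, :] <= taus[:, None], axis=1)  # n^{-1} sum 1_{[0,tau]}(...)
# For a quantile-calibrated model, `empirical` would stay close to `taus`.
```

Plotting `empirical` against `taus` gives the comparison discussed next.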
Model P would be quantile calibrated (Song et al., 2019) if\nτ = PX′,Y ′ ( P (Y ≤ Y ′|X = X ′) ≤ τ ) for all τ ∈ [0, 1], where (X,Y ) and (X ′, Y ′) are independent identically distributed pairs of random variables (see Fig. 5).\nAdditionally, we compute a p-value estimate of the null hypothesis H0 that model P is calibrated using an estimation of the quantile of the asymptotic distribution of nŜKCEk,n with 100000 bootstrap samples on the validation data set (see Remark B.2). Kernel k is chosen as the tensor product kernel\nk ( (p, y), (p′, y′) ) = exp ( −W2(p, p′) ) exp ( − (y − y′)2/2 ) = exp ( − √ (mp −mp′)2 + (σp − σp′)2 ) exp ( − (y − y′)2/2 ) ,\nwhere W2 is the 2-Wasserstein distance and mp,mp′ and σp, σp′ denote the mean and the standard deviation of the normal distributions p and p′ (see Appendix D.1). We obtain p < 0.05 in our experiment, and hence the calibration test rejects H0 at the significance level α = 0.05." }, { "heading": "A.2 EMPIRICAL PROPERTIES AND COMPUTATIONAL EFFICIENCY", "text": "We study two setups with d-dimensional targets Y and normal distributions PX of the form N (c1d, 0.12Id) as predictions, where c ∼ U(0, 1). Since calibration analysis is only based on the targets and predicted distributions, we neglect features X in these experiments and specify only the distributions of Y and PX .\nIn the first setup we simulate a calibrated model. We achieve this by sampling targets from the predicted distributions, i.e., by defining the conditional distribution of Y given PX as\nY |PX = N (µ,Σ) ∼ N (µ,Σ).\nIn the second setup we simulate an uncalibrated model of the form\nY |PX = N (µ,Σ) ∼ N ([0.1, µ2, . . . , µd]T,Σ).\nWe perform an evaluation of the convergence and computation time of the biased estimator ŜKCEk and the unbiased estimator ŜKCEk,B with blocks of size B ∈ {2, √ n, n}. We use the tensor product kernel\nk ( (p, y), (p′, y′) ) = exp ( −W2(p, p′) ) exp ( − (y − y′)2/2 ) = exp ( − √ (mp −mp′)2 + (σp − σp′)2 ) exp ( − (y − y′)2/2 ) ,\nwhere W2 is the 2-Wasserstein distance and mp,mp′ and σp, σp′ denote the mean and the standard deviation of the normal distributions p and p′.\nFigures 6 to 9 visualize the mean absolute error and the variance of the resulting estimates for the calibrated and the uncalibrated model with dimensions d = 1 and d = 10 for 500 independently drawn data sets of n ∈ {4, 16, 64, 256, 1024} samples of (PX , Y ). Computation time indicates the minimum time in the 500 evaluations on a computer with a 3.6 GHz processor. The ground truth values of the uncalibrated models were estimated by averaging the estimates of ŜKCEk,1000 for 1000 independently drawn data sets of 1000 samples of (PX , Y ) (independent from the data sets used for the evaluation of the estimates). Figures 6 and 7 illustrate that the computational efficiency of ŜKCEk,2 in comparison with the other estimators comes at the cost of increased error and variance for the calibrated models for fixed numbers of samples.\nWe compare calibration tests based on the (tractable) asymptotic distribution of √ bn/BcŜKCEk,B with fixed block size B ∈ {2, √ n} (see Remark B.1), the (intractable) asymptotic distribution of\nnŜKCEk,n which is approximated with 1000 bootstrap samples (see Remark B.2), and a Hotelling’s\nT 2-statistic for UCMEk,10 with 10 test locations (see Appendix C). 
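All of these estimators and tests repeatedly evaluate the same tensor product kernel. A minimal sketch for the univariate case is given below; it reads the formula above literally, with predictions represented as (mean, standard deviation) pairs. This is a NumPy illustration under those assumptions, not the authors' implementation.

```python
# A sketch of the tensor product kernel used in the experiments, assuming
# univariate normal predictions p = N(m, s^2) given as (m, s) tuples.
import numpy as np

def w2_normal(m, s, m2, s2):
    """2-Wasserstein distance between N(m, s^2) and N(m2, s2^2)."""
    return np.sqrt((m - m2) ** 2 + (s - s2) ** 2)

def kernel(p, y, p2, y2):
    """k((p, y), (p2, y2)) = exp(-W2(p, p2)) * exp(-(y - y2)^2 / 2)."""
    (m, s), (m2, s2) = p, p2
    kP = np.exp(-w2_normal(m, s, m2, s2))    # kernel on predictions
    kY = np.exp(-((y - y2) ** 2) / 2.0)      # Gaussian kernel on targets
    return kP * kY
```

The closed-form 2-Wasserstein expression for general multivariate normal distributions is given in Appendix D.1.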
We compute the empirical test errors (percentage of false rejections of the null hypothesis H0 that model P is calibrated if P is calibrated, and percentage of false non-rejections of H0 if P is not calibrated) at a fixed significance level α = 0.05, as well as the minimal computation time, for the calibrated and the uncalibrated model with dimensions d = 1 and d = 10 for 500 independently drawn data sets of n ∈ {4, 16, 64, 256, 1024} samples of (PX , Y ). The 10 test predictions of the CME test are of the form N(m, 0.1^2 I_d), where m is distributed uniformly at random in the d-dimensional unit hypercube [0, 1]^d; the corresponding 10 test targets are i.i.d. according to N(0, 0.1^2 I_d). Figures 10 and 11 show that all tests adhere to the set significance level asymptotically as the number of samples increases. The convergence of the CME test with 10 test locations is found to be much slower than the convergence of all other tests. The tests based on the tractable asymptotic distribution of √⌊n/B⌋ ŜKCEk,B for fixed block size B are orders of magnitude faster than the test based on the intractable asymptotic distribution of nŜKCEk,n, approximated with 1000 bootstrap samples. We see that the efficiency gain comes at the cost of decreased test power for smaller numbers of samples, explained by the increasing variance of ŜKCEk,B for decreasing block sizes B. However, in our examples the test based on ŜKCEk,√n still achieves good test power for reasonably large numbers of samples (> 30)." }, { "heading": "A.3 FRIEDMAN 1 REGRESSION PROBLEM", "text": "We study the so-called Friedman 1 regression problem, which was initially described for 200 inputs in the six-dimensional unit hypercube (Friedman, 1979; Friedman et al., 1983) and later modified to 100 inputs in the 10-dimensional unit hypercube (Friedman, 1991). In this regression problem the real-valued target Y depends on input X via\nY = 10 sin(πX_1 X_2) + 20(X_3 − 0.5)^2 + 10X_4 + 5X_5 + ε,\nwhere the noise ε is typically chosen to be independently standard normally distributed. We generate a training data set of 100 inputs distributed uniformly at random in the 10-dimensional unit hypercube and corresponding targets with independent and identically distributed noise following a standard normal distribution.\nWe consider models P^{(θ,σ^2)} of normal distributions with fixed variance σ^2,\nP^{(θ,σ^2)}_x = N(f_θ(x), σ^2),\nwhere f_θ(x), the model of the mean of the distribution P(Y |X = x), is given by a fully connected neural network with two hidden layers with 200 and 50 hidden units and ReLU activation functions. The parameters of the neural network are denoted by θ.\nWe use a maximum likelihood approach and train the parameters θ of the model for 5000 iterations by minimizing the mean squared error on the training data set using ADAM (Kingma & Ba, 2015) (default settings in the machine learning framework Flux.jl (Innes, 2018; Innes et al., 2018)). In each iteration, the variance σ^2 is set to the maximizer of the likelihood of the training data set.\nWe train 10 models with different initializations of parameters θ. The initial values of the weight matrices of the neural networks are sampled from the uniform Glorot initialization (Glorot & Bengio, 2010) and the offset vectors are initialized with zeros. In Fig. 12, we visualize estimates of accuracy and calibration measures on the training and test data set with 100 and 50 samples, respectively, for 5000 training iterations.
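The data-generating process is easy to reproduce; the following NumPy sketch (our notation, not the authors' Julia code) generates a training set as described.

```python
# A sketch of the Friedman 1 data-generating process described above.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 10
X = rng.uniform(0.0, 1.0, size=(n, d))   # inputs in the unit hypercube
eps = rng.standard_normal(n)             # i.i.d. standard normal noise
Y = (10.0 * np.sin(np.pi * X[:, 0] * X[:, 1])
     + 20.0 * (X[:, 2] - 0.5) ** 2
     + 10.0 * X[:, 3] + 5.0 * X[:, 4] + eps)
# A model P^(theta, sigma^2)_x = N(f_theta(x), sigma^2) would fit f_theta
# to (X, Y) and set sigma^2 to the in-sample maximum likelihood estimate,
# i.e. the mean squared training residual.
```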
The pinball loss is a common measure and training objective for calibration of quantiles (Song et al., 2019). It is defined as\nEX,Y Lτ ( Y, quantile(PX , τ) ) ,\nwhere Lτ (y, ỹ) = (1 − τ)(ỹ − y)+ + τ(y − ỹ)+ and quantile(Px, τ) = infy{Px(Y ≤ y) ≥ τ} for quantile level τ ∈ [0, 1]. In Fig. 12 we plot the average pinball loss (pinball) for quantile levels τ ∈ {0.05, 0.1, . . . , 0.95}. We evaluate ŜKCEk,n (SKCE (unbiased)) and ŜKCEk (SKCE (biased)) for the tensor product kernel\nk ( (p, y), (p′, y′) ) = exp ( −W2(p, p′) ) exp ( − (y − y′)2/2 ) = exp ( − √ (mp −mp′)2 + (σp − σp′)2 ) exp ( − (y − y′)2/2 ) ,\nwhere W2 is the 2-Wasserstein distance and mp,mp′ and σp, σp′ denote the mean and the standard deviation of the normal distributions p and p′ (see Appendix D.1). The p-value estimate (p-value) is computed by estimating the quantile of the asymptotic distribution of nŜKCEk,n with 1000 bootstrap samples (see Remark B.2). The estimates of the mean squared error and the average negative loglikelihood are denoted by MSE and NLL. All estimators indicate consistently that the trained models suffer from overfitting after around 1000 training iterations.\nAdditionally, we form ensembles of the ten individual models at every training iteration. The evaluations for the ensembles are visualized in Fig. 12 as well. Apart from the unbiased estimates of SKCEk, the estimates of the ensembles are consistently better than the average estimates of the ensemble members. For the mean squared error and the negative log-likelihood this behaviour is guaranteed theoretically by the generalized mean inequality." }, { "heading": "B THEORY", "text": "" }, { "heading": "B.1 GENERAL SETTING", "text": "Let (Ω,A,P) be a probability space. Define the random variables X : (Ω,A) → (X ,ΣX) and Y : (Ω,A) → (Y,ΣY ) such that ΣX contains all singletons, and denote a version of the regular conditional distribution of Y given X = x by P(Y |X = x) for all x ∈ X .\nLet P : (X ,ΣX) → ( P,B(P) ) be a measurable function that maps features in X to probability measures in P on the target space Y . We call P a probabilistic model, and denote by Px := P (x) its output for feature x ∈ X . This gives rise to the random variable PX : (Ω,A) → ( P,B(P) ) as PX := P (X). We denote a version of the regular conditional distribution of Y given PX = Px by P(Y |PX = Px) for all Px ∈ P ." }, { "heading": "B.2 EXPECTED AND MAXIMUM CALIBRATION ERROR", "text": "The common definition of the expected and maximum calibration error (Guo et al., 2017; Kull et al., 2019; Naeini et al., 2015; Vaicenavicius et al., 2019) for classification models can be generalized to arbitrary predictive models.\nDefinition B.1. Let d(·, ·) be a distance measure of probability distributions of target Y , and let µ be the law of PX . Then we call\nECEd = E d ( P(Y |PX), PX ) and MCEd = µ- ess sup d ( P(Y |PX), PX ) the expected calibration error (ECE) and the maximum calibration error (MCE) of model P with respect to measure d, respectively." }, { "heading": "B.3 KERNEL CALIBRATION ERROR", "text": "Recall the general notation: Let k : (P×Y)×(P×Y)→ R be a kernel, amd denote its corresponding RKHS byH. If not stated otherwise, we assume that\n(K1) k(·, ·) is Borel-measurable. (K2) k is integrable with respect to the distributions of (PX , Y ) and (PX , ZX), i.e.,\nEPX ,Y k1/2 ( (PX , Y ), (PX , Y ) ) <∞\nand EPX ,ZX k1/2 ( (PX , ZX), (PX , ZX) ) <∞.\nLemma B.1. 
There exist kernel mean embeddings µPXY , µPXZX ∈ H such that for all f ∈ H\n〈f, µPXY 〉H = EPX ,Y f(PX , Y ) and 〈f, µPXZX 〉H = EPX ,ZX f(PX , ZX)." }, { "heading": "This implies that", "text": "µPXY = EPX ,Y k(·, (PX , Y )) and µPXZX = EPX ,ZX k(·, (PX , ZX)).\nProof. The linear operators TPXY f := EPX ,Y f(PX , Y ) and TPXZXf := EPX ,ZX f(PX , ZX) for all f ∈ H are bounded since\n|TPXY f | = |EPX ,Y f(PX , Y )| ≤ EPX ,Y |f(PX , Y )| = EPX ,Y |〈k((PX , Y ), ·), f〉H| ≤ EPX ,Y ‖k((PX , Y ), ·)‖H‖f‖H] = ‖f‖H EPX ,Y k1/2((PX , Y ), (PX , Y ))\nand similarly |TPXZXf | ≤ ‖f‖H EPX ,ZX k1/2((PX , ZX), (PX , ZX)). Thus Riesz representation theorem implies that there exist µPXY , µPXZX ∈ H such that TPXY f = 〈f, µPXY 〉H and TPXZXf = 〈f, µPXZX 〉H. The reproducing property ofH implies\nµPXY (p, y) = 〈k((p, y), ·), µPXY 〉H = EPX ,Y k((p, y), (PX , Y ))\nfor all (p, y) ∈ P × Y , and similarly µPXZX (p, y) = EPX ,ZX k((p, y), (PX , ZX)).\nLemma B.2. The squared kernel calibration error (SKCE) with respect to kernel k, defined as SKCEk := KCE 2 k, is given by\nSKCEk = EPX ,Y,PX′ ,Y ′ k ( (PX , Y ), (PX′ , Y ′) ) − 2EPX ,Y,PX′ ,ZX′ k ( (PX , Y ), (PX′ , ZX′) ) + EPX ,ZX ,PX′ ,ZX′ k ( (PX , ZX), (PX′ , ZX′) ) ,\nwhere (PX′ , Y ′, ZX′) is independently distributed according to the law of (PX , Y, ZX)\nProof. From Lemma B.1 we know that there exist kernel mean embeddings µPXY , µPXZX ∈ H that satisfy\n〈f, µPXY − µPXZX 〉H = 〈f, µPXY 〉H − 〈f, µPXZX 〉H = EPX ,Y f(PX , Y )− EPX ,ZX f(PX , ZX)\nfor all f ∈ H. Hence by the definition of the dual norm\nCEFk = sup f∈Fk ∣∣EPX ,Y f(PX , Y )− EPX ,ZX f(PX , ZX)∣∣ = sup f∈Fk\n∣∣〈f, µPX ,Y − µPX ,ZX 〉H∣∣ = ‖µPX ,Y − µPX ,ZX‖H, which implies SKCEk = 〈µPXY − µPXZX , µPXY − µPXZX 〉H. From Lemma B.1 we obtain\nSKCEk = EPX ,Y,PX′ ,Y ′ k ( (PX , Y ), (PX′ , Y ′) ) − 2EPX ,Y,PX′ ,ZX′ k ( (PX , Y ), (PX′ , Z ′ X) )\n+ EPX ,ZX ,PX′ ,Z′X k ( (PX , ZX), (PX′ , Z ′ X) ) ,\nwhich yields the desired result.\nRecall that (PX1 , Y1), . . . , (PXn , Yn) is a validation data set that is sampled i.i.d. according to the law of (PX , Y ) and that for all (p, y), (p′, y′) ∈ P × Y\nh((p, y), (p′, y′)) := k((p, y), (p′, y′))− EZ∼p k((p, Z), (p′, y′)) − EZ′∼p′ k((p, y), (p′, Z ′)) + EZ∼p,Z′∼p′ k((p, Z), (p′, Z ′)).\nLemma B.3. For all i, j = 1, . . . , n,∣∣h((PXi , Yi), (PXj , Yj))∣∣ <∞ almost surely.\nProof. Let i, j ∈ {1, . . . , n}. By assumption (K2) we know that∣∣k((PXi , Yi), (PXj , Yj))∣∣ ≤ k1/2((PXi , Yi), (PXi , Yi))k1/2((PXj , Yj), (PXj , Yj)) <∞ almost surely. Moreover,∣∣EZXi k((PXi , ZXi), (PXj , Yj))∣∣ ≤ EZXi ∣∣k((PXi , ZXi), (PXj , Yj))∣∣\n≤ EZXi\n( k1/2 ( (PXi , ZXi), (PXi , ZXi) ) k1/2 ( (PXj , Yj), (PXj , Yj) )) <∞\nalmost surely, and similarly ∣∣EZXi ,ZXj k((PXi , ZXi), (PXj , ZXj ))∣∣ <∞ almost surely. Thus∣∣h((PXi , Yi), (PXj , Yj))∣∣ ≤ ∣∣k((PXi , Yi), (PXj , Yj))∣∣+ ∣∣EZXi k((PXi , ZXi), (PXj , Yj))∣∣\n+ ∣∣EZXj k((PXi , Yi), (PXj , ZXj ))∣∣+ ∣∣EZXi ,ZXj k((PXi , ZXi), (PXj , ZXj ))∣∣ <∞\nalmost surely.\nLemma 1. The plug-in estimator of SKCEk is non-negatively biased. It is given by\nŜKCEk = 1\nn2 n∑ i,j=1 h ( (PXi , Yi), (PXj , Yj) ) .\nProof. From Lemma B.2 we know that KCEk < ∞, and Lemma B.3 implies that ŜKCEk < ∞ almost surely.\nFor i = 1, . . . , n, the linear operators Tif := EZXi f(PXi , ZXi) for f ∈ H are bounded almost surely since\n|Tif | = ∣∣EZXi f(PXi , ZXi)∣∣ ≤ EZXi ∣∣f(PXi , ZXi)∣∣ = EZXi ∣∣〈k((PXi , ZXi), ·), f〉H∣∣\n≤ EZXi (∥∥k((PXi , ZXi), ·)∥∥H‖f‖H) = ‖f‖H EZXi k1/2((PXi , ZXi), (PXi , ZXi)). 
Hence Riesz representation theorem implies that there exist ρi ∈ H such that Tif = 〈f, ρi〉H almost surely. From the reproducing property ofH we deduce that ρi(p, y) = 〈k ( (p, y), · ) , ρi〉H =\nEZXi k ( (p, y), (PXi , ZXi) ) for all (p, y) ∈ P × Y almost surely.\nThus by the definition of the dual norm the plug-in estimator K̂CEk satisfies\nK̂CEk = sup f∈Fk\n1\nn ∣∣∣∣∣ n∑ i=1 ( f(PXi , Yi)− EZXi f(PXi , ZXi) )∣∣∣∣∣ = sup f∈Fk 1 n ∣∣∣∣∣ n∑ i=1 〈 k ( (PXi , Yi), · ) − ρi, f 〉 H\n∣∣∣∣∣ = sup f∈Fk 1 n ∣∣∣∣∣ 〈 n∑ i=1 ( k ( (PXi , Yi), · ) − ρi ) , f 〉 H\n∣∣∣∣∣ = 1\nn ∥∥∥∥∥ n∑ i=1 ( k ( (Gi, Yi), · ) − ρi )∥∥∥∥∥ H\n= 1\nn (〈 n∑ i=1 k ( (PXi , Yi), · ) − ρi, n∑ i=1 k ( (PXi , Yi), · ) − ρi 〉 H )1/2\n= 1\nn\n( n∑\ni,j=1\nh ( (PXi , Yi), (PXj , Yj) ))1/2 = ŜKCE 1/2 k <∞\nalmost surely, and hence indeed ŜKCE 1/2\nk is the plug-in estimator of KCEk.\nSince (PX , Y ), (PX′ , Y ′), (PX1 , Y1), . . . , (PXn , Yn) are identically distributed and pairwise independent, we obtain\nn2 E ŜKCEk = n∑\ni,j=1, i 6=j\nEPXi ,Yi,PXj ,Yj h ( (PXi , Yi), (PXj , Yj) )\n+ n∑ i=1 EPXi ,Yi h ( (PXi , Yi), (PXi , Yi) ) = n(n− 1)EPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) + nEPX ,Y h ( (PX , Y ), (PX , Y )\n) = n(n− 1)SKCEk + nEPX ,Y h ( (PX , Y ), (PX , Y ) ) .\n(B.1)\nWith the same reasoning as above, there exist ρ, ρ′ ∈ H such that for all f ∈ H EZX f(PX , ZX) = 〈f, ρ〉H and EZX′ f(PX′ , ZX′) = 〈f, ρ ′〉H almost surely. Thus we obtain\nh ( (PX , Y ), (PX′ , Y ′) ) = 〈k ( (PX , Y ), · ) − ρ, k ( (PX′ , Y ′), · ) − ρ′〉H\nalmost surely, and therefore by Lemma B.2 and the Cauchy-Schwarz inequality SKCEk = EPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) )\n= EPX ,Y,PX′ ,Y ′〈k ( (PX , Y ), · ) − ρ, k ( (G′, Y ′), · ) − ρ′〉H\n≤ EPX ,Y,PX′ ,Y ′ ∣∣〈k((PX , Y ), ·)− ρ, k((PX′ , Y ′), ·)− ρ′〉H∣∣\n≤ EPX ,Y,PX′ ,Y ′ ∥∥k((PX , Y ), ·)− ρ∥∥H∥∥k((PX′ , Y ′), ·)− ρ′∥∥H\n≤ E1/2PX ,Y ∥∥k((PX , Y ), ·)− ρ∥∥2H E1/2PX′ ,Y ′ ∥∥k((PX′ , Y ′), ·)− ρ′∥∥2H.\nSince (PX , Y ) and (PX′ , Y ′) are identically distributed, we obtain\nSKCEk ≤ EPX ,Y ∥∥k((PX , Y ), ·)− ρ∥∥2H = EPX ,Y h((PX , Y ), (PX , Y )).\nThus together with Eq. (B.1) we get\nn2 E ŜKCEk ≥ n(n− 1)SKCEk + nSKCEk = n2SKCEk,\nand hence ŜKCEk has a non-negative bias.\nLemma 2. The block estimator of SKCEk with block size B ∈ {2, . . . , n}, given by\nŜKCEk,B :=\n⌊ n\nB ⌋−1 bn/Bc∑ b=1 ( B 2 )−1 ∑ (b−1)B<i<j≤bB h ( (PXi , Yi), (PXj , Yj) ) ,\nis an unbiased estimator of SKCEk.\nProof. From Lemma B.2 we know that SKCEk <∞, and Lemma B.3 implies that ŜKCEk,B <∞ almost surely.\nFor b ∈ {1, . . . , bn/Bc}, let\nη̂b :=\n( B\n2 )−1 ∑ (b−1)B<i<j≤bB h ( (PXi , Yi), (PXj , Yj) ) (B.2)\nbe the estimator of the bth block. From Lemma B.3 it follows that η̂b <∞ almost surely for all b. Moreover, for all b, η̂b is a so-called U-statistic of SKCEk and hence satisfies E η̂b = SKCEk (see, e.g., van der Vaart, 1998). Since (PX1 , Y1), . . . , (PXn , Yn) are pairwise independent, this implies that ŜKCEk,B is an unbiased estimator of SKCEk." }, { "heading": "B.4 CALIBRATION TESTS", "text": "Lemma B.4. Let B ∈ {2, . . . , n}. If VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) < ∞, then for all b ∈ {1, . . . , bn/Bc}\nV η̂b = σ2B := ( B\n2\n)−1( 2(B − 2)ζ1 + VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) )) ,\nwhere η̂b is defined according to Eq. (B.2) and\nζ1 := EPX ,Y E 2 PX′ ,Y\n′ h ( (PX , Y ), (PX′ , Y ′) ) − SKCE2k. (B.3)\nIf model P is calibrated, it simplifies to\nσ2B =\n( B\n2\n)−1 EPX ,Y,PX′ ,Y ′ h 2 ( (PX , Y ), (PX′ , Y ′) ) .\nProof. Let b ∈ {1, . . . , bn/Bc}. 
Since VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) < ∞, the CauchySchwarz inequality implies V η̂b <∞ as well. As mentioned in the proof of Lemma 2 above, η̂b is a U-statistic of SKCEk. From the general formula of the variance of a U-statistic (see, e.g., Hoeffding, 1948, p. 298–299) we obtain\nV η̂b = ( B\n2\n)−1(( 2\n1 )( B − 2 2− 1 ) ζ1 + ( 2 2 )( B − 2 2− 2 ) VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ))\n=\n( B\n2\n)−1( 2(B − 2)ζ1 + VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) )) ,\nwhere ζ1 = EPX ,Y E 2 PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) − SKCE2k.\nIf model P is calibrated, then (PX , Y ) d = (PX , Z), and hence for all (p, y) ∈ P × Y EPX ,Y h ( (p, y), (PX , Y ) ) = EPX ,Y k ( (p, y), (PX , Y ) ) − EZ′∼p EPX ,Y k ( (p, Z ′), (PX , Y ) ) − EPX ,Z k ( (p, y), (PX , Z) ) + EZ′∼p EPX ,Z k ( (p, Z ′), (PX , Y )\n) = 0.\nThis implies ζ1 = EPX ,Y E 2 PX′ ,Y\n′ h ( (PX , Y ), (PX′ , Y ′) )\n= 0 and SKCE2k = 0 due to Lemma B.2. Thus\nσ2B =\n( B\n2\n)−1 EPX ,Y,PX′ ,Y ′ h 2 ( (PX , Y ), (PX′ , Y ′) ) ,\nas stated above. Corollary B.1. Let B ∈ {2, . . . , n}. If VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) <∞, then\nV ŜKCEk,B = bn/Bc−1σ2B .\nwhere σ2B is defined according to Lemma B.4.\nProof. Since the estimators η̂1, . . . , η̂bn/Bc in each block are pairwise independent, this is an immediate consequence of Lemma B.4.\nCorollary B.2. Let B ∈ {2, . . . , n}. If VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) <∞, then√\nbn/Bc ( ŜKCEk,B − SKCEk ) d−→ N (0, σ2B) as n→∞, where block size B is fixed and σ2B is defined according to Lemma B.4.\nProof. The result follows from Lemma 2, Lemma B.4, and the central limit theorem (see, e.g., Serfling, 1980, Theorem A in Section 1.9).\nRemark B.1. Corollary B.2 shows that ŜKCEk,B is a consistent estimator of SKCEk in the large sample limit as n → ∞ with fixed number B of samples per block. In particular, for the linear estimator with B = 2 we obtain√\nbn/2c ( ŜKCEk,2 − SKCEk ) d−→ N (0, σ22) as n→∞. Moreover, Lemma B.4 and Corollary B.2 show that the p-value of the null hypothesis that model P is calibrated can be estimated by\nΦ ( − √ bn/BcŜKCEk,B\nσ̂B\n) ,\nwhere Φ is the cumulative distribution function of the standard normal distribution and σ̂B is the empirical standard deviation of the block estimates η̂1, . . . , η̂bn/Bc, and\nΦ ( − √ bn/BcB(B − 1)ŜKCEk,B√\n2σ̂\n) ,\nwhere σ̂2 is an estimate of EPX ,Y,PX′ ,Y ′ h 2 ( (PX , Y ), (PX′ , Y ′) ) . Similar p-value approximations for the two-sample test with blocks of fixed size were used by Chwialkowski et al. (2015).\nCorollary B.3. Assume VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) <∞. Let s ∈ {1, . . . , bn/2c}. Then for all b ∈ {1, . . . , s} √ B ( η̂b − SKCEk\n) d−→ N (0, 4ζ1) as B →∞, (B.4) where η̂b is defined according to Eq. (B.2) with n = Bs, the number s of equally-sized blocks is fixed, and ζ1 is defined according to Eq. (B.3).\nIf model P is calibrated, then √ B ( η̂b − SKCEk ) = √ Bη̂b is asymptotically tight since ζ1 = 0, and\nBη̂b d−→ ∞∑ i=1 λi(Zi − 1) as B →∞, (B.5)\nwhere Zi are independent χ21 distributed random variables and λi ∈ R are eigenvalues of the Hilbert-Schmidt integral operator\nKf(p, y) := EPX ,Y ( h((p, y), (PX , Y ))f(PX , Y ) ) for Borel-measurable functions f : P × Y → R with EPX ,Y f2(PX , Y ) <∞.\nProof. Let s ∈ {1, . . . , bn/2c} and b ∈ {1, . . . , s}. As mentioned above in the proof of Lemma 2, the estimator η̂b, defined according to Eq. (B.2), is a so-called U-statistic of SKCEk (see, e.g., van der Vaart, 1998). Thus Eq. 
(B.4) follows from the asymptotic behaviour of U-statistics (see, e.g., van der Vaart, 1998, Theorem 12.3).\nIf P is calibrated, then we know from the proof of Lemma B.4 that ζ1 = 0, and hence η̂b is a so-called degenerate- U-statistic (see, e.g., van der Vaart, 1998, Section 12.3). From the theory of degenerate U-statistics it follows that the sequence Bη̂b converges in distribution to the limit distribution in Eq. (B.5), which is known as Gaussian chaos.\nCorollary B.4. Assume VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) <∞. Let s ∈ {1, . . . , bn/2c}. Then\n√ B ( ŜKCEk,B − SKCEk ) d−→ N (0, 4s−1ζ1) as B →∞, where the number s of equally-sized blocks is fixed, n = Bs, and ζ1 is defined according to Eq. (B.3).\nIf model P is calibrated, then √ B ( ŜKCEk,B − SKCEk ) = √ BŜKCEk,B is asymptotically tight since ζ1 = 0, and\nBŜKCEk,B d−→ s−1 ∞∑ i=1 λi(Zi − s) as B →∞,\nwhere Zi are independent χ2s distributed random variables and λi ∈ R are eigenvalues of the Hilbert-Schmidt integral operator\nKf(p, y) := EPX ,Y ( h((p, y), (PX , Y ))f(PX , Y ) ) for Borel-measurable functions f : P × Y → R with EPX ,Y f2(PX , Y ) <∞.\nProof. Since the estimators η̂1, . . . , η̂s in each block are pairwise independent, this is an immediate consequence of Corollary B.3.\nRemark B.2. Corollary B.4 shows that ŜKCEk,B is a consistent estimator of SKCEk in the large sample limit as B →∞ with fixed number bn/Bc of blocks. Moreover, for the minimum variance unbiased estimator with B = n, Corollary B.4 shows that under the null hypothesis that model P is calibrated\nnŜKCEk,n d−→ ∞∑ i=1 λi(Zi − 1) as n→∞,\nwhere Zi are independent χ21 distributed random variables. Unfortunately quantiles of the limit distribution of ∑∞ i=1 λi(Zi − 1) (and hence the p-value of the null hypothesis that model P is calibrated) can not be computed analytically but have to be estimated by, e.g., bootstrapping (Arcones & Giné, 1992), using a Gram matrix spectrum (Gretton et al., 2009), fitting Pearson curves (Gretton et al., 2007), or using a Gamma approximation (Johnson et al., 1994, p. 343, p. 359).\nCorollary B.5. Assume VPX ,Y,PX′ ,Y ′ h ( (PX , Y ), (PX′ , Y ′) ) <∞. Then√\nbn/BcB ( ŜKCEk,B − SKCEk ) d−→ N (0, 4ζ1) as B →∞ and bn/Bc → ∞, (B.6) where B is the block size and s is the number of equally-sized blocks, n = Bs, and ζ1 is defined according to Eq. (B.3).\nIf model P is calibrated, then √ bn/BcB ( ŜKCEk,B − SKCEk ) = √ bn/BcBŜKCEk,B is asymptotically tight since ζ1 = 0, and√ bn/BcBŜKCEk,B d−→ N ( 0,\n∞∑ i=1 λ2i ) as B →∞ and bn/Bc → ∞,\nwhere λi ∈ R are eigenvalues of the Hilbert-Schmidt integral operator Kf(p, y) := EPX ,Y ( h((p, y), (PX , Y ))f(PX , Y ) ) for Borel-measurable functions f : P × Y → R with EPX ,Y f2(PX , Y ) <∞.\nProof. The result follows from Corollary B.3 and the central limit theorem (see, e.g., Serfling, 1980, Theorem A in Section 1.9).\nRemark B.3. Corollary B.5 shows that ŜKCEk,B is a consistent estimator of SKCEk in the large sample limit as B → ∞ and bn/Bc → ∞, i.e., as both the number of samples per block and the number of blocks go to infinity. Moreover, Corollaries B.3 and B.5 show that the p-value of the null hypothesis that P is calibrated can be estimated by\nΦ ( − √ bn/BcŜKCEk,B\nσ̂B\n) ,\nwhere σ̂B is the empirical standard deviation of the block estimates η̂1, . . . , η̂bn/Bc. Similar p-value approximations for the two-sample problem with blocks of increasing size were proposed and applied by Zaremba et al. (2013)." 
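To make the preceding remarks concrete, the sketch below implements the block estimator ŜKCEk,B and the normal-approximation p-value of Remarks B.1 and B.3, specialized to univariate normal predictions and the tensor product kernel used in the experiments (kP = exp(−W2) and Gaussian kY with γ = 1/2), for which the ZX-expectations in h are available in closed form (cf. Appendix D.1). It is a NumPy/SciPy illustration under these assumptions, with our own names, not the authors' reference implementation.

```python
# A sketch of the unbiased block estimator and the Remark B.1/B.3 p-value,
# assuming univariate normal predictions N(m, s^2) given as (m, s) tuples.
import numpy as np
from scipy.stats import norm
from itertools import combinations

def h(p, y, p2, y2):
    """Kernel h((p, y), (p2, y2)); all Z-expectations are analytic here."""
    (m, s), (m2, s2) = p, p2
    kP = np.exp(-np.sqrt((m - m2) ** 2 + (s - s2) ** 2))
    def e(a, b, v):  # E exp(-(A - b)^2 / 2) for A ~ N(a, v)
        return np.exp(-((a - b) ** 2) / (2.0 * (1.0 + v))) / np.sqrt(1.0 + v)
    kY = np.exp(-((y - y2) ** 2) / 2.0)
    ezz = (np.exp(-((m - m2) ** 2) / (2.0 * (1.0 + s ** 2 + s2 ** 2)))
           / np.sqrt(1.0 + s ** 2 + s2 ** 2))     # E k_Y(Z, Z2)
    return kP * (kY - e(m2, y, s2 ** 2) - e(m, y2, s ** 2) + ezz)

def skce_blocks(preds, ys, B):
    """Block estimates eta_b of SKCE_k for consecutive blocks of size B."""
    etas = []
    for b in range(len(ys) // B):
        idx = range(b * B, (b + 1) * B)
        etas.append(np.mean([h(preds[i], ys[i], preds[j], ys[j])
                             for i, j in combinations(idx, 2)]))
    return np.asarray(etas)

def calibration_pvalue(preds, ys, B):
    """Phi(-sqrt(floor(n/B)) * SKCE_hat / sigma_hat_B); needs >= 2 blocks."""
    etas = skce_blocks(preds, ys, B)
    return norm.cdf(-np.sqrt(len(etas)) * np.mean(etas) / np.std(etas, ddof=1))

# Demo: data from a calibrated model, i.e. Y ~ N(m, s^2) = P_X itself.
rng = np.random.default_rng(0)
ms, ss = rng.uniform(size=1024), np.full(1024, 0.1)
preds, ys = list(zip(ms, ss)), rng.normal(ms, ss)
print(calibration_pvalue(preds, ys, B=32))
```

The bootstrap route of Remark B.2 would instead approximate the quantiles of the limit distribution of nŜKCEk,n by resampling.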
}, { "heading": "C CALIBRATION MEAN EMBEDDING", "text": "" }, { "heading": "C.1 DEFINITION", "text": "Similar to the unnormalized mean embedding (UME) proposed by Chwialkowski et al. (2015) in the standard MMD setting, instead of the calibration error CEFk = ‖µPXY − µPXZX‖H we can consider the unnormalized calibration mean embedding (UCME).\nDefinition C.1. Let J ∈ N. The unnormalized calibration mean embedding (UCME) for kernel k with J test locations is defined as the random variable\nUCME2k,J = J −1 J∑ j=1 ( µPXY (Tj)− µPXZX (Tj) )2 = J−1\nJ∑ j=1 ( EPX ,Y k(Tj , (PX , Y ))− EPX ,ZX k(Tj , (PX , ZX)) )2 ,\nwhere T1, . . . , TJ are i.i.d. random variables (so-called test locations) whose distribution is absolutely continuous with respect to the Lebesgue measure on P × Y . As mentioned above, in many machine learning applications we actually have P × Y ⊂ Rd (up to some isomorphism). In such a case, if k is an analytic, integrable, characteristic kernel, then for each J ∈ N UCMEk,J is a random metric between the distributions of (PX , Y ) and (PX , ZX), as shown by Chwialkowski et al. (2015, Theorem 2). In particular, this implies that UCMEk,J = 0 almost surely if and only if the two distributions are equal." }, { "heading": "C.2 ESTIMATION", "text": "Again we assume (PX1 , Y1), . . . , (PXn , Yn) is a validation data set of predictions and targets, which are i.i.d. according to the law of (PX , Y ). The consistent, but biased, plug-in estimator of UCME2k,J is given by\nÛCME 2\nk,J = J −1 J∑ j=1\n( n−1\nn∑ i=1 ( k ( Tj , (PXi , Yi) ) − EZXi k ( Tj , (PXi , ZXi) )))2 ." }, { "heading": "C.3 CALIBRATION MEAN EMBEDDING TEST", "text": "As Chwialkowski et al. (2015) note, if model P is calibrated, for every fixed sequence of unique test locations √ nÛCME 2\nk,J converges in distribution to a sum of correlated χ 2 random variables,\nas n→∞. The estimation of this asymptotic distribution, and its quantiles required for hypothesis testing, requires a bootstrap or permutation procedure, which is computationally expensive. Hence Chwialkowski et al. (2015) proposed the following test based on Hotelling’s T 2-statistic (Hotelling, 1931).\nFor i = 1, . . . , n, let\nZi := k ( T1, (PXi , Yi) ) − EZXi k ( T1, (PXi , ZXi) ) ...\nk ( TJ , (PXi , Yi) ) − EZXi k ( TJ , (PXi , ZXi)\n) ∈ RJ ,\nand denote the empirical mean and covariance matrix of Z1, . . . , Zn by Z and S, respectively. If UCMEk,J is a random metric between the distributions of (PX , Y ) and (PX , ZX), then the test statistic\nQn := nZ T S−1Z\nis almost surely asymptotically χ2 distributed with J degrees of freedom if model P is calibrated, as n → ∞ with J fixed; moreover, if model P is uncalibrated, then for any fixed r ∈ R almost surely P(Qn > r)→ 1 as n→∞ (Chwialkowski et al., 2015, Proposition 2). We call the resulting calibration test calibration mean embedding (CME) test." }, { "heading": "D KERNEL CHOICE", "text": "A natural choice for the kernel k : (P × Y) × (P × Y) → R on the product space of predicted distributions P and targets Y is a tensor product kernel of the form k = kP ⊗ kY , i.e., a kernel of the form\nk ( (p, y), (p′, y′) ) = kP(p, p ′)kY(y, y ′),\nwhere kP : P × P → R and kY : Y × Y → R are kernels on the spaces of predicted distributions and targets, respectively.\nAs discussed in Section 3.1, if kernel k is characteristic, then the kernel calibration error KCEk of model P is zero if and only if P is calibrated. 
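Before refining this kernel choice, here is a sketch of the CME test from Appendix C.3 above, again specialized to univariate normal predictions and the tensor product kernel of the experiments; the asymptotic χ²_J null distribution yields the p-value directly. The function and variable names are ours, and this NumPy/SciPy code is an illustration under these assumptions rather than the authors' implementation.

```python
# A sketch of the CME test with test locations T_j = (q_j, t_j), where
# predictions q_j = N(mq, sq^2) and targets t_j are user-chosen.
import numpy as np
from scipy.stats import chi2

def cme_test(preds, ys, test_preds, test_ys):
    """Hotelling-type statistic Q_n and its asymptotic chi^2_J p-value."""
    n, J = len(ys), len(test_ys)
    Z = np.empty((n, J))
    for j, ((mq, sq), t) in enumerate(zip(test_preds, test_ys)):
        for i, ((m, s), y) in enumerate(zip(preds, ys)):
            kP = np.exp(-np.sqrt((m - mq) ** 2 + (s - sq) ** 2))
            kY = np.exp(-((y - t) ** 2) / 2.0)
            ekY = (np.exp(-((m - t) ** 2) / (2.0 * (1.0 + s ** 2)))
                   / np.sqrt(1.0 + s ** 2))  # E_{Z ~ N(m, s^2)} k_Y(t, Z)
            Z[i, j] = kP * (kY - ekY)        # k(T_j,(P_i,Y_i)) - E k(T_j,(P_i,Z_i))
    zbar = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False)              # empirical covariance of the Z_i
    Qn = n * zbar @ np.linalg.solve(S, zbar)
    return Qn, chi2.sf(Qn, df=J)             # asymptotically chi^2_J under H0
```

As observed in Appendix A.2, this test can converge slowly when only a few test locations are used.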
Unfortunately, as shown by Szabó & Sriperumbudur (2018, Example 1), even if kP and kY are characteristic, the tensor product kernel k = kP⊗kY might not be characteristic. However, when analyzing calibration, it is sufficient to be able to distinguish distributions for which the conditional distributions P(Y |PX) and P(ZX |PX) = PX are not equal almost surely. Thus it is sufficient if kY is characteristic and kP is non-zero almost surely.\nMany common kernels such as the Gaussian and Laplacian kernel on Rd are characteristic and can therefore be chosen as kernel kY for real-valued target spaces. The choice of kP might be less obvious since P is a space of probability distributions. Intuitively one might want to use kernels of the form\nkP ( p, p′ ) = exp ( − λdνP(p, p′) ) , (D.1)\nwhere dP : P × P → R is a metric on P and ν, λ > 0 are kernel hyperparameters. Kernels of this form would be a generalization of the Gaussian and Laplacian kernel, and would clearly be non-zero almost surely.\nUnfortunately, this construction does not necessarily yield valid kernels. Most prominently, the Wasserstein distance does not lead to valid kernels kP in general (Peyré & Cuturi, 2019, Chapter 8.3). However, if dP(·, ·) is a Hilbertian metric, i.e., a metric of the form\ndP(p, p ′) = ∥∥φ(p)− φ(p′)∥∥ H\nfor some Hilbert space H and mapping φ : P → H , then kP in Eq. (D.1) is a valid kernel for all λ > 0 and ν ∈ (0, 2] (Berg et al., 1984, Corollary 3.3.3, Proposition 3.2.7)." }, { "heading": "D.1 NORMAL DISTRIBUTIONS", "text": "Assume that Y = Rd and P = {N (µ,Σ): µ ∈ Rd,Σ ∈ Rd×d psd}, i.e., the model outputs normal distributions PX = N (µX ,ΣX). The distribution of these outputs is defined by the distribution of their mean µX and covariance matrix ΣX .\nLet Px = N (µx,Σx) ∈ P , y ∈ Y = Rd, and γ > 0. We obtain EZx∼Px exp ( − γ‖Zx − y‖22 ) = ∣∣Id + 2γΣx∣∣−1/2 exp(− γ(µx − y)T(Id + 2γΣx)−1(µx − y))\nfrom Mathai & Provost (1992, Theorem 3.2.a.3). In particular, if Σx = diag(Σx,1, . . . ,Σx,d), then EZx∼Px exp ( − γ‖Zx − y‖22 ) =\nd∏ i=1 [( 1 + 2γΣx,i )−1/2 exp ( − γ ( 1 + 2γΣx,i )−1( µx,i − yi )2)] .\nLet Px′ = N (µx′ ,Σx′) be another normal distribution. Then we have EZx∼Px,Zx′∼Px′ exp ( − γ‖Zx − Zx′‖22 ) = ∣∣Id + 2γΣx∣∣−1/2 EZx′∼Px′ exp(− γ(µx − Zx′)T(Id + 2γΣx)−1(µx − Zx′))\n= ∣∣Id + 2γ(Σx + Σx′)∣∣−1/2 exp(− γ(µx − µx′)T(Id + 2γ(Σx + Σx′))−1(µx − µx′)).\nThus if Σx = diag(Σx,1, . . . ,Σx,d) and Σx′ = diag ( Σx′,1, . . . ,Σx′,d ) , then\nEZx∼Px,Zx′∼Px′ exp ( − γ‖Zx − Zx′‖22 ) =\nd∏ i=1 [( 1 + 2γ(Σx,i + Σx′,i) )−1/2 exp ( − γ ( 1 + 2γ(Σx,i + Σx′,i) )−1( µx,i − µx′,i )2)] .\nHence we see that a Gaussian kernel kY(y, y ′) = exp ( − γ‖y − y′‖22 ) with inverse length scale γ > 0 on the space of targets Y = Rd allows us to compute EZx∼Px kY(Zx, y) and EZx∼Px,Zx′∼Px′ kY(Zx, Zx′) analytically. Moreover, the Gaussian kernel is characteristic on Rd (Fukumizu et al., 2008). Hence, as discussed above, by choosing a kernel kP that is non-zero almost surely we can guarantee that KCEk = 0 if and only if model P is calibrated.\nOn the space of normal distributions, the 2-Wasserstein distance with respect to the Euclidean distance between Px = N (µx,Σx) and Px′ = N (µx′ ,Σx′) is given by\nW 22 ( Px, Px′ ) = ‖µx − µx′‖22 + Tr ( Σx + Σx′ − 2 ( Σx′ 1/2ΣxΣx′ 1/2 )1/2) ,\nwhich can be simplified to W 22 ( Px, Px′ ) = ∥∥µx − µx′∥∥22 + ∥∥∥Σ1/2x − Σ1/2x′ ∥∥∥2Frob, if ΣxΣx′ = Σx′Σx. This shows that the 2-Wasserstein distance is a Hilbertian metric on the space of normal distributions. 
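The closed-form expectations derived above translate directly into code. The following NumPy sketch handles general (not necessarily diagonal) covariance matrices; the names are ours, inputs are NumPy arrays, and gamma denotes the inverse length scale of the Gaussian kernel.

```python
# A sketch of the closed-form Gaussian-kernel expectations for multivariate
# normal predictions, following the formulas above.
import numpy as np

def e_kernel(mu, Sigma, y, gamma):
    """E_{Z ~ N(mu, Sigma)} exp(-gamma * ||Z - y||^2)."""
    mu, y = np.asarray(mu, float), np.asarray(y, float)
    M = np.eye(len(mu)) + 2.0 * gamma * np.asarray(Sigma, float)
    quad = (mu - y) @ np.linalg.solve(M, mu - y)
    return np.exp(-gamma * quad) / np.sqrt(np.linalg.det(M))

def e_kernel_pair(mu, Sigma, mu2, Sigma2, gamma):
    """E exp(-gamma * ||Z - Z2||^2) for independent Z ~ N(mu, Sigma) and
    Z2 ~ N(mu2, Sigma2); uses Z - Z2 ~ N(mu - mu2, Sigma + Sigma2)."""
    d = np.asarray(mu, float) - np.asarray(mu2, float)
    return e_kernel(d, np.asarray(Sigma, float) + np.asarray(Sigma2, float),
                    np.zeros_like(d), gamma)
```

The diagonal-covariance products in the display above are the special case in which M is diagonal.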
Hence as discussed above, the choice\nkP ( Px, Px′ ) = exp ( − λW ν2 (Px, Px′) ) yields a valid kernel for all λ > 0 and ν ∈ (0, 2]. Thus for all λ, γ > 0 and ν ∈ (0, 2]\nk ( (p, y), (p′, y′) ) = exp ( − λW ν2 (p, p′) ) exp ( − γ‖y − y′‖22 ) is a valid kernel on the product space P × Y of normal distributions on Rd and Rd that allows to evaluate h ( (p, y), (p′, y′) ) analytically and guarantees that KCEk = 0 if and only if model P is calibrated." }, { "heading": "D.2 LAPLACE DISTRIBUTIONS", "text": "Assume that Y = R and P = {L(µ, β) : µ ∈ R, β > 0}, i.e., the model outputs Laplace distributions PX = L(µX , βX) with probability density function\npX(y) = 1\n2βX exp\n( − β−1X |y − µX | ) for y ∈ Y = R. The distribution of these outputs is defined by the distribution of their mean µX and scale parameter βX .\nLet Px = L(µx, βx) ∈ P , y ∈ Y = R, and γ > 0. If βx 6= γ−1, we have EZx∼Px exp ( − γ|Zx − y| ) = ( β2xγ 2 − 1 )−1( βxγ exp ( − β−1x |µx − y| ) − exp ( − γ|µx − y| )) .\nAdditionally, if βx = γ−1, the dominated convergence theorem implies EZx∼Px exp ( − γ|Zx − y| ) = lim γ→β−1x ( β2xγ 2 − 1 )−1( βxγ exp ( − β−1x |µx − y| ) − exp ( − γ|µx − y|\n)) = 1\n2\n( 1 + γ|µx − y| ) exp ( − γ|µx − y| ) .\nLet Px′ = L(µx′ , βx′) be another Laplace distribution. If βx 6= γ−1, βx′ 6= γ−1, and βx 6= βx′ , we obtain\nEZx∼Px,Zx′∼Px′ exp ( − γ|Zx − Zx′ | ) =\nγβ3x (β2xγ 2 − 1)(β2x − βx′ 2)\nexp ( − β−1x |µx − µx′ | ) + γβ3x′\n(β2x′γ 2 − 1)(βx′2 − β2x)\nexp ( − βx′−1|µx − µx′ | ) + 1\n(β2xγ 2 − 1)(βx′2γ2 − 1)\nexp ( − γ|µx − µx′ | ) .\nAs above, all other possible cases can be deduced by applying the dominated convergence theorem. More concretely,\n• if βx = βx′ = γ−1, then\nEZx∼Px,Zx′∼Px′ exp ( − γ|Zx − Zx′ | ) = 1\n8\n( 3 + 3γ|µx − µx′ |+ γ2|µx − µx′ |2 ) exp ( − γ|µx − µx′ | ) ,\n• if βx = βx′ and βx 6= γ−1, then\nEZx∼Px,Zx′∼Px′ exp ( − γ|Zx − Zx′ | ) =\n1\n(β2xγ 2 − 1)2\nexp ( − γ|µx − µx′ | ) + ( γ ( βx + |µx − µx′ |\n) 2(β2xγ 2 − 1) − βxγ (β2xγ 2 − 1)2 ) exp ( − β−1x |µx − µx′ | ) ,\n• if βx 6= βx′ and βx = γ−1, then\nEZx∼Px,Zx′∼Px′ exp ( − γ|Zx − Zx′ | ) =\nβx′ 3γ3\n(βx′ 2γ2 − 1)2\nexp ( − βx′−1|µx − µx′ | ) − (\n1 + γ|µx − µx′ | 2(βx′ 2γ2 − 1) +\nβx′ 2γ2\n(βx′ 2γ2 − 1)2\n) exp ( − γ|µx − µx′ | ) ,\n• and if βx 6= βx′ and βx′ = γ−1, then\nEZx∼Px,Zx′∼Px′ exp ( − γ|Zx − Zx′ | ) =\nβ3xγ 3\n(β2xγ 2 − 1)2\nexp ( − β−1x |µx − µx′ | ) − (\n1 + γ|µx − µx′ | 2(β2xγ 2 − 1) +\nβ2xγ 2\n(β2xγ 2 − 1)2\n) exp ( − γ|µx − µx′ | ) .\nThe calculations above show that by choosing a Laplacian kernel kY ( y, y′ ) = exp ( − γ|y − y′| ) with inverse length scale γ > 0 on the space of targets Y = R, we can compute EZx∼Px kY(Zx, y) and EZx∼Px,Zx′∼Px′ kY(Zx, Zx′) analytically. Additionally, the Laplacian kernel is characteristic on R (Fukumizu et al., 2008).\nSince the Laplace distribution is an elliptically contoured distribution, we know from Gelbrich (1990, Corollary 2) that the 2-Wasserstein distance with respect to the Euclidean distance between Px = L(µx, βx) and Px′ = L(µx′ , βx′) can be computed in closed form and is given by\nW 22 ( Px, Px′ ) = (µx − µx′)2 + 2(βx − βx′)2.\nThus we see that the 2-Wasserstein distance is also a Hilbertian metric on the space of Laplace distributions, and hence\nkP ( Px, Px′ ) = exp ( − λW ν2 (Px, Px′) ) is a valid kernel for 0 < ν ≤ 2 and all λ > 0. 
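The case analysis above is easy to mistype, so a compact sketch of E_{Z∼L(μ,β)} exp(−γ|Z−y|), including the limit case βγ = 1, may be useful; it is a NumPy illustration with our own naming, not the authors' code.

```python
# A sketch of E_{Z ~ Laplace(mu, beta)} exp(-gamma * |Z - y|) from the
# case analysis above.
import numpy as np

def e_laplace(mu, beta, y, gamma):
    a = np.abs(mu - y)
    if np.isclose(beta * gamma, 1.0):
        # limit case beta = 1/gamma obtained via dominated convergence
        return 0.5 * (1.0 + gamma * a) * np.exp(-gamma * a)
    c = beta ** 2 * gamma ** 2 - 1.0
    return (beta * gamma * np.exp(-a / beta) - np.exp(-gamma * a)) / c
```

The pairwise expectation E exp(−γ|Zx − Zx′|) can be implemented analogously from the four cases listed above.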
Therefore, as discussed above, for all λ, γ > 0 and ν ∈ (0, 2]\nk ( (p, y), (p′, y′) ) = exp ( − λW ν2 (p, p′) ) exp ( − γ|y − y′| ) is a valid kernel on the product space P × Y of Laplace distributions and R that allows to evaluate h ( (p, y), (p′, y′) ) analytically and guarantees that KCEk = 0 if and only if model P is calibrated." }, { "heading": "D.3 PREDICTING MIXTURES OF DISTRIBUTIONS", "text": "Assume that the model predicts mixture distributions, possibly with different numbers of components. A special case of this setting are ensembles of models, in which each ensemble member predicts a component of the mixture model.\nLet p, p′ ∈ P with p = ∑ i πipi and p ′ = ∑ j π ′ jp ′ j , where π, π\n′ are histograms and pi, p′j are the mixture components. For kernel kY and y ∈ Y we obtain\nEZ∼p kY(Z, y) = ∑ i πi EW∼pi kY(Z, y)\nand EZ∼p,Z′∼p′ kY(Z,Z ′) = ∑ i,j πiπ ′ j EZ∼pi,Z′∼p′j kY(Z,Z ′).\nOf course, for these derivations to be meaningful, we require that they do not depend on the choice of histograms π, π′ and mixture components pi, p′j .\nDefinition D.1 (see Yakowitz & Spragins (1968)). A family P of finite mixture models is called identifiable if two mixtures p = ∑K i=1 πipi ∈ P and p′ = ∑K′ j=1 π ′ jp ′ j ∈ P , written such that all pi and all p′j are pairwise distinct, are equal if and only if K = K ′ and the indices can be reordered such that for all k ∈ {1, . . . ,K} there exists some k′ ∈ {1, . . . ,K} with πk = π′k′ and pk = p′k′ . Clearly, if P is identifiable, then the derivations above do not depend on the choice of histograms and mixture components. Prominent examples of identifiable mixture models are Gaussian mixture models and mixture models of families of products of exponential distributions (Yakowitz & Spragins, 1968).\nMoreover, similar to optimal transport for Gaussian mixture models by Chen et al. (2019; 2020); Delon & Desolneux (2020), we can consider metrics of the form\ninf w∈Π(π,π′) (∑ i,j wi,jc s(pi, p ′ j) )1/s ,\nwhere\nΠ(π, π′) = { w : ∑ i wi,j = π ′ j ∧ ∑ j wi,j = πi ∧ ∀i, j : wi,j ≥ 0 }\nare the couplings of π and π′, and c(·, ·) is a cost function between the components of the mixture model.\nTheorem D.1. Let P be a family of finite mixture models that is identifiable in the sense of Definition D.1, and let s ∈ [1,∞)." }, { "heading": "If d(·, ·) is a (Hilbertian) metric on the space of mixture components, then the Mixture Wasserstein", "text": "distance of order s defined by\nMWs(p, p ′) := inf\nw∈Π(π,π′) (∑ i,j wi,jd s(pi, p ′ j) )1/s , (D.2)\nis a (Hilbertian) metric on P .\nProof. First of all, note that for all p, p′ ∈ P an optimal coupling ŵ exists (Villani, 2009, Theorem 4.1). Moreover, ∑ i,j ŵi,jd s(pi, p ′ j) ≥ 0, and hence MWs(p, p′) exists. Moreover, since P is identifiable, we see that MWs(p, p′) does not depend on the choice of histograms and mixture components. Thus MWs is well-defined.\nClearly, for all p, p′ ∈ P we have MWs(p, p′) ≥ 0 and MWs(p, p′) = MWs(p′, p). Moreover,\nMWss(p, p) = min w∈Π(π,π) ∑ i,j wi,jd s(pi, pj) ≤ ∑ i,j πiδi,jd s(pi, pj)\n= ∑ i πid s(pi, pi) = ∑ i πi0 2 = 0,\nand hence MWs(p, p) = 0. On the other hand, let p, p′ ∈ P with optimal coupling ŵ with respect to π and π′, and assume that MWs(p, p′) = 0. We have\np = ∑ i πipi = ∑ i,j ŵi,jpi = ∑\ni,j : ŵi,j>0\nŵi,jpi.\nSince MWs(p, p′) = 0, we have ŵi,jds(pi, p′j) = 0 for all i, j, and hence d s(pi, p ′ j) = 0 if ŵi,j > 0. Since d is a metric, this implies pi = p′j if ŵi,j > 0. 
Thus we get\np = ∑\ni,j : ŵi,j>0\nŵi,jpi = ∑\ni,j : ŵi,j>0\nŵi,jp ′ j = ∑ i,j ŵi,jp ′ j = ∑ j π′jp ′ j = p ′.\nFunction MWs also satisfies the triangle inequality, following a similar argument as Chen et al. (2019). Let p(1), p(2), p(3) ∈ P and denote the optimal coupling with respect to π(1) and π(2) by ŵ(12), and the optimal coupling with respect to π(2) and π(3) by ŵ(23). Define w(13) by\nw (13) i,k := ∑ j : π\n(2) j 6=0\nŵ (12) i,j ŵ (23) j,k\nπ (2) j\n.\nClearly w(13)i,k ≥ 0 for all i, k, and we see that\n∑ i w (13) i,k = ∑ i ∑ j : π\n(2) j 6=0\nŵ (12) i,j ŵ (23) j,k\nπ (2) j\n= ∑\nj : π (2) j 6=0\n∑ i ŵ (12) i,j ŵ (23) j,k π (2) j\n= ∑\nj : π (2) j 6=0\nπ (2) j ŵ (23) j,k\nπ (2) j\n= ∑\nj : π (2) j 6=0\nŵ (23) j,k = π\n(3) − ∑\nj : π (2) j =0\nŵ (23) j,k\nfor all k. Since for all j, k, π(2)j ≥ ŵ (23) j,k , we know that π (2) j = 0 implies ŵ (23) j,k = 0 for all k. Thus for all k\n∑ i w (13) i,k = π (3).\nSimilarly we obtain for all i\n∑ k w (13) i,k = π (1).\nThus w(13) ∈ Π(π(1), π(3)), and therefore by exploiting the triangle inequality for metric d and the Minkowski inequality we get MWs ( p(1), p(3) ) ≤ (∑\ni,k\nw (13) i,k d\ns ( p\n(1) i , p (3) k\n))1/s = (∑ i,k ∑ j : π\n(2) j 6=0\nŵ (12) i,j ŵ (23) j,k\nπ (2) j\nds ( p\n(1) i , p (3) k\n))1/s\n≤ (∑\ni,k ∑ j : π\n(2) j 6=0\nŵ (12) i,j ŵ (23) j,k\nπ (2) j\n( d(p\n(1) i , p (2) j ) + d(p (2) j , p (3) k ) )s)1/s\n≤ (∑\ni,k ∑ j : π\n(2) j 6=0\nŵ (12) i,j ŵ (23) j,k\nπ (2) j\nds ( p\n(1) i , p (2) j\n))1/s\n+ (∑ i,k ∑ j : π\n(2) j 6=0\nŵ (12) i,j ŵ (23) j,k\nπ (2) j\nds ( p\n(2) j , p (3) k\n))1/s\n= (∑ i ∑ j : π\n(2) j 6=0\nŵ (12) i,j d\ns ( p\n(1) i , p (2) j\n))1/s\n+ (∑ k ∑ j : π\n(2) j 6=0\nŵ (23) i,k d\ns ( p\n(2) j , p (3) k\n))1/s\n≤ (∑\ni,j\nŵ (12) i,j d\ns ( p\n(1) i , p (2) j\n))1/s + (∑ j,k ŵ (23) i,k d s ( p (2) j , p (3) k ))1/s = MWs ( p(1), p(2) ) + MWs ( p(2), p(3) ) .\nThus MWs is a metric, and it is just left to show that it is Hilbertian if d is Hilbertian. Since d is a Hilbertian metric, there exists a Hilbert spaceH and a mapping φ such that\nd(x, y) = ‖φ(x)− φ(y)‖H. Let r1, . . . , rn ∈ R with ∑ i ri = 0 and p\n(1), . . . , p(n) ∈ P . Denote the optimal coupling with respect to π(i) and π(j) by ŵ(i,j). Then we have∑\ni,j rirj ∑ k,l ŵ (i,j) k,l ‖φ(p (i) k )‖ 2 H = ∑ i,k ri‖φ(p(i)k )‖ 2 H ∑ j rj ∑ l ŵ (i,j) k,l\n= ∑ i,k ri‖φ(p(i)k )‖ 2 H ∑ j rjπ (i) k\n= ∑ i,k riπ (i) k ‖φ(p (i) k )‖ 2 H ∑ j rj = 0,\n(D.3)\nand similarly ∑ i,j rirj ∑ k,l ŵ (i,j) k,l ‖φ(p (j) l )‖ 2 H = 0. (D.4)\nMoreover, for all k, l we get∑ i,j rirjŵ (i,j) k,l 〈 φ ( p (i) k ) , φ ( p (j) l )〉 H = 〈∑ i ri √ ŵ (i,j) k,l φ ( p (i) k ) , ∑ j rj √ ŵ (i,j) k,l φ ( p (j) l )〉 H\n= ∥∥∥∑\ni\nri\n√ ŵ\n(i,j) k,l φ\n( p\n(i) k )∥∥∥2 H ≥ 0,\nand hence ∑ i,j rirj ∑ k,l ŵ (i,j) k,l 〈 φ(p (i) k ), φ(p (j) l ) 〉 H ≥ 0, (D.5)\nand similarly ∑ i,j rirj ∑ k,l ŵ (i,j) k,l 〈 φ(p (j) l ), φ(p (i) k ) 〉 H ≥ 0. (D.6)\nHence from Eqs. (D.3) to (D.6) we get∑ i,j rirjMW s s(p (i), p(j)) = ∑ i,j rirj ∑ k,l ŵ (i,j) k,l d s ( p (i) k , p (j) l ) = ∑ i,j rirj ∑ k,l ŵ (i,j) k,l\n∥∥∥φ(p(i)k )− φ(p(j)l )∥∥∥2H = ∑ i,j rirj ∑ k,l ŵ (i,j) k,l\n∥∥∥φ(p(i)k )∥∥∥2H − ∑ i,j rirj ∑ k,l ŵ (i,j) k,l 〈 φ ( p (i) k ) , φ ( p (j) l )〉 H\n− ∑ i,j rirj ∑ k,l ŵ (i,j) k,l 〈 φ ( p (j) l ) , φ ( p (i) k )〉 H\n+ ∑ i,j rirj ∑ k,l ŵ (i,j) k,l ∥∥∥φ(p(j)l )∥∥∥2H ≤ 0,\nwhich shows that MWss is a negative definite kernel (Berg et al., 1984, Definition 3.1.1). 
Since 0 < 1/s <∞, MWs is a negative definite kernel as well (Berg et al., 1984, Corollary 3.2.10), which implies that metric MWs is Hilbertian (Berg et al., 1984, Proposition 3.3.2).\nHence we can lift a Hilbertian metric for the mixture components to a Hilbertian metric for the mixture models. For instance, if the mixture components are normal distributions, then the 2-Wasserstein distance with respect to the Euclidean distance is a Hilbertian metric for the mixture components. When we lift it to the space P of Gaussian mixture models we obtain the MW2 metric proposed by Chen et al. (2019; 2020); Delon & Desolneux (2020). As shown by Delon & Desolneux (2020), the discrete formulation of MW2 obtained by our construction is equivalent to the definition\nMW22(p, p ′) := inf\nγ∈Π(p,p′)∩GMM2n(∞) ∫ Rn×Rn d2(y, y′) dγ(y, y′) (D.7)\nfor two Gaussian mixtures p, p′ on Rn, where Π(p, p′) are the couplings of p and p′ (not of the histograms!) and GMM2n(∞) = ∪k≥0GMM2n(k) is the set of all finite Gaussian mixture distributions on R2n. The construction of the discrete formulation as a solution to a constrained optimization problem similar to Eq. (D.7) can be generalized to mixtures of t-distributions. However, it is not possible for arbitrary mixture models such as mixtures of generalized Gaussian distributions, even though they are elliptically contoured distributions (Deledalle et al., 2018; Delon & Desolneux, 2020).\nThe optimal coupling of the discrete histograms can be computed efficiently using techniques from linear programming and optimal transport theory such as the network simplex algorithm and the Sinkhorn algorithm. As discussed above, if metric dP is of the form in Eq. (D.2), functions of the form\nkP(p, p ′) = exp ( − λdνP(p, p′) ) are valid kernels on P for all λ > 0 and ν ∈ (0, 2]. Thus taken together, if kY is a characteristic kernel on the target space Y and d(·, ·) is a Hilbertian metric on the space of mixture components, then for all s ∈ [1,∞), λ > 0, and ν ∈ (0, 2]\nk ( (p, y), (p′, y′) ) = exp ( − λMWνs (p, p′) ) kY(y, y ′)\nis a valid kernel on the product space P×Y of mixture distributions and targets that allows to evaluate h ( (p, y), (p′, y′) ) analytically and guarantees that KCEk = 0 if and only if model P is calibrated." }, { "heading": "E CLASSIFICATION AS A SPECIAL CASE", "text": "We show that the calibration error introduced in Definition 2 is a generalization of the calibration error for classification proposed by Widmann et al. (2019). Their formulation of the calibration error is based on a weighted sum of class-wise discrepancies between the left hand side and right hand side of Definition 1, where the weights are output by a vector-valued function of the predictions. Hence their framework can only be applied to finite target spaces, i.e., if |Y| <∞. Without loss of generality, we assume that Y = {1, . . . , d} for some d ∈ N \\ {1}. In our notation, the previously defined calibration error, denoted by CCE (classification calibration error), with respect to a function space G ⊂ {f : P → Rd} is given by\nCCEG := sup g∈G ∣∣∣∣EPX (∑ y∈Y ( P(Y = y|PX)− PX({y}) ) gy ( PX ))∣∣∣∣.\nFor the function class\nF := { f : P × Y → R, (p, y) 7→ gy(p) ∣∣g ∈ G} we get\nCCEG = sup f∈F ∣∣EPX ,Y f(PX , Y )− EPX ,ZX f(PX , ZX)∣∣ = CEF . Similarly, for every function class F ⊂ {f : P × Y → R}, we can define the space\nG := { g : P → Rd, p 7→ ( f(p, 1), . . . 
, f(p, d) )T∣∣∣f ∈ F}, for which\nCEF = sup g∈G ∣∣∣∣EPX (∑ y∈Y ( P(Y = y|PX)− PX({y}) ) gy ( PX ))∣∣∣∣ = CCEG .\nThus both definitions are equivalent for classification models but the structure of the employed function classes differs. The definition of CCE is based on vector-valued functions on the probability simplex whereas the formulation presented in this paper uses real-valued function on the product space of the probability simplex and the targets.\nAn interesting theoretical aspect of this difference is that in the case of KCE we consider real-valued kernels on P × Y instead of matrix-valued kernels on P , as shown by the following comparison. By ei ∈ Rd we denote the ith unit vector, and for a prediction p ∈ P its representation vp ∈ Rd in the probability simplex is defined as\n(vp)y = p ( {y} )\nfor all targets y ∈ Y . Let k : (P × Y)× (P × Y)→ R. We define the matrix-valued function K : P × P → Rd×d by[\nK(p, p′) ] y,y′ = k ( (p, y), (p′, y′) ) for all y, y′ ∈ Y and p, p′ ∈ P . From the positive definiteness of kernel k it follows that K is a matrix-valued kernel (Micchelli & Pontil, 2005, Definition 2). We obtain\nSKCEk = EPX ,Y,PX′ ,Y ′ [ K(PX , PX′) ]" }, { "heading": "Y,Y ′", "text": "− 2EPX ,Y,PX′ ,ZX′ [ K(PX , PX′) ] Y,ZX′\n+ EPX ,ZX ,PX′ ,ZX′ [ K(PX , PX′) ] ZX ,ZX′\n= EPX ,Y,PX′ ,Y ′ e T YK(PX , PX′)eY ′ − 2EPX ,Y,PX′ ,Y ′ e T YK(PX , PX′)vPX′\n+ EPX ,Y,PX′ ,Y ′ v T PXK(PX , PX′)vPX′\n= EPX ,Y,PX′ ,Y ′ (eY − vPX ) T K(PX , PX′)(eY ′ − vPX′ ),\nwhich is exactly the result by Widmann et al. (2019) for matrix-valued kernels.\nAs a concrete example, Widmann et al. (2019) used a matrix-valued kernel of the form (p, p′) 7→ exp (−γ‖p− p′‖)Id in their experiments. In our formulation this corresponds to the real-valued tensor product kernel ( (p, y), (p′, y′) ) 7→ exp (−γ‖p− p′‖)δy,y′ ." }, { "heading": "F TEMPERATURE SCALING", "text": "Since many modern neural network models for classification have been demonstrated to be uncalibrated (Guo et al., 2017), it is of high practical interest being able to improve calibration of predictive models. Generally, one distinguishes between calibration techniques that are applied during training and post-hoc calibration methods that try to calibrate an existing model after training.\nTemperature scaling (Guo et al., 2017) is a simple calibration method for classification models with only one scalar parameter. Due to its simplicity it can trade off calibration of different classes (Kull et al., 2019), but conveniently it does not change the most-confident prediction and hence does not affect the accuracy of classification models with respect to the 0-1 loss.\nIn regression, common post-hoc calibration methods are based on quantile binning and hence insufficient for our framework. Song et al. (2019) proposed a calibration method for regression models with real-valued targets, based on a special case of Definition 1. This calibration method was shown to perform well empirically but is computationally expensive and requires users to choose hyperparameters for a Gaussian process model and its variational inference. As a simpler alternative, we generalize temperature scaling to arbitrary predictive models in the following way.\nDefinition F.1. Let Px be the output of a probabilistic predictive model P for feature x. 
If Px has probability density function px with respect to a reference measure µ, then temperature scaling with respect to µ with temperature T > 0 yields a new output Qx whose probability density function qx with respect to µ satisfies\nqx ∝ p1/Tx .\nThe notion for classification models given by Guo et al. (2017) can be recovered by choosing the counting measure on the classes as reference measure.\nFor some exponential families on Rd we obtain particularly simple transformations with respect to the Lebesgue measure λd that keep the type of predicted distribution and its mean invariant. Hence in contrast to other calibration methods, for these models temperature scaling yields analytically tractable distributions and does not negatively impact the accuracy of the models with respect to the mean squared error and the mean absolute error.\nFor instance, temperature scaling of multivariate power exponential distributions (Gómez et al., 1998) in Rd, of which multivariate normal distributions are a special case, with respect to λd corresponds to multiplication of their scale parameter with T 1/β , where β is the so-called kurtosis parameter (GómezSánchez-Manzano et al., 2008). For normal distributions, this corresponds to multiplication of the covariance matrix with T .\nSimilarly, temperature scaling of Beta and Dirichlet distributions with respect to reference measure µ(dx) := x−1(1− x)−11(0,1)(x)λ1(dx)\nand\nµ(dx) := ( d∏ i=1 x−1i ) 1(0,1)d(x)λ d(dx),\nrespectively, corresponds to division of the canonical parameters of these distributions by T without affecting the predicted mean value.\nAll in all, we see that temperature scaling for general predictive models preserves some of the nice properties for classification models. For some exponential families such as normal distributions reference measure µ can be chosen such that temperature scaling is a simple transformation of the parameters of the predicted distributions (and hence leaves the considered model class invariant) that does not affect accuracy of these models with respect to the mean squared error and the mean absolute error." }, { "heading": "G EXPECTED CALIBRATION ERROR FOR COUNTABLY INFINITE DISCRETE TARGET SPACES", "text": "In literature, ECEd and MCEd are defined for binary and multi-class classification problems (Guo et al., 2017; Naeini et al., 2015; Vaicenavicius et al., 2019). For common distance measures on the\nprobability simplex such as the total variation distance and the squared Euclidean distance, ECEd and MCEd can be formulated as a calibration error in the framework of Widmann et al. (2019), which is a special case of the framework proposed in this paper for binary and multi-class classification problems.\nIn contrast to previous approaches, our framework handles countably infinite discrete target spaces as well. For every problem with countably infinitely many targets, such as, e.g., Poisson regression, there exists an equivalent regression problem on the set of natural numbers. Hence without loss of generality we assume Y = N. Denote the space of probability distributions on N, the infinite dimensional probability simplex, with ∆∞. Clearly, ∆∞ can be viewed as a subspace of the sequence space `1 that consists of all sequences x = (xn)n∈N with xn ≥ 0 for all n ∈ N and ‖x‖1 = 1. Theorem G.1. Let 1 < p <∞ with Hölder conjugate q. If\nF := {f : ∆∞ × N→ R | EPX ‖ ( f(PX , n) ) n∈N‖ p p ≤ 1},\nthen CEqF = EPX ‖P(Y |PX)− PX‖ q q.\nLet µ be the law of PX . 
If F := {f : ∆∞ × N→ R | EPX ‖(f(PX , n))n∈N‖1 ≤ 1}, then CEF = µ- ess sup\nξ∈∆∞ sup y∈N |P(Y = y|PX = ξ)− ξ({y})|.\nMoreover, if F = {f : ∆∞ × N→ R | µ- ess supξ∈∆∞ supy∈N |f(ξ, y)| ≤ 1}, then\nCEF = EPX ‖P(Y |PX)− PX‖1.\nProof. Let 1 ≤ p ≤ ∞, and let µ be the law of PX and ν be the counting measure on N. Since both µ and ν are σ-finite measures, the product measure µ⊗ ν is uniquely determined and σ-finite as well. Using these definitions, we can reformulate F as\nF = {f ∈ Lp(∆∞ × N;µ⊗ ν) | ‖f‖p;µ⊗ν ≤ 1}.\nDefine the function δ : ∆∞ × N→ R (µ⊗ ν)-almost surely by δ(ξ, y) := P(Y = y |PX = ξ)− ξ({y}).\nNote that δ is well-defined since we assume that all singletons on ∆∞ are µ-measurable. Moreover, δ ∈ Lq(∆∞ × N;µ ⊗ ν), which follows from (ξ, y) 7→ P(Y = y |PX = ξ) and (ξ, y) 7→ ξ({y}) being functions in Lq(∆∞ × N;µ⊗ ν). Since µ⊗ ν is a σ-finite measure, the extremal equality of Hölder’s inequality implies that\nCEF = sup f∈F EPX ,Y f(PX , Y )− EPX ,ZX f(PX , ZX)\n= sup f∈F ∣∣∣∣EPX ,Y f(PX , Y )− EPX ,ZX f(PX , ZX)∣∣∣∣ = sup f∈F ∣∣∣∣ ∫ ∆∞×N f(ξ, y)δ(ξ, y) (µ⊗ ν)(d(ξ, y)) ∣∣∣∣\n= ‖δ‖q;µ⊗ν . Note that the second equality follows from the symmetry of the function spaces F : for every f ∈ F , also −f ∈ F . Hence for 1 < p ≤ ∞, we obtain\nCEqF = ∫ ∆∞×N |δ(ξ, y)|q (µ⊗ ν)(d(ξ, y))\n= EPX ‖(δ(PX , y))y∈N‖qq = EPX ‖P(Y |PX)− PX‖qq. For p = 1, we get\nCEF = µ- ess sup ξ∈∆∞ sup y∈N |δ(ξ, y)| = µ- ess sup ξ∈∆∞ sup y∈N |P(Y = y|PX = ξ)− ξ({y})|,\nwhich concludes the proof.\nWe see that our framework deals with countably infinite discrete target spaces seamlessly whereas the previously proposed framework by Widmann et al. (2019) is not applicable to such spaces. It is mathematically pleasing to see that for countably infinite discrete targets the calibration errors obtained in Theorem G.1 within our framework coincide with the natural generalization of ECEd and MCEd given in Appendix B.2." } ]
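As a closing illustration of Appendix F above: for normal predictions, temperature scaling with respect to the Lebesgue measure reduces to a one-line transformation. This is a sketch under that assumption, with our own naming; T denotes the temperature.

```python
# A sketch of temperature scaling for normal predictions (Appendix F):
# q_x ∝ p_x^{1/T} stays normal, keeps the mean, and scales the covariance by T.
import numpy as np

def temperature_scale_normal(mu, Sigma, T):
    """Apply temperature scaling with temperature T > 0 to N(mu, Sigma)."""
    return mu, T * np.asarray(Sigma, float)
```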
1980
CALIBRATION TESTS BEYOND CLASSIFICATION
SP:becb496310e88c1e2e7d03131093b9ebcf075c1d
[ "The authors consider the problem of learning a hash function such that semantically similar elements have high collision probability. They modify the approach Deep Hashing Networks (Zhu et al., 2016) with a new loss function. Rather than use a sigmoid based loss function, the authors argue that a loss function based on angular similarity and SimHash would be better. Specifically, they use the probability of SimHash collisions as a loss function. They then experimentally verify their method on synthetic data from a Stochastic Block Model distribution, image data (CIFAR-10 and ImageNet), and text data (OSCAR). They show improvements over related methods." ]
Semantic hashing methods have been explored for learning transformations into binary vector spaces. These learned binary representations may then be used in hashing-based retrieval methods, typically by retrieving all neighboring elements in the Hamming ball with radius 1 or 2. Prior studies focus on tasks with at most a few dozen to a few hundred semantic categories, and it is not currently well known how these methods scale to domains with richer semantic structure. In this study, we focus on learning embeddings for use in exact hashing retrieval, where Approximate Nearest Neighbor search consists of a simple table lookup. We propose similarity learning methods in which the optimized base similarity is the angular similarity (the probability of collision under SimHash). We demonstrate the benefits of these embeddings on a variety of domains, including a cooccurrence modelling task on a large-scale text corpus, whose rich structure cannot be handled by a few hundred semantic groups.
[]
[ { "authors": [ "Stanley C Ahalt", "Ashok K Krishnamurthy", "Prakoon Chen", "Douglas E Melton" ], "title": "Competitive learning algorithms for vector quantization", "venue": "Neural networks,", "year": 1990 }, { "authors": [ "Sunil Arya", "David M Mount", "Nathan S Netanyahu", "Ruth Silverman", "Angela Y Wu" ], "title": "An optimal algorithm for approximate nearest neighbor searching fixed dimensions", "venue": "Journal of the ACM (JACM),", "year": 1998 }, { "authors": [ "Jon Louis Bentley" ], "title": "Multidimensional binary search trees used for associative searching", "venue": "Communications of the ACM,", "year": 1975 }, { "authors": [ "Erik Bernhardsson" ], "title": "Annoy – approximate nearest neighbors in c++/python optimized for memory usage and loading/saving to disk", "venue": "https://github.com/spotify/annoy,", "year": 2013 }, { "authors": [ "David M Blei", "Andrew Y Ng", "Michael I Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of machine Learning research,", "year": 2003 }, { "authors": [ "Jane Bromley", "Isabelle Guyon", "Yann LeCun", "Eduard Säckinger", "Roopak Shah" ], "title": "Signature verification using a “siamese” time delay neural network. In Advances in neural information processing systems, pp", "venue": null, "year": 1994 }, { "authors": [ "Yue Cao", "Mingsheng Long", "Bin Liu", "Jianmin Wang" ], "title": "Deep cauchy hashing for hamming space retrieval", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhangjie Cao", "Mingsheng Long", "Jianmin Wang", "Philip S Yu" ], "title": "Hashnet: Deep learning to hash by continuation", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Moses S Charikar" ], "title": "Similarity estimation techniques from rounding algorithms", "venue": "In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing,", "year": 2002 }, { "authors": [ "Gal Chechik", "Varun Sharma", "Uri Shalit", "Samy Bengio" ], "title": "Large scale online learning of image similarity through ranking", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Qi Chen", "Haidong Wang", "Mingqin Li", "Gang Ren", "Scarlett Li", "Jeffery Zhu", "Jason Li", "Chuanjie Liu", "Lintao Zhang", "Jingdong Wang" ], "title": "SPTAG: A library for fast approximate nearest neighbor search, 2018", "venue": "URL https://github.com/Microsoft/SPTAG", "year": 2018 }, { "authors": [ "Tat-Seng Chua", "Jinhui Tang", "Richang Hong", "Haojie Li", "Zhiping Luo", "Yantao Zheng" ], "title": "Nus-wide: a real-world web image database from national university of singapore", "venue": "In Proceedings of the ACM international conference on image and video retrieval,", "year": 2009 }, { "authors": [ "Jerome H Friedman", "Jon Louis Bentley", "Raphael Ari Finkel" ], "title": "An algorithm for finding best matches in logarithmic expected time", "venue": "ACM Transactions on Mathematical Software (TOMS),", "year": 1977 }, { "authors": [ "Salvador Garcia", "Francisco Herrera" ], "title": "An extension on“statistical comparisons of classifiers over multiple data sets”for all pairwise comparisons", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Aristides Gionis", "Piotr Indyk", "Rajeev Motwani" ], "title": "Similarity search in high dimensions via hashing", "venue": "In Vldb,", "year": 1999 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": 
"Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Elad Hoffer", "Nir Ailon" ], "title": "Deep metric learning using triplet network", "venue": "In International Workshop on Similarity-Based Pattern Recognition,", "year": 2015 }, { "authors": [ "Thomas Hofmann" ], "title": "Probabilistic latent semantic indexing", "venue": "In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval,", "year": 1999 }, { "authors": [ "Paul W Holland", "Kathryn Blackmond Laskey", "Samuel Leinhardt" ], "title": "Stochastic blockmodels: First steps", "venue": "Social networks,", "year": 1983 }, { "authors": [ "Po-Sen Huang", "Xiaodong He", "Jianfeng Gao", "Li Deng", "Alex Acero", "Larry Heck" ], "title": "Learning deep structured semantic models for web search using clickthrough data", "venue": "In Proceedings of the 22nd ACM international conference on Information & Knowledge Management,", "year": 2013 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Piotr Indyk", "Rajeev Motwani" ], "title": "Approximate nearest neighbors: towards removing the curse of dimensionality", "venue": "In Proceedings of the thirtieth annual ACM symposium on Theory of computing,", "year": 1998 }, { "authors": [ "Masajiro Iwasaki", "Daisuke Miyazaki" ], "title": "Optimization of indexing based on k-nearest neighbor graph for proximity search in high-dimensional data", "venue": "arXiv preprint arXiv:1810.07355,", "year": 2018 }, { "authors": [ "Jeff Johnson", "Matthijs Douze", "Hervé Jégou" ], "title": "Billion-scale similarity search with gpus", "venue": "arXiv preprint arXiv:1702.08734,", "year": 2017 }, { "authors": [ "Teuvo Kohonen" ], "title": "Improved versions of learning vector quantization", "venue": "ijcnn international joint conference on Neural networks,", "year": 1990 }, { "authors": [ "Yehuda Koren", "Robert Bell", "Chris Volinsky" ], "title": "Matrix factorization techniques for recommender systems", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Daniel D Lee", "H Sebastian Seung" ], "title": "Algorithms for non-negative matrix factorization", "venue": "In Advances in neural information processing systems,", "year": 2001 }, { "authors": [ "Wu-Jun Li", "Sheng Wang", "Wang-Cheng Kang" ], "title": "Feature learning based deep supervised hashing with pairwise labels", "venue": "In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Haomiao Liu", "Ruiping Wang", "Shiguang Shan", "Xilin Chen" ], "title": "Deep supervised hashing for fast image retrieval", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Qin Lv", "William Josephson", "Zhe Wang", "Moses Charikar", "Kai Li" ], "title": "Multi-probe lsh: efficient indexing for high-dimensional similarity search", "venue": "In Proceedings of the 33rd international conference on Very large data bases,", "year": 2007 }, { "authors": [ "Laurens van der Maaten", 
"Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "George A Miller" ], "title": "Wordnet: a lexical database for english", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Andriy Mnih", "Russ R Salakhutdinov" ], "title": "Probabilistic matrix factorization", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Marius Muja", "David G Lowe" ], "title": "Fast approximate nearest neighbors with automatic algorithm configuration", "venue": null, "year": 2009 }, { "authors": [ "Dai Quoc Nguyen", "Dat Quoc Nguyen", "Ashutosh Modi", "Stefan Thater", "Manfred Pinkal" ], "title": "A mixture model for learning multi-sense word embeddings", "venue": "arXiv preprint arXiv:1706.05111,", "year": 2017 }, { "authors": [ "Pedro Javier Ortiz Suarez", "Benoit Sagot", "Laurent Romary" ], "title": "Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures", "venue": "Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff,", "year": 2019 }, { "authors": [ "Pedro Javier Ortiz Suarez", "Laurent Romary", "Benoit Sagot" ], "title": "A monolingual approach to contextualized word embeddings for mid-resource languages", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1703–1714,", "year": 2020 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543,", "year": 2014 }, { "authors": [ "Steffen Rendle", "Christoph Freudenthaler", "Zeno Gantner", "Lars Schmidt-Thieme" ], "title": "Bpr: Bayesian personalized ranking from implicit feedback", "venue": "arXiv preprint arXiv:1205.2618,", "year": 2012 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Atsushi Sato", "Keiji Yamada" ], "title": "Generalized learning vector quantization", "venue": "In Advances in neural information processing systems,", "year": 1996 }, { "authors": [ "Kilian Q Weinberger", "John Blitzer", "Lawrence K Saul" ], "title": "Distance metric learning for large margin nearest neighbor classification", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Zhibiao Wu", "Martha Palmer" ], "title": "Verb semantics and lexical selection", "venue": "arXiv preprint cmplg/9406033,", "year": 1994 }, { "authors": [ "Eric P Xing", "Michael I Jordan", "Stuart J Russell", "Andrew Y Ng" ], "title": "Distance metric learning with application to clustering with side-information", "venue": "In Advances in neural information processing systems,", "year": 2003 }, { "authors": [ "Han Zhu", "Mingsheng Long", "Jianmin Wang", "Yue Cao" ], "title": "Deep hashing network for efficient similarity retrieval", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "One of most challenging aspects in many Information Retrieval (IR) systems is the discovery and identification of the nearest neighbors of a query element in an vector space. This is typically solved using Approximate Nearest Neighbors (ANN) methods as exact solutions typically do not scale well with the dimension of the vector space. ANN methods typically fall into one of three categories: space partitioning trees, such as the kd-tree (Bentley (1975); Friedman et al. (1977); Arya et al. (1998)), neighborhood graph search (Chen et al. (2018); Iwasaki & Miyazaki (2018)) or Locality Sensitive Hashing (LSH) methods (Charikar (2002); Gionis et al. (1999); Lv et al. (2007)).\nDespite their theoretical, intuitive, and computational appeal, LSH methods are not as prevalent in modern IR systems as are space-partitioning trees or neighborhood graph methods (Bernhardsson (2013); Chen et al. (2018); Johnson et al. (2017); Iwasaki & Miyazaki (2018)). Empirical studies demonstrate that LSH techniques frequently do not attain the same level of quality as spacepartitioning trees (Muja & Lowe (2009)).\nNonetheless, space-partitioning and neighborhood graph search methods are expensive, both in data structure construction and in query time, and remain a bottleneck in many modern IR pipelines. As many modern retrieval tasks revolve around solving ANN for vector representations learned from raw, structured data, one might attempt to learn representations which are more suited towards efficient retrieval. Metric learning methods (Xing et al. (2003); Weinberger et al. (2006); Chechik et al. (2010); Hoffer & Ailon (2015); Kulis et al. (2013)) have been proposed for learning linear and non-linear transformations of given representations for improved clustering and retrieval quality. A class of related methods, semantic hashing or hash learning methods (Salakhutdinov & Hinton (2009)), have also been explored for learning transformations into binary vector spaces. These learned binary representations may then be used in hashing based retrieval methods, typically by retrieving all neighboring elements in the Hamming ball with radius 1 or 2.\nExact hashing retrieval algorithms, that is, Hamming ball “search” with radius 0, have a particular computational appeal in that search data structures are not needed nor is enumeration of all codes within a Hamming ball. In addition, binary representations that are suitable for exact hashing retrieval can also be used to identify groups of related items that can be interpreted as clusters in the traditional sense. As the number of clusters discovered by the algorithm isn’t explicitly controlled\n(only bounded by 2d,) algorithms generating binary embeddings suitable for exact hashing retrieval can be viewed as nonparametric clustering methods.\nTo this end, we propose a method for learning continuous representations in which the optimized similarity is the angular similarity. The angular similarity corresponds to the collision probability of SimHash, a hyperplane based LSH function (Charikar (2002)). Angular distance gives a sharp topology on the embedding space which encourages similar objects have nearly identical embeddings suitable for exact hashing retrieval.\nRelated work on similarity learning, LSH, and hash learning can be found in Section 2. The proposed models are found in Section 3. The experimental results, and other technical details, can be found in Sections 4. Finally, we conclude in Section 5." 
}, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 SIMILARITY MODELLING", "text": "Similarity learning methods are a class of techniques for learning a similarity function between objects. One successful approach for similarity learning are “twin network” or “two tower architecture” models, in which two neural network architectures are joined to produce a similarity prediction (Bromley et al. (1994); Chopra et al. (2005); Huang et al. (2013)). The weights of these networks may be shared or not, depending on whether the two input domains are equivalent or not.\nLet i ∈ U and j ∈ V be the identities of two objects, where U and V are the two domains across which a similarity function is to be learned. Let φu(i) and φv(j) be the input representations for the objects (these functions φ may be identity functions if the input domains are discrete.) These representations are then transformed through parameterized vector-valued functions fu(·|θu) and fv(·|θv), whose output are typically the learned representations ui = fu(φu(i)|θu) and vj = fv(φv(j)|θv). A loss is then defined using pairwise labels yij and an interaction function s(ui, vj) which denotes the similarity or relevancy of the pair. Taking fu to be a mapping for each index i to an independent parameter vector ui (similarly for fv and vi), and taking s(ui, vj) = uTi vj with an appropriate loss results in a variety of matrix factorization approaches (Koren et al. (2009); Lee & Seung (2001); Mnih & Salakhutdinov (2008); Blei et al. (2003); Rendle et al. (2012); Pennington et al. (2014)).\nTaking fu to be a neural network mapping a context φu(i) to a representation ui allows for similarity models that readily make use of complex contextual information. Common choices for the similarity function include transformations of Euclidean distance (Chopra et al. (2005)), and cosine similarity: s(ui, vj) = uTi vj ||ui||||vj || (Huang et al. (2013)). In addition, the loss can be defined for pairs (Chopra et al. (2005)), triplets (one positive pair, one negative pair) (Rendle et al. (2012); Chechik et al. (2010)), or on larger sets (Huang et al. (2013))." }, { "heading": "2.2 LOCALITY SENSITIVE HASHING AND ANGULAR SIMILARITY", "text": "A Locality Sensitive Hash (LSH) family F is a distribution of hashes h on a collection of objects Q such that for qi, qj ∈ Q, (Indyk & Motwani (1998); Gionis et al. (1999); Charikar (2002))\nPr[h(qi) = h(qj)] = s(qi, qj) (1)\nfor some similarity function s on the objects. SimHash (Charikar (2002)) is a LSH technique developed for document deduplication but may be used in other contexts. For a vector representations q ∈ Rd, SimHash draws a random matrix Z ∈ Rd×M with standard Normal entries. The hash h(qi) ∈ {0, 1}M is then constructed as\nh(qi)m = 1[q T i Z:m > 0]. (2)\nIntuitively, SimHash draws random hyperplanes intersecting the origin to separate points. A useful property of this hash function, as stated in Charikar (2002), is that\nψ(qi, qj) := Pr[h(qi)m = h(qj)m] = 1− 1\nπ cos−1 ( qTi qj ||qi||||qj || ) ,\nwhere the above probability is measured with respect to Z. ψ(qi, qj), the collision probability for two vectors, is also known as the angular similarity, and ξ = 1 − ψ is the angular distance, which is a proper metric (unlike the cosine distance 1− q T i qj\n||qi||||qj || ). As the columns of Z are independent, the collision probability for a K bit hash is ψK ." 
}, { "heading": "2.3 LEARNING TO HASH", "text": "A related approach to similarity learning is hash learning methods, introduced in Salakhutdinov & Hinton (2009). These methods train binary embeddings directly and then use hash collisions or Hamming Ball search to retrieve approximate nearest neighbors. Binary representations lead to some technical challenges; Salakhutdinov & Hinton (2009) uses contrastive divergence for training, whereas Hubara et al. (2016) implement binary threshold activation functions with stochastic neurons.\nAnother approach (and the one followed in this work) is to avoid explicit binary representations in training and to introduce an quantization loss to penalize embeddings that are not close to binary, and to subsequently threshold these near-binary embeddings to binary ones. This type of quantization loss is distinct from those used in vector quantization methods (Ahalt et al. (1990); Kohonen (1990); Sato & Yamada (1996)) in which the data representations are fixed and the codes are learned; here the codes are fixed and the representations are learned.\nThe quantization loss introduced in Deep Hashing Networks (DHN) Zhu et al. (2016) is of the form b(ui|θ) = ∑ d log cosh (|uid| − 1) ≈ ‖|ui| − 1‖1 . (3)\nOther quantization losses based on distances to binary codes have been used in Li et al. (2016); Liu et al. (2016). Cao et al. (2017) utilizes a quantization loss whose strength increases over time. Finally, Deep Cauchy Hashing (DCH) (Cao et al. (2018)) has shown improvements by utilizing a heavy-tailed similarity function with a similarly inspired quantization loss." }, { "heading": "3 LOCALITY SENSITIVE EMBEDDINGS", "text": "Many similarity learning methods utilize dot products or cosine similarity to relate the embeddings of a pair to each other. For example GloVe (Pennington et al. (2014)) minimizes the weighted error between the dot product of the embeddings and a log-coocurrence matrix, and the DSSM model (Huang et al. (2013)) utilizes cosine similarity as the “crossing” layer between the two halves of a twin network. In general, embeddings trained in this way are not suitable for SimHash retrieval, as can be seen in Figure 1. If models are trained so as to minimize the error of a prediction made by cosine similarity, extremely low tolerances are required in order to achieve embeddings with\nsignificant collision probability. Similar observations on the misspecifiation of cosine distance for Semantic Hashing were made in Cao et al. (2018). In this section, we define models in which collision probabilities of learned representations are directly optimized." }, { "heading": "3.1 LOSS DEFINITION", "text": "In the following, we define a population loss through a data distribution D of relevant and irrelevant pairs. Each sample from D is a tuple (y, i, j) ∈ {0, 1} × U × V , where U and V are the sets across which a similarity is to be learned – for example, “users” and “items” in a recommender system. y is the relevancy of the pair (i, j).\nThe population losses we consider are expectations over D of a per-tuple loss l with regularization terms r per item:\nL(θ) = E y,i,j∼D l(y, i, j|θ) + λr(i|θ) + λr(j|θ). (4)\nIn practice, we minimize the empirical loss L̂ constructed from a finite sample from D, and we use r(i|θ) = b(ui|θ) defined in equation 3. θ represents all parameters of the model, including any learned representations for the elements of the sets U and V . 
An embedding $u_i$ for element $i$ may either be a vector of free parameters, as would be the case in a fixed-vocabulary embedding model, or may be the output of a model on a raw input, $u_i = f_u(\phi_u(i))$, as would be the case in a twin network model. In addition, each half of the pair $(u_i, v_j)$ may represent a different input space, as in the DSSM model." }, { "heading": "3.2 BINARY CROSS ENTROPY LOSS", "text": "We may simply model the relevancy $y_{ij}$ for the pair $(u_i, v_j)$ with a binary cross entropy loss:\n$$l(y_{ij}, i, j \mid \theta) = -y_{ij} \log\big(\hat{p}(y_{ij} \mid i, j, \theta)\big) - \beta (1 - y_{ij}) \log\big(1 - \hat{p}(y_{ij} \mid i, j, \theta)\big), \quad (5)$$\nwhere $\hat{p}$ is the learned estimate for $\mathbb{E}[y_{ij} \mid i, j, \theta]$, and $\beta$ is a scalar hyperparameter for tuning the relative importance of positive and negative samples. One standard choice for $\hat{p}$ in representation learning is to take\n$$\hat{p}(y_{ij} \mid i, j, \theta) = s_\sigma(u_i, v_j) := \sigma\left( \alpha\, \frac{u_i^T v_j}{\|u_i\|\,\|v_j\|} \right), \quad (6)$$\nwhere $\sigma$ is the logistic function and $\alpha$ is a scalar. As the logistic function saturates quickly, the embeddings $u_i$ and $v_j$ do not need to be extremely close (when $y_{ij}$ is positive) in order to achieve low error. Thus, to encourage representations that are amenable to hashing retrieval, we might consider other transformations of the embeddings that do not saturate so quickly. For example, one may take a polynomial transformation of cosine similarity:\n$$\hat{p}(y_{ij} \mid i, j, \theta) = s_c(u_i, v_j)^K := \frac{1}{2^K} \left( 1 + \frac{u_i^T v_j}{\|u_i\|\,\|v_j\|} \right)^K, \quad (7)$$\nor a polynomial transformation of the angular similarity:\n$$\hat{p}(y_{ij} \mid i, j, \theta) = \psi(u_i, v_j)^K = \left( 1 - \frac{1}{\pi} \cos^{-1}\left( \frac{u_i^T v_j}{\|u_i\|\,\|v_j\|} \right) \right)^K. \quad (8)$$\nThe choice $\hat{p}(y_{ij} \mid i, j, \theta) = \psi(u_i, v_j)^K$ has a natural interpretation as using the SimHash collision probability under a $K$-bit hash as the estimation function. Intuitively, we are training representations whose collision probability distribution under SimHash has minimum cross entropy with the pairwise label distribution $y$. Embeddings trained with equation 8 are termed Locality Sensitive Embeddings (LSE) and are the proposed method of this paper. Deterministic thresholding is still used to derive binary embeddings from dense versions.\nDCH (Cao et al. (2018)) introduced the following similarity measure for defining the loss:\n$$\hat{p}(y_{ij} \mid i, j, \theta) = s_h(u_i, v_j) := \frac{\gamma}{\gamma + \frac{d}{2}\left( 1 - \frac{u_i^T v_j}{\|u_i\|\,\|v_j\|} \right)}. \quad (9)$$" }, { "heading": "3.3 TOPOLOGICAL ANALYSIS", "text": "The SimHash method and the angular similarity can be used to study the topologies induced by the different similarity measures of the previous section.\nTheorem 1. Let $B(q_i) = N(\delta, q_i, 1-s)$ denote a ball around $q_i$ with radius $\delta$ under the $1-s$ distance. For an arbitrary point $q_j \in B(q_i)$, we can consider the probability that $q_j$ and $q_i$ will collide under SimHash – denote this by $P_s(\delta)$. Then,\n1. (LSE) $P_\psi(\delta) \geq 1 - \delta$\n2. (COS) $P_{s_c}(\delta) \geq 1 - \frac{2\sqrt{\delta}}{\pi} - O(\delta^{3/2})$\n3. (DCH) $P_{s_h}(\delta) \geq 1 - \left( \frac{4\gamma\delta}{\pi^2 d(1-\delta)} \right)^{1/2} - O(\delta^{3/2})$\nProof in Appendix. These bounds are tight, as we know the asymptotic error for each (for LSE there is no error term). Theorem 1 reveals that, for a similarity model trained to tolerance level $\delta$ for positive pairs, under a SimHash algorithm LSE would have linear scaling of the single-bit collision probability, while COS and DCH would have sublinear scaling.\nNote that for the logistic-based similarity, $P_{s_\sigma}(\delta)$ is only well defined for $\alpha > |\log(\delta) - \log(1-\delta)|$ (otherwise $1 - s_\sigma$ cannot be below $\delta$). Any analysis here requires choosing a rate for $\alpha$. There is also a relationship between angular and Hamming distances. Angular distance can be viewed as a dimension-scaled version of Hamming distance applied to randomly rotated inputs.
Lemma 1. Let $b(q_i)$ be the vector indicating the signs of $q_i$, that is, $b(q_i)_m := \mathbb{1}[q_{im} > 0]$. Denote the Hamming distance of the sign vectors as $\rho_H(q_i, q_j) := \|b(q_i) - b(q_j)\|_1$, which defines a semimetric on $\mathbb{R}^d$. Take $R$ as a uniformly random orthogonal matrix. Then\n$$1 - \psi(q_i, q_j) = \frac{1}{d}\, \mathbb{E}_R[\rho_H(Rq_i, Rq_j)]. \quad (10)$$\nProof in Appendix. Lemma 1 demonstrates that angular distance may be viewed as an expectation of a dimension-scaled Hamming distance, where the expectation is taken with respect to the choice of basis. In other words, minimizing angular distance is equivalent to minimizing the Hamming semimetric $\rho_H$ averaged over all possible bases." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we compare representations trained using equation 8 (LSE), using equation 7 (COS), using DHN's logistic-based similarity, and using DCH. All methods use the quantization loss in equation 3 from Zhu et al. (2016), except DCH, which uses the quantization loss from Cao et al. (2018)." }, { "heading": "4.1 SYNTHETIC DATA", "text": "We generate data from a Stochastic Block Model (SBM) (Holland et al. (1983)) with 500 factions, 10 individuals per faction, and the probability of an edge appearing between two individuals belonging to factions $i$ and $j$ governed by the matrix of probabilities $W \in \mathbb{R}^{500 \times 500}$ with $W_{ij} = 0.8\, \mathbb{1}(|i-j| = 0) + 0.1\, \mathbb{1}(0 < |i-j| < 3) + \mathbb{1}(|i-j| \geq 3)$. The resulting cooccurrence matrix $D \in \{0, 1\}^{5000 \times 5000}$ is roughly block diagonal, with additional edges appearing from “nearby” factions at a lower rate. The task is to hash the individuals from each faction together, while separating them from all other factions. In order to enable higher precision of the resulting models, a set of hard negatives is generated by taking pairs $(i, j)$ such that $(D^T D)_{ij} > 0$ and $D_{ij} = 0$. Easy negatives are also generated by taking pairs at random and assuming a cooccurrence of 0, as is common in Noise Contrastive Estimation-inspired methods (Gutmann & Hyvärinen (2010)).\nEach individual is given a free parameter $u_i$ which is the output of an embedding layer followed by a tanh activation. This output is dropped out (with shared randomization for each half of the training pair) before the cosine similarity computation. Batches are constructed from 1024 positive pairs and 3072 negative pairs (1024 of which may be either hard or easy negatives, determined by a hyperparameter).\nWe trained all models with 32-dimensional representations for 50 epochs, where 1 epoch is the number of batches required to iterate through all positive pairs. We explore 4 hyperparameters: $K$, $\beta$, $\lambda$, and the use of hard negative samples. During evaluation, we retrieve all individuals which have the same binarized embedding as the query individual. We measure precision, recall, and F1-score with the data-generating factions as the target. 10 trials are repeated for each hyperparameter setting, and the mean over trials is reported. Figure 2 shows a detailed view of the hyperparameter tuning, and Table 1 shows the chosen hyperparameters for each model when ranking by F1-score. DCH and LSE are competitive; however, the LSE model is able to achieve surprisingly accurate recovery of the data-generating structure, with an F1 of nearly 0.99." }, { "heading": "4.2 HAMMING BALL IMAGE RETRIEVAL ON IMAGE DATASETS", "text": "The experiment on image datasets is motivated by state-of-the-art methods which build binary embeddings directly and use hashing retrieval for image similarity search. In this section we demonstrate that LSE is appropriate for these tasks as well.
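Before the dataset details, here is a minimal PyTorch sketch (ours, not the authors' code) of the LSE objective being compared in these experiments: binary cross entropy (equation 5) with $\hat{p} = \psi(u_i, v_j)^K$ (equation 8):

```python
import math
import torch
import torch.nn.functional as F

def angular_similarity(u, v):
    """psi(u, v) = 1 - arccos(cosine similarity) / pi, computed batchwise.

    The clamp keeps arccos numerically stable at the boundaries.
    """
    cos = F.cosine_similarity(u, v, dim=-1).clamp(-1 + 1e-7, 1 - 1e-7)
    return 1.0 - torch.arccos(cos) / math.pi

def lse_loss(u, v, y, K=1, beta=1.0):
    """Eq. (5) with p_hat = psi(u, v)^K from eq. (8); beta reweights negatives."""
    p = angular_similarity(u, v).pow(K).clamp(1e-7, 1 - 1e-7)
    return -(y * p.log() + beta * (1 - y) * (1 - p).log()).mean()

# Example usage on a toy batch of embedding pairs with relevancy labels.
u, v = torch.randn(16, 32), torch.randn(16, 32)
y = torch.randint(0, 2, (16,)).float()
print(lse_loss(u, v, y, K=4, beta=1.0))
```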
We follow the experimental setup of Zhu et al. (2016) on three datasets: CIFAR-10 (Krizhevsky et al. (2009)), NUS-WIDE (Chua et al. (2009)), and ImageNet (Russakovsky et al. (2015)). CIFAR-10 is a dataset consisting of 10 categories and 60000 color images of size 32 × 32. We use the same data splits (available online) as Zhu et al. (2016): 500 images per category in the training set, 100 images per category in the test query set, and the remaining 54000 images used as the database. We again follow Zhu et al. (2016) for experiments with the NUS-WIDE dataset: 149736 images associated with the 21 most frequent categories as the database, 2100 images as queries, and 10500 images from the database as the training set. For ImageNet, we use the same 100 categories as Cao et al. (2017) as the indexing database and the 13000/5000 image train/test split. We follow Cao et al. (2018) and use a pretrained AlexNet as described. We take the open-source code for deep hashing methods1 (Zhu et al. (2016)) and add the LSE model. Results are shown in Table 2. We show results for binary embedding sizes of 64 bits, 48 bits, 32 bits, and 16 bits. We do not alter any other setting and use $K = 1$, $\lambda = 0.1$ for all models. In pilot experiments we did not find any significant improvement for higher $K$ for any of the methods. All baseline papers use mean average precision for the evaluation (Cao et al. (2018); Zhu et al. (2016)), which is the evaluation method we adopt for this experiment. The LSE model is comparable to the baseline methods. CIFAR-10 results are statistically significant with a p-value of $7.5 \times 10^{-4}$, NUS-WIDE with a p-value of $2.81 \times 10^{-146}$, and ImageNet with a p-value of $7.99 \times 10^{-4}$, according to the widely used Iman–Davenport test (Garcia & Herrera, 2008). We want to point out that all the experimental results (originally in the respective baseline papers and in this paper) on DCH and DHN have used a pretrained AlexNet. The use of a pretrained network and well-separated categories reduces the need for a model with a strong inductive bias like LSE. Nonetheless, LSE remains a good choice for these types of tasks as well.\n1https://github.com/swuxyj/DeepHash-pytorch" }, { "heading": "4.3 OSCAR COOCCURRENCE MODEL", "text": "Finally, we compare all methods on a cooccurrence matrix generated from the OSCAR English dataset (Ortiz Suarez et al. (2019); Ortiz Suarez et al. (2020)). We take the deduplicated version of the corpus (1.2TB compressed) and generate an initial symmetric cooccurrence matrix by counting word pairs within a window of size 10, inversely weighting the counts by the distance of the words within the sentence, as in Pennington et al. (2014). This initial matrix is then filtered to remove extremely common and extremely rare terms. Additional filtering based on row- and column-normalized cooccurrences is used to retain pairs that are atypical compared to the marginal frequencies of the two terms. For each row, the top 100 pairs ranked by the original cooccurrence are kept, and the resulting binary matrix is symmetrized. The resulting cooccurrence matrix $D$ has 660K unique words, with 16M nonzero entries. A set of hard negatives is generated by taking pairs $(i, j)$ such that $(D^T D)_{ij} > 0$ and $D_{ij} = 0$. 4600 pairs from $D$ are held out for the final evaluation.\nPopular models for text data frequently allow each word's representation to have multiple contexts, as in topic modelling (Hofmann (1999); Blei et al. (2003)) or multi-sense embedding models (Nguyen et al. (2017)).
To incorporate multiple context representations into semantic hashing methods, we represent each word $i$ with $L$ embeddings $u_{il}$ (these are free parameters per word, and no subword information is used). The maximum among all pairwise cosine similarities is then taken as the base similarity function:\n$$s(u_i, u_j) = \max_{l,m} \frac{u_{il}^T u_{jm}}{\|u_{il}\|\,\|u_{jm}\|}. \quad (11)$$\nThis base similarity is then used in place of cosine similarity in defining the loss for all models2. At retrieval time, a query word is mapped to its $L$ hashes, corresponding to $L$ “clusters.” The union of the $L$ clusters is the retrieved set for the query – no search is performed and the only data structures used are hash tables. As each word is associated with $L$ hashes, this model may be understood as a “word2hashes” method.\n2As DHN uses the unnormalized inner product, for which the max operation has undesirable properties, we modify the DHN implementation to use $5\, s(u_i, u_j)$ as the input to the logistic function.\nThe base architecture used for all models is a 32-dimensional tanh-activated embedding layer followed by dropout (with shared randomization across all $2L$ embeddings), with $L = 3$. Following the dropout layer is equation 11. Batches are constructed from 8192 positives, 8192 hard negatives, and 16384 easy negatives. Models are trained for 20 epochs through the positive set using 2 GPUs.\nWe utilize a semantic quality measure based on Wu-Palmer similarity (WP) (Wu & Palmer (1994)) on WordNet (WN) (Miller (1995); Fellbaum et al. (1998)). We take all nouns, verbs and adjectives from the WordNet corpus and remove all words with no hypernyms (these are typically isolated nodes in the WordNet graph for which WP values are not available). The intersection with the 660K vocabulary leaves 46K words, which we index based on the semantic hashing models. For each query word $w$ and its retrieved set $V(w)$, the average WP similarity is computed across all pairs $w, v$ with $v \in V(w)$. Self-pairs are removed, and empty $V(w)$ are given 0 values. This WP measure is bounded between 0 and 1, with higher values indicating more semantically meaningful clusters.\nFigure 3 shows the WP of the models on a 1K-word query set (taken randomly from the 46K WN vocabulary) which we use as a tuning data set. We also report the F1 score on a 1K query set sampled from the 660K vocabulary to evaluate how well each model reconstructs the training data. All models used the same tuning grid, except COS, for which $K = 16$ was added, as the initial sweep showed potentially large improvement by expanding the grid for the COS model.\nTable 3 shows the final model comparison. The hyperparameters with the highest WP per model are taken, and models for each are trained with 100 epochs. We compare the scores of these models on a non-tuning set of 1K queries from WN for WP, and 1K queries from the full vocabulary for Precision, Recall, and F1. In addition, we evaluate a HitRatio (HR) score on the heldout 4600 pairs, where all colliding words for a query are retrieved, and if the target word appears in the top $n$ items ranked by cosine similarity (of the dense embeddings), the query achieves a HR of 1. This is the only measure to use the dense embeddings. We also report the number of non-singleton clusters. As can be seen, LSE outperforms the baselines on training, test, and semantic quality measures.\nWe display some example queries and their retrieved hash siblings from the 100-epoch LSE model in Table 7.
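Here is a brief PyTorch sketch (ours, not from the paper) of the multi-context base similarity in equation 11; the function name is hypothetical:

```python
import torch

def multi_context_similarity(ui, uj):
    """Eq. (11): the max over all L x L pairwise cosine similarities between
    the L context embeddings of word i and the L context embeddings of word j.

    ui, uj: tensors of shape (L, d) holding each word's L context embeddings.
    """
    ui = ui / ui.norm(dim=-1, keepdim=True)  # row-normalize to unit length
    uj = uj / uj.norm(dim=-1, keepdim=True)
    return (ui @ uj.T).max()                 # max over all l, m pairs

L, d = 3, 32  # L = 3 contexts and 32-d embeddings, as in the experiments
s = multi_context_similarity(torch.randn(L, d), torch.randn(L, d))
```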
T-SNE Maaten & Hinton (2008) plots of the dense embeddings on WordNet vocabulary are shown in Figures 5, 6, and 7. Within the discrete hash based clusters used in retrieval, there is still additional structure in the dense embeddings that may be leveraged. This can be seen from T-SNE plots (Figures 8 and 9) of the dense embeddings that collide together in the “Generic Food” cluster seen in Table 7." }, { "heading": "5 CONCLUSION", "text": "We extend semantic hashing methods to problems with substantial label noise and to the exact hashing retrieval case via the introduction of Locality Sensitive Embeddings, which leverage angular similarity as the main component of an output prediction. The learned representations show superior performance in the exact hashing retrieval setting. We applied LSE to a multiple-context representation learning model to a cooccurrence matrix generated from the OSCAR English corpus, producing a “word2hashes” model which is novel to the best of the authors’ knowledge." }, { "heading": "A APPENDIX", "text": "A.1 EFFECT OF HARD NEGATIVES AND ABLATION STUDY\nIn this section we study the impact of hard negatives. Recall from the main text that we define hard negatives from a dataset D by taking pairs where (DTD)ij > 0 and Dij = 0. See Figure 4 for a diagram showing these hard negative pairs on the SBM synthetic data.\nWe performed an experiment on the OSCAR dataset in which a modified loss function is used where hard negatives samples are weighted (that is, easy negatives are always given weight of 1.) These tuned models are compared to the tuned models of the main text in Table 5. This tuning gives a modest improvement to COS and DCH models in terms of WP (and a degradation for LSE,) while keeping F1 much the same. However, DCH is not able to obtain the same F1 measure as LSE (in either tuning.) In addition, the LSE model from the original tuning outperforms the other models in WP+F1. Also note that the DHN model improves substantially on the removal of Hard Negatives, however it still remains the worst performing algorithm of the four.\nWe also use this experiment to perform an ablation study, see Table 6. DHN is most competitive when hard negatives are removed, achieving the highest WP (but still lowest F1.) It is also this case where we see LSE achieve a Recall of 0.54 – this metric is essentially traded-off for Precision\nand WP when increasing the negative weight. All methods have comparable performance when the quantization loss is removed. And finally, DCH performs the best when K = 1.\nA.2 PROOFS\nTheorem 1. Let B(qi) = N(δ, qi, 1 − s) denote a ball around qi with radius δ under the 1 − s distance. For an arbitrary point qj ∈ B(qi), we can consider the probability qj and qi will collide under SimHash – denote this with Ps(δ). Then,\n1. (LSE) Pψ(δ) ≥ 1− δ\n2. (COS) Psc(δ) ≥ 1− 2 √ δ π −O(δ 3 2 )\n3. (DCH) Psh(δ) ≥ 1− ( 4γδ π2d(1−δ) ) 1 2 −O(δ 32 )\nProof. (1): 1− ψ(qi, qj) ≤ δ by definition, so Pψ(δ) ≥ 1− δ.\nFor each of the following, we will use the expansion via the Frobenius method: cos−1(1 − δ) =√ 2δ +O(δ 3 2 ).\n(2): 1 − sc(qi, qj) ≤ δ. Substituting q T i qj\n||qi||||qj || = cos(π(1 − ψ(qi, qj))) in sc and rearranging gives (note cos−1 is monotonically decreasing on (0, 1))\n1− sc(qi, qj) = 1\n2\n( 1− q T i qj\n||qi||||qj ||\n) ≤ δ\ncos(π(1− ψ(qi, qj)) ≥ 1− 2δ\n1− ψ(qi, qj) ≤ 1 π cos−1(1− 2δ) = 2\n√ δ π +O(δ 3 2 ).\n(3): 1− sh(qi, qj) ≤ δ. 
Proceeding as above,\n$$1 - s_h(q_i, q_j) = 1 - \frac{\gamma}{\gamma + \frac{d}{2}\left( 1 - \frac{q_i^T q_j}{\|q_i\|\,\|q_j\|} \right)} \leq \delta,$$\n$$\cos(\pi(1 - \psi(q_i, q_j))) \geq 1 - \frac{2\gamma\delta}{d(1-\delta)},$$\n$$1 - \psi(q_i, q_j) \leq \frac{1}{\pi} \cos^{-1}\left( 1 - \frac{2\gamma\delta}{d(1-\delta)} \right) = \left( \frac{4\gamma\delta}{\pi^2 d(1-\delta)} \right)^{1/2} + O(\delta^{3/2}).$$\nNote that for the logistic-based similarity, $P_{s_\sigma}(\delta)$ is only well defined for $\alpha > |\log(\delta) - \log(1-\delta)|$ (otherwise $1 - s_\sigma$ cannot be below $\delta$). Any analysis here requires choosing a rate for $\alpha$.\nLemma 1. Let $b(q_i)$ be the vector indicating the signs of $q_i$, that is, $b(q_i)_m := \mathbb{1}[q_{im} > 0]$. Denote the Hamming distance of the sign vectors as $\rho_H(q_i, q_j) := \|b(q_i) - b(q_j)\|_1$, which defines a semimetric on $\mathbb{R}^d$. Take $R$ as a uniformly random orthogonal matrix. Then\n$$1 - \psi(q_i, q_j) = \frac{1}{d}\, \mathbb{E}_R[\rho_H(Rq_i, Rq_j)]. \quad (12)$$\nProof. Consider a modified SimHash algorithm where $z'$ is taken uniformly at random from the standard basis vectors $\{e_m\}_{m \in \{1, \dots, d\}}$, and let $\bar{h}(q_i) := \mathbb{1}[q_i^T z' > 0]$ denote this hash. The collision probability is simply the chance that two given embeddings share the same sign in a randomly chosen dimension, so\n$$\Pr[\bar{h}(q_i) = \bar{h}(q_j)] = \mathbb{E}_{m \sim (1, \dots, d)}\big[ \mathbb{1}[b(q_i)_m = b(q_j)_m] \big] = 1 - \frac{\rho_H(q_i, q_j)}{d}. \quad (13)$$\nLet $z_0 \in \mathbb{R}^d$ be a vector with independent standard Normal entries, and take $z = z_0 / \|z_0\|$, which is distributed uniformly on the unit sphere. Thus $Rz' \overset{d}{=} z$. The original SimHash function is $h(q_i) = \mathbb{1}[q_i^T z_0 > 0] = \mathbb{1}[q_i^T z > 0]$, and so $h(q_i) \overset{d}{=} \bar{h}(Rq_i)$, and thus\n$$\psi(q_i, q_j) = \Pr[h(q_i) = h(q_j)] = \mathbb{E}_R\left[ 1 - \frac{\rho_H(Rq_i, Rq_j)}{d} \right]. \quad (14)$$\nA.3 EVALUATION METRICS\n• Precision - Retrieve all items with the same hash, and compute precision.\n• Recall - Retrieve all items with the same hash, and compute recall.\n• F1 - Compute the F1-measure using the above Precision and Recall.\n• WP - Wu-Palmer similarity measure for evaluation of the semantic quality of hash groups. Wu-Palmer similarity (Wu & Palmer (1994)) is computed on WordNet (WN) (Miller (1995); Fellbaum et al. (1998)). We take all nouns, verbs and adjectives from the WordNet corpus and remove all words with no hypernyms (these are typically isolated nodes in the WordNet graph for which WP values are not available). The intersection with the 660K OSCAR vocabulary leaves 46K words, which we index based on the semantic hashing models. For each query word $w$ and its retrieved set $V(w)$, the average WP similarity is computed across all pairs $w, v$ with $v \in V(w)$. Self-pairs are removed, and empty $V(w)$ are given 0 values. This WP measure is bounded between 0 and 1, with higher values indicating more semantically meaningful clusters.\n• HR@n - HitRatio score on the heldout (test) 4600 pairs, where all colliding words for a query are retrieved, and if the target word appears in the top $n$ items ranked by cosine similarity (of the dense embeddings), the query achieves a HR of 1. This is the only measure in the OSCAR experiment to use the dense embeddings.\n• Mean Average Precision (MAP) - Consider the rank position of each relevant retrieved item (in the top $R$) based on Hamming distance: $K_1, K_2, \dots, K_R$. First calculate the precision at $k$ by setting a rank threshold $k \leq R$ and taking the ratio of relevant items in the top $k$ divided by $k$ (ignoring items ranked lower than $k$). Next, calculate the average of the precision values at $K_r$ for $0 < r \leq R$. Finally, mean average precision is calculated by taking the mean of average precision over all queries.\n• Recall@2 - Retrieve everything within Hamming distance 2 and calculate recall.\nA.4 QUALITATIVE PLOTS AND TABLES\nTable 7 shows example hash siblings retrieved in the OSCAR model for a set of queries.
These are constructed from exact hash collisions only – no search or ANN is performed in either the binary representation space or the dense embedding space. Figures 5, 6 and 7 show t-SNE plots for the dense embeddings on the WordNet set. Figures 8 and 9 show t-SNE plots for the dense embeddings associated with the single hash that resembles a “Generic Food” cluster. These figures demonstrate that there is significant structure remaining in the dense version of the embeddings that is semantically meaningful." } ]
2020
null
SP:7611ee6b9dfabf7ec6a65da58cb6e3892705e1c9
[ "This paper introduces a new method for leveraging auxiliary information and unlabelled data to improve out-of-distribution model performance. Theoretically, in a linear model with latent variables, they demonstrate using auxiliary data as inputs helps in-distribution test-error, but can hurt out-of-distribution error, while using auxiliary data to pretrain a \"good\" representation always improve out-of-distribution error. The proposed method uses the auxiliary data to learn an initial model, which generates psuedolabels to fine-tune the pretrained model." ]
Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both in- and out-of-distribution (OOD). The goal is to learn a model which performs well both in-distribution and OOD. In these settings, auxiliary information is often cheaply available for every input. How should we best leverage this auxiliary information for the prediction task? Empirically across three image and time-series datasets, and theoretically in a multi-task linear regression setting, we show that (i) using auxiliary information as input features improves in-distribution error but can hurt OOD error; but (ii) using auxiliary information as outputs of auxiliary pre-training tasks improves OOD error. To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a model on OOD auxiliary outputs and fine-tunes this model with the pseudolabels (self-training). We show both theoretically and empirically that In-N-Out outperforms auxiliary inputs or outputs alone on both in-distribution and OOD error.
[ { "affiliations": [], "name": "DISTRIBUTION ROBUSTNESS" }, { "affiliations": [], "name": "Sang Michael Xie" }, { "affiliations": [], "name": "Ananya Kumar" }, { "affiliations": [], "name": "Robbie Jones" }, { "affiliations": [], "name": "Fereshte Khani" }, { "affiliations": [], "name": "Tengyu Ma" }, { "affiliations": [], "name": "Percy Liang" } ]
[ { "authors": [ "Sajjad Ahmad", "Ajay Kalra", "Haroon Stephen" ], "title": "Estimating soil moisture using remote sensing data: A machine learning approach", "venue": "Advances in Water Resources,", "year": 2010 }, { "authors": [ "EA AlBadawy", "A Saha", "MA Mazurowski" ], "title": "Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing", "venue": "Med Phys.,", "year": 2018 }, { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "John Blitzer", "Fernando Pereira" ], "title": "Domain adaptation of natural language processing systems", "venue": "University of Pennsylvania,", "year": 2007 }, { "authors": [ "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": "arXiv preprint arXiv:2005.14165,", "year": 2020 }, { "authors": [ "Yaping Cai", "Kaiyu Guan", "Jian Peng", "Shaowen Wang", "Christopher Seifert", "Brian Wardlow", "Zhan Li" ], "title": "A high-performance and in-season classification system of field-level crop types using time-series landsat data and a machine learning approach", "venue": "Remote Sensing of Environment,", "year": 2018 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "Percy Liang", "John C. Duchi" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine Learning,", "year": 1997 }, { "authors": [ "Rich Caruana", "Virginia R. de Sa" ], "title": "Benefitting from the variables that variable selection discards", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2003 }, { "authors": [ "Yining Chen", "Colin Wei", "Ananya Kumar", "Tengyu Ma" ], "title": "Self-training avoids using spurious features under domain shift", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Gordon Christie", "Neil Fendley", "James Wilson", "Ryan Mukherjee" ], "title": "Functional map of the world", "venue": "In Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Hal Daumé III" ], "title": "Frustratingly easy domain adaptation", "venue": "In Association for Computational Linguistics (ACL),", "year": 2007 }, { "authors": [ "R S DeFries", "JRG Townshend" ], "title": "NDVI-derived land cover classifications at a global scale", "venue": "International Journal of Remote Sensing,", "year": 1994 }, { "authors": [ "Ruth DeFries", "Matthew Hansen", "John Townshend" ], "title": "Global discrimination of land cover types from metrics derived from AVHRR pathfinder data", "venue": "Remote Sensing of Environment,", "year": 1995 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Association for Computational Linguistics (ACL),", "year": 2019 }, { "authors": [ "Simon S. Du", "Wei Hu", "Sham M. Kakade", "Jason D. 
Lee", "Qi Lei" ], "title": "Few-shot learning via learning the representation, provably", "venue": null, "year": 2020 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "Francois Laviolette", "Mario March", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2016 }, { "authors": [ "Pall Oskar Gislason", "Jon Atli Benediktsson", "Johannes R. Sveinsson" ], "title": "Random forests for land cover classification", "venue": "Pattern Recognition Letters,", "year": 2006 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Judy Hoffman", "Eric Tzeng", "Taesung Park", "Jun-Yan Zhu", "Phillip Isola", "Kate Saenko", "Alexei A. Efros", "Trevor Darrell" ], "title": "Cycada: Cycle consistent adversarial domain adaptation", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Daniel Hsu", "Sham M. Kakade", "Tong Zhang" ], "title": "Random design analysis of ridge regression", "venue": "In Conference on Learning Theory (COLT),", "year": 2012 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": null, "year": 1905 }, { "authors": [ "Neal Jean", "Marshall Burke", "Michael Xie", "W. Matthew Davis", "David B. Lobell", "Stefano Ermon" ], "title": "Combining satellite imagery and machine learning to predict poverty", "venue": null, "year": 2016 }, { "authors": [ "Robin Jia", "Percy Liang" ], "title": "Adversarial examples for evaluating reading comprehension systems", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2017 }, { "authors": [ "Michael D. Johnson", "William W. Hsieh", "Alex J. Cannon", "Andrew Davidson", "Frédéric Bédard" ], "title": "Crop yield forecasting on the canadian prairies by remotely sensed vegetation indices and machine learning methods", "venue": "Agricultural and Forest Meteorology,", "year": 2016 }, { "authors": [ "Fereshte Khani", "Percy Liang" ], "title": "Removing spurious features can hurt accuracy and affect groups disproportionately", "venue": "In ACM Conference on Fairness, Accountability, and Transparency (FAccT),", "year": 2021 }, { "authors": [ "Serkan Kiranyaz", "Onur Avci", "Osama Abdeljaber", "Turker Ince", "Moncef Gabbouj", "Daniel J" ], "title": "Inman. 
1d convolutional neural networks and applications: A survey", "venue": null, "year": 1905 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "Ananya Kumar", "Tengyu Ma", "Percy Liang" ], "title": "Understanding self-training for gradual domain adaptation", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "N. Kussul", "M. Lavreniuk", "S. Skakun", "A. Shelestov" ], "title": "Deep learning classification of land cover and crop types using remote sensing data", "venue": "IEEE Geoscience and Remote Sensing Letters,", "year": 2017 }, { "authors": [ "David J. Lary", "Amir H. Alavi", "Amir H. Gandomi", "Annette L. Walker" ], "title": "Machine learning in geosciences and remote sensing", "venue": "Geoscience Frontiers,", "year": 2016 }, { "authors": [ "Ainong Li", "Shunlin Liang", "Angsheng Wang", "Jun Qin" ], "title": "Estimating crop yield from multi-temporal satellite data using multivariate regression and neural network techniques", "venue": "Photogrammetric Engineering & Remote Sensing,", "year": 2007 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Ross Lunetta", "Joseph F Knight", "L Dorsey Worthy" ], "title": "Land-cover change detection using multi-temporal MODIS NDVI data", "venue": "Remote sensing of environment,", "year": 2006 }, { "authors": [ "Aaron E. Maxwell", "Timothy A. Warner", "Fang Fang" ], "title": "Implementation of machine-learning classification in remote sensing: an applied review", "venue": "International Journal of Remote Sensing,", "year": 2018 }, { "authors": [ "Amir Najafi", "Shin ichi Maeda", "Masanori Koyama", "Takeru Miyato" ], "title": "Robustness to adversarial perturbations in learning from incomplete data", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Jianmo Ni", "Jiacheng Li", "Julian McAuley" ], "title": "Justifying recommendations using distantly-labeled reviews and fine-grained aspects", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2019 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John C. 
Duchi", "Percy Liang" ], "title": "Understanding and mitigating the tradeoff between robustness and accuracy", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Alexander Ratner", "Stephen H Bach", "Henry Ehrenberg", "Jason Fries", "Sen Wu", "Christopher Ré" ], "title": "Snorkel: Rapid training data creation with weak supervision", "venue": "In Very Large Data Bases (VLDB),", "year": 2017 }, { "authors": [ "Alexander J Ratner", "Christopher M De Sa", "Sen Wu", "Daniel Selsam", "Christopher Ré" ], "title": "Data programming: Creating large training sets, quickly", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do imagenet classifiers generalize to imagenet", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-Net: Convolutional networks for biomedical image segmentation", "venue": null, "year": 2015 }, { "authors": [ "Marc Rußwurm", "Sherrie Wang", "Marco Korner", "David Lobell" ], "title": "Meta-learning for few-shot land cover classification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Aleksander Madry" ], "title": "Breeds: Benchmarks for subpopulation", "venue": "shift. arXiv,", "year": 2020 }, { "authors": [ "Rui Shu", "Hung H. Bui", "Hirokazu Narui", "Stefano Ermon" ], "title": "A DIRT-T approach to unsupervised domain adaptation", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "K Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Masashi Sugiyama", "Matthias Krauledat", "Klaus-Robert Muller" ], "title": "Covariate shift adaptation by importance weighted cross validation", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2007 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Terrence Tao" ], "title": "Topics in random matrix theory", "venue": "American Mathematical Society,", "year": 2012 }, { "authors": [ "Rohan Taori", "Achal Dave", "Vaishaal Shankar", "Nicholas Carlini", "Benjamin Recht", "Ludwig Schmidt" ], "title": "Measuring robustness to natural distribution shifts in image classification", "venue": "arXiv preprint arXiv:2007.00644,", "year": 2020 }, { "authors": [ "Nilesh Tripuraneni", "Michael I. Jordan", "Chi Jin" ], "title": "On the theory of transfer learning: The importance of task", "venue": "diversity. arXiv,", "year": 2020 }, { "authors": [ "Jonathan Uesato", "Jean-Baptiste Alayrac", "Po-Sen Huang", "Robert Stanforth", "Alhussein Fawzi", "Pushmeet Kohli" ], "title": "Are labels required for improving adversarial robustness", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "E. 
Vermote" ], "title": "MOD09A1 MODIS/terra surface reflectance 8-day L3 global 500m SIN grid V006", "venue": null, "year": 2015 }, { "authors": [ "Sherrie Wang", "William Chen", "Sang Michael Xie", "George Azzari", "David B. Lobell" ], "title": "Weakly supervised deep learning for segmentation of remote sensing imagery", "venue": "Remote Sensing,", "year": 2020 }, { "authors": [ "Karl Weiss", "Taghi M Khoshgoftaar", "DingDing Wang" ], "title": "A survey of transfer learning", "venue": "Journal of Big Data,", "year": 2016 }, { "authors": [ "Sen Wu", "Hongyang R. Zhang", "Christopher Ré" ], "title": "Understanding and improving information transfer in multi-task learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Michael Xie", "Neal Jean", "Marshall Burke", "David Lobell", "Stefano Ermon" ], "title": "Transfer learning from deep features for remote sensing and poverty mapping", "venue": "In Association for the Advancement of Artificial Intelligence (AAAI),", "year": 2016 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V. Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": null, "year": 2020 }, { "authors": [ "Christopher Yeh", "Anthony Perez", "Anne Driscoll", "George Azzari", "Zhongyi Tang", "David Lobell", "Stefano Ermon", "Marshall Burke" ], "title": "Using publicly available satellite imagery and deep learning to understand economic well-being in africa", "venue": "Nature Communications,", "year": 2020 }, { "authors": [ "Barret Zoph", "Golnaz Ghiasi", "Tsung-Yi Lin", "Yin Cui", "Hanxiao Liu", "Ekin D. Cubuk", "Quoc V. Le" ], "title": "Rethinking pre-training and self-training", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "When models are tested on distributions that are different from the training distribution, they typically suffer large drops in performance (Blitzer and Pereira, 2007; Szegedy et al., 2014; Jia and Liang, 2017; AlBadawy et al., 2018; Hendrycks et al., 2019a). For example, in remote sensing, central tasks include predicting poverty, crop type, and land cover from satellite imagery for downstream humanitarian, policy, and environmental applications (Xie et al., 2016; Jean et al., 2016; Wang et al., 2020; Rußwurm et al., 2020). In some developing African countries, labels are scarce due to the lack of economic resources to deploy human workers to conduct expensive surveys (Jean et al., 2016). To make accurate predictions in these countries, we must extrapolate to out-of-distribution (OOD) examples across different geographic terrains and political borders.\nWe consider a semi-supervised setting with few in-distribution labeled examples and many unlabeled examples from both in- and out-of-distribution (e.g., global satellite imagery). While labels are scarce, auxiliary information is often cheaply available for every input and may provide some signal for the missing labels. Auxiliary information can come from additional data sources (e.g., climate data from other satellites) or derived from the original input (e.g., background or non-visible spectrum image channels). This auxiliary information is often discarded or not leveraged, and how to best use them is unclear. One way is to use them directly as input features (aux-inputs); another is to treat them as prediction outputs for an auxiliary task (aux-outputs) in pre-training. Which approach leads to better in-distribution or OOD performance?\nAux-inputs provide more features to potentially improve in-distribution performance, and one may hope that this also improves OOD performance. Indeed, previous results on standard datasets show that improvements in in-distribution accuracy correlate with improvements in OOD accuracy (Recht et al., 2019; Taori et al., 2020; Xie et al., 2020; Santurkar et al., 2020). However, in this paper we find that aux-inputs can introduce more spurious correlations with the labels: as a result, while aux-inputs often improve in-distribution accuracy, they can worsen OOD accuracy. We give examples of this trend on CelebA (Liu et al., 2015) and real-world satellite datasets in Sections 5.2 and 5.3.\nConversely, aux-output methods such as pre-training may improve OOD performance through auxiliary supervision (Caruana, 1997; Weiss et al., 2016; Hendrycks et al., 2019a). Hendrycks et al.\n∗Equal contribution.\n𝑥\n𝑧\n𝑤\n𝑦\n𝑢\n𝐵∗\n𝐴∗ 𝐶∗ 𝜃\" 𝜃#\nFigure 2: Graphical model for our theoretical setting: prediction task with input x, target y, and auxiliary information z, which is related to y through the latent variable w and latent noise u.\n(2019a) show that pre-training on ImageNet can improve adversarial robustness, and Hendrycks et al. (2019b) show that auxiliary self-supervision tasks can improve robustness to synthetic corruptions. In this paper, we find that while aux-outputs improve OOD accuracy, the in-distribution accuracy is worse than with aux-inputs. 
Thus, we elucidate a tradeoff between in- and out-of-distribution accuracy that occurs when using auxiliary information as inputs or outputs.\nTo theoretically study how to best use auxiliary information, we extend the multi-task linear regression setting (Du et al., 2020; Tripuraneni et al., 2020) to allow for distribution shifts. We show that auxiliary information helps in-distribution error by providing useful features for predicting the target, but the relationship between the aux-inputs and the target can shift significantly OOD, worsening the OOD error. In contrast, the aux-outputs model first pre-trains on unlabeled data to learn a lower-dimensional representation and then solves the target task in the lower-dimensional space. We prove that the aux-outputs model improves robustness to arbitrary covariate shift compared to not using auxiliary information.\nCan we do better than using auxiliary information as inputs or outputs alone? We answer affirmatively by proposing the In-N-Out algorithm to combine the benefits of auxiliary inputs and outputs (Figure 1). In-N-Out first uses an aux-inputs model, which has good in-distribution accuracy, to pseudolabel in-distribution unlabeled data. It then pre-trains a model using aux-outputs and finally fine-tunes this model on the larger training set consisting of labeled and pseudolabeled data. We prove that In-N-Out, which combines self-training and pre-training, further improves both in-distribution and OOD error over the aux-outputs model.\nWe show empirical results on CelebA and two remote sensing tasks (land cover and cropland prediction) that parallel the theory. On all datasets, In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over aux-inputs or aux-outputs alone and improves 1–2% in-distribution, 2–3% OOD over not using auxiliary information on remote sensing tasks. Ablations of In-N-Out show that In-N-Out achieves similar improvements over pre-training or self-training alone (up to 5% in-distribution, 1–2% OOD on remote sensing tasks). We also find that using OOD (rather than in-distribution) unlabeled examples for pre-training is crucial for OOD improvements." }, { "heading": "2 SETUP", "text": "Let x∈Rd be the input (e.g., a satellite image), y ∈R be the target (e.g., crop type), and z ∈RT be the cheaply obtained auxiliary information either from additional sources (e.g., climate information) or derived from the original data (e.g., background).\nTraining data. Let Pid and Pood denote the underlying distribution of (x,y,z) triples in-distribution and out-of-distribution, respectively. The training data consists of (i) in-distribution labeled data {(xi, yi, zi)}ni=1 ∼ Pid, (ii) in-distribution unlabeled data {(xidi , zidi )} mid i=1 ∼ Pid, and (iii) out-of-distribution unlabeled data {(xoodi ,zoodi )} mood i=1 ∼Pood.\nGoal and risk metrics. Our goal is to learn a model from input and auxiliary information to the target, f :Rd×RT →R. For a loss function `, the in-distribution population risk of the model f is Rid(f)=Ex,y,z∼Pid [`(f(x,z),y)], and its OOD population risk isRood(f)=Ex,y,z∼Pood [`(f(x,z),y)]." }, { "heading": "2.1 MODELS", "text": "We consider three common ways to use the auxiliary information (z) to learn a model.\nBaseline. The baseline minimizes the empirical risk on labeled data while ignoring the auxiliary information (accomplished by setting z to 0):\nf̂bs =argmin f\n1\nn n∑ i=1 `(f(xi,0),yi). (1)\nAux-inputs. 
The aux-inputs model minimizes the empirical risk on labeled data while using the auxiliary information as features:
f̂in = argmin_f (1/n) ∑_{i=1}^n ℓ(f(xi, zi), yi). (2)
Aux-outputs. The aux-outputs model leverages the auxiliary information z by using it as the prediction target of an auxiliary task, in hopes that there is a low-dimensional feature representation that is common to predicting both z and y. Training the aux-outputs model consists of two steps:
In the pre-training step, we use all the unlabeled data to learn a shared feature representation. Let h : Rd → Rk denote a feature map and gz-out : Rk → RT denote a mapping from feature representation to the auxiliary outputs. Let ℓaux denote the loss function for the auxiliary information. We define the empirical risk of h and gz-out as:
R̂pre(h, gz-out) = (1/(mid + mood)) ( ∑_{i=1}^{mid} ℓaux(gz-out(h(x_i^id)), z_i^id) + ∑_{i=1}^{mood} ℓaux(gz-out(h(x_i^ood)), z_i^ood) ). (3)
The estimate of the feature map is ĥout = argmin_h min_{gz-out} R̂pre(h, gz-out).
In the transfer step, the model uses the pre-trained feature map ĥout and the labeled data to learn the mapping gy-out : Rk → R from feature representation to target y. We define the transfer empirical risk as:
R̂trans(ĥout, gy-out) = (1/n) ∑_{i=1}^n ℓ(gy-out(ĥout(xi)), yi) (4)
The estimate of the target mapping is ĝy-out = argmin_{gy-out} R̂trans(ĥout, gy-out). The final aux-outputs model is
f̂out(x, z) = ĝy-out(ĥout(x)). (5)
Like the baseline model, the aux-outputs model ignores the auxiliary information for prediction." }, { "heading": "3 THEORETICAL ANALYSIS OF AUX-INPUTS AND AUX-OUTPUTS MODELS", "text": "We now analyze the baseline, aux-inputs, and aux-outputs models introduced in Section 2. Our setup extends a linear regression setting commonly used for analyzing multi-task problems (Du et al., 2020; Tripuraneni et al., 2020).
Setup. See Figure 2 for the graphical model. Let w = B∗x ∈ Rk be a low-dimensional latent feature (k ≤ d) shared between auxiliary information z and the target y. Let u ∈ Rm denote unobserved latent variables not captured in x. We assume z and y are linear functions of u and w:
y = θw⊤ w + θu⊤ u + ε, (6)
z = A∗w + C∗u, (7)
where ε ∼ Pε denotes noise with mean 0 and variance σ2. As in Du et al. (2020), we assume the dimension of the auxiliary information T is greater than the feature dimension k, that is T ≥ k, and that A∗, B∗ and C∗ have full rank (rank k). We also assume T ≥ m, where m is the dimension of u.
Data. Let Px and Pu denote the distributions of x and u in-distribution (ID), and let P′x, P′u denote the distributions of x and u OOD. We assume x and u are independent, have distributions with bounded density everywhere, and have invertible covariance matrices. We assume the mean of u is zero in- and out-of-distribution1. We assume we have n ≥ m + d in-distribution labeled training examples and unlimited access to unlabeled data both ID and OOD, a common assumption in unsupervised domain adaptation theory (Sugiyama et al., 2007; Kumar et al., 2020; Raghunathan et al., 2020).
Loss metrics. We use the squared loss for the target and auxiliary losses: ℓ(ŷ, y) = (y − ŷ)2 and ℓaux(z, z′) = ‖z − z′‖22.
Models. We assume all model families (f, h, gz-out, gy-out) in Section 2 are linear.
Let S = (A∗, B∗, C∗, θw, θu, Px, Pu) denote a problem setting which satisfies all the above assumptions." }, { "heading": "3.1 AUXILIARY INPUTS HELP IN-DISTRIBUTION, BUT CAN HURT OOD", "text": "We first show that the aux-inputs model (2) performs better than the baseline model (1) in-distribution. 
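To make the linear setting concrete, here is a minimal numpy sketch of the data-generating process in equations (6)-(7) and of the baseline and aux-inputs least-squares fits. All constants (dimensions, the particular u-shift) are hypothetical illustrations, not the adversarial construction used later in Example 1; the direction of the OOD comparison depends on the specific shift.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, T, n = 10, 3, 3, 3, 200   # hypothetical sizes; T = m so u is recoverable from (w, z)

# A hypothetical problem setting S = (A*, B*, C*, theta_w, theta_u, Px, Pu).
B = rng.normal(size=(k, d))        # B*
A = rng.normal(size=(T, k))        # A*
C = rng.normal(size=(T, m))        # C*
theta_w, theta_u, sigma = rng.normal(size=k), rng.normal(size=m), 0.1

def sample(n, u_scale=1.0):
    x = rng.normal(size=(n, d))
    u = u_scale * rng.normal(size=(n, m))
    w = x @ B.T
    z = w @ A.T + u @ C.T                                        # eq. (7)
    y = w @ theta_w + u @ theta_u + sigma * rng.normal(size=n)   # eq. (6)
    return x, z, y

x, z, y = sample(n)
th_bs = np.linalg.lstsq(x, y, rcond=None)[0]                     # baseline: x only, eq. (1)
th_in = np.linalg.lstsq(np.hstack([x, z]), y, rcond=None)[0]     # aux-inputs: (x, z), eq. (2)

for name, u_scale in [("ID", 1.0), ("OOD (u scaled 5x)", 5.0)]:
    xt, zt, yt = sample(50_000, u_scale)
    print(name,
          "baseline MSE %.3f" % np.mean((xt @ th_bs - yt) ** 2),
          "aux-inputs MSE %.3f" % np.mean((np.hstack([xt, zt]) @ th_in - yt) ** 2))
```

In-distribution the aux-inputs fit should approach the noise floor σ2, since u is implicitly recoverable from (x, z), while the baseline is stuck with the extra variance σu2; this matches Proposition 1 below.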
Intuitively, the target y depends on both the inputs x (throughw) and latent variable u (Figure 2). The baseline model only uses x to predict y; thus it cannot capture the variation in y due to u. On the other hand, the aux-inputs model uses x and z to predict y. Since z is a function of x (through w) and u, u can be recovered from x and z by inverting this relation. Note that u is unobserved but implicitly recovered. The aux-inputs model can then combine u and x to predict y better.\nLet σ2u=Eu∼Pu [(θ>u u)2] denote the (in-distribution) variance of y due to the latent variables u. The following proposition shows that if σ2u>0 then with enough training examples the aux-inputs model has lower in-distribution population risk than the baseline model.2\nProposition 1. For all problem settings S, P , assuming regularity conditions (bounded x, u, sub-Gaussian noise , and T =m), and σ2u>0, for all δ>0, there existsN such that for n≥N number of training points, with probability at least 1−δ over the training examples, the aux-inputs model improves over the baseline:\nRid(f̂in)<Rid(f̂bs). (8)\nAlthough using z as input leads to better in-distribution performance, we show that the aux-inputs model can perform worse than the baseline model OOD for any number of training examples. Intuitively, the aux-inputs model uses z, which can be unreliable OOD because z depends on u and u can shift OOD. In more detail, the aux-inputs model learns to predict ŷ= θ̂>x,inx+θ̂ > z,inz, where the true output y=θ>x x+θ > z z, and θ̂z,in is an approximation to the true parameter θz , that has some error. Out-of-distribution u and hence z can have very high variance, which would magnify (θ̂z,in−θz)>z and lead to bad predictions.\nExample 1. There exists a problem setting S , P , such that for every n, there is some test distribution P ′x,P ′ u with:\nE[Rood(f̂in)]>E[Rood(f̂bs)] (9)" }, { "heading": "3.2 PRE-TRAINING IMPROVES RISK UNDER ARBITRARY COVARIATE SHIFT", "text": "While using z as inputs (aux-inputs) can worsen performance relative to the baseline, our first main result is that the aux-outputs model (which pre-trains to predict z from x, and then transfers the learned representation to predict y from x) outperforms the baseline model for all test distributions.\nIntuition. Referring to Figure 2, we see that the mapping from inputs x to auxiliary z passes through the lower dimensional features w. In the pre-training step, the aux-outputs model predicts z from x using a low rank linear model, and we show that this recovers the ‘bottleneck’ features w (up to symmetries; more formally we recover the rowspace of B?). In the transfer step, the aux-outputs model learns a linear map from the lower-dimensionalw to y, while the baseline predicts y directly from x. To warm up, without distribution shift, the expected excess risk only depends on the dimension of the input, and not the conditioning. That is, the expected excess risk in linear regression is exactly dσ2/n, where d is the input dimension, so the aux-outputs trivially improves over the baseline since dim(w)<dim(x). In contrast, the worst case risk under distribution shift depends on the conditioning of the data, which could be worse for w than x. Our proof shows that the worst case risk (over all x and u) is still better for the aux-outputs model because projecting to the low-dimensional feature representation “zeroes-out” some error directions.\n1This is not limiting because bias in z can be folded into x. 
2Since z is typically low-dimensional and x is high-dimensional (e.g., images), the aux-inputs model needs\nonly a slightly larger number of examples before it outperforms the baseline.\nAlgorithm 1 In-N-Out Require: in-distribution labeled data {(xi,yi,zi)}ni=1∼Pid,\nin-distribution unlabeled data {(xidi ,zidi )} mid i=1∼Pid, OOD unlabeled data {(xoodi ,zoodi )} mood i=1 ∼Pood\n1: Learn f̂in : (x,z) 7→y from in-distribution labeled data {(xi,yi,zi)}ni=1∼Pid 2: Pre-train gz-out◦ĥout :x 7→z on aux-outputs from all unlabeled data {(xidi ,zidi )} mid i=1∪{(x ood i ,z ood i )} mood i=1 3: Return f̂= ĝ◦ĥout :x 7→y trained on labeled and pseudolabeled data {(xi,yi)}ni=1∪{(xidi ,f̂in(xidi ,zidi )} mid i=1\nTheorem 1. For all problem settings S , noise distributionsP , test distributionsP ′x ,P ′u, and n≥m+d number of training points:\nE[Rood(f̂out)]≤E[Rood(f̂bs)]. (10)\nSee Appendix A for the proof." }, { "heading": "4 IN-N-OUT: COMBINING AUXILIARY INPUTS AND OUTPUTS", "text": "We propose the In-N-Out algorithm, which combines both the aux-inputs and aux-outputs models for further complementary gains (Figure 1). As a reminder: (i) The aux-inputs model (x,z→y) is good in-distribution, but bad OOD because z can be misleading OOD. (ii) The aux-outputs model (x→y) is better than the baseline OOD, but worse than aux-inputs in-distribution because it doesn’t use z. (iii) We propose the In-N-Out model (x→y), which uses pseudolabels from aux-inputs (stronger model) in-distribution to transfer in-distribution accuracy to the aux-outputs model. The In-N-Out model does not use z to make predictions since z can be misleading / spurious OOD.\nIn more detail, we use the aux-inputs model (which is good in-distribution) to pseudolabel in-distribution unlabeled data. The pseudolabeled data provides more effective training samples (self-training) to fine-tune an aux-outputs model pre-trained on predicting auxiliary information from all unlabeled data. We present the general In-N-Out algorithm in Algorithm 1 and analyze it in the linear multi-task regression setting of Section 2. The In-N-Out model f̂ = ĝ ◦ ĥout optimizes the empirical risk on labeled and pseudolabeled data:\nĝ=argmin g\n(1−λ)R̂trans(ĥout,g)+λR̂st(ĥout,f̂in,g) (11)\nwhere R̂st(ĥout,f̂in,g)= 1m1 ∑m1 i=1`(g(ĥout(x id i )),f̂in(x id i ,z id i )) is the loss of self-training on pseudolabels from the aux-inputs model, and λ ∈ [0,1] is a hyperparameter that trades off between labeled and pseudolabeled losses. In our experiments, we fine-tune ĝ and ĥout together.\nTheoretical setup. Because fine-tuning is difficult to analyze theoretically, we analyze a slightly modified version of In-N-Out where we train an aux-inputs model to predict y given the features ĥout(x) and auxiliary information z, so the aux-inputs model ĝin : Rk × RT → R is given by ĝin = argming 1 n ∑n i=1`(g(ĥout(xi),zi),yi). The population self-training loss on pseudolabels from the aux-inputs model ĝin ◦ ĥout is: Rst(ĥout,ĝin,g) = Ex,z∼Pid [`(g(ĥout(x)),ĝin(ĥout(x),z))], and we minimize the self-training loss: ĝ=argmingRst(ĥout,ĝin,g). At test time given input x,z the In-N-Out model predicts ĝ(ĥout(x)). For the theory, we assume all models (ĝin,ĝ,andĥout) are linear." 
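As a concrete reference, here is a minimal numpy sketch of Algorithm 1 in the linear setting above. The reduced-rank recipe for the pre-training step (an SVD truncation of the least-squares map x → z) and all sizes are illustrative assumptions; the experiments in Section 5 instead pre-train and fine-tune neural networks.

```python
import numpy as np

def lstsq(X, Y):
    """Least-squares coefficients W with Y ~ X @ W."""
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def in_n_out(x_lab, z_lab, y_lab, x_id, z_id, x_ood, z_ood, k, lam=0.5):
    # Step 1: aux-inputs model f_in(x, z), fit on labeled in-distribution data.
    th_in = lstsq(np.hstack([x_lab, z_lab]), y_lab)

    # Step 2: pre-train the feature map h on aux-outputs with ALL unlabeled data.
    # One simple linear instantiation: rank-k truncation of the least-squares map x -> z.
    Xu, Zu = np.vstack([x_id, x_ood]), np.vstack([z_id, z_ood])
    W = lstsq(Xu, Zu)                        # d x T map
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    H = U[:, :k]                             # h(x) = x @ H, a k-dimensional representation

    # Step 3: fit g on labeled data plus ID data pseudolabeled by f_in, weighting the
    # two losses by (1 - lam) and lam as in equation (11) (up to the 1/n, 1/m1 factors).
    y_pseudo = np.hstack([x_id, z_id]) @ th_in
    F = np.vstack([x_lab @ H, x_id @ H])
    t = np.concatenate([y_lab, y_pseudo])
    sw = np.sqrt(np.concatenate([np.full(len(y_lab), 1 - lam),
                                 np.full(len(y_pseudo), lam)]))
    g = lstsq(F * sw[:, None], t * sw)       # weighted least squares
    return lambda x_new: x_new @ H @ g       # final model uses x only, never z
```

Note that the final predictor drops z entirely, matching the discussion above: z is only trusted in-distribution, where it is used to produce pseudolabels.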
}, { "heading": "4.1 IN-N-OUT IMPROVES OVER PRE-TRAINING UNDER ARBITRARY COVARIATE SHIFT", "text": "We prove that In-N-Out helps on top of pre-training, as long as the auxiliary features give us information about y relative to the noise in-distribution—that is, if σ2u is much larger than σ 2.\nTo build intuition, first consider the special case where the noise σ2 = 0 (equivalently, = 0). Since u can be recovered fromw and z, we can write y as a linear function ofw and z: y=γ>ww+γ > z z. We train an aux-inputs model ĝin fromw,z to y on finite labeled data. Since there is no noise, ĝin predicts y perfectly from w,z (we learn γw and γz). We use ĝin to pseudolabel a large amount of unlabeled data, and since ĝin predicts y perfectly fromw,z, the pseudolabels are perfect. So here pseudolabeling gives us a much larger and correctly labeled dataset to train the In-N-Out model on.\nThe technical challenge is proving that self-training helps under arbitrary covariate shift even when the noise is non-zero (σ2 > 0), so the aux-inputs model ĝin that we learn is accurate but not perfect.\nIn this case, the pseudolabels have an error which propagates to the In-N-Out model self-trained on these pseudolabels, but we want to show that the error is lower than for the aux-outputs model. The error in linear regression is proportional to the noise of the target y, which for the aux-outputs model is σ2 +σ2u. We show that the In-N-Out model uses the aux-inputs model to reduce the dependence on the noise σ2u, because the aux-inputs model uses both w and z to predict y. The proof reduces to showing that the max singular value for the In-N-Out error matrix is less than the min-singular value of the aux-outputs error matrix with high probability. A core part of the argument is to lower bound the min-singular value of a random matrix (Lemma 3). This uses techniques from random matrix theory (see e.g., Chapter 2.7 in Tao (2012)); the high level idea is to show that with probability 1−δ each column of the random matrix has a (not too small) component orthogonal to all other columns.\nTheorem 2. In the linear setting, for all problem settings S with σ2u > 0, test distributions P ′x,P ′u, n≥m+d number of training points, and δ>0, there exists a,b>0 such that for all noise distributions P , with probability at least 1−δ over the training examples and test example x′∼P ′x, the ratio of the excess risks (for all σ2 small enough that a−bσ2>0) is:\nRoodin-out−R∗\nRoodout −R∗ ≤ σ\n2\na−bσ2 (12)\nHere R∗ = ming∗,h∗Ex′,y′,z′∼P ′ [`(g∗(h∗(x′)),y′)] is the min. possible (Bayes-optimal) OOD risk, Roodin-out = Ey′∼P ′y′|x′ [`(ĝ(ĥout(x ′)), y′)] is the risk of the In-N-Out model on test example x′, and Roodout =Ey′∼P ′y′|x′ [`(ĝy-out(ĥout(x ′)),y′)] is the risk of the aux-outputs model on test example x′. Note thatRoodin-out andR ood out are random variables that depend on the test input x ′ and the training setX .\nRemark 1. As σ→ 0, the excess risk ratio of In-N-Out to Aux-outputs goes to 0, so the In-N-Out estimator is much better than the aux-outputs estimator.\nThe proof of the result is in Appendix A." }, { "heading": "5 EXPERIMENTS", "text": "We show on real-world datasets for land cover and cropland prediction that aux-inputs can hurt OOD performance, while aux-outputs improve OOD performance. In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over other models on all datasets (Section 5.2). 
Secondly, we show that the tradeoff between in-distribution and OOD performance depends on the choice of auxiliary information on CelebA and cropland prediction (Section 5.3). Finally, we show that OOD unlabeled examples are important for improving OOD robustness (Section 5.4)." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "We give a summary of considered datasets and setup here — see Figure 3 and Appendix B for details. Our datasets use auxiliary information both derived from the input (CelebA, Cropland) and from other sources (Landcover).\nCelebA. In CelebA (Liu et al., 2015), the input x is a RGB image (resized to 64×64), the target y is a binary label for gender, and the auxiliary information z are 7 (of 40) binary-valued attributes derived from the input (e.g., presence of makeup, beard). We designate the set of images where the celebrity is wearing a hat as OOD. We use a ResNet18 as the backbone model architecture for all models (see Appendix B.1 for details).\nCropland. Crop type or cropland prediction is an important intermediate problem for crop yield prediction (Cai et al., 2018; Johnson et al., 2016; Kussul et al., 2017). The input x is a 50× 50 RGB image taken by a satellite, the target y is a binary label that is 1 when the image contains majority cropland, and the auxiliary information z is the center location coordinate plus 50× 50 vegetation-related bands. The vegetation bands in the auxiliary information z is derived from the original satellite image, which contains both RGB and other frequency bands. We use the Cropland dataset from Wang et al. (2020), with data from the US Midwest. We designate Iowa, Missouri, and Illinois as in-distribution and Indiana and Kentucky as OOD. Following Wang et al. (2020), we use a U-Net-based model (Ronneberger et al., 2015). See Appendix B.2 for details.\nLandcover. Land cover prediction involves classifying the land cover type (e.g., “grasslands”) from satellite data at a location (Gislason et al., 2006; Rußwurm et al., 2020)). The input x is a time series measured by NASA’s MODIS satellite (Vermote, 2015), the target y is one of 6 land cover classes, and the auxiliary information z is climate data (e.g., temperature) from ERA5, a dataset computed from various satellites and weather station data (C3S, 2017). We designate non-African locations as in-distribution and Africa as OOD. We use a 1D-CNN to handle the temporal structure in the MODIS data. See Appendix B.3 for details.\nData splits. We first split off the OOD data, then split the rest into training, validation, and in-distribution test (see Appendix B for details). We use a portion of the training set and OOD set as in-distribution and OOD unlabeled data respectively. The rest of the OOD set is held out as test data. We run 5 trials, where we randomly re-generate the training/unlabeled split for each trial (keeping held-out splits fixed). We use a reduced number of labeled examples from each dataset (1%, 5%, 10% of labeled examples for CelebA, Cropland, and Landcover respectively), with the rest as unlabeled.\nRepeated self-training. In our experiments, we also consider augmenting In-N-Out models with repeated self-training, which has fueled recent improvements in both domain adaptation and ImageNet classification (Shu et al., 2018; Xie et al., 2020). For one additional round of repeated self-training, we use the In-N-Out model to pseudolabel all unlabeled data (both ID and OOD) and also initialize the weights with the In-N-Out model. 
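A schematic of one extra round of repeated self-training as just described (a hedged sketch: `model.predict` and the warm-start `fit` routine are placeholders for whatever training loop is in use):

```python
import numpy as np

def repeated_self_training_round(model, fit, x_lab, y_lab, x_unl_all):
    """One round: pseudolabel ALL unlabeled data (ID and OOD) with the current
    In-N-Out model, then continue training from its current weights."""
    y_pseudo = model.predict(x_unl_all)
    x_all = np.vstack([x_lab, x_unl_all])
    y_all = np.concatenate([y_lab, y_pseudo])
    fit(model, x_all, y_all)   # assumed to update `model` in place (warm start)
    return model
```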
Each method is trained with early stopping and hyperparameters are chosen using the validation set." }, { "heading": "5.2 MAIN RESULTS", "text": "Table 1 compares the in-distribution (ID) and OOD accuracy of different methods. In all datasets, pre-training with aux-outputs improves OOD performance over the baseline, and In-N-Out (with or without repeated ST) generally improves both in- and out-of-distribution performance over all other models.
CelebA. In CelebA, using auxiliary information either as aux-inputs or outputs improves both ID (2–4%) and OOD accuracy (5%). We hypothesize this is because the auxiliary information is quite robust. Figure 4 shows that there is a significant correlation (r=0.72) between ID and OOD accuracy for 100 different sets of aux-inputs, supporting results on standard datasets (Recht et al., 2019; Xie et al., 2020; Santurkar et al., 2020). In-N-Out achieves the best OOD performance and comparable ID performance even though there is no tradeoff between ID and OOD accuracy.
Remote sensing. In the remote sensing datasets, aux-inputs can induce a tradeoff where increasing ID accuracy hurts OOD performance. In cropland prediction, even with a small geographic shift (US Midwest), the baseline model has a significant drop from ID to OOD accuracy (4%). The aux-inputs model improves ID accuracy almost 1% above the baseline but OOD accuracy drops 6%. In land cover prediction, using climate information as aux-inputs decreases OOD accuracy by over 4% compared to the baseline. The aux-outputs model improves OOD, but decreases ID accuracy by 3% over the baseline.
Figure 5: In-distribution vs. OOD accuracy on CelebA when sequentially adding a random set of 15 auxiliary inputs one-by-one. Even if adding all 15 auxiliary inputs improves both in-distribution and OOD accuracy, some intermediate in-distribution gains can hurt OOD.
Table 2: Ablation study on the use of in-distribution vs. OOD unlabeled data in pre-training models on Landcover, where unlabeled sample size is standardized (much smaller than Table 1). Using OOD unlabeled examples is important for gains in OOD accuracy (%). Results are shown with 90% error intervals over 5 trials.
                      ID Test Acc     OOD Test Acc
Only in-distribution  69.73 ± 0.51    57.73 ± 1.58
Only OOD              69.92 ± 0.41    59.28 ± 1.01
Both                  70.07 ± 0.46    59.84 ± 0.98
Improving in-distribution accuracy over aux-outputs. One of the main goals of the self-training step in In-N-Out is to improve the in-distribution performance of the aux-outputs model. We compare to oracle models that use a large amount of in-distribution labeled data to compare the gains from In-N-Out. In Landcover, the oracle model which uses 160k labeled ID examples gets 80.5% accuracy. In-N-Out uses 16k labeled examples and 150k unlabeled ID examples (with 50k unlabeled OOD examples) and improves the ID accuracy of aux-outputs from 72.5% to 77.4%, closing most (62%) of the gap. In Cropland, the oracle model achieves 95.6% accuracy. Here, In-N-Out closes 80% of the gap between aux-outputs and the oracle, improving ID accuracy from 95.1% to 95.5%.
Ablations with only pre-training or self-training. We analyze the individual contributions of self-training and pre-training in In-N-Out. 
On both cropland and land cover prediction, In-N-Out outperforms standard self-training on pseudolabels from the aux-inputs model (In-N-Out without pre-training), especially on OOD performance, where In-N-Out improves by about 1% and 2% respectively. Similarly, In-N-Out improves upon pre-training (aux-outputs model) both ID and OOD for both datasets." }, { "heading": "5.3 CHOICE OF AUXILIARY INPUTS MATTERS", "text": "We find that the choice of auxiliary inputs affects the tradeoff between ID and OOD performance significantly, and thus is important to consider for problems with distribution shift. While Figure 4 shows that auxiliary inputs tend to simultaneously improve ID and OOD accuracy in CelebA, our theory suggests that in the worst case, there should be auxiliary inputs that worsen OOD accuracy. Indeed, Figure 5 shows that when taking a random set of 15 auxiliary inputs and adding them sequentially as auxiliary inputs, there are instances where an extra auxiliary input improves in-distribution but hurts OOD accuracy even if adding all 15 auxiliary inputs improves both ID and OOD accuracy. In cropland prediction, we compare using location coordinates and vegetation data as auxiliary inputs with only using vegetation data. The model with locations achieves the best ID performance, improving almost 1% in-distribution over the baseline with only RGB. Without locations (only vegetation data), the ID accuracy is similar to the baseline but the OOD accuracy improves by 1.5%. In this problem, location coordinates help with in-distribution interpolation, but the model fails to extrapolate to new locations." }, { "heading": "5.4 OOD UNLABELED DATA IS IMPORTANT FOR PRE-TRAINING", "text": "We compare the role of in-distribution vs. OOD unlabeled data in pre-training. Table 2 shows the results of using only in-distribution vs. only OOD vs. a balanced mix of unlabeled examples for pre-training on the Landcover dataset, where unlabeled sample size is standardized across the models (by reducing to the size of the smallest set, resulting in 4x less unlabeled data). Using only in-distribution unlabeled examples does not improve OOD accuracy, while having only OOD unlabeled examples does well both in-distribution and OOD since it also has access to the labeled in-distribution data. For the same experiment in cropland prediction, the differences were not statistically significant, perhaps due to the smaller geographic shift (across states in cropland vs. continents in landcover)." }, { "heading": "6 RELATED WORK", "text": "Multi-task learning and weak supervision. Caruana and de Sa (2003) proposed using noisy features (aux-outputs) as a multi-task output, but do not theoretically analyze this approach. Wu et al. (2020) also study multi-task linear regression. However, their auxiliary tasks must have true parameters that are closely aligned (small cosine distance) to the target task. Similarly, weak supervision works assume access to weak labels correlated with the true label (Ratner et al., 2016; 2017). In our paper,\nwe make no assumptions about the alignment of the auxiliary and target tasks beyond a shared latent variable while also considering distribution shifts.\nTransfer learning, pre-training, and self-supervision. We support empirical works that show the success of transfer learning and pre-training in vision and NLP (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; Devlin et al., 2019). Theoretically, Du et al. (2020); Tripuraneni et al. 
(2020) study pre-training in a similar linear regression setup. They show in-distribution generalization bound improvements, but do not consider OOD robustness or combining with auxiliary inputs. Hendrycks et al. (2019b) shows empirically that self-supervision can improve robustness to synthetic corruptions. We support these results by showing theoretical and empirical robustness benefits for pre-training on auxiliary information, which can be derived from the original input as in self-supervision.\nSelf-training for robustness. Raghunathan et al. (2020) analyze robust self-training (RST) (Carmon et al., 2019; Najafi et al., 2019; Uesato et al., 2019), which improves the tradeoff between standard and adversarially robust accuracy, in min-norm linear regression. Khani and Liang (2021) show how to use RST to make a model robust against a predefined spurious feature without losing accuracy. While related, we work in multi-task linear regression, study pre-training, and prove robustness to arbitrary covariate shifts. Kumar et al. (2020) show that repeated self-training on gradually shifting unlabeled data can enable adaptation over time. In-N-Out is complementary and may provide better pseudolabels in each step of this method. Chen et al. (2020) show that self-training can remove spurious features for Gaussian input features in linear models, whereas our results hold for general input distributions (with density). Zoph et al. (2020) show that self-training and pre-training combine for in-distribution gains. We provide theory to support this and also show benefits for OOD robustness.\nDomain adaptation. Domain adaptation works account for covariate shift by using unlabeled data from a target domain to adapt the model (Blitzer and Pereira, 2007; Daumé III, 2007; Shu et al., 2018; Hoffman et al., 2018; Ganin et al., 2016). Often, modern domain adaptation methods (Shu et al., 2018; Hoffman et al., 2018) have a self-training or entropy minimization component that benefits from having a better model in the target domain to begin with. Similarly, domain adversarial methods (Ganin et al., 2016) rely on the inductive bias of the source-only model to correctly align the source and target distributions. In-N-Out may provide a better starting point for these domain adaptation methods." }, { "heading": "7 DISCUSSION", "text": "Using spurious features for robustness. Counterintuitively, In-N-Out uses potentially spurious features (the auxiliary information, which helps in-distribution but hurts OOD accuracy) to improve OOD robustness. This is in contrast to works on removing spurious features from the model (Arjovsky et al., 2019; Ilyas et al., 2019; Chen et al., 2020). In-N-Out promotes utilizing all available information by leveraging spurious features as useful in-distribution prediction signals rather than throwing them away.\nGeneral robustness with unlabeled data. In-N-Out is an instantiation of a widely applicable paradigm for robustness: collect unlabeled data in all parts of the input space and learn better representations from the unlabeled data before training on labeled data. This paradigm has driven large progress in few-shot generalization in vision (Hendrycks et al., 2019a;b) and NLP (Devlin et al., 2019; Brown et al., 2020). In-N-Out enriches this paradigm by proposing that some features of the collected data can be used as input and output simultaneously, which results in robustness to arbitrary distribution shifts.\nLeveraging metadata and unused features in applications. 
Many applications have inputs indexed by metadata such as location coordinates or timestamps (Christie et al., 2018; Yeh et al., 2020; Ni et al., 2019). We can use such metadata to join (in a database sense) other auxilary data sources on this metadata for use in In-N-Out. This auxiliary information may often be overlooked or discarded, but In-N-Out provides a way to incorporate them to improve both in- and out-of-distribution accuracy.\nDivision between input features and auxiliary information. While a standard division between inputs and auxiliary information may exist in some domains, In-N-Out applies for any division of the input. An important further question is how to automatically choose this division under distribution shifts." }, { "heading": "8 CONCLUSION", "text": "We show that while auxiliary information as inputs improve in-distribution and OOD on standard curated datasets, they can hurt OOD in real-world datasets. In contrast, we show that using auxiliary information as outputs by pretraining improves OOD performance. In-N-Out combines the strengths of auxiliary inputs and outputs for further improvements both in- and out-of-distribution." }, { "heading": "9 ACKNOWLEDGEMENTS", "text": "We thank Sherrie Wang and Andreas Schlueter for their help in procuring remote sensing data, Daniel Levy for his insight in simplifying the proof of Theorem 1, Albert Gu for a key insight in proving Lemma 3 using tools from random matrix theory, as well as Shyamal Buch, Pang Wei Koh, Shiori Sagawa, and anonymous reviewers for their valuable help and comments. This work was supported by an Open Philanthropy Project Award, an NSF Frontier Award as part of the Center for Trustworthy Machine Learning (CTML). SMX was supported by an NDSEG Fellowship. AK was supported by a Stanford Graduate Fellowship. TM was partially supported by the Google Faculty Award, JD.com, Stanford Data Science Initiative, and the Stanford Artificial Intelligence Laboratory." }, { "heading": "10 REPRODUCIBILITY", "text": "All code, data, and experiments are on CodaLab at this link." } ]
null
null
SP:b6dd62914f7464efb601c6d9f8a4d35e047447d5
[ "This paper studies the training of deep hierarchical VAEs and focuses on the problem of posterior collapse. It is argued that reducing the variance of the gradient estimate may help to overcome posterior collapse. The authors focus on reducing the variance of the functions parameterizing the variational distribution of each layer using a layer-wise smoothing operator based on the Ornstein-Uhlenbeck semigroup (parameterized by a parameter $\\rho$). The operator requires additional Monte-Carlo samples. The authors provide an analytical analysis of bias and variance. Last they train multiple VAEs models, measure the posterior collapse and observe a phase transition behaviour depending on the parameter $\\rho$." ]
Variational autoencoders with deep hierarchies of stochastic layers have been known to suffer from the problem of posterior collapse, where the top layers fall back to the prior and become independent of input. We suggest that the hierarchical VAE objective explicitly includes the variance of the function parameterizing the mean and variance of the latent Gaussian distribution which itself is often a high variance function. Building on this we generalize VAE neural networks by incorporating a smoothing parameter motivated by Gaussian analysis to reduce higher frequency components and consequently the variance in parameterizing functions and show that this can help to solve the problem of posterior collapse. We further show that under such smoothing the VAE loss exhibits a phase transition, where the top layer KL divergence sharply drops to zero at a critical value of the smoothing parameter that is similar for the same model across datasets. We validate the phenomenon across model configurations and datasets.
[]
[ { "authors": [ "Samuel R. Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M. Dai", "Rafal Józefowicz", "Samy Bengio" ], "title": "Generating Sentences from a Continuous Space", "venue": "In CoNLL", "year": 2016 }, { "authors": [ "Xi Chen", "Diederik P. Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Variational lossy autoencoder", "venue": "ArXiv, abs/1611.02731", "year": 2016 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: Non-linear Independent Components Estimation. arXiv:1410.8516 [cs", "venue": null, "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using Real NVP", "venue": "[cs, stat]", "year": 2017 }, { "authors": [ "Ishaan Gulrajani", "Kaushalendra Kumar", "Faruk Ahmed", "Adrien Ali Taïga", "Francesco Visin", "David Vázquez", "Aaron C. Courville" ], "title": "PixelVAE: A Latent Variable Model for Natural Images. ArXiv, abs/1611.05013", "venue": null, "year": 2017 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders", "venue": null, "year": 2019 }, { "authors": [ "Irina Higgins", "Loïc Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework", "venue": "In ICLR", "year": 2017 }, { "authors": [ "Svante Janson" ], "title": "Gaussian hilbert spaces, volume 129", "venue": null, "year": 1997 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": null, "year": 2014 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved Variational Inference with Inverse Autoregressive Flow", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "James Lucas", "George Tucker", "Roger Grosse", "Mohammad Norouzi" ], "title": "Understanding Posterior Collapse in Generative Latent Variable Models", "venue": null, "year": 2019 }, { "authors": [ "Lars Maaløe", "Marco Fraccaro", "Valentin Liévin", "Ole Winther" ], "title": "BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling", "venue": null, "year": 2019 }, { "authors": [ "Lars Maaløe", "Casper Kaae Sønderby", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Auxiliary Deep Generative Models. arXiv:1602.05473 [cs, stat", "venue": null, "year": 2016 }, { "authors": [ "Andrew C. Miller", "Nicholas J. Foti", "Alexander D’Amour", "Ryan P. 
Adams" ], "title": "Reducing reparameterization gradient variance", "venue": null, "year": 2017 }, { "authors": [ "Aäron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel Recurrent Neural Networks", "venue": null, "year": 2016 }, { "authors": [ "Ali Razavi", "Aäron van den Oord", "Ben Poole", "Oriol Vinyals" ], "title": "Preventing Posterior Collapse with delta-VAEs", "venue": null, "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In ICML", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": null, "year": 2014 }, { "authors": [ "Geoffrey Roeder", "Yuhuai Wu", "David Duvenaud" ], "title": "Sticking the landing: Simple, lower-variance gradient estimators for variational inference. 10 Under review as a conference", "venue": null, "year": 2017 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder Variational Autoencoders", "venue": null, "year": 2016 }, { "authors": [ "Jakub M. Tomczak", "Max Welling" ], "title": "Improving variational auto-encoders using householder flow. arXiv preprint arXiv:1611.09630", "venue": null, "year": 2016 }, { "authors": [ "Jakub M. Tomczak", "Max Welling" ], "title": "VAE with a VampPrior", "venue": "arXiv preprint arXiv:1705.07120", "year": 2017 }, { "authors": [ "Arash Vahdat", "William G. Macready", "Zhengbing Bian", "Amir Khoshaman", "Evgeny Andriyash" ], "title": "DVAE++: Discrete Variational Autoencoders with Overlapping Transformations", "venue": null, "year": 2018 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Ruslan Salakhutdinov", "Taylor Berg-Kirkpatrick" ], "title": "Improved variational autoencoders for text modeling using dilated convolutions", "venue": "In ICML", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Variational autoencoders (VAE) [10] are a popular latent variable model for unsupervised learning that simplifies learning by the introduction of a learned approximate posterior. Given data x and latent variables z, we specify the conditional distribution p(x|z) by parameterizing the distribution parameters by a neural network. Since it is difficult to learn such a model directly, another conditional distribution q(z|x) is introduced to approximate the posterior distribution. During learning the goal is to maximize the evidence lower bound (ELBO), which lower bounds the log likelihood, log p(x) ≥ Eq(z|x) [ log p(x|z)+log p(z)− log q(z|x) ] . In their simplest form, the generative model p(x|z) and the approximate posterior q(z|x) are Gaussian distributions optimized in unison. A natural way to increase the modeling capacity of VAE is to incorporate a hierarchy of stochastic variables. Such models, however, turn out to be difficult to train and higher levels in the hierarchy tend to remain independent of input data – a problem termed posterior collapse. Posterior collapse in VAEs manifests itself by the latent distribution tending to fall back to the prior. With hierarchical VAEs the effect is found to be more pronounced in the top layers farther from the output. For the purpose of the paper and for clarity of exposition, we focus on the simplest extension of hierarchical variational autoencoders where stochastic layers are stacked serially on top of each other [2, 21] , p(x, z) = p(x|z1)p(zL) ∏L−1 i=1 p(zi|zi+1) and q(z|x) = q(z1|x) ∏L−1 i=1 q(zi+1|zi). The intermediate distributions in this model are commonly taken to be Gaussian distributions parameterized by neural network functions, so that p(zi|zi+1) = N (zi|µ(zi+1), σ(zi+1)), where µ(z), σ(z) are neural networks computing the mean and variance of the Gaussian distribution. We refer to them as vanilla hierarchical variational autoencoders. For each stochastic layer in this model there is a corresponding KL divergence term in the objective given by\nE[KL(q(zi|zi−1)||p(zi|zi+1)]. (1)\nAs described later, expression 1 can be easily decomposed to show an explicit dependence on the variance of the parameterizing functions µ(zi), σ(zi) of the intermediate Gaussian distribution. We further show the KL divergence term to be closely related to the harmonics of the parameterizing function. For complex parameterizing functions the KL divergence term has large high frequency components (and thus high variance) which leads to unstable training causing posterior collapse.\nBuilding on this, we suggest a method for training the simplest hierarchical extension of VAE that avoids the problem of posterior collapse without introducing further architectural complexity [13, 21]. Given a hierarchical variational autoencoder, our training method incorporates a smoothing parameter (we denote this by ρ) in the neural network functions used to parameterize the intermediate latent distributions. The smoothing is done such that expected values are preserved, the higher frequencies are attenuated and the variance is reduced. Next, the gradients computed with the smooth functions are used to train the original hierarchical variational autoencoder.\nFor the construction of the smoothing transformations for VAEs with Gaussian latent spaces we make use of ideas from the analysis of Gaussian spaces. We analyze the stochastic functions in vanilla hierarchical VAEs as Hermite expansions on Gaussian spaces [9]. 
The Ornstein-Uhlenbeck (OU) semigroup from Gaussian analysis is a set of operators that we show to smoothly interpolate between a random variable and its expectation. The OU semigroup provides the appropriate set of smoothing operators which enable us to control variance and avoid posterior collapse.
We further show that by smoothing the intermediate parameterizing functions µ(z), σ(z) in the proposed manner, the KL divergence of the top layer sees a sudden sharp drop toward zero as the amount of smoothing is decreased. This behaviour is retained when we evaluate the KL divergence on the original unsmoothed variational autoencoder model. This behaviour is reminiscent of phase transitions from statistical mechanics and we adopt the same terminology to describe the phenomenon. Our experiments suggest that the phenomenon is general across datasets and commonly used architectures. Furthermore, the critical value of the smoothing parameter ρ at which the transition occurs is fixed for a given model configuration and varies with stochastic depth and width.
We make the following contributions. First, we establish a connection between higher harmonics, variance, posterior collapse and phase transitions in hierarchical VAEs. Second, we show that by using the Ornstein-Uhlenbeck semigroup of operators on the generative stochastic functions in VAEs we reduce higher frequencies and consequently variance to mitigate posterior collapse. We corroborate our findings experimentally and further obtain on CIFAR-10 likelihoods competitive with more complex architectural solutions alongside a reduction in model size. We refer to the proposed family of models as Hermite variational autoencoders (HVAE)." }, { "heading": "2 HERMITE VARIATIONAL AUTOENCODERS", "text": "" }, { "heading": "2.1 ANALYSIS ON GAUSSIAN SPACES", "text": "The analysis of Gaussian spaces studies functions of Gaussian random variables. These are real-valued functions defined on Rn endowed with the Gaussian measure. Many functions employed in machine learning are instances of such functions: decoders for variational autoencoders, as is the case in this work, and generators for generative adversarial networks being two examples.
By way of summary, the main facts we use from this field are that a function on a Gaussian space can be expanded in an orthonormal basis, where the basis functions are the Hermite polynomials. This orthonormal expansion is akin to a Fourier transform in this space. The second fact is that the coefficients of such an expansion can be modified in a way to reduce the variance of the expanded function by applying an operator from the Ornstein-Uhlenbeck semigroup of operators. Next, we give a brief introduction. For further details on Gaussian analysis we refer to [9].
Gaussian Spaces: Let L2(Rn, γ) be the space of square integrable functions, f : Rn → R, with the Gaussian measure γ(z) = ∏i N(zi|0, 1). Given functions f, g in this space, the inner product is given by 〈f, g〉 = Eγ(z)[f(z)g(z)].
Basis functions for L2(R, γ): Taking the space of univariate functions L2(R, γ), it is known that the polynomial functions φi(z) = z^i are a basis for this space. By a process of orthonormalization we obtain the normalized Hermite polynomial basis for this space. The first few Hermite polynomials are the following: h0(z) = 1, h1(z) = z, h2(z) = (z² − 1)/√2, and so on.
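A quick numerical check of this univariate basis (a standalone numpy sketch; `h` below is the normalized h_n = He_n/√(n!) built from numpy's probabilists' Hermite polynomials):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials He_n

rng = np.random.default_rng(0)
z = rng.normal(size=1_000_000)                 # samples from the Gaussian measure gamma

def h(n, zs):
    """Normalized Hermite polynomial h_n(z) = He_n(z) / sqrt(n!)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(zs, c) / math.sqrt(math.factorial(n))

# Monte Carlo estimate of the inner products <h_i, h_j> = E[h_i(z) h_j(z)];
# orthonormality says this Gram matrix should be close to the identity.
gram = [[round(float(np.mean(h(i, z) * h(j, z))), 3) for j in range(4)] for i in range(4)]
for row in gram:
    print(row)
```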
Basis functions for L2(Rn, γ): Letting α ∈ Nn be a multi-index, the basis functions for L2(Rn, γ) are obtained by multiplying the univariate basis functions across dimensions, hα(z) = ∏i hαi(zi).
Hermite expansion: A function in L2(Rn, γ) can be expressed as f = ∑_{α∈Nn} f̂(α)hα, where f̂(α) are the Hermite coefficients of f and are computed as f̂(α) = 〈f, hα〉 = Eγ(z)[f(z)hα(z)]. Plancherel's theorem is the following relation between the norm of f and f̂, which follows from orthonormality of the basis functions:
〈f, f〉 = ∑α f̂(α)², (2)
Ornstein-Uhlenbeck (OU) Semigroup: Given a parameter ρ ∈ [0, 1] and a Gaussian variable z, we construct a correlated variable z′ as z′ = ρz + √(1 − ρ²) zω, where zω ∼ N(0, 1) is a random standard Gaussian sample. The OU semigroup is a set of operators, denoted Uρ and parameterized by ρ ∈ [0, 1]. The action of Uρ on f at z is to average the function values on correlated z′s around z,
Uρf(z) = Ez′|z[f(z′)] = Ezω[f(ρz + √(1 − ρ²) zω)] (3)
The action of the Uρ operators on the Hermite expansion of a function f(z) is to decay Hermite coefficients according to their degree, Uρf(z) = ∑_{α∈Nn} ρ^{|α|} f̂(α)hα, where |α| = ∑i αi.
If z is reparameterized as z = σε1 + µ, the correlated OU sample is given by z′ = σ(ρε1 + √(1 − ρ²) ε2) + µ, where ε1, ε2 are standard Gaussian variables. This can also be expressed in terms of z as
z′ = ρz + (1 − ρ)µ + σ√(1 − ρ²) ε2, (4)" }, { "heading": "2.2 HERMITE EXPANSIONS FOR VAES", "text": "Our proposed method is a new training procedure for the vanilla hierarchical variational autoencoder that builds upon Hermite expansions of Gaussian functions and properties of the OU semigroup.
In the context of hierarchical variational autoencoders, the Gaussian function f is the generative model µi(zi+1) and σi(zi+1) that receives as input the latent variable zi+1 to return the Gaussian latent variable of the next layer, zi ∼ N(µi(zi+1), σi(zi+1)). We make use of the following properties of the OU semigroup to construct Gaussian functions of lower variance. The first property we employ is that the OU semigroup of operators interpolates between a random variable (ρ = 1) and its expectation (ρ = 0), where the parameter ρ controls the extent of the interpolation.
Proposition 1 The operators Uρ retain the expected value of the operated function, E[f] = E[Uρf].
Proposition 2 The operators Uρ interpolate between a random variable and its expectation. In particular, as ρ → 1, Uρf = f, and as ρ → 0, Uρf = E[f].
The second property we exploit is that the new random variable Uρf(z) has lower variance compared with the original variable f(z) and is in general a smoother function than f(z). The smoothing properties of the operator Uρ can be understood by examining the Hermite expansion of Uρf. First we note that we can express the expectation and variance of a function f in terms of its Hermite coefficients, specifically E[f] = f̂(0) and Var(f) = E[(f − E[f])²] = E[(f − f̂(0))²] = ∑_{α:|α|>0} f̂(α)², which follows from Plancherel's theorem (equation 2).
Replacing f with Uρf and using the Hermite expansion of Uρf from equation 3, the mean remains the same, E[Uρf] = ρ⁰f̂(0) = f̂(0), and the variance reduces like
Var[Uρf] = E[(Uρf − E[Uρf])²] = E[(Uρf − f̂(0))²] = ∑_{α:|α|>0} ρ^{2|α|} f̂(α)². (5)
The last equation indicates that the contribution to the variance by f̂(α) decays by an amount ρ^{2|α|} when ρ ∈ (0, 1). This, in turn, leads to a decrease in variance.
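As a sanity check of equation (5), a small numpy sketch: for f = h1 + h2 (so f̂(α) = 1 on the degree-1 and degree-2 coefficients and 0 elsewhere), the variance of a Monte Carlo estimate of Uρf should track ρ² + ρ⁴, up to Monte Carlo noise from the finite number of correlated samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def U_rho(f, z, rho, n_samples=128):
    """Monte Carlo estimate of (U_rho f)(z) = E_w[f(rho*z + sqrt(1-rho^2)*w)], w ~ N(0,1)."""
    w = rng.normal(size=(n_samples,) + z.shape)
    return f(rho * z + np.sqrt(1.0 - rho ** 2) * w).mean(axis=0)

f = lambda z: z + (z ** 2 - 1.0) / np.sqrt(2.0)   # f = h_1 + h_2
z = rng.normal(size=50_000)
for rho in (1.0, 0.8, 0.5, 0.0):
    smoothed = U_rho(f, z, rho)
    print(f"rho={rho}: mean {smoothed.mean():+.3f} (E[f] = 0),",
          f"var {smoothed.var():.3f} vs predicted {rho**2 + rho**4:.3f}")
```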
Algorithm. In essence, Hermite variational autoencoders are similar to variational autoencoders, save for applying the OU semigroup to the latent distributions p(zi|zi+1) that comprise the generator, and only to compute gradients during training. Specifically, we apply these operators to the functions parameterizing the mean and variance of the latent Gaussian distributions. For each distribution p(zi|zi+1) we substitute N(zi|µi(zi+1), σi(zi+1)) with N(zi|Uρµi(zi+1), Uρσi(zi+1)). The new functions result in latent distributions with parameters that have lower variance but the same expected value relative to the conditional input latent distribution.
In an alternative parameterization we apply the OU semigroup to the ratio of the mean and variance functions, Uρ[µi/σi](zi+1) (see the next section for a justification of this). The OU semigroup operators can also be applied to the approximate posterior functions, but we observe little benefit. In practice, we compute Uρµi(zi+1) and Uρσi(zi+1) by Monte Carlo averaging: since for a function f, Uρf = Ez′|z[f(z′)], where z′ are the correlated samples, we estimate the expectation by Monte Carlo averaging over z′. Experiments show that 5 to 10 samples suffice.
It is important to emphasize that the substitution of the lower variance functions for parameterizing the distributions is only done when computing gradients during training. All evaluations, training or test, are still done on the original hierarchical variational autoencoder model. Thus, the new training procedure has an additional computational cost only for the intermediate distributions in the generator, proportional to the number of correlated samples during training.
Complexity. In Hermite VAE the OU sampling operation is only applied in the intermediate stochastic layers of the generator network. In particular, it is not applied in the inference network or in the last layer of the decoder. The fact that OU sampling is not applied in the final stochastic layer computing p(x|z1) is especially important for deep VAEs for images, since feature maps are upsampled to match image dimensions in this layer. Thus, for 5 OU samples, the added computational and activation memory complexity is significantly less than 5 times the total cost of the base VAE model, and is 5 times the cost of only the higher decoder layers of the base model. An empirical comparison of maximum memory usage of various models can be found in Table 6."
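A minimal PyTorch sketch of this training-time substitution for one generator layer; the interface is an assumption (a `layer` returning (mu, sigma), and access to the input layer's (mu_z, sigma_z) for the correlated samples of equation (4)):

```python
import torch

def ou_smoothed_params(layer, mu_z, sigma_z, z, rho, n_samples=5):
    """Monte Carlo estimates of (U_rho mu_i)(z) and (U_rho sigma_i)(z).

    `z` was drawn as z = sigma_z * eps1 + mu_z; correlated samples follow eq. (4):
    z' = rho*z + (1 - rho)*mu_z + sigma_z*sqrt(1 - rho^2)*eps2.
    """
    mus, sigmas = [], []
    for _ in range(n_samples):
        eps2 = torch.randn_like(z)
        z_corr = rho * z + (1 - rho) * mu_z + sigma_z * (1 - rho ** 2) ** 0.5 * eps2
        mu, sigma = layer(z_corr)            # layer assumed to return (mu_i, sigma_i)
        mus.append(mu)
        sigmas.append(sigma)
    return torch.stack(mus).mean(0), torch.stack(sigmas).mean(0)

# During training only: use N(mu_s, sigma_s) in place of N(mu_i(z), sigma_i(z)) when
# computing gradients; evaluation keeps the original unsmoothed parameterization.
# mu_s, sigma_s = ou_smoothed_params(layer, mu_z, sigma_z, z, rho=0.9)
```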
From the inner term E_{q(z1,z2|x)}[(z1 − µp(z2))²] we focus on the quadratic µp(z2)², which is expanded as
(1/(2σp²)) E_{q(z1,z2|x)}[µp(z2)²] = (1/(2σp²)) E_{q(z1|x)}[E[µp(z2)]² + Var(µp)]. (7)
By Plancherel's theorem (equation 2) we have
(1/(2σp²)) E_{q(z1,z2|x)}[µp(z2)²] = (1/(2σp²)) E_{q(z1|x)}[µ̂p(0)² + ∑_{α:|α|>0} µ̂p(α)²]. (8)
This shows that for σp independent of z2, the KL divergence term in the ELBO contains the variance of the parameterizing function µp(z2).
In our proposal we replace µp(z2) by Uρ[µp(z2)], and the right side of equation 8 becomes
(1/(2σp²)) E_{q(z1|x)}[E[Uρµp]² + Var(Uρµp)] = (1/(2σp²)) E_{q(z1|x)}[µ̂p(0)² + ∑_{α:|α|>0} ρ^{2|α|} µ̂p(α)²] (9)
since E[Uρf] = E[f]. The new variance is of order O(ρ²). Comparing this objective with the original VAE objective, for the second term we get a bias proportional to the difference of the variances,
bias = (1/(2σp²))(Var(µp) − Var(Uρµp)).
Comparing equations 8 and 9, we see the bias to be O(1 − ρ²). The analysis above assumed σp independent of z2. For σp dependent on z2, we can make a similar argument for the variance of (z1 − µ(z2))/σ(z2) or for the ratio µ(z2)/σ(z2) by expanding the square and considering the terms separately." }, { "heading": "4 RELATED WORK", "text": "To increase the stochastic depth of VAEs [10, 19], [21] propose the Ladder VAEs. With an architecture that shares a top-down dependency between the encoder and the decoder, Ladder VAEs allow for interactions between the bottom-up and top-down signals and enable training VAEs that are several stochastic layers deep. Extending Ladder VAEs, [13] proposed the bidirectional-inference VAEs, adding skip connections in the generative model and a bidirectional stochastic path in the inference model.
[1] observed that the latent distribution collapses to the prior in deep stochastic hierarchies – a phenomenon now called posterior collapse. Posterior collapse appears in different contexts, including images and text, and is strongly associated with the presence of powerful decoders, be it LSTMs [1] for text or strong autoregressive models for images [16]: although the model may produce good reconstructions, it does not learn a meaningful generative distribution. A prevalent hypothesis behind posterior collapse is that when the decoder is strong enough to produce very low cross-entropy losses, the optimization may find it easier to simply set the KL divergence term to 0 to minimize the ELBO [1]. Making an association with probabilistic PCA, [12] hypothesize that posterior collapse is caused by local optima in the optimization landscape due to high variance, even without powerful decoders. High variance was also identified as a potential culprit for posterior collapse by [21].
Given the breadth of the problem, many have tried to address posterior collapse. [1, 8, 21, 14] anneal the KL divergence between the approximate posterior and the prior from 0 to 1. Unfortunately, this solution does not optimize the original ELBO formulation and is shown [27, 3] to cause instabilities, especially with large datasets and complex decoders. [11] introduce the concept of free bits, which ignores the gradient of the KL term when it is not large enough. [17] proposed δ-VAEs, which constrain the latent distribution to have a minimum distance to the prior. [7] monitor the mutual information between the latent and the observed variable to aggressively optimize the inference model before every model update.
[2] suggest that using a tighter multi-sample ELBO can help alleviate collapse to some extent.
While many [1, 21] have suggested a connection between posterior collapse and variance reduction, no real solution using variance reduction has been proposed. One reason may be the low variance the reparameterization trick [10, 19] already offers. Empirically, while the reparameterization is successful at producing low-variance forward and backward propagations in shallow models, it has not been enough for deeper and wider ones. Another reason, suggested by the approach of this paper, is that variance appears as a side effect of spectral complexity.
To reduce the variance in reparameterization gradients, [20] suggest removing a mean-zero score function term from the total derivative, while [15] build a control variate using a linear approximation for variance reduction. [2] propose importance-weighted gradients, and [24] extend [15] to multiple samples to obtain an estimator with improved signal-to-noise ratio. Other approaches to increase the power of VAE models include normalizing flows [18], better posterior or prior distributions [22], adding autoregressive components [6], or a combination of both [11].
Here, we theoretically argue and empirically validate that damping higher-frequency components, thus lowering variance, allows for training deeper latent hierarchies while addressing posterior collapse.
We rely on tools from the field of analysis on Gaussian spaces, amenable to the analysis of stochastic processes [9]." }, { "heading": "5 EXPERIMENTS", "text": "In the following experiments we first test our method's ability to prevent posterior collapse and compare with other methods designed for the same end. We then present our observation that the top-layer KL divergence undergoes a phase transition as we decrease the amount of smoothing. Finally, we compare performance on commonly used benchmarks against other methods from the literature.
We validate Hermite variational autoencoders on binary MNIST, OMNIGLOT and CIFAR-10 with various ResNet and MLP architectures. Validation ELBOs are evaluated using importance-weighted samples [2], with L100 and L5000 denoting evaluation with 100 and 5000 samples." }, { "heading": "5.1 INVESTIGATING POSTERIOR COLLAPSE", "text": "[Table residue: HVAE (L=4), L5000: -81.2; HVAE (L=5), L5000: -81.1]
We test Hermite VAEs for their ability to prevent posterior collapse on basic MLP network architectures. We compare on static and dynamically binarized MNIST against various standard methods for mitigating posterior collapse. For dynamic MNIST we choose two models: the first is a 4-stochastic-layer model with 64, 32, 16, 8 latent variables; its deterministic layers have two layers of 512, 256, 128, 64 units respectively, going from bottom to top. The second model is a 4-stochastic-layer VAE with 40 units per stochastic layer and 2 deterministic layers of 200 units per stochastic layer. For static MNIST we only use the second model described above. All models have a simple stacked architecture with no skip connections.
First we compare with a standard VAE on static MNIST. We trained the VAE with the standard training method and our method with ρ ∈ {0.9, 0.8}. We show the validation curves and the KL divergence of the top stochastic layer, KL(q(z4|z3) || N(0, 1)), in figure 2.
The standard training collapses the posterior immediately, while our method avoids posterior collapse and yields a better validation ELBO.
Next we compare against other methods designed to mitigate posterior collapse, including KL annealing, free bits [11] and importance-weighted objectives [2]. For KL annealing the annealing coefficient is set to 0 for the first 10,000 steps and is linearly annealed to 1 over the next 500,000 steps. For free bits, we apply the same free bits value to each stochastic layer. The free bits values are chosen from {0.5, 1.0, 2.0, 3.0}. We find that training slows down considerably when using free bits. Free bits values of 4.0 or more caused training to become unstable.
For this comparison we use the 40-40-40-40 4-layer architecture for all methods on static MNIST, and both the 40-40-40-40 and 64-32-16-8 architectures for dynamic MNIST. We show the results in tables 4 and 3, where we include the total KL divergence, the top-layer KL divergence, as well as the number of active units in each of the 4 layers. Compared to the baseline methods, our method is able to maintain significant activity across the layers and a considerably higher KL divergence in the top layer. For dynamic MNIST, the 64-32-16-8 model has successively smaller networks higher in the hierarchy, and with it the other methods show better activity in the latent units than with the 40-40-40-40 model. This suggests that training dynamics and the architecture affect the extent of posterior collapse with standard methods, and have less of an effect with our proposal.
Here we also find that employing other posterior collapse mitigation techniques can sometimes help with HVAE as well. As shown in table 4, KL annealing with the 4-layer HVAE allows it to reach a higher validation ELBO. However, we did not find KL annealing to be advantageous when used with HVAE on more complex architectures, and in particular we do not see improvement in CIFAR-10 performance." }, { "heading": "5.1.1 PHASE TRANSITIONS", "text": "We have given some evidence to show that attenuating the higher-frequency components of parameterizing functions by smoothing, thus also reducing variance, is a justifiable mitigation against posterior collapse.
Here we show that as the amount of smoothing is reduced (as ρ approaches 1) to recover the original VAE gradients, the top-level KL divergence shows a sudden decline at a critical value of the smoothing parameter. The sharpness of this decline depends on the model configuration (stochastic layers, latent dimension). On the other hand, our experiments suggest that the sharpness of the decline is independent of the dataset.
Table 5: Comparing bits per dimension, parameter efficiency and model depth between hierarchical methods and other well-performing methods on CIFAR-10. '+' indicates stochastic skip connections.
Model | BPD | Layers | Parameters
LVAE [13] | 3.60 | 15 | 72.36M
LVAE+ [13] | 3.41 | 15 | 73.35M
LVAE+ [13] | 3.45 | 29 | 119.71M
BIVA [13] | 3.12 | 15 | 102.95M
Discrete VAE++ [26] | 3.38 | – | –
NICE [4] | 4.48 | – | –
RealNVP [5] | 3.49 | – | –
NVAE [25] | 2.91 | – | –
VAE+IAF [11] | 3.11 | – | –
HVAE, L1 | 3.5 | 3 | 9.95M
HVAE, L1 | 3.46 | 4 | 12.4M
HVAE, L1 | 3.43 | 6 | 16.8M
HVAE+, L1 | 3.42 | 3 | 9.95M
HVAE+, L100 | 3.39 | 3 | 9.95M
An example of this phenomenon can be seen in figure 1 and the appendix. Here we show the top-layer KL divergence after training for 100k steps with varying ρ for 3 different architectures. We see that the phase transition becomes more pronounced with greater stochastic dimension.
Figure 3 further shows how HVAE can prevent collapse across a spectrum of varying stochastic depths and widths while maintaining good validation performance." }, { "heading": "5.2 BENCHMARK COMPARISONS", "text": "" }, { "heading": "5.2.1 MNIST & OMNIGLOT", "text": "MLPs. For static MNIST we report test ELBO for MLPs with 4 stochastic layers of 40 units each. Before each stochastic layer we have two deterministic layers of 200 tanh units.
We show the results in table 1, comparing with other methods that improve VAEs: IWAE [2], VAE with normalizing flows [18] and VampPrior [23].
The dynamic MNIST results are in table 2. The 5-layer HVAE model in this instance has latent dimensions 64, 32, 16, 8, 4, with two layers of 512, 256, 128, 64, 32 units in the respective stochastic layers.
Residual ConvNets. Next, we experiment with a more complex ResNet architecture on MNIST and OMNIGLOT, with up to 4 stochastic layers and up to 5 ResNet blocks between stochastic layers (14 × 14 feature maps). The networks have the same hierarchical structure as the MLP VAEs, with convolutional latent layers. We do not employ stochastic skip connections between blocks.
We improve on the MLP scores, reaching -96.08 validation ELBO on OMNIGLOT, compared to -97.65 and -97.56 for VAE (L=2) and VampPrior (L=2). The detailed results are in the appendix.
In both experiments we obtain competitive scores despite the simple architecture." }, { "heading": "5.2.2 CIFAR-10", "text": "We experiment with ResNets of up to 6 stochastic layers, intertwined with deterministic layers comprising 6 ResNet blocks. We used 100 feature maps for all deterministic layers. The stochastic layers have 8 feature maps of width 16 × 16. We also experiment with skip connections between stochastic layers. We report results in table 5. We obtain an ELBO of 3.5 bpd without skip connections. After adding skip connections, we improve to 3.42 bpd on the 3-layer architecture. This corresponds to an ELBO of 3.39 when evaluated with 100 importance samples. This result is on par with the 15-layer Ladder VAE (LVAE) [13], and comparable to the 15-layer LVAE+ [13] architecture that adds skip connections to LVAE.
Note also that the improvement comes at a significant reduction in parameters. Compared to Ladder VAE, the 3-layer Hermite VAE uses about 7 times fewer parameters." }, { "heading": "6 CONCLUSION", "text": "Training variational autoencoders with large hierarchies of stochastic layers has proven to be difficult. An often-shared observation when moving to more complex variational autoencoders is posterior collapse, where a subset of the latent variables falls back to the prior distribution.
For a solution we turn to the field of analysis of Gaussian functions and analyze intermediate VAE functions through their Hermite expansions. We argue that the Ornstein-Uhlenbeck semigroup can be used to reduce variance, and empirically show that its parameterizing variable can be used to precisely control phase transitions. We validate the analysis and solution on three datasets, MNIST, OMNIGLOT and CIFAR-10, where we are able to avoid posterior collapse and obtain performance that is competitive with more complex architectural solutions." } ]
2020
null
SP:2d25eeb93ba90f9c4064bf794f9a132a6859c8e4
[ "The paper proposes an approximation method, called NEMO (Normalized maximum likelihood Estimation for Model-based Optimization), to compute the conditional normalized maximum likelihood of a query data point as a way to quantify the uncertainty of a forward prediction model in offline model-based optimization problems. The main idea is to construct a conditional NML (CNML) distribution that maps high-dimensional inputs to a distribution over output variables. In addition, the paper provides theoretical motivation: the estimate of the true function given by the CNML is close to the best possible expert even if the test label is chosen adversarially, which makes it harder for an optimizer to exploit the model. Using this CNML with gradient-ascent-based optimization on three offline optimization benchmark datasets (Superconductor, GFP, MoleculeActivity), NEMO outperforms the other four baselines on the Superconductor dataset by roughly 1.4x to 1.7x, and generates results comparable to the four baseline methods on the GFP and MoleculeActivity datasets. " ]
In this work we consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points. This problem setting emerges in many domains where function evaluation is a complex and expensive process, such as in the design of materials, vehicles, or neural network architectures. Because the available data typically only covers a small manifold of the possible space of inputs, a principal challenge is to be able to construct algorithms that can reason about uncertainty and out-of-distribution values, since a naive optimizer can easily exploit an estimated model to return adversarial inputs. We propose to tackle this problem by leveraging the normalized maximum-likelihood (NML) estimator, which provides a principled approach to handling uncertainty and out-of-distribution inputs. While in the standard formulation NML is intractable, we propose a tractable approximation that allows us to scale our method to high-capacity neural network models. We demonstrate that our method can effectively optimize high-dimensional design problems in a variety of disciplines such as chemistry, biology, and materials engineering.
[ { "affiliations": [], "name": "Justin Fu" }, { "affiliations": [], "name": "Sergey Levine" } ]
[ { "authors": [ "Andrew Barron", "Jorma Rissanen", "Bin Yu" ], "title": "The minimum description length principle in coding and modeling", "venue": "IEEE Transactions on Information Theory,", "year": 1998 }, { "authors": [ "Endika Bengoetxea", "Pedro Larrañaga", "Isabelle Bloch", "Aymeric Perchant" ], "title": "Estimation of distribution algorithms: A new evolutionary computation approach for graph matching problems", "venue": "In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition,", "year": 2001 }, { "authors": [ "Koby Bibas", "Yaniv Fogel", "Meir Feder" ], "title": "Deep pnml: Predictive normalized maximum likelihood for deep neural networks", "venue": "arXiv preprint arXiv:1904.12286,", "year": 2019 }, { "authors": [ "David H Brookes", "Hahnbeom Park", "Jennifer Listgarten" ], "title": "Conditioning by adaptive sampling for robust design", "venue": "arXiv preprint arXiv:1901.10060,", "year": 2019 }, { "authors": [ "Clara Fannjiang", "Jennifer Listgarten" ], "title": "Autofocused oracles for model-based design", "venue": "arXiv preprint arXiv:2006.08052,", "year": 2020 }, { "authors": [ "Yaniv Fogel", "Meir Feder" ], "title": "Universal supervised learning for individual data", "venue": "arXiv preprint arXiv:1812.09520,", "year": 2018 }, { "authors": [ "Marta Garnelo", "Dan Rosenbaum", "Chris J Maddison", "Tiago Ramalho", "David Saxton", "Murray Shanahan", "Yee Whye Teh", "Danilo J Rezende", "SM Eslami" ], "title": "Conditional neural processes", "venue": "arXiv preprint arXiv:1807.01613,", "year": 2018 }, { "authors": [ "Anna Gaulton", "Louisa J. Bellis", "A. Patricia Bento", "Jon Chambers", "Mark Davies", "Anne Hersey", "Yvonne Light", "Shaun McGlinchey", "David Michalovich", "Bissan Al-Lazikani", "John P. 
Overington" ], "title": "Chembl: a large-scale bioactivity database for drug discovery", "venue": "Nucleic acids research,", "year": 2012 }, { "authors": [ "Kam Hamidieh" ], "title": "A data-driven statistical model for predicting the critical temperature of a superconductor", "venue": "Computational Materials Science,", "year": 2018 }, { "authors": [ "Geoffroy Hautier", "Christopher C Fischer", "Anubhav Jain", "Tim Mueller", "Gerbrand Ceder" ], "title": "Finding nature’s missing ternary oxide compounds using machine learning and density functional theory", "venue": "Chemistry of Materials,", "year": 2010 }, { "authors": [ "Warren Hoburg", "Pieter Abbeel" ], "title": "Geometric programming for aircraft design optimization", "venue": "AIAA Journal,", "year": 2014 }, { "authors": [ "Thorsten Joachims", "Adith Swaminathan", "Maarten de Rijke" ], "title": "Deep learning with logged bandit feedback", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Rahul Kidambi", "Aravind Rajeswaran", "Praneeth Netrapalli", "Thorsten Joachims" ], "title": "Morel: Modelbased offline reinforcement learning", "venue": "arXiv preprint arXiv:2005.05951,", "year": 2020 }, { "authors": [ "Nathan Killoran", "Leo J Lee", "Andrew Delong", "David Duvenaud", "Brendan J Frey" ], "title": "Generating and designing dna with deep generative models", "venue": "arXiv preprint arXiv:1712.06148,", "year": 2017 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih", "Jonathan Schwarz", "Marta Garnelo", "Ali Eslami", "Dan Rosenbaum", "Oriol Vinyals", "Yee Whye Teh" ], "title": "Attentive neural processes", "venue": null, "year": 1901 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Aviral Kumar", "Sergey Levine" ], "title": "Model inversion networks for model-based optimization", "venue": "arXiv preprint arXiv:1912.13464,", "year": 2019 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Aria Mansouri Tehrani", "Anton O Oliynyk", "Marcus Parry", "Zeshan Rizvi", "Samantha Couper", "Feng Lin", "Lowell Miyagi", "Taylor D Sparks", "Jakoah Brgoch" ], "title": "Machine learning directed search for ultraincompressible, superhard materials", "venue": "Journal of the American Chemical Society,", "year": 2018 }, { "authors": [ "Jan Peters", "Stefan Schaal" ], "title": "Reinforcement learning by reward-weighted regression for operational space control", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Mariya Popova", "Olexandr Isayev", "Alexander Tropsha" ], "title": "Deep reinforcement learning for de novo drug design", "venue": "Science advances,", "year": 2018 }, { "authors": [ "Jorma Rissanen" ], "title": "Modeling by shortest data", "venue": "description. 
Automatica,", "year": 1978 }, { "authors": [ "Jorma Rissanen", "Teemu Roos" ], "title": "Conditional nml universal models", "venue": "Information Theory and Applications Workshop,", "year": 2007 }, { "authors": [ "Jorma J Rissanen" ], "title": "Fisher information and stochastic complexity", "venue": "IEEE transactions on information theory,", "year": 1996 }, { "authors": [ "Teemu Roos", "Tomi Silander", "Petri Kontkanen", "Petri Myllymaki" ], "title": "Bayesian network structure learning using factorized nml universal models", "venue": "In 2008 Information Theory and Applications Workshop,", "year": 2008 }, { "authors": [ "Reuven Rubinstein" ], "title": "The cross-entropy method for combinatorial and continuous optimization", "venue": "Methodology and computing in applied probability,", "year": 1999 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Glenn Shafer", "Vladimir Vovk" ], "title": "A tutorial on conformal prediction", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Bobak Shahriari", "Kevin Swersky", "Ziyu Wang", "Ryan P Adams", "Nando De Freitas" ], "title": "Taking the human out of the loop: A review of bayesian optimization", "venue": "Proceedings of the IEEE,", "year": 2015 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "The self-normalized estimator for counterfactual learning", "venue": "In advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Tianhe Yu", "Garrett Thomas", "Lantao Yu", "Stefano Ermon", "James Zou", "Sergey Levine", "Chelsea Finn", "Tengyu Ma" ], "title": "Mopo: Model-based offline policy optimization", "venue": "arXiv preprint arXiv:2005.13239,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many real-world optimization problems involve function evaluations that are the result of an expensive or time-consuming process. Examples occur in the design of materials (Mansouri Tehrani et al., 2018), proteins (Brookes et al., 2019; Kumar & Levine, 2019), neural network architectures (Zoph & Le, 2016), or vehicles (Hoburg & Abbeel, 2014). Rather than settling for a slow and expensive optimization process through repeated function evaluations, one may instead adopt a data-driven approach, where a large dataset of previously collected input-output pairs is given in lieu of running expensive function queries. Not only could this approach be more economical, but in some domains, such as in the design of drugs or vehicles, function evaluations pose safety concerns and an online method may simply be impractical. We refer to this setting as the offline model-based optimization (MBO) problem, where a static dataset is available but function queries are not allowed.
A straightforward approach to solving offline MBO problems would be to estimate a proxy of the ground truth function, f̂θ, using supervised learning, and to optimize the input x with respect to this proxy. However, this approach is brittle and prone to failure, because the model-fitting process often has little control over the values of the proxy function on inputs outside of the training set. An algorithm that directly optimizes f̂θ could easily exploit the proxy to produce adversarial inputs that are nevertheless scored highly under f̂θ (Kumar & Levine, 2019; Fannjiang & Listgarten, 2020).
In order to counteract the effects of model exploitation, we propose to use the normalized maximum likelihood (NML) framework (Barron et al., 1998). The NML estimator produces the distribution closest to the MLE assuming an adversarial output label, and has been shown to be effective for resisting adversarial attacks (Bibas et al., 2019). Moreover, NML provides a principled approach to generating uncertainty estimates, which allows it to reason about out-of-distribution queries. However, because NML is typically intractable except for a handful of special cases (Roos et al., 2008), we show in this work how to circumvent these intractability issues in order to construct a reliable and robust method for MBO. Because of its general formulation, the NML distribution provides a flexible approach to constructing conservative and robust estimators using high-dimensional models such as neural networks.
The main contribution of this work is to develop an offline MBO algorithm that utilizes a novel approximation to the NML distribution to obtain an uncertainty-aware forward model for optimization, which we call NEMO (Normalized maximum likelihood Estimation for Model-based Optimization). The basic premise of NEMO is to construct a conditional NML distribution that maps inputs to a distribution over outputs. While constructing the NML distribution is intractable in general, we discuss novel methods to amortize the computational cost of NML, which allow us to scale our method to practical problems with high-dimensional inputs using neural networks. A separate optimization algorithm can then be used to optimize over the output to any desired confidence level. Theoretically, we provide insight into why NML is useful for the MBO setting by showing a regret bound for modeling the ground truth function.
Empirically, we evaluate our method on a selection of tasks from the Design Benchmark (Anonymous, 2021), where we show that our method performs competitively with state-of-the-art baselines. Additionally, we provide a qualitative analysis of the uncertainty estimates produced by NEMO, showing that it provides reasonable uncertainty estimates, while commonly used methods such as ensembles can produce erroneous estimates that are both confident and wrong in low-data regimes." }, { "heading": "2 RELATED WORK", "text": "Derivative-free optimization methods are typically used in settings where only function evaluations are available. This includes methods such as REINFORCE (Williams, 1992) and reward-weighted regression (Peters & Schaal, 2007) in reinforcement learning, the cross-entropy method (Rubinstein, 1999), latent variable models (Garnelo et al., 2018; Kim et al., 2019), and Bayesian optimization (Snoek et al., 2012; Shahriari et al., 2015). Of these approaches, Bayesian optimization is most often used when function evaluations are expensive and limited. However, all of the aforementioned methods focus on the active or online setting, whereas in this work we are concerned with the offline setting, where additional function evaluations are not available.
Normalized maximum likelihood is an information-theoretic framework based on the minimum description length principle (Rissanen, 1978). While the standard NML formulation is purely generative, the conditional or predictive NML setting (Rissanen & Roos, 2007; Fogel & Feder, 2018) can be used for supervised learning and prediction problems. Bibas et al. (2019) apply this framework for prediction using deep neural networks, but require an expensive finetuning process for every input. The goal of our work is to provide a scalable and tractable method to approximate the CNML distribution, and we apply this framework to offline optimization problems.
Like CNML, conformal prediction (Shafer & Vovk, 2008) is concerned with predicting the value of a query point ŷ_{t+1} given a prior dataset, and provides per-instance confidence intervals based on how consistent the new input is with the rest of the dataset. Our work instead relies on the NML framework, where the NML regret serves a similar purpose for measuring how close a new query point is to existing, known data.
Offline model-based optimization has been applied to problems such as designing DNA (Killoran et al., 2017), drugs (Popova et al., 2018), or materials (Hautier et al., 2010). The estimation of distribution algorithm (Bengoetxea et al., 2001) alternates between searching in the input space and model space using a maximum likelihood objective. Kumar & Levine (2019) propose to learn an inverse mapping from output values to input values, and optimize over the output values which produce consistent input values. Brookes et al. (2019) propose CbAS, which uses a trust region to limit exploitation of the model. Fannjiang & Listgarten (2020) cast the MBO problem as a minimax game based on the oracle gap, the gap between the ground truth function and the estimated function. In contrast to these works, we develop an approach to MBO which explicitly reasons about uncertainty. Approaches which utilize uncertainty, such as Bayesian optimization, are commonly used in online settings, and we expect these to work in offline settings as well.
There are several related areas that could arguably be viewed as special cases of MBO.
One is contextual bandits under the batch learning from bandit feedback setting, where learning is often done on logged experience (Swaminathan & Joachims, 2015; Joachims et al., 2018); another is offline reinforcement learning (Levine et al., 2020), where model-based methods construct estimates of the MDP parameters (Kidambi et al., 2020; Yu et al., 2020). Our work focuses on a more generic function optimization setting, but could be applied in these domains as well." }, { "heading": "3 PRELIMINARIES", "text": "We begin by reviewing the problem formulation for offline model-based optimization, as well as necessary background on the normalized maximum likelihood estimator.
Problem statement. We define the offline model-based optimization (MBO) problem as follows. Assume the existence of a stochastic ground truth function f(y|x). The MBO algorithm is given a dataset D of inputs x along with outputs y sampled from f(y|x). As in standard optimization problems, the goal of MBO is to find the input value that maximizes the true function:
x∗ = argmax_x E_{y∼f(y|x)}[y]. (1)
However, in offline MBO, the algorithm is not allowed to query the true function f(y|x), and must find the best possible point x∗ using only the guidance of a fixed dataset D = {x_{1:N}, y_{1:N}}. One approach to solving this problem is to introduce a separate proxy function f̂θ(y|x) ≈ f(y|x), which is learned from D as an estimate of the true function. From here, standard optimization algorithms such as gradient descent can be used to find the optimum of the proxy function, x̂∗ = argmax_x E_{y∼f̂θ(y|x)}[y]. Alternatively, a trivial algorithm could be to select the highest-performing point in the dataset. While adversarial ground truth functions can easily be constructed where this is the best one can do (e.g., if f(x) = −∞ on any x ∉ D), in many reasonable domains it should be possible to perform better than the best point in the dataset.
Conditional normalized maximum likelihood. In order to produce a conditional distribution pNML(y|x) we can use for estimating the ground truth function, we leverage the conditional or predictive NML (CNML) framework (Rissanen & Roos, 2007; Fogel & Feder, 2018; Bibas et al., 2019). Intuitively, the CNML distribution is the distribution closest to the MLE assuming the test label y is chosen adversarially. This is useful for the MBO setting since we do not know the ground truth value y at points we are querying during optimization, and the CNML distribution gives us conservative estimates that help mitigate model exploitation (see Fig. 1). Formally, the CNML estimator is the minimax solution to a notion of regret called the individual regret, defined as Regret_ind(h, y) = log p(y|x, θ̂_{D∪(x,y)}) − log h(y|x), with pNML(y|x) = argmin_h max_{y′} Regret_ind(h, y′) (Fogel & Feder, 2018). The notation D ∪ (x, y) refers to the augmented dataset formed by appending a query point and label (x, y) to a fixed offline dataset D, and θ̂_{D∪(x,y)} denotes the MLE estimate for this augmented dataset. The query point (x, y) serves to represent the test point we are interested in modeling.
The solution to the minimax problem can be expressed as (Fogel & Feder, 2018):
pNML(y|x) = p(y|x, θ̂_{D∪(x,y)}) / ∫_{y′} p(y′|x, θ̂_{D∪(x,y′)}) dy′, (2)
where θ̂_{D∪(x,y)} = argmax_θ (1/(N+1)) ∑_{(x,y)∈D∪(x,y)} log p(y|x, θ) is the maximum likelihood estimate for p using the dataset D augmented with (x, y).
Algorithm 1 NEMO: Normalized Maximum Likelihood for Model-Based Optimization
Input: model class {fθ : θ ∈ Θ}, dataset D = (x_{1:N}, y_{1:N}), number of bins K, evaluation function g(y), learning rates αθ, αx.
Initialize K models θ^{1:K}_0 and the optimization iterate x_0.
Quantize y_{1:N} into K bins, denoted ⌊Y⌋ = {⌊y_1⌋, ..., ⌊y_K⌋}.
for iteration t in 1 ... T do
  for k in 1 ... K do
    construct augmented dataset: D′ ← D ∪ (x_t, ⌊y_k⌋)
    update model: θ^k_{t+1} ← θ^k_t + αθ ∇_{θ^k_t} LogLikelihood(θ^k_t, D′)
  end for
  estimate CNML distribution: p̂NML(y|x_t) ∝ p(y|x_t, θ^y_t) / ∑_k p(⌊y_k⌋|x_t, θ^k_t)
  update x: x_{t+1} ← x_t + αx ∇_x E_{y∼p̂NML(y|x)}[g(y)]
end for
The NML family of estimators has connections to Bayesian methods, and has been shown to be asymptotically equivalent to Bayesian inference under the uninformative Jeffreys prior (Rissanen, 1996). NML and Bayesian modeling both suffer from intractability, albeit for different reasons. Bayesian modeling is generally intractable outside of special choices of the prior and model class Θ where conjugacy can be exploited. On the other hand, NML is intractable because the denominator requires integrating over, and training an MLE estimator for, every possible y. One of the primary contributions of this paper is to show how to approximate this intractable computation with a tractable one that is sufficient for optimization on challenging problems, which we discuss in Section 4." }, { "heading": "4 NEMO: NORMALIZED MAXIMUM LIKELIHOOD ESTIMATION FOR MODEL-BASED OPTIMIZATION", "text": "We now present NEMO, our proposed algorithm for high-dimensional offline MBO. NEMO is a tractable scheme for estimating and optimizing the expected value of the target function under the CNML distribution. As mentioned above, the CNML estimator (Eqn. 2) is difficult to compute directly, because it requires a) obtaining the MLE for each value of y, and b) integrating these estimates over y. In this section, we describe how to address these two issues using amortization and quantization. We outline the high-level pseudocode in Algorithm 1, and present a more detailed implementation in Appendix A.2.1." }, { "heading": "4.1 AN ITERATIVE ALGORITHM FOR MODEL-BASED OPTIMIZATION", "text": "We first describe the overall structure of our algorithm, which addresses issue a), the intractability of computing an MLE estimate for every point we wish to query. In this section we assume that the domain of y is discrete, and describe in the following section how we utilize a quantization scheme to approximate a continuous y with a discrete one.
Recall from Section 3 that we wish to construct a proxy for the ground truth, which we will then optimize with gradient ascent. The most straightforward way to integrate NML and MBO would be to fully compute the NML distribution described by Eqn. 2 at each optimization step, conditioned on the current optimization iterate x_t. This would produce a conditional distribution pNML(y|x) over output values, and we can optimize x_t with respect to some function of this distribution, such as the mean.
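As a concrete illustration of this straightforward, non-amortized variant, here is a minimal sketch of computing the CNML distribution of Eqn. 2 for a single query point over K quantized bins. The helper `fit_mle` and all names are our own assumptions, not the authors' released code; `model(inputs)` is assumed to return per-bin logits.

```python
import copy
import torch

def cnml_distribution(model, fit_mle, xs, ys, x_query, K):
    # For each candidate bin k: fit an MLE model on D ∪ (x_query, k), score
    # bin k under that fitted model, then normalize across bins (Eqn. 2).
    # `fit_mle(model, xs, ys)` is an assumed helper that trains to convergence;
    # ys holds integer bin indices for the offline dataset.
    log_scores = []
    for k in range(K):
        xs_aug = torch.cat([xs, x_query.unsqueeze(0)], dim=0)
        ys_aug = torch.cat([ys, torch.tensor([k])], dim=0)
        theta_k = fit_mle(copy.deepcopy(model), xs_aug, ys_aug)
        with torch.no_grad():
            log_probs = theta_k(x_query.unsqueeze(0)).log_softmax(dim=-1)
        log_scores.append(log_probs[0, k])
    return torch.softmax(torch.stack(log_scores), dim=0)  # p_NML over bins
```

This requires K full training runs per optimization step, which motivates the amortized scheme described next.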
While this method is tractable to implement for small problems, it will still be significantly slower than standard optimization methods, because it requires finding the MLE estimate for every y value per iteration of the algorithm. This can easily become prohibitively expensive when using large neural networks on high-dimensional problems.
To remedy this problem, we propose to amortize the learning process by incrementally learning the NML distribution while optimizing the iterate x_t. In order to do this, we maintain one model per value of y, θ̂^k, each corresponding to one element in the normalizing constant of the NML distribution. During each step of the algorithm, we sample a batch of datapoints, and train each model by appending the current iterate x_t, with label y_{t,k}, to the batch with a weight w (which is typically set to w = 1/N). We then perform a number of gradient steps on each model, and use the resulting models to form an estimate of the NML distribution p̂NML(y_t|x_t). We then compute a score from the NML distribution, such as the mean, and perform one step of gradient ascent on x_t.
While the incremental algorithm produces only an approximation to the true NML distribution, it brings the computational complexity of the resulting algorithm down to just O(K) gradient steps per iteration, rather than solving entire inner-loop optimization problems. This brings the computational cost to be comparable to the other baseline methods we evaluated for MBO.
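A minimal sketch of this amortized per-iteration update, following Algorithm 1, is below. The function and variable names, the use of torch optimizers, and the omission of the per-point weight w are our own simplifying assumptions; the practical variant in Appendix A.2.1 additionally uses Adam, pretraining, and target networks, and ascends the internal scores µ instead of the CNML mean.

```python
import torch
import torch.nn.functional as F

def nemo_iteration(models, model_opts, x, x_opt, xs, ys, bin_values):
    # One amortized NEMO step. Model k takes a gradient step on the data batch
    # (xs, ys: inputs and integer bin labels) augmented with the current
    # iterate x labeled as bin k; models map inputs to per-bin logits.
    for k, (model, opt) in enumerate(zip(models, model_opts)):
        inputs = torch.cat([xs, x.detach().unsqueeze(0)], dim=0)
        labels = torch.cat([ys, torch.tensor([k])], dim=0)
        loss = F.cross_entropy(model(inputs), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    # Approximate CNML: p̂(y_k | x) ∝ p(y_k | x, θ^k); then one gradient-ascent
    # step on x against the CNML mean (x is a leaf tensor with requires_grad).
    probs = torch.stack([m(x.unsqueeze(0)).softmax(-1)[0, k]
                         for k, m in enumerate(models)])
    p_nml = probs / probs.sum()
    x_opt.zero_grad()
    (-(p_nml * bin_values).sum()).backward()
    x_opt.step()
```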
" }, { "heading": "4.2 QUANTIZATION AND ARCHITECTURE", "text": "The next challenge towards developing a practical NML method is addressing issue b), the intractability of integrating over a continuous y. We propose to tackle this issue with quantization and a specialized architecture for modeling the ground truth function.
Quantization. One situation in which the denominator is tractable is when the domain of y is discrete and finite. In such a scenario, we could train K models, where K is the size of the domain, and directly sum over the likelihood estimates to compute the normalizing factor.
In order to turn the NML distribution into a tractable, discrete problem, we quantize all outputs in the dataset by flooring each y value to the nearest bin ⌊y_k⌋, with the size of each interval defined as B = (y_max − y_min)/K. While quantization has the potential to introduce additional rounding errors into the optimization process, we find in our experiments in Section 5 that a moderate value such as K = 20 or K = 40 provides a reasonably accurate solution while not being excessively demanding on computation.
This scheme of quantization can be interpreted as a rectangular quadrature method, where the integral over y is approximated as:
∫_y p(y|x, θ̂_{D∪(x,y)}) dy ≈ B ∑_{k=1}^{K} p(⌊y_k⌋|x, θ̂_{D∪(x,⌊y_k⌋)})
Discretized logistic architecture. Quantization introduces unique difficulties into the optimization process for MBO. In particular, quantization results in flat regions in the optimization landscape, making it challenging to use gradient-based algorithms to optimize both the inputs x and the models p(y|x, θ). In order to alleviate these issues, we propose to model the output using a discretized logistic architecture, depicted in Fig. 2. The discretized logistic architecture transforms an input x into the mean parameter of a logistic distribution, µ(x), and outputs one minus the CDF of a logistic distribution queried at regular intervals of 1/K (recall that the CDF of a logistic distribution is itself the logistic or sigmoid function). Therefore, the final output is a vector o of length K, where element k is equal to σ(µ(x) + k/K). We note that similar architectures have been used elsewhere, such as for modeling the output over pixel intensity values in images (Salimans et al., 2017).
We train this model by first encoding a label y as a vector y_disc, where y_disc[k ≤ bin(y)] = 1 and y_disc[k > bin(y)] = 0; bin(y) denotes the index of the quantization bin that y falls under. The model is then trained using a standard binary cross-entropy loss, applied per element across the entire output vector. Because the output represents one minus the CDF, the expected value of the discretized logistic architecture can be computed as y_mean(x) = E_{y∼p(y|x)}[g(y)] = ∑_k [g(k) − g(k − 1)] o[k]. If we assume that g normalizes all output values to [0, 1] in uniform bins after quantization, the mean can easily be computed as a sum over the entire output vector, y_mean = (1/K) ∑_k o[k].
Optimization. The benefit of using such an architecture is that when optimizing for x, rather than optimizing the predicted output directly, we can compute gradients with respect to the logistic parameter µ. Because µ is a single scalar output of a feedforward network, it is less susceptible to the flat gradients introduced by the quantization procedure. Optimizing with respect to µ is sensible as it shares the same global optimum as y_mean, and gradients with respect to µ and y_mean share a positive angle, as shown by the following proposition:
Proposition 4.1 (Discretized Logistic Gradients). Let µ(x) denote the mean of the discretized logistic architecture for input x, and y_mean(x) denote the predicted mean. Then,
1. If x ∈ argmax_x µ(x), then x ∈ argmax_x y_mean(x).
2. For any x, ⟨∇_x µ(x), ∇_x y_mean(x)⟩ ≥ 0.
Proof. See Appendix A.1.2.
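A minimal sketch of this output head follows; the module, the two-layer softplus trunk, and the hidden width are our own illustrative assumptions (the paper's exact layer sizes are listed in Appendix A.2.2).

```python
import torch
import torch.nn as nn

class DiscretizedLogisticHead(nn.Module):
    # Produces o[k] = sigmoid(mu(x) + k/K): a logistic CDF evaluated at K
    # evenly spaced offsets, all driven by a single scalar mu(x) per input.
    def __init__(self, in_dim, K, hidden=256):
        super().__init__()
        self.K = K
        self.mu = nn.Sequential(nn.Linear(in_dim, hidden), nn.Softplus(),
                                nn.Linear(hidden, 1))

    def forward(self, x):
        offsets = torch.arange(self.K, device=x.device) / self.K
        return torch.sigmoid(self.mu(x) + offsets)  # shape (batch, K)

# Training applies per-element binary cross entropy against y_disc, e.g.
#   loss = torch.nn.functional.binary_cross_entropy(head(x), y_disc)
# The predicted mean is o.mean(dim=-1), but gradient ascent on x can instead
# follow d mu / d x, avoiding the flat regions (Proposition 4.1).
```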
" }, { "heading": "4.3 THEORETICAL RESULTS", "text": "We now highlight some theoretical motivation for using CNML in the MBO setting, and show that the estimate of the true function given by the CNML distribution is close to that of an expert even if the test label is chosen adversarially, which makes it difficult for an optimizer to exploit the model. As discussed earlier, the CNML distribution minimizes a notion of regret based on the log-loss with respect to an adversarial test distribution. This construction leads the CNML distribution to be very conservative for out-of-distribution inputs. However, the notion of regret in conventional CNML does not easily translate into a statement about the outputs of the function we are optimizing. In order to reconcile these differences, we introduce a new notion of regret, the functional regret, which measures the difference between the output estimated under some model and that of an expert within some function class Θ.
Definition 4.1 (Functional Regret). Let q(y|x) be an estimated conditional distribution, x represent a query input, and y∗ represent a label for x. We define the functional regret of a distribution q as:
Regret_f(q, D, x, y∗) = |E_{y∼q(y|x)}[g(y)] − E_{y∼p(y|x,θ̂)}[g(y)]|,
where θ̂ is the MLE estimator for the augmented dataset D ∪ (x, y∗) formed by appending (x, y∗) to D.
A straightforward choice for the evaluation function g is the identity function g(y) = y, in which case the functional regret controls the difference in expected values between q and the MLE estimate p(y|x, θ̂). We now show that the functional regret is bounded for the CNML distribution:
Theorem 4.1. Let pNML be the conditional NML distribution defined in Eqn. 2. Then, for all x,
max_{y∗} Regret_f(pNML, D, x, y∗) ≤ 2 g_max √(Γ(D, x)/2),
where Γ(D, x) = log{∑_y p(y|x, θ̂_{D∪(x,y)})} is the minimax individual regret, and g_max = max_{y∈Y} g(y).
Proof. See Appendix A.1.2.
This theorem states that, for any test input x, the CNML estimator is close to the best possible expert if the test label y is chosen adversarially. Importantly, the expert is allowed to see the label of the test point, but the CNML estimator is not, which means that if the true function lies within the model class, this statement effectively controls the discrepancy in performance between the true function and pNML. The amount of slack is controlled by the minimax individual regret Γ (Fogel & Feder, 2018), which can be interpreted as a measure of uncertainty in the model. For large model classes Θ and data points x far away from the data, the individual regret is naturally larger as the NML estimator becomes more uncertain, but for data points x close to the data the regret becomes very small. This behavior can be easily seen in Fig. 3, where the CNML distribution is very focused in regions close to the data but outputs large uncertainty estimates in out-of-distribution regions." }, { "heading": "5 EXPERIMENTS", "text": "In our experimental evaluation, we aim to 1) evaluate how well the proposed quantized NML estimator estimates uncertainty in an offline setting, and 2) compare the performance of NEMO to a number of recently proposed offline MBO algorithms on high-dimensional offline MBO benchmark problems. Our code is available at https://sites.google.com/view/nemo-anonymous" }, { "heading": "5.1 MODELING WITH QUANTIZED NML", "text": "We begin with an illustrative example of modeling a function with quantized NML. We compare a learned quantized NML distribution with a bootstrapped ensemble method (Breiman, 1996) on a simple 1-dimensional problem, shown in Fig. 3. The ensemble method is implemented by training 32 neural networks using the same model class as NML, but with resampled datasets and randomized initializations. Both methods are trained on a discretized output, using a softmax cross-entropy loss function. We see that in areas within the support of the data, the NML distribution is both confident and relatively accurate. However, in regions outside of the support, the quantized NML outputs a highly uncertain estimate. In contrast, the ensemble method, even with bootstrapping and random initializations, tends to produce an ensemble of models that all output similar values. Therefore, in regimes outside of the data, the ensemble still outputs highly confident estimates, even though they may be wrong." }, { "heading": "5.2 HIGH-DIMENSIONAL MODEL-BASED OPTIMIZATION", "text": "We evaluated NEMO on a set of high-dimensional MBO problems. The details for the tasks, baselines, and experimental setup are as follows; hyperparameter choices and additional implementation details can be found in Appendix A.2." }, { "heading": "5.2.1 TASKS", "text": "We evaluated on 6 tasks from Design-bench (Anonymous, 2021), modeled after real-world design problems in materials engineering (Hamidieh, 2018), biology (Sarkisyan et al., 2016), chemistry (Gaulton et al., 2012), and simulated robotics.
Because we do not have access to a real physical process for evaluating the material and molecule design tasks, Design-bench follows the experimental protocol used in prior work (Brookes et al., 2019; Fannjiang & Listgarten, 2020), which obtains a ground truth evaluation function by training a separate regressor model to evaluate the performance of designs. For the robotics tasks, designs are evaluated using the MuJoCo physics simulator (Todorov et al., 2012).
Superconductor. The Superconductor task involves designing a superconducting material that has a high critical temperature. The input space is an 81-dimensional vector, representing properties such as the atomic radius and valence of the elements which make up the material. This dataset contains a total of 21,263 superconducting materials proposed by Hamidieh (2018).
GFP. The goal of the green fluorescent protein (GFP) task is to design a protein with high fluorescence, based on work proposed by Sarkisyan et al. (2016). This task requires optimizing a 238-dimensional sequence of discrete variables, with each dimension representing one amino acid in the protein and taking on one of 20 values. We parameterize the input space as logits in order to make this discrete problem amenable to continuous optimization. In total, the dataset consists of 5000 such proteins annotated with fluorescence values.
MoleculeActivity. The MoleculeActivity task involves designing the substructure of a molecule that exhibits high activity when tested against a target assay (Gaulton et al., 2012). The input space is represented by 1024 binary variables parameterized by logits, corresponding to the Morgan radius-2 substructure fingerprints. This dataset contains a total of 4216 data points.
The final 3 tasks, HopperController, AntMorphology, and DKittyMorphology, involve designing robotic agents. HopperController involves learning the parameters of a 5126-parameter neural network controller for the Hopper-v2 task from OpenAI Gym (Brockman et al., 2016). The Ant and DKitty morphology tasks involve optimizing robot parameters such as size, orientation, and joint positions. AntMorphology has 60 parameters, and DKittyMorphology has 56 parameters." }, { "heading": "5.2.2 BASELINES", "text": "In addition to NEMO, we evaluate several baseline methods. A logical alternative to NEMO is a forward ensemble method, since both NEMO and ensemble methods maintain a collection of models in order to approximate a distribution over the function value, and ensembles are often used to obtain uncertainty-aware models. We implement an ensemble baseline by training K networks on the task dataset with random initializations and bootstrapping, and then optimizing the mean value of the ensemble with gradient ascent. In our results in Table 1, we label the ensemble as "Ensemble" and a single forward model as "Forward". Additionally, we implement a Bayesian optimization baseline with Gaussian processes (GP-BO) for the Superconductor, GFP, and MoleculeActivity tasks, where we fit the parameters of a kernel and then optimize the expected improvement according to the posterior. We use an RBF kernel for the Superconductor task, and an inner product kernel for the GFP and MoleculeActivity tasks since they have large, discrete input spaces.
Note that the GP baseline has no variance between runs since the resulting method is completely deterministic.
We evaluate 3 state-of-the-art methods for offline MBO: model inversion networks (MINs) (Kumar & Levine, 2019), conditioning by adaptive sampling (CbAS) (Brookes et al., 2019), and autofocused oracles (Fannjiang & Listgarten, 2020). MINs train an inverse mapping from outputs y to inputs x, and generate candidate inputs by searching over high values of y and evaluating these on the inverse model. CbAS uses a generative model of p(x) as a trust region to prevent model exploitation, and autofocused oracles expand upon CbAS by iteratively updating the learned proxy function and the iterates within a minimax game based on a quantity known as the oracle gap." }, { "heading": "5.3 RESULTS AND DISCUSSION", "text": "Our results are shown in Table 1. We follow an evaluation protocol used in prior work for design problems (Brookes et al., 2019; Fannjiang & Listgarten, 2020), where the algorithm proposes a set of candidate designs, and the 100th and 50th percentiles of scores are reported. This mimics a real-world scenario in which a batch of designs can be synthesized in parallel, and the highest-performing designs are selected for use.
For each experiment, we produced a batch of 128 candidate designs. MINs, CbAS, and autofocused oracles all learn a generative model to produce candidate designs, so we sampled this batch from the corresponding model. Ensemble methods and NEMO do not maintain generative models, so we instead optimized a batch of 128 particles. We report results averaged over 16 random seeds.
NEMO outperforms all methods on the Superconductor task by a very large margin, under both the 100th and 50th percentile metrics, and on the HopperController task under the 100th percentile metric. For the remaining tasks (GFP, MoleculeActivity, AntMorphology, and DKittyMorphology), NEMO also produces competitive results in line with the best-performing algorithm for each task. These results are promising in that NEMO performs consistently well across all 6 domains evaluated, and indicate that a significant number of designs found in the GFP and Superconductor tasks were better than the best-performing design in the dataset. In Appendix A.3, we present learning curves for NEMO, as well as an ablation study demonstrating the beneficial effect of NML compared to direct optimization on a proxy function. Note that unlike the prior methods (MINs, CbAS, Autofocused), NEMO does not require training a generative model on the data, only a collection of forward models." }, { "heading": "6 CONCLUSION", "text": "We have presented NEMO (Normalized Maximum Likelihood Estimation for Model-Based Optimization), an algorithm that mitigates model exploitation on MBO problems by constructing a conservative model of the true function. NEMO generates a tractable approximation to the NML distribution to provide conservative objective value estimates for out-of-distribution inputs. Our theoretical analysis also suggests that this approach is an effective way to estimate unknown objective functions outside the training distribution.
We evaluated NEMO on a number of design problems in materials science, robotics, biology, and chemistry, where we show that it attains very large improvements on two tasks, while performing competitively with respect to prior methods on the other four.
While we studied offline MBO in this paper, we believe that NML is a promising framework for building uncertainty-aware models, which could have numerous applications in a variety of other problem settings, such as off-policy evaluation, batch learning from bandit feedback, or offline reinforcement learning." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Brandon Trabucco and Aviral Kumar for assisting with the implementation and evaluation of baselines and helping with the benchmark. This research was funded by the Office of Naval Research, C3.ai, and computational resources from Amazon Web Services." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOFS", "text": "" }, { "heading": "A.1.1 PROOF OF PROPOSITION 4.1", "text": "This statement follows directly from monotonicity. Because y_mean is a sum of monotonic functions in µ, y_mean must also be a monotonic function of µ. This implies that the global maxima are the same, and that the gradients must point in the same direction, since ⟨∇_x µ, ∇_x y_mean⟩ = ⟨∇_x µ, (dy_mean/dµ) ∇_x µ⟩ = (dy_mean/dµ) ‖∇_x µ‖₂² ≥ 0." }, { "heading": "A.1.2 PROOF OF THEOREM 4.1", "text": "In this section we present the proof of Thm. 4.1, restated below: for all x,
max_{y∗} Regret_f(pNML, D, x, y∗) ≤ 2 g_max √(Γ(D, x)/2).
There are two lemmas we will use in our proof. First, the difference in expected value between two distributions p(x) and q(x) can be bounded by the total variation distance TV(p, q) and the maximum function value f_max = max_x f(x):
|E_{p(x)}[f(x)] − E_{q(x)}[f(x)]| = |∑_x [p(x) − q(x)] f(x)|
  ≤ f_max ∑_x |p(x) − q(x)|
  = f_max · 2 TV(p(x), q(x)).
Second, Fogel & Feder (2018) show that the NML distribution obtains the best possible minimax individual regret of
max_y Regret_ind(pNML, D, x, y) = log{∑_y p(y|x, θ̂_{D∪(x,y)})} := Γ(D, x).
Using these two facts, we can show:
Regret_f(pNML, D, x, y∗) = |E_{y∼pNML(y|x)}[g(y)] − E_{y∼p(y|x, θ̂_{D∪(x,y∗)})}[g(y)]|
  ≤ 2 g_max TV(p(y|x, θ̂_{D∪(x,y∗)}), pNML(y|x))
  ≤ 2 g_max √((1/2) KL(p(y|x, θ̂_{D∪(x,y∗)}), pNML(y|x)))
  ≤ 2 g_max √((1/2) max_q E_{y∼q(y|x)}[log p(y|x, θ̂_{D∪(x,y)}) − log pNML(y|x)])
  = 2 g_max √(Γ(D, x)/2),
where we apply the total variation distance lemma from line 1 to 2; from line 2 to 3 we used Pinsker's inequality to bound total variation with KL; and from line 3 to 4 we used the fact that the maximum regret is always greater than the KL, i.e.
KL(p(y|x, θ̂_{D∪(x,y∗)}), pNML(y|x)) = E_{y∼p(y|x,θ̂)}[log p(y|x, θ̂_{D∪(x,y∗)}) − log pNML(y|x)]
  ≤ E_{y∼p(y|x,θ̂)}[log p(y|x, θ̂_{D∪(x,y)}) − log pNML(y|x)]
  ≤ max_q E_{y∼q(y|x)}[log p(y|x, θ̂_{D∪(x,y)}) − log pNML(y|x)].
In the final step, we substituted the definition of Γ(D, x) as the individual regret of the NML distribution pNML." }, { "heading": "A.2 EXPERIMENTAL DETAILS", "text": "" }, { "heading": "A.2.1 DETAILED PSEUDOCODE", "text": "There are a number of additional details we implemented in order to improve the performance of NEMO for high-dimensional tasks. These include:
• Using the Adam optimizer (Kingma & Ba, 2014) rather than stochastic gradient descent.
• Pretraining the models θ in order to initialize the procedure with an accurate initial model.
• Optimizing over a batch of M designs, in order to follow previous evaluation protocols.
• Optimizing x with respect to the internal scores µ instead of E_{pNML}[g(y)].
• Using target networks for the NML models, originally proposed in reinforcement learning algorithms, to improve the stability of the method (see the sketch after this list).
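Of these, the target-network update is the piece borrowed from reinforcement learning; below is a minimal sketch of the Polyak-style update θ̄ ← τθ + (1 − τ)θ̄ used in Algorithm 2 (our code, not the authors').

```python
import torch

@torch.no_grad()
def target_update(target_model, model, tau=0.05):
    # Polyak averaging: the target network slowly tracks the online NML
    # model's parameters; tau matches the update rate in the table below.
    for p_targ, p in zip(target_model.parameters(), model.parameters()):
        p_targ.mul_(1.0 - tau).add_(p, alpha=tau)
```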
• Using target networks for the NML model, originally proposed in reinforcement learning algorithms, to improve the stability of the method.\nWe present pseudocode for the practical implementation of NEMO below:\nAlgorithm 2 NEMO – Practical Instantiation
Input: model class Θ, dataset D = (x_{1:N}, y_{1:N}), number of bins K, batch size M, learning rates α_θ, α_x, target update rate τ
Initialize K models θ^{1:K}_0.
Initialize a batch of optimization iterates B_0 = x^{1:M}_0 from the best-performing x in D.
Pretrain θ^{1:K}_0 using supervised learning on D.
Initialize target networks θ̄^{1:K}_0 ← θ^{1:K}_0.
Quantize y_{1:N} into K bins, denoted as ⌊Y⌋ = {⌊y^1⌋, ..., ⌊y^K⌋}.
for iteration t in 1 ... T do
  for k in 1 ... K do
    Sample x'_t from batch B_t.
    Construct the augmented dataset: D' ← D ∪ (x'_t, ⌊y^k⌋).
    Compute the gradient g_θ ← ∇_{θ^k_t} LogLikelihood(θ^k_t, D').
    Update model θ^k_t using Adam and g_θ with learning rate α_θ.
  end for
  Update the target networks θ̄^k_{t+1} ← τ θ^k_{t+1} + (1 − τ) θ̄^k_t.
  for m in 1 ... M do
    Compute internal values µ^k(x^m_t) from the target networks θ̄^k_t for all k ∈ {1, ..., K}.
    Compute the gradient g_x for x^m_t: ∇_x (1/K) Σ_k µ^k(x^m_t).
    Update x^m_t using Adam and gradient g_x with learning rate α_x.
  end for
end for" }, { "heading": "A.2.2 HYPERPARAMETERS", "text": "The following table lists the hyperparameter settings we used for each task. We obtained our hyperparameter settings by performing a grid search across different settings of α_θ, α_x, and τ. We used 2-layer neural networks with softplus activations for all experiments. We used smaller networks for GFP, Hopper, Ant, and DKitty (64-dimensional layers) and a lower discretization K for computational performance reasons, but we did not tune over these parameters.
Learning rate α_θ: 0.05 / 0.005 (task-dependent)
Learning rate α_x: 0.1 / 0.01 / 0.001 (task-dependent)
Network size: 256, 256 (Superconductor, MoleculeActivity); 64, 64 (GFP, Hopper, Ant, DKitty)
Discretization K: 40 (Superconductor, MoleculeActivity); 20 (GFP, Hopper, Ant, DKitty)
Batch size M: 128 (all tasks)
Target update rate τ: 0.05 (all tasks)
For baseline methods, please refer to Anonymous (2021) for hyperparameter settings." }, { "heading": "A.3 ABLATION STUDIES", "text": "In this section, we present 3 ablation studies. The first is on the effect of NML training, by comparing NEMO to optimizing a pretrained baseline neural network. The second ablation study investigates the architecture choice, comparing the discretized logistic architecture to a standard feedforward neural network. The final ablation study investigates the ratio of model optimization steps to input optimization steps. Each logging iteration in these figures corresponds to 50 loop iterations as depicted in Algorithm A.2.1." }, { "heading": "A.3.1 EFFECT OF NML", "text": "In this section, we present learning curves for NML, as well as an ablation study comparing NML (orange curve) against a forward optimization algorithm without NML (blue curve), labeled “No NML”. The “No NML” algorithm is identical to the NML algorithm detailed in Alg. A.2.1, except that the NML learning rate α_θ is set to 0.0. This means that the only model training that happens is done during the pretraining step. For illustrative purposes, we initialize the iterates from the worst-performing scores in the dataset to better visualize improvement, rather than initializing from the best scores which we used in our final reported numbers.\nThe scores on the Superconductor task are shown in the following figure. 
Removing NML training makes it very difficult to improve most designs, as shown by the poor performance on the 50th percentile metric.\nThe scores on the MoleculeActivity task follow a similar trend.\nAnd finally, the scores on the GFP task also display the same trend." }, { "heading": "A.3.2 ARCHITECTURE CHOICE", "text": "In this ablation study, we investigate the efficacy of the discretized logistic architecture. As a baseline, we compared against a standard feedforward network, trained with a softmax cross-entropy loss to predict the discretized output y. We label this network as “Categorical”, because the output of the network is a categorical distribution. All other hyperparameters, including network layer sizes, remain unchanged from those reported in Appendix A.2.2.\nOn the Superconductor task, both the discretized logistic and categorical networks score well on the 100th percentile metric, but the Categorical architecture displays less consistency in optimizing designs, as given by poor performance on the 50th percentile metric.\nOn the MoleculeActivity task, the Categorical network performs comparatively better, but still underperforms the discretized logistic architecture." }, { "heading": "A.3.3 RATIO OF OPTIMIZATION STEPS", "text": "In this ablation study, we investigate the effect of the ratio of model optimization steps to input optimization steps. For this experiment, we fix the learning rate α_x to the hyperparameter values in Appendix A.2.2, fix the number of input optimization steps to 1, and vary the number of model optimization steps we take.\nWe first investigate the Superconductor task, using a small model learning rate α_θ = 0.0005. In this setting, we see a clear trend that additional model optimization steps are helpful and increase convergence speed.\nUsing a higher learning rate of α_θ = 0.05, a smaller number of steps works better, which suggests that it is easy to destabilize the learning process using a higher learning rate.\nA similar trend holds true in the MoleculeActivity task, albeit less pronounced. The following figure uses a learning rate of α_θ = 0.0005, and we once again see that more model optimization steps lead to increased performance.\nAnd using a learning rate of α_θ = 0.05, the advantage becomes less clear.\nOverall, while we performed a grid search over the learning rates to achieve the highest performance, using a large number of model optimization steps with a small learning rate α_θ appears to be a consistent strategy which performs well; a compact sketch of this loop is given below." } ]
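As a companion to the pseudocode in Appendix A.2.1 and the step-ratio ablation above, here is a minimal, self-contained sketch of the NEMO loop on toy data. It substitutes linear-softmax models and plain gradient steps for the paper's discretized-logistic networks and Adam; every name and the toy dataset are our illustrative assumptions, not the authors' released code.

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: N designs in d dimensions with scalar scores, quantized into K bins.
N, d, K, M, T = 256, 8, 5, 16, 100
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
edges = np.quantile(y, np.linspace(0, 1, K + 1)[1:-1])
y_bin = np.digitize(y, edges)                      # bin labels in {0, ..., K-1}

W = rng.normal(scale=0.01, size=(K, d, K))         # K models: d inputs -> K logits
W_bar = W.copy()                                   # target networks
batch = X[np.argsort(y)[-M:]].copy()               # iterates from best-scoring designs

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

alpha_th, alpha_x, tau, model_steps = 0.05, 0.1, 0.05, 1
j = np.arange(K)
for t in range(T):
    for _ in range(model_steps):                   # model steps per input step (A.3.3)
        for k in range(K):
            x_aug = batch[rng.integers(M)]         # sample x' from the batch
            Xa = np.vstack([X, x_aug])             # D' = D augmented with (x', bin k)
            ya = np.append(y_bin, k)
            P = softmax(Xa @ W[k])                 # (N+1, K) predicted probabilities
            P[np.arange(len(ya)), ya] -= 1.0       # gradient of the NLL w.r.t. logits
            W[k] -= alpha_th * Xa.T @ P / len(ya)  # gradient step on model k
    W_bar = tau * W + (1 - tau) * W_bar            # target-network update
    for m in range(M):                             # ascend the averaged internal score
        g_x = np.zeros(d)
        for k in range(K):
            p = softmax(batch[m] @ W_bar[k])       # mu^k(x) = E[bin index] under model k
            g_x += W_bar[k] @ (p * (j - p @ j))    # d mu^k / dx for a linear-softmax model
        batch[m] += alpha_x * g_x / K

scores = np.mean([softmax(batch @ W_bar[k]) @ j for k in range(K)], axis=0)
print("mean internal score of optimized batch:", scores.mean())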
2021
OFFLINE MODEL-BASED OPTIMIZATION VIA NORMALIZED MAXIMUM LIKELIHOOD ESTIMATION
SP:ce75f565c3c17363695c9e39f28b49a66e3731b8
[ "This paper proposes a simple approach to discover interpretable latent manipulations in trained text VAEs. The method essentially involves performing PCA on the latent representations to find directions that maximize variance. The authors argue that this results in more interpretable directions. The method is applied on top of a VAE model (OPTIMUS), and the authors argue that different directions discovered by PCA correspond to interpretable concepts." ]
Language generation models are attracting more and more attention due to their constantly increasing quality and remarkable generation results. State-of-the-art NLG models like BART/T5/GPT-3 do not have latent spaces; therefore, there is no natural way to perform controlled generation. In contrast, less popular models with explicit latent spaces have the innate ability to manipulate text attributes by moving along latent directions. For images, properties of latent spaces are well-studied: there exist interpretable directions (e.g. zooming, aging, background removal) and they can even be found without supervision. This success is expected: latent space image models, especially GANs, achieve state-of-the-art generation results and hence have been the focus of the research community. For language, this is not the case: text GANs are hard to train because of non-differentiable discrete data generation, and language VAEs suffer from posterior collapse and fill the latent space poorly. This makes finding interpretable text controls challenging. In this work, we make the first step towards unsupervised discovery of interpretable directions in language latent spaces. For this, we turn to methods shown to work in the image domain. Surprisingly, we find that running PCA on VAE representations of training data consistently outperforms shifts along the coordinate and random directions. This approach is simple, data-adaptive, does not require training, and discovers meaningful directions, e.g. sentence length, subject age, and verb tense. Our work lays foundations for two important areas: first, it allows one to compare models in terms of latent space interpretability, and second, it provides a baseline for unsupervised latent controls discovery.
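A minimal sketch of the approach the abstract describes, assuming a generic text VAE whose encoder returns posterior means; the encode_mean and decode callables are placeholders we introduce for illustration, not part of any specific release.

import numpy as np

def pca_directions(Z, top_n=20):
    """Z: (n, d) matrix of posterior mean vectors of n training sentences.
    Returns the top principal directions and their explained-variance ratios."""
    Zc = Z - Z.mean(axis=0, keepdims=True)
    # Rows of Vt are the eigenvectors of Zc^T Zc, ordered by singular value.
    _, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Vt[:top_n], (s**2 / np.sum(s**2))[:top_n]

# Hypothetical usage with placeholder encode_mean/decode callables:
# Z = np.stack([encode_mean(s) for s in train_sentences])
# dirs, ratios = pca_directions(Z)
# for a in (-5.0, -2.5, 0.0, 2.5, 5.0):   # the 5-point shift grid used for PCA directions
#     print(decode(encode_mean(sentence) + a * dirs[0]))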
[ { "affiliations": [], "name": "LANGUAGE VAES" } ]
[ { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Samuel R. Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Massimo Caccia", "Lucas Caccia", "William Fedus", "Hugo Larochelle", "Joelle Pineau", "Laurent Charlin" ], "title": "Language gans falling short", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alvin Chan", "Yew-Soon Ong", "Bill Pung", "Aston Zhang", "Jie Fu" ], "title": "Cocon: A self-supervised approach for controlled text generation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V. Le", "Christopher D. Manning" ], "title": "Electra: Pre-training text encoders as discriminators rather than generators", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sumanth Dathathri", "Andrea Madotto", "Janice Lan", "Jane Hung", "Eric Frank", "Piero Molino", "Jason Yosinski", "Rosanne Liu" ], "title": "Plug and play language models: A simple approach to controlled text generation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Emily Dinan", "Angela Fan", "Adina Williams", "Jack Urbanek", "Douwe Kiela", "Jason Weston" ], "title": "Queens are powerful too: Mitigating gender bias in dialogue generation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Le Fang", "Chunyuan Li", "Jianfeng Gao", "Wen Dong", "Changyou Chen" ], "title": "Implicit deep latent variable models for text generation", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Omar U. 
Florez" ], "title": "On the unintended social bias of training language generation models with data from local media, 2019", "venue": null, "year": 2019 }, { "authors": [ "Hao Fu", "Chunyuan Li", "Xiaodong Liu", "Jianfeng Gao", "Asli Celikyilmaz", "Lawrence Carin" ], "title": "Cyclical annealing schedule: A simple approach to mitigating KL vanishing", "venue": null, "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": "In Proceedings of ICLR,", "year": 2019 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Li Du", "Maxwell Forbes", "Yejin Choi" ], "title": "The curious case of neural text degeneration", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P. Xing" ], "title": "Toward controlled generation of text. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2017 }, { "authors": [ "Aapo Hyvärinen", "Erkki Oja" ], "title": "Independent component analysis: algorithms and applications", "venue": "Neural networks,", "year": 2000 }, { "authors": [ "Vineet John", "Lili Mou", "Hareesh Bahuleyan", "Olga Vechtomova" ], "title": "Disentangled representation learning for non-parallel text style transfer", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Nitish Shirish Keskar", "Bryan McCann", "Lav Varshney", "Caiming Xiong", "Richard Socher" ], "title": "CTRL - A Conditional Transformer Language Model for Controllable Generation", "venue": null, "year": 1909 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Guillaume Lample", "Sandeep Subramanian", "Eric Smith", "Ludovic Denoyer", "Marc’Aurelio Ranzato", "Y-Lan Boureau" ], "title": "Multiple-attribute text rewriting", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Veselin Stoyanov", "Luke Zettlemoyer" ], "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Chunyuan Li", "Xiang Gao", "Yuan Li", "Xiujun Li", "Baolin Peng", "Yizhe Zhang", "Jianfeng Gao" ], "title": "Optimus: Organizing sentences via pre-trained modeling of a latent space", "venue": "In EMNLP,", "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach, 2019", "venue": null, "year": 2019 }, { "authors": [ "L. Logeswaran", "H. Lee", "S. 
Bengio" ], "title": "Content preserving text generation with attribute controls", "venue": "ArXiv,", "year": 2018 }, { "authors": [ "William Peebles", "John Peebles", "Jun-Yan Zhu", "Alexei A. Efros", "Antonio Torralba" ], "title": "The hessian penalty: A weak prior for unsupervised disentanglement", "venue": "In Proceedings of European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Shrimai Prabhumoye", "A. Black", "R. Salakhutdinov" ], "title": "Exploring controllable text generation techniques", "venue": "ArXiv, abs/2005.01822,", "year": 2020 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": null, "year": 2019 }, { "authors": [ "Tianxiao Shen", "Tao Lei", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Style transfer from non-parallel text by cross-alignment", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Tianxiao Shen", "Jonas Mueller", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Educating text autoencoders: Latent representation guidance via denoising, 2020", "venue": null, "year": 2020 }, { "authors": [ "Andrey Voynov", "Artem Babenko" ], "title": "Unsupervised discovery of interpretable directions in the gan latent space, 2020", "venue": null, "year": 2020 }, { "authors": [ "Ke Wang", "Hang Hua", "Xiaojun Wan" ], "title": "Controllable unsupervised text attribute transfer via editing entangled latent representation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Peng Xu", "Jackie Chi Kit Cheung", "Yanshuai Cao" ], "title": "On variational learning of controllable representations for text without supervision", "venue": "In The 37th International Conference on Machine Learning (ICML", "year": 2020 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Ruslan Salakhutdinov", "Taylor Berg-Kirkpatrick" ], "title": "Improved variational autoencoders for text modeling using dilated convolutions", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Chris Dyer", "Eric P Xing", "Taylor Berg-Kirkpatrick" ], "title": "Unsupervised text style transfer using language models as discriminators", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Qile Zhu", "Wei Bi", "Xiaojiang Liu", "Xiyao Ma", "Xiaolin Li", "Dapeng Wu" ], "title": "A batch normalized inference network keeps the KL vanishing away", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2636–2649,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Transformer-based models yield state-of-the-art results on a number of tasks, including representation learning (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020) and generation (Radford et al.; Raffel et al., 2019; Lewis et al., 2020). Notably, large language models have been reported to produce outputs nearly indistinguishable from human-written texts (Brown et al., 2020).\nAlthough the predictions of autoregressive language models are fluent and coherent, it is not clear how to manipulate the model to get samples with desired properties. For example, make them shorter, more formal or more positive, or, alternatively, use the same model to rewrite human-written texts in a different tone. Current approaches often rely on external labels of target attributes and require modifications to the model. This involves retraining for new attributes or changing the decoding procedure, which is usually expensive.\nIn contrast, models with explicit latent spaces have the innate ability to manipulate text attributes by moving along latent directions. They, however, gained limited traction. One reason is that training a VAE on text data poses a number of optimization challenges, which have been tackled with a varying degree of success (He et al., 2019; Fu et al., 2019; Zhu et al., 2020). Additionally, language VAEs are mostly small LSTM-based models which goes against the current trend of using large pretrained Transformers. The first large-scale language VAE model is the recently introduced OPTIMUS (Li et al., 2020): it uses BERT as the encoder and GPT-2 as the decoder, and sets a new record on benchmark datasets.\nDifferently from texts, latent space models for images, especially GANs, achieve state-of-the-art generation results. Therefore, these models have been the focus of the research community, and the\nproperties of latent spaces are well-learned. For example, even early works on generative adversarial networks for images report that it is possible to have smooth interpolations between images in the latent space (Goodfellow et al., 2014). More recent studies show that the latent space directions corresponding to human-interpretable image transformations (from now on, ”interpretable directions”) can be discovered in an unsupervised way (Härkönen et al., 2020; Voynov & Babenko, 2020; Peebles et al., 2020).\nIn this paper, we show that for the language domain, much alike the well-studied visual domain, a sufficiently “good” latent space allows to manipulate sample attributes with relative ease. To avoid the known difficulties associated with training language GANs, we experiment with VAEs; more specifically, with the current state-of-the-art model OPTIMUS. We show that for this model, not only it is possible to produce meaningful and “smooth” interpolations between examples and to transfer specific properties via arithmetic operations in the latent space, but it is also possible to discover the interpretable latent directions in an unsupervised manner. We propose a method based on the PCA of latent representations of the texts in the training dataset. According to human evaluation, the proportion of interpretable directions among the ones found by our method is consistently larger than the proportion of interpretable directions among canonical co-ordinates or random directions in the latent space. The meaningful directions found by this method include, for example, subject age, subject gender, verb tense, and sentence length. 
Some of the directions, e.g. sentence length, are potentially useful: the ability to expand or shrink a text while preserving its content may be helpful for tasks like summarization.\nNote that the proposed method is simple and fast. The method is simple because it requires only the forward pass of the encoder, without backpropagating through decoding steps. This is very important for the language domain, where backpropagation through samples is significantly more difficult than for images. Namely, generation is non-differentiable, and previous attempts to overcome this issue relied on noisy or biased gradient estimates, which is less reliable than the standard MLE training. Instead, we do not rely on generated samples at all: we operate directly in the latent space. Additionally, since sampling directly from the prior does not yield diverse samples in the case of OPTIMUS, we use the representations of the training data without running a decoding procedure - this makes the method fast.\nTo summarize, our contributions are as follows:\n1. We propose the first method for unsupervised discovery of interpretable directions in latent spaces of language VAEs.\n2. This method is simple and fast: it is based on PCA of latent representations for texts in the training dataset.\n3. This method is effective: the proportion of interpretable directions among the ones found by our method is consistently larger than that of canonical coordinates or random directions in the latent space.\n4. Our work lays foundations for two important areas: first, it allows one to compare models in terms of latent space interpretability, and second, it provides a baseline for unsupervised latent controls discovery." }, { "heading": "2 RELATED WORK", "text": "Finding interpretable directions in latent spaces of language VAEs is related to three lines of work. First, latent variable models for text and, more specifically, properties of latent spaces: for interpretable directions to exist, the latent space has to be smooth (i.e. allow coherent interpolations). Then, since a great part of the motivation for finding interpretable directions is manipulating generated texts, we discuss works on controllable text generation for different types of models, both VAE and standard autoregressive. Finally, we mention recent works trying to discover interpretable directions in image GANs." }, { "heading": "2.1 LATENT VARIABLE MODELS FOR TEXT", "text": "Latent variable models encode information about text into a probability distribution. In addition to sampling new sentences from the prior distribution, they potentially allow one to explicitly encode specific properties of text, such as sentiment or style. Even early works on VAEs show that a latent space obtained with the VAE objective can result in coherent interpolations (Bowman et al., 2016).\nWhile this is encouraging, training good VAEs with smooth and expressive latent spaces is challenging. Specifically, for interpretable directions to exist, we need a model which (i) does not ignore the latent variable – to produce good samples, and (ii) has a continuous latent space – to allow controllable manipulation.\nIgnoring the latent variable is a known problem of VAEs. It arises because of the KL vanishing problem: over the course of training, the KL divergence part of the loss may drop to 0, which indicates that the model ignores the latent variable. 
There exist many ways to alleviate this issue (Yang et al., 2017; Fang et al., 2019; Fu et al., 2019; Zhu et al., 2020); one of the simpler ways is adjusting the weight of the KL loss component according to a specific schedule.\nAnother problem is the latent vacancy problem: differently from images, not all regions of the latent space are occupied by the posterior distribution (Xu et al., 2020). In simple terms, text latent spaces tend to have “holes” where the decoding network fails to generalize. As a result, when the latent codes are manipulated, the modified codes often land in these holes or vacant regions of the posterior latent space. If this happens, the model cannot decode properly.\nIn light of the above, discovery of interpretable directions in text latent spaces is possible only with a strong model. Therefore, we use the current state-of-the-art model OPTIMUS (Li et al., 2020). It is a recent large-scale variational autoencoder which initializes the encoder with BERT (Devlin et al., 2019) and the decoder with GPT-2 (Radford et al.). In addition to the model's high capacity, we use it because of the available checkpoints and reported results on latent space manipulation." }, { "heading": "2.2 CONTROLLABLE GENERATION FOR TEXT DATA", "text": "Latent variable models. A natural way to achieve text generation with required attributes is to use latent variable text generation models. The idea is that information about the attribute value is encoded in the latent code, and to obtain samples with the desired property one has to fix the corresponding component (direction) of the code.\nFor example, several works learn latent spaces with disentangled representations of content and style (Hu et al., 2017; Logeswaran et al., 2018; Lample et al., 2019; Yang et al., 2018; Shen et al., 2017; John et al., 2019). After that, to generate sentences in a specific style, the style vector is fixed. Depending on the approach, this style vector can either be estimated by encoding sentences with the desired attribute or be directly produced by specifying the structured latent code (e.g. one-hot encoding of an attribute).\nAnother line of research shows that it is possible to achieve attribute manipulation by moving in the latent space along specific vectors. These vectors, however, are found using data labelled with the attribute, i.e. with supervision. For example, Shen et al. (2020) change the tense of a sentence by adding to its latent representation a “tense vector” computed as a difference of averaged representations of sentences with different tenses; Wang et al. (2019) use gradients of the attribute classifier. One of the first successful methods that learns a disentangled latent space is the work by Xu et al. (2020): they use basis vectors in the constrained latent space; however, this involves training a model with a structured latent space, which is rather complicated.\nAutoregressive models. Controllable generation for standard autoregressive language models is usually achieved by either prepending an attribute to the input sequence as a prompt (Keskar et al., 2019), training an additional component of the model (Chan et al., 2020) or adjusting the decoding result with additional attribute-specific language models (Dathathri et al., 2020). A more thorough comparison of approaches to controlled text generation can be found in Prabhumoye et al. 
(2020).\nNote that all these approaches require supervision and substantial changes to either training or generation procedures, whereas our approach is applicable to any variational autoencoder." }, { "heading": "2.3 INTERPRETABLE DIRECTIONS MINING IN IMAGE GANS", "text": "To the best of our knowledge, there are only three works which discover interpretable latent directions in an unsupervised way, and all of them operate with GANs for images.\nTwo of them are not applicable to texts directly. In Voynov & Babenko (2020), the interpretable directions are trained. These directions are the ones which can be recognized by a separate reconstructor based on two samples, from the original and a shifted latent vector. Peebles et al. (2020) propose to learn disentangled attributes by minimizing the sum of squared off-diagonal terms of the generator Hessian matrix. Both approaches require backpropagation through sampling and therefore are not directly applicable to texts: unlike images, generated texts are not differentiable with respect to their latent representations.\nIn the last approach, Härkönen et al. (2020) show that interpretable controls for image synthesis can be identified by finding principal components from layer outputs of the generator network for several samples from the prior. In our more challenging language domain, instead of sampling from the generator distribution, we take advantage of the availability of the encoder in VAEs and perform PCA on training data representations." }, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 VARIATIONAL AUTOENCODERS FOR TEXT DATA", "text": "Variational autoencoders (Kingma & Welling, 2013) are a class of generative models. They consist of an encoder and a decoder. For a given text, the encoder outputs a latent code: a d-dimensional probability distribution (usually a Gaussian with a diagonal covariance matrix). The decoder receives a vector z sampled from some distribution (in training, from the encoder output; in inference, from the prior) and generates an output text.\nVAEs are trained to minimize the sum of the reconstruction loss (for language, usually the cross-entropy loss) and the KL divergence loss, which forces the encoder outputs to stay close to the prior distribution. To avoid the KL vanishing problem (see section 2.1), the KL term of the loss is weighted with a coefficient β, which is annealed according to a predefined schedule (Bowman et al., 2016). Formally, the loss function is as follows:
$$\operatorname*{arg\,min}_{\phi, \theta} \mathcal{L}_\beta = \operatorname*{arg\,min}_{\phi, \theta} \left( \mathcal{L}_{\mathrm{rec}} + \beta \mathcal{L}_{\mathrm{KL}} \right) \qquad (1)$$
For more details on the specifics of the loss and training, see e.g. (Li et al., 2020)." }, { "heading": "3.2 OPTIMUS: THE FIRST LARGE-SCALE TEXT VAE", "text": "The current state-of-the-art variational autoencoder is OPTIMUS (Li et al., 2020). This is the first large-scale VAE: it initializes the encoder and decoder networks with the pretrained weights of BERT (Devlin et al., 2019) and GPT-2 (Radford et al.), respectively. There are two strategies for including the latent vector in the generative process of the Transformer decoder: memory and embedding. In the memory strategy, the latent vector is concatenated with the prefix tokens; in the embedding strategy, the latent vector is added to every token representation in the decoder.\nWe use OPTIMUS because, for our study, we need a strong model with a good latent space. For a detailed explanation, see section 2.1."
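To make Eq. (1) concrete, here is a minimal NumPy sketch of the β-weighted objective for a diagonal-Gaussian posterior, with a linear KL warm-up standing in for the annealing schedule; the specific schedule shape and function names are our illustrative assumptions, not the exact recipes of OPTIMUS or Bowman et al.

import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(rec_nll, mu, logvar, step, warmup_steps, beta_max=1.0):
    """L_beta = L_rec + beta * L_KL as in Eq. (1), with beta annealed linearly to beta_max.
    rec_nll: per-example reconstruction cross-entropy; mu, logvar: encoder outputs."""
    beta = beta_max * min(1.0, step / float(warmup_steps))
    return np.mean(rec_nll + beta * kl_to_standard_normal(mu, logvar))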
}, { "heading": "4 OUR METHOD", "text": "Each method for discovery of interpretable directions is inevitably paired with its specific underlying definition of the vague notion of “interpretability”: the former defines the latter, and vice versa. For example, in Voynov & Babenko (2020), interpretable directions are the ones that are easy to distinguish from each other by observing results of manipulations; in Peebles et al. (2020) — the ones that change the output image independently from each other.\nTo start, we formulate the desired properties we believe interpretable directions should have. In our setting, interpretable directions are the ones that:\n• are orthogonal: this is a weaker form of the standard attribute disentanglement requirement, which states that changing one attribute of the sentence should not affect others;\n• maximize variance, i.e. they “explain” the data distribution well, so that its most distinctive features are exposed as controllable attributes. This requirement might be formalized as follows: given the d-dimensional latent representations of n samples X ∈ Rn×d, we are interested in finding a set of vectors wi that maximize the sum of ||Xwi||2." }, { "heading": "4.1 PCA FOR ENCODED TRAINING DATA", "text": "Luckily, a common statistical technique Principal Component Analysis (PCA) satisfies both of these requirements by construction. This technique obtains principal components (vectors with highest variance) as eigenvectors of the covariance matrix XTX corresponding to the largest eigenvalues. Although PCA is mostly used as a dimensionality reduction method, we use it to obtain a linear transformation that corresponds to a set of disentangled controllable text attributes.\n1. Encode unlabeled training samples\n2. Compute PCA of encoded samples\n3. Use principal components to change sentence attributes\nThe resulting method is a straightforward procedure (see Figure 1):\n1. Encode examples from the training dataset (since only encoder is used, no generation is required). For each example, we will have a probability distribution.\n2. Take the expectation vector of each distribution, stack them to form a matrix.\n3. Compute the principal components of the resulting matrix.\n4. Take top-N principal components with the highest explained variance.\nNote that our method is very simple: it does not involve neither training any additional models nor generation from the model (which is slow for texts). Despite its simplicity, this approach discovers meaningful and often non-trivial latent manipulations (section 5.2) and the proportion of interpretable directions is greater compared to several baselines (section 5.3)." }, { "heading": "4.2 NOTES ON DESIGN CHOICES", "text": "In this part, we explain some of our design choices in light of related works on interpretable directions mining for image GANs (Voynov & Babenko, 2020; Peebles et al., 2020; Härkönen et al., 2020) and highlight why the fist two of these three methods are not directly applicable for texts.\nWe do not sample from the prior. An important distinction of our approach from prior work on latent discovery in image GANs is that we do not use sampling from the prior distribution of the trained model. This has several reasons. First, while image generators are convolutional and the entire image is generated in one pass, text decoders produce sentences token by token, which significantly slows down training. 
Second, we observed that sampling from the prior distribution of OPTIMUS does not result in high-quality samples. For example, when latent vectors are sampled from N(0, I_d), the resulting texts are highly repetitive. Namely, on average over 40% of tokens in a generated text have the same token type. To achieve higher generation quality, we could estimate the variance σ(z) from the training data and sample latent vectors from N(0, σ(z)) instead of N(0, I_d). However, this requires mapping all training sentences to the latent space; at that point, one already has latent vectors encoding natural texts, so further sampling is redundant.\nTwo of the methods are not directly applicable to texts. While our method is similar to the one proposed in Härkönen et al. (2020) for images, there exist two other methods that show promising results. Here, we would like to stress that the approaches by Voynov & Babenko (2020) and Peebles et al. (2020) for the visual domain are not directly applicable to language generation: they require backpropagation through discrete sampling. This problem has previously been studied in the context of language GAN training, and there is evidence that the community has not yet proposed a solution that would consistently outperform maximum likelihood training (Caccia et al., 2020).\nOther matrix decompositions. There exist numerous ways to decompose the latent representation matrix into several orthogonal components. However, our preliminary evaluation of Independent Component Analysis (Hyvärinen & Oja, 2000), which is one of the most widely used alternatives to PCA, did not yield substantially better results in terms of direction interpretability." }, { "heading": "5 EXPERIMENTS", "text": "In this part, we first show qualitative results: categories of the interpretable directions found by our method along with some examples. Next, we show that the success of our method cannot be attributed solely to the properties of the latent space. Namely, the proportion of interpretable directions among the ones found by our method is larger than that of the baselines. Since, as we will see, the types of the revealed directions can be quite diverse, developing a metric for automatic evaluation of interpretability is challenging. Therefore, we resort to human evaluation." }, { "heading": "5.1 SETUP", "text": "Models. We evaluate three publicly available OPTIMUS checkpoints: one for a model trained on a dump of Wikipedia with β = 0.5 and latent vector size 32, and two for models finetuned on the SNLI dataset (Bowman et al., 2015) with β = 0 and β = 1 and latent vector size 768.1\nIn preliminary experiments, we also examined other models from prior work. As expected, simple attributes are discoverable even in lower-capacity models, but the overall reconstruction and generation quality is worse.\nFinding directions. We compute the PCA for latent representations on a subsample of 50000 training examples for each model; larger subsample sizes did not improve the quality of the attributes.\nBaselines. Since mining of interpretable latent manipulations for texts is a new task, there are no established methods. Therefore, we implement two simple but reasonable baselines.\n• Random directions: vectors sampled from the standard normal distribution and normalized to unit length. This is the easiest way to find meaningful directions in the latent space.\n• Coordinate directions: coordinate vectors in the latent space. 
To choose the “best” directions, we pick coordinates with the highest variance over the training dataset representations. This goes in line with the motivation of PCA: out of all possible coordinate directions, these are responsible for most of the variance in the data.2\nNote that the coordinate baseline can be potentially quite strong. The standard Gaussian distribution, which is the prior for the OPTIMUS model, has independent components. Since OPTIMUS is trained with a KL divergence between the posterior and the prior distributions, canonical coordinates in its latent space are likely to be disentangled.\nEvaluating directions. To evaluate the directions, we sample sentences from the corresponding training set and apply the latent shift to the encoder output. Namely, we increase or decrease the attribute presence by multiplying the shift vector by a scalar constant and adding it to the mean vector of the distribution. The value of the scalar varies from -m to m; in each case, we take 5 points, {−m, −0.5m, 0, 0.5m, m}, so that we observe the unchanged sentence and its 4 modifications with a varying direction and degree of change. For the PCA directions, m = 5; for the baselines, m = 10. These values of m were found empirically: lower values of m result in mostly no changes in the model output, while higher values lead to degenerate sequences with repeated tokens. For generation, we use nucleus sampling (Holtzman et al., 2020) with p = 0.9; in preliminary experiments, we also found the results do not change when using greedy decoding.
1 Ideally, we would examine the properties of a model with a high-dimensional latent space trained on a general corpus. Unfortunately, this was not possible: training such a model from scratch is prohibitively expensive, and OPTIMUS checkpoints of this kind of model are not publicly available.
2 Additionally, in preliminary experiments we also looked at the top coordinates with the lowest variance, as well as randomly chosen ones. Coordinates with the highest variance tend to be more reasonable and interpretable." }, { "heading": "5.2 QUALITATIVE RESULTS", "text": "First, we observe that almost all variance in the latent representations of the SNLI data is covered by a small fraction of principal components (see Appendix A). This hints at the presence of directions that capture the most important features of these texts.\nNext, we look at how the shifts along the top components influence generated texts. Figure 2 shows examples of directions discovered by our method for the model trained on the SNLI dataset. The discovered directions can be roughly categorized into four types:" }, { "heading": "5.3 HUMAN EVALUATION", "text": "" }, { "heading": "5.3.1 PROCEDURE", "text": "We evaluate the three available checkpoints of OPTIMUS and the three methods (PCA with two baselines) described above. For each model, we sample 20 sentences from the corresponding training dataset. Then, for each method we take the 20 “best” directions and apply them to the gathered sentences. Note that some directions are applicable only to a fraction of sentences (e.g., gender-specific attributes require a human subject). If after applying direction shifts to a sentence at least three of the 5 versions are the same, we do not use this sentence for this direction. To form an annotation task, we randomly choose 5 sentences from the filtered results. For a single combination of model, method, and direction, we generate 5 such tasks.\nThe evaluation protocol for one annotation task is as follows:\n1. 
The annotator sees 5 sentences along with their modifications. They have to answer whether the direction corresponding to that example is interpretable.3\n2. If the direction is interpretable, the annotator has to specify its category: either choose one of those described in section 5.2 or enter a description manually.\n3. If the direction is partially interpretable (e.g. content preservation is lacking or the results are “visible” only in part of the examples), the annotator has to indicate this.\nOur annotators are 12 people with a background in machine learning. On average, we give each participant about 75 annotation tasks. For a single combination of model, method, and direction, we aggregate the results with the majority vote." }, { "heading": "5.3.2 RESULTS AND DISCUSSION", "text": "The results are shown in Tables 1 and 2; more detailed statistics are given in Appendix C. Our method outperforms the baselines for all three models, being the only one for which more than 50% of the output directions are interpretable. Apparently, the Wikipedia model exhibits too low a reconstruction quality to be interpretable; this is most likely because its latent space dimension is 32, which is very small.\nInterestingly, for the SNLI model with β = 1, coordinate directions perform worse than random directions or even coordinate directions for the model with β = 0. Let us recall that β affects the regularization strength: higher β forces the latent distribution to be closer to the prior, in this case, an isotropic Gaussian. This has two implications. First, the model with the stronger regularization (β = 1) has weaker representation capabilities. Second, differences in the variance of the output distribution become less pronounced: as such, the fraction of explained variance becomes a non-informative coordinate selection criterion.\nIf we now look at the specific categories of the directions discovered by different methods (Table 2), we will see an interesting pattern: the baseline methods discover mainly basic text properties (e.g. length), but our method identifies mostly changes of word-level attributes (subject gender, age, number, etc.). Note that these properties are more useful for the SNLI task than basic sentence attributes. This suggests that the directions discovered by our method are more reasonable for a given model." }, { "heading": "6 CONCLUSION", "text": "We propose the first method for unsupervised discovery of interpretable attribute manipulations in text variational autoencoders. This method involves computing the principal components of training data representations. It is very simple, fast, and outperforms the baselines by a large margin. Future work may investigate the locality of directions in text: many modifications (e.g. age) are applicable to only a fraction of sentences, and this might influence the results in Table 1.\nWe believe this work will encourage further research in two important areas. First, our approach may serve as a baseline for the new task of unsupervised discovery of interpretable latent controls in generative models. Second, it can be used to compare models in terms of latent space interpretability.\n3 The instruction shown to the annotators can be seen in Appendix B." }, { "heading": "B HUMAN EVALUATION INSTRUCTION", "text": "" }, { "heading": "B.1 TASK DESCRIPTION", "text": "In each assignment, you will see 5 sentences with a certain textual characteristic (or attribute) changed by the model. 
There are 5 degrees of change of different intensity on the scale of “–” to “++”; the unchanged sentence is given in the middle.\n1. You need to answer whether it is possible to interpret the changing attribute from the given examples. If the attribute is partially interpretable (for instance, because the nature of the changes is the same in only 2 of 5 sentences), you should also state this.\n2. If you can interpret the attribute (even partially), you need to assign it to one of the attribute categories or specify your own. Do not hesitate to give your own name to the attribute in the corresponding field: the model might change sentence properties that are not given in the list or do not fit the provided categories4.\n3. If the attribute interpretation was challenging or only partial, choose the option “Partially interpretable” and give the exact reason: whether this is because of several properties changing at once, one property being interpretable in only 1-2 sentences out of 5, or too drastic content changes. You can also explain partial interpretability using the text field.\n4. If the attribute is not interpretable because you cannot unambiguously formulate its essence so that it would fit all sentences, you should choose the option “No”. If changing this attribute leads to sentences becoming meaningless character sequences or repetitions of the same words, you should choose the option “Incorrect text”.
4 Despite this sentence, the option of custom attribute type was mostly unused by all annotators.
B.2 EXAMPLES WITH DIFFERENT INTERPRETABILITY\nExample 1 (correct changes in the attribute):\n-- there is a man celebrating - there is a man celebrating
there is a man celebrating with friends + there is a man celebrating with friends and celebrating in front of a red building ++ there is a man celebrating with friends celebrating with a red flag about to celebrate with other peoples in <unk>\n-- a man is walking. - a man in jeans runs.
a man in jeans runs through a park. + a man in jeans runs through a park through jean shorts. ++ a man in jeans jeans runs through a park in a gray t-shirt through woods with runners past.\nIn this case, for both sentences changing the attribute leads only to changes in the sentence length, so the appropriate answer is “Yes”.\nExample 2 (content not preserved):\n-- In the aftermath of the war, the local population of the area, including the local inhabitants of the area, who had been evacuated from the area by the Germans.
++\n-- In addition to her high school education, she received a $1,000 grant from the state of New York to provide a $1,000 grant to the New York State Department of Education - In addition to her high school diploma, she received a bachelor’s degree in business administration from the University of Michigan, and a master’s degree in business administration from the University of Michigan\nShe earned a bachelor’s degree in business administration, and a master’s degree in business administration. + 5. ++\nIn this case the sentence length also changes, but it also leads to a drastic change in content (it is not possible to say that all sentences were obtained from the one in the middle by adding or removing words). In such cases, the expected answer is “Partially” and the reason is “Too drastic content changes”.\nExample 3 (several attributes are changed at a time):\n-- a couple are embracing. - two people are embracing.\ntwo people are embracing each other. + two people are embracing each other across the surface. ++ two humans glide across each other as each other are embraced by the other.\n-- a couple with a umbrella - 2 people with a umbrella on a beach\n3 people with umbrellas on a street next to a park + 3 people with umbrellas on a street next to some pedestrians outside a park ++ 5 pedestrians all on green spots along side a curb with a street in front of some pedestrians in an area of nyc\n-- two men and is a birthday. - two men and three children are at the beach.\ntwo men and three children are at the beach. + two men and three children are at the beach three. ++ three men and three children all sit at the side of the road in green and yellow.\nIn this case, the meaning is mostly preserved, but there are two changes at the same time: sentence length and the number of subjects. Because of this, you should choose “Partially interpretable — More than one attribute changes”.\nExample 4 (non-interpretable changes):\n-- three girls watch a ballet dance. - four women watch a ballet practice from their chair.\nfour women watch a ballet practice from their chairs. + four women watch a ballet from their practice chairs. ++ women watch a ballet during their breaks at the computers.\n-- young boy using a vacuum cleaner. - young boy using a vacuum cleaner on a rug.\nyoung boy using a vacuum cleaner on a rug. + young boy using a vacuum cleaner on a rug. ++ young boy using a vacuum on a dry rug.\n-- a boy is jumping from a plank. - a boy jumps from a plank high.\na boy jumps from a plank up high. + a boy jumps from a plank up high. ++ a boy jumps from a plank up over the water.\nHere (at least, for the task authors) it is too hard to briefly explain the kind of changes, hence the answer should be “No”.\nExample 5 (incorrect text):\n-- \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" \" - \"The Merry Christmas\" was signed by the London Company and sold to the Royal Opera Company in 1885 for £1,000, and was the first of the \"Merry Christmas\" to be sold to the Royal Opera Company.\nThe casualties were estimated at 1,000,000 men, and the British were estimated at 1,000,000. + There are no casualties. 
++\nIn this case, significant changes to the attribute (“–” and “++”) lead to the sentence no longer being readable. You may choose “The text is incorrect” if it is impossible to understand the interpretability of the attribute in this case.\nB.3 ATTRIBUTE TYPES\nEach attribute may be connected with simple sentence features (e.g. length), presence of certain words in the sentence or even with sentence sentiment. You can see examples of different attributes below:\nLength:\n-- 3 people are outside - 3 people with a umbrella on street
3 people with umbrellas on a street next to a park + 3 umbrellas with a person on top of a street for umbrellas outside a park
++ 3 umbrellas with umbrellas on a street corner with a lot of people next to a green parka with a street sign on it in front of it for an area of a city in an asian city\nSingular/plural number:\n-- a worker standing on a high scaffold. - a worker standing on a high scaffold.
a worker standing on a high scaffold. + a worker standing on high scaffolding. ++ several workers standing high on the scaffold.\nChanging the order of words in a sentence (also, the number is changed):\n-- the are standing and reading a newspaper. - the man is standing and reading a newspaper.
a man is standing and reading a newspaper. + a man is standing and reading in a newspaper. ++ a man is standing and running in a newspaper.\n-- the naked men are holding the drum. - the naked man is playing drums.
the naked man is playing drums. + a naked man is playing drums. ++ a naked man is playing in wate\nAdding the word “in”:\n-- a man runs through jeans. - a man in jeans runs through a park.
a man in jeans runs through a park. + a man in jeans runs through a park. ++ a man in jeans runs in a park." }, { "heading": "C ADDITIONAL TABLES FROM THE HUMAN STUDY", "text": "Here, we provide full data on the results of the interpretability evaluation without aggregating results over different interpretability categories (Table 3) or different models for direction types (Table 4)." }, { "heading": "D EXAMPLES OF FOUND DIRECTIONS", "text": "Below we give additional examples of latent directions discovered by our method on the SNLI dataset (β = 0):\n1. Basic sentence attributes:\n-- children are playing a game. - children are playing a game.
children are playing a game together. + children are playing a game together as a ball game. ++ children playing a game of board war together are playing a game of snowball together after a birthday.\n-- a man is looking down. - a fireman is looking down.
a fireman is looking down at the ground. + a fireman looking down at the ground is still raining down. ++ a firetruck driver looking down at the ground about to fall down from the ground is still burning up smoke.\n2. Word-level attributes: Singular/plural noun:\n-- family at a a wedding. - family at a wedding.
family at a wedding. + family at a wedding. ++ families at wedding.\n-- a a person performing a a guitar for a little crowd. - a band performing for a little crowd.
a band performing for a little crowd. + the band performing for some big crowd. ++ the bands performing just for friends.\n-- a child is playing a game a ball in a room. - children are playing a game together.
children are playing a game together. + children are playing a game together. ++ children are playing game together. Age:\n-- the man is is important. - the man is athletic.
the man is athletic. + the man is athletic. 
++ the athletic boy.\n-- a woman is sitting at her positions to practice their positions.\n- a woman is practicing her positions at an yoga center. a woman is practicing her yoga positions.\n+ a female is practicing her yoga positions. ++ a yoga girl practicing her knees.\n-- the woman woman is patting volleyball. - the women are playing volleyball in the park.\nthe women are playing volleyball in the park. + the women are playing volleyball in the park. ++ the girls are playing volleyball in the park. Gender-neutral/gender-specific:\n-- children are playing a game together. - children are playing a game together.\nchildren are playing a game together. + children are playing a game together. ++ boys are play a game together.\n-- a person is practicing her standing muscles. - a woman is practicing her positions yoga.\na woman is practicing her yoga positions. + a woman is practicing her yoga positions. ++ two women practices her yoga positions.\n3. Insertion of particular words: “In”:\n-- children are playing a game to celebrate. - children are playing a game together.\nchildren are playing a game together.\n+ children are playing a game together. ++ two children in a game are sitting together.\n-- the women are practicing volleyball playing volleyball. - the women are playing volleyball during the beach.\nthe women are playing volleyball in the park. + the women are playing volleyball in the park. ++ the two women in volleyball are sitting in a park. “Man”:\n-- children are playing a game in full. - children are playing a game together.\nchildren are playing a game together. + two children are playing a game together. ++ two men are playing a game together.\n-- a person pouring a cup of concrete sidewalk in a window display. - a person pouring a wheelbarrow of cement on a sidewalk.\na man pouring a wheelbarrow of cement on a sidewalk. + a man pulling a wheelbarrow of cement on his bicycle. ++ a man pulling a wheelbarrow of two men and a hammer on the rail.\n-- the person is athletic. - the man is athletic.\nthe man is athletic. + the man is athletic. ++ the man and his man are athletic.\n4. Enforcing specific structure: “A is B”:\n-- children playing game together outside a house. - children are playing a game together.\nchildren are playing a game together. + children are playing a game together. ++ the child is a game having a winning.\n-- women playing volleyball in the park. - several women are playing volleyball in the park.\nthe women are playing volleyball in the park. + the women are playing volleyball in the park. ++ the woman is the volleyball player. “A is B in C”\n-- the women are playing the volleyball bar. - the women are playing volleyball in the park.\nthe women are playing volleyball in the park. + the woman are playing volleyball in the park. ++ a woman is playing volleyball in city.\n-- the fireman are looking down the table. - the fireman is looking down at the ground.\na fireman is looking down at the ground. + a fireman is looking down at ground. ++ a firefighter is looking down in fire." } ]
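For completeness, a compact sketch of the direction construction and shifting protocol from Section 5.1 (the random and coordinate baselines plus the 5-point shift grid); as before, the decode callable is a placeholder assumption for the OPTIMUS interface, not released code.

import numpy as np

def baseline_directions(Z, kind="random", top_n=20, seed=0):
    """Z: (n, d) matrix of latent means. Returns unit-norm candidate directions."""
    d = Z.shape[1]
    if kind == "random":
        D = np.random.default_rng(seed).standard_normal((top_n, d))
        return D / np.linalg.norm(D, axis=1, keepdims=True)
    order = np.argsort(Z.var(axis=0))[::-1][:top_n]   # highest-variance coordinates
    return np.eye(d)[order]

def shifted_decodings(decode, z_mean, direction, m):
    """Decode the unchanged latent mean and its four shifted versions."""
    return [decode(z_mean + a * direction) for a in (-m, -0.5 * m, 0.0, 0.5 * m, m)]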
2020
null
SP:b9d78677e836fddeab78615ad35e9545d9c1d08f
[ "This paper extends results of prior work by Steinke and Zakynthinou, by providing generalization bounds in the PAC-Bayesian and single-draw settings that depend on the conditional mutual information. The emphasis in this work is on obtaining fast rates ($1/n$ vs. $1/\\sqrt{n}$). The authors also conduct empirical experiments showing how the fast rate bounds they propose can be useful for obtaining non-vacuous generalization bounds in the context of over-parameterized neural networks." ]
We present a framework to derive bounds on the test loss of randomized learning algorithms for the case of bounded loss functions. This framework leads to bounds that depend on the conditional information density between the output hypothesis and the choice of the training set, given a larger set of data samples from which the training set is formed. Furthermore, the bounds pertain to the average test loss as well as to its tail probability, both for the PAC-Bayesian and the single-draw settings. If the conditional information density is bounded uniformly in the size n of the training set, our bounds decay as 1/n, which is referred to as a fast rate. This is in contrast with the tail bounds involving conditional information measures available in the literature, which have a less benign 1/√n dependence. We demonstrate the usefulness of our tail bounds by showing that they lead to estimates of the test loss achievable with several neural network architectures trained on MNIST and Fashion-MNIST that match the state-of-the-art bounds available in the literature.
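Once the relevant information term is known, the slow-rate and fast-rate tail bounds stated in the sections below (equations (8)-(9) and (12)-(13)) are one-line formulas. A minimal sketch, assuming a loss bounded in [0, 1], with `info` standing for the conditional KL divergence (PAC-Bayesian case) or the conditional information density (single-draw case):

```python
import math

def slow_rate_bound(train_loss, info, n, delta):
    """Slow-rate test-loss bound, eqs. (8)/(9): train + sqrt(2*(info + log(1/delta))/n)."""
    return train_loss + math.sqrt(2.0 * (info + math.log(1.0 / delta)) / n)

def fast_rate_bound(train_loss, info, n, delta):
    """Fast-rate test-loss bound, eqs. (12)/(13): 2*train + 3*(info + log(1/delta))/n."""
    return 2.0 * train_loss + 3.0 * (info + math.log(1.0 / delta)) / n

# For small training loss and small info, the fast-rate bound decays as 1/n
# rather than 1/sqrt(n), at the price of the multiplicative constants 2 and 3.
# Appendix C notes that, when evaluating, the constants 2 and 3 can be
# sharpened to 1.975 and 2.98.
```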
[]
[ { "authors": [ "A.R. Asadi", "E. Abbe", "S. Verdú" ], "title": "Chaining mutual information and tightening generalization bounds", "venue": "In Proc. Conf. Neural Inf. Process. Syst. (NeurIPS),", "year": 2018 }, { "authors": [ "R. Bassily", "S. Moran", "I. Nachum", "J. Shafer", "A. Yehudayoff" ], "title": "Learners that use little information", "venue": "J. of Mach. Learn. Res.,", "year": 2018 }, { "authors": [ "Y. Bu", "S. Zou", "V.V. Veeravalli" ], "title": "Tightening mutual information based bounds on generalization error", "venue": "In Proc. IEEE Int. Symp. Inf. Theory (ISIT),", "year": 2019 }, { "authors": [ "O. Catoni" ], "title": "PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning, volume 56", "venue": "IMS Lecture Notes Monogr. Ser.,", "year": 2007 }, { "authors": [ "G.K. Dziugaite", "D.M. Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": "In Proc. Conf. Uncertainty in Artif. Intell. (UAI),", "year": 2017 }, { "authors": [ "G.K. Dziugaite", "K. Hsu", "W. Gharbieh", "D.M. Roy" ], "title": "On the role of data in PAC-Bayes bounds, June 2020", "venue": "URL https://arxiv.org/abs/2006.10929", "year": 2006 }, { "authors": [ "A.R. Esposito", "M. Gastpar", "I. Issa" ], "title": "Generalization error bounds via Rènyi f -divergences and maximal leakage", "venue": null, "year": 2019 }, { "authors": [ "P.D. Grünwald", "N.A. Mehta" ], "title": "Fast rates for general unbounded loss functions: from ERM to generalized Bayes", "venue": "J. of Mach. Learn. Res.,", "year": 2020 }, { "authors": [ "B. Guedj" ], "title": "A primer on PAC-Bayesian learning", "venue": "URL http://arxiv.org/ abs/1901.05353", "year": 2019 }, { "authors": [ "B. Guedj", "L. Pujol" ], "title": "Still no free lunches: the price to pay for tighter PAC-Bayes bounds", "venue": null, "year": 2019 }, { "authors": [ "M. Haghifam", "J. Negrea", "A. Khisti", "D.M. Roy", "G.K. Dziugaite" ], "title": "Sharpened generalization bounds based on conditional mutual information and an application to noisy, iterative algorithms", "venue": "URL http://arxiv.org/abs/2004.12983", "year": 2020 }, { "authors": [ "F. Hellström", "G. Durisi" ], "title": "Generalization error bounds via mth central moments of the information density", "venue": "In Proc. IEEE Int. Symp. Inf. Theory (ISIT),", "year": 2020 }, { "authors": [ "F. Hellström", "G. Durisi" ], "title": "Generalization bounds via information density and conditional information density", "venue": "(June 2020),", "year": 2020 }, { "authors": [ "I. Issa", "S. Kamath", "A.B. Wagner" ], "title": "An operational approach to information leakage", "venue": "IEEE Trans. Inf. Theory,", "year": 2020 }, { "authors": [ "Gaël Letarte", "Pascal Germain", "Benjamin Guedj", "Francois Laviolette" ], "title": "Dichotomize and generalize: PAC-Bayesian binary activated deep neural networks", "venue": "In Proc. Conf. Neural Inf. Process. Syst. (NeurIPS),", "year": 2019 }, { "authors": [ "D.A. McAllester" ], "title": "Some PAC-Bayesian theorems", "venue": "In Proc. Conf. Learn. Theory (COLT),", "year": 1998 }, { "authors": [ "D.A. McAllester" ], "title": "PAC-Bayesian stochastic model selection", "venue": "Mach. Learn.,", "year": 2003 }, { "authors": [ "D.A. McAllester. A PAC-Bayesian tutorial with a dropout bound. July" ], "title": "URL http://arxiv", "venue": "org/abs/1307.2118.", "year": 2013 }, { "authors": [ "J. Negrea", "M. Haghifam", "G.K. Dziugaite", "A. Khisti", "D.M. 
Roy" ], "title": "Information-theoretic generalization bounds for SGLD via data-dependent estimates", "venue": "In Proc. Conf. Neural Inf. Process. Syst. (NeurIPS),", "year": 2019 }, { "authors": [ "Y. Polyanskiy", "Y. Wu" ], "title": "Lecture Notes On Information Theory. 2019", "venue": "URL http://www.stat. yale.edu/%7Eyw562/teaching/itlectures.pdf", "year": 2019 }, { "authors": [ "D. Russo", "J. Zou" ], "title": "Controlling bias in adaptive data analysis using information theory", "venue": "In Proc. Artif. Intell. Statist. (AISTATS),", "year": 2016 }, { "authors": [ "M. Seeger" ], "title": "PAC-Bayesian generalisation error bounds for Gaussian process classification", "venue": "J. of Mach. Learn. Res.,", "year": 2002 }, { "authors": [ "T. Steinke", "L. Zakynthinou" ], "title": "Reasoning about generalization via conditional mutual information", "venue": "Conf. Learn Theory (COLT),", "year": 2020 }, { "authors": [ "M. Talagrand" ], "title": "Sharper bounds for Gaussian and empirical processes", "venue": "Ann. Probab.,", "year": 1994 }, { "authors": [ "T. Van Erven", "P.D. Grünwald", "N.A. Mehta", "M.D. Reid", "R.C. Williamson" ], "title": "Fast rates in statistical and online learning", "venue": "J. of Mach. Learn. Res.,", "year": 2015 }, { "authors": [ "V. Vapnik" ], "title": "Statistical Learning Theory", "venue": null, "year": 1998 }, { "authors": [ "A. Xu", "M. Raginsky" ], "title": "Information-theoretic analysis of generalization capability of learning algorithms", "venue": "In Proc. Conf. Neural Inf. Process. Syst. (NeurIPS),", "year": 2017 }, { "authors": [ "C. Zhang", "S. Bengio", "M. Hardt", "B. Recht", "O. Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In Proc. Int. Conf. Learn. Representations (ICLR),", "year": 2017 }, { "authors": [ "W. Zhou", "V. Veitch", "M. Austern", "R.P. Adams", "P. Orbanz" ], "title": "Non-vacuous generalization bounds at the ImageNet scale: a PAC-Bayesian compression approach", "venue": "In Proc. Int. Conf. Learn. Representations (ICLR),", "year": 2019 } ]
[ { "heading": null, "text": "√ n dependence. We demonstrate\nthe usefulness of our tail bounds by showing that they lead to estimates of the test loss achievable with several neural network architectures trained on MNIST and Fashion-MNIST that match the state-of-the-art bounds available in the literature." }, { "heading": "1 INTRODUCTION", "text": "In recent years, there has been a surge of interest in the use of information-theoretic techniques for bounding the loss of learning algorithms. While the first results of this flavor can be traced to the probably approximately correct (PAC)-Bayesian approach (McAllester, 1998; Catoni, 2007) (see also (Guedj, 2019) for a recent review), the connection between loss bounds and classical information-theoretic measures was made explicit in the works of Russo & Zou (2016) and Xu & Raginsky (2017), where bounds on the average population loss were derived in terms of the mutual information between the training data and the output hypothesis. Since then, these average loss bounds have been tightened (Bu et al., 2019; Asadi et al., 2018; Negrea et al., 2019). Furthermore, the information-theoretic framework has also been successfully applied to derive tail probability bounds on the population loss (Bassily et al., 2018; Esposito et al., 2019; Hellström & Durisi, 2020a).\nOf particular relevance to the present paper is the random-subset setting, introduced by Steinke & Zakynthinou (2020) and further studied in (Hellström & Durisi, 2020b; Haghifam et al., 2020). In this setting, a random vector S is used to select n training samples Z(S) from a larger set Z̃ of 2n samples. Then, bounds on the average population loss are derived in terms of the conditional mutual information (CMI) I(W ;S|Z̃) between the chosen hypothesis W and the random vector S given the set Z̃. The bounds obtained by Xu & Raginsky (2017) depend on the mutual information I(W ;Z), a quantity that can be unbounded if W reveals too much about the training set Z. In contrast, bounds for the random-subset setting are always finite, since I(W ;S|Z̃) is never larger than n bits. Most information-theoretic population loss bounds mentioned thus far are given by the training loss plus a term with a √ IM(PWZ)/n-dependence, where IM(PWZ) denotes an information measure, such as mutual information or maximal leakage (Issa et al., 2020). Assuming that the information measure grows at most polylogarithmically with n, the convergence rate of the population loss to the training loss is Õ(1/ √ n), where the Õ-notation hides logarithmic factors. This is sometimes referred to as a slow rate. In the context of bounds on the excess risk, defined as the difference between the achieved population loss for a chosen hypothesis w and its infimum over the hypothesis class, it is known that slow rates are optimal for worst-case distributions and hypothesis classes (Talagrand,\n1994). However, it is also known that under the assumption of realizability (i.e., the existence of a w in the hypothesis class such that the population loss LPZ (w) = 0) and when the hypothesis class is finite, the dependence on the sample size can be improved to Õ(1/n) (Vapnik, 1998, Chapter 4). This is referred to as a fast rate. Excess risk bounds with fast rates for randomized classifiers have also been derived, under certain additional conditions, for both bounded losses (Van Erven et al., 2015) and unbounded losses (Grünwald & Mehta, 2020).\nNotably, Steinke & Zakynthinou (2020, Thm. 
2(3)) derive a population loss bound whose dependence on n is I(W ;S|Z̃)/n. The price for this improved dependence is that the training loss that is added to the n-dependent term is multiplied by a constant larger than 1. Furthermore, (Steinke & Zakynthinou, 2020, Thm. 8) shows that if the Vapnik-Chervonenkis (VC) dimension of the hypothesis class is finite, there exists an empirical risk minimizer (ERM) whose CMI grows at most logarithmically with n. This implies that the CMI approach leads to fast-rate bounds in certain scenarios. However, the result in (Steinke & Zakynthinou, 2020, Thm. 2(3)) pertains only to the average population loss: no tail bounds on the population loss are provided. Throughout the paper, we will, with an abuse of terminology, refer to bounds with an n-dependence of the form IM(PWZ)/n as fast-rate bounds. Such bounds are also known as linear bounds (Dziugaite et al., 2020). Note that the n-dependence of the information measure IM(PWZ) has to be at most polylogarithmic for such bounds to actually achieve a fast rate in the usual sense.\nAn intriguing open problem in statistical learning is to find a theoretical justification for the capability of overparameterized neural networks (NNs) to achieve good generalization performance despite being able to memorize randomly labeled training data sets (Zhang et al., 2017). As a consequence of this behavior, classical population loss bounds that hold uniformly over a given hypothesis class, such as VC bounds, are vacuous when applied to overparameterized NNs. This has stimulated recent efforts aimed at obtaining tighter population loss bounds that are algorithm-dependent or data-dependent.\nIn the past few years, several studies have shown that promising bounds are attainable by using techniques from the PAC-Bayesian literature (Dziugaite & Roy, 2017; Zhou et al., 2019; Dziugaite et al., 2020). The PAC-Bayesian approach entails using the Kullback-Leibler (KL) divergence to compare the distribution on the weights of the NN induced by training to some reference distribution. These distributions are referred to as the posterior and the prior, respectively. Recently, Dziugaite et al. (2020) used data-dependent priors to obtain state-of-the-art bounds for LeNet-5 trained on MNIST and Fashion-MNIST. In their approach, the available data is used both for training the network and for choosing the prior. This leads to a bound that is tighter than previously available bounds. Furthermore, the bound can be further improved by minimizing the KL divergence between the posterior and the chosen prior during training. One drawback of the PAC-Bayesian approach is that it applies only to stochastic NNs, whose weights are randomly chosen each time the network is used, and not to deterministic NNs with fixed weights.\nInformation-theoretic bounds have also been derived for iterative, noisy training algorithms such as stochastic gradient Langevin dynamics (SGLD) (Bu et al., 2019). These bounds lead to nonvacuous estimates of the population loss of overparameterized NNs that are trained using SGLD through the use of data-dependent priors (Negrea et al., 2019). However, these bounds do not apply to deterministic NNs, nor to standard stochastic gradient descent (SGD) training. Furthermore, the bounds pertain to the average population loss, and not to its tails. Although the techniques yielding these estimates can be adapted to the PAC-Bayesian setting, as discussed by Negrea et al. (2019, App. I), the resulting bounds are generally loose." 
}, { "heading": "1.1 CONTRIBUTIONS", "text": "In this paper, we extend the fast-rate average loss bound by Steinke & Zakynthinou (2020) to the PAC-Bayesian and the single-draw settings. We then use the resulting PAC-Bayesian and single-draw bounds to characterize the test loss of NNs used to classify images from the MNIST and FashionMNIST data sets. The single-draw bounds can be applied to deterministic NNs trained through SGD but with Gaussian noise added to the final weights, whereas the PAC-Bayesian bounds apply only to randomized neural networks, whose weights are drawn from a Gaussian distribution each time the network is used. For the same setup, we also evaluate the slow-rate PAC-Bayesian and single-draw bounds from (Hellström & Durisi, 2020b). Our numerical experiments reveal that both the slow-rate\nbounds from (Hellström & Durisi, 2020b) and the newly derived fast-rate bounds are nonvacuous. Furthermore, for some settings, the fast-rate bounds presented in this paper are quantitatively stronger than the corresponding slow-rate ones from (Hellström & Durisi, 2020b), and essentially match the best bounds available in the literature for SGD-trained NNs (Dziugaite et al., 2020)." }, { "heading": "1.2 PRELIMINARIES", "text": "We now detail some notation and describe the random-subset setting introduced in (Steinke & Zakynthinou, 2020). Let Z be the instance space,W be the hypothesis space, and ` :W ×Z → R+ be the loss function. Throughout the paper, we will assume that the range of `(w, z) is restricted to [0, 1] for all w ∈ W and all z ∈ Z . A typical example of such a loss function is the classification error. In this setting, the sample Z consists of an example X ∈ X and a corresponding label Y ∈ Y . Then, the loss is given by `(W,Z) = 1{fW (X) 6= Y }, where fW (·) is the map from X to Y induced by the hypothesis W . We note that, when applying our bounds to NNs, the function `(·, ·) used to characterize the performance of the network does not necessarily need to coincide with the loss function used when training the NN. For instance, one could use the (unbounded) cross-entropy loss when training the NN, and apply the bounds for the scenario in which `(·, ·) is the classification error.\nIn the random-subset setting, 2n training samples Z̃ = (Z̃1, . . . , Z̃2n) are available, with all entries of Z̃ being drawn independently from some distribution PZ onZ . However, only a randomly selected subset of cardinality n is actually used for training. Following (Steinke & Zakynthinou, 2020), we assume that the training dataZ(S) is selected as follows. Let S = (S1, . . . , Sn) be an n-dimensional random vector, the elements of which are drawn independently from a Bern(1/2) distribution and are independent of Z̃. Then, for i = 1, . . . , n, the ith training sample in Z(S) is Zi(Si) = Z̃i+Sin. Thus, the binary variable Si determines whether the training set Z(S) will contain the sample Z̃i or the sample Z̃i+n. The selected training procedure, including the loss function used for training, will determine the conditional distribution PW |Z(S) on the hypothesis class given the training data. For a given W ∼ PW |Z(S), we let LZ(S)(W ) = 1n ∑n i=1 `(W,Zi(Si)) denote the training loss. Furthermore, we let S̄ denote the modulo-2 complement ofS. ThenLZ(S̄)(W ) can be interpreted as a test loss, sinceW is conditionally independent ofZ(S̄) givenZ(S). 
Finally, we note that the average over (Z̃, S) of the test loss is the population loss LPZ(W) = EPZ̃S[LZ(S̄)(W)] = EPZ[ℓ(W, Z)].\n\nOur bounds will depend on several different information-theoretic quantities, which we shall introduce next. The information density ı(W, Z) between W and Z is defined as ı(W, Z) = log dPWZ/d(PW PZ), where dPWZ/d(PW PZ) is the Radon-Nikodym derivative of PWZ with respect to PW PZ. The information density is well-defined if PWZ is absolutely continuous with respect to PW PZ, denoted by PWZ ≪ PW PZ. The conditional information density ı(W, S|Z̃) between W and S given Z̃ is defined as ı(W, S|Z̃) = log dPWZ̃S/d(PW|Z̃ PZ̃S), provided that PWZ̃S ≪ PW|Z̃ PZ̃S. The mutual information can be obtained as I(W; Z) = EPWZ[ı(W, Z)] and the conditional mutual information as I(W; S|Z̃) = EPWZ̃S[ı(W, S|Z̃)]. We will also need the KL divergences D(PW|Z || PW) = EPW|Z[ı(W, Z)] and D(PW|Z̃S || PW|Z̃) = EPW|Z̃S[ı(W, S|Z̃)]. In practical applications, the marginal distribution PW is not available, since PZ is unknown. Furthermore, PW|Z̃ is also difficult to compute, since marginalizing PS PW|Z̃S over S involves performing training 2^n times. Hence, bounds depending on ı(W, Z) or on ı(W, S|Z̃) cannot typically be evaluated. Therefore, it will be convenient to replace the information density ı(W, Z) with the proxy log dPWZ/d(QW PZ) and ı(W, S|Z̃) with log dPWZ̃S/d(QW|Z̃ PZ̃S). Here, QW and QW|Z̃ are suitably chosen auxiliary distributions (priors) that are used in place of the intractable, true marginals." }, { "heading": "2 BACKGROUND", "text": "We next review the bounds available in the literature that are relevant for this paper. Then, in Section 3, we will present novel fast-rate bounds. The canonical PAC-Bayesian population-loss bound for a given posterior PW|Z and loss functions bounded between 0 and 1 states that the following holds
More generally, for any permissible values of q and c, the bound in (2) can be weakened to obtain the following fast-rate bound (McAllester, 2013, Thm. 2): for all λ ∈ (0, 1), and with probability at least 1− δ under PZ ,\nEPW |Z [LPZ (W )] ≤ 1\nλ\n[ EPW |Z [LZ(W )] + D(PW |Z ||QW ) + log 1δ 2(1− λ)n ] . (4)\nNote that the faster decay in n of this bound comes at the price of a multiplication of the training loss and the KL term by a constant that is larger than 1. As a consequence, if the training loss or the KL term are large, this multiplicative constant may make the fast-rate bound in (4) quantitatively worse than the slow-rate bound in (1) for a fixed n. In the so called interpolating setting, where the training loss is 0, we can set λ = 1/2 in (4) and conclude that it is enough for the square-root term in (1) to be smaller than 1/4 for the fast-rate bound (4) to be tighter than the slow-rate bound (1). Additional insights on the tightness of these bounds are provided in (Letarte et al., 2019, Thm. 3).\nWe now turn to the random-subset setting introduced by Steinke & Zakynthinou (2020), and described in Section 1.2. In (Steinke & Zakynthinou, 2020, Thm. 2), several bounds on the average population loss are derived for loss functions bounded between 0 and 1, including the following slow-rate and fast-rate bounds:\nEPWZ̃S [LPZ (W )] ≤ EPWZ̃S [ LZ(S)(W ) ] +\n√ 2I(W ;S|Z̃)\nn (5)\nEPWZ̃S [LPZ (W )] ≤ 2EPWZ̃S [ LZ(S)(W ) ] + 3I(W ;S|Z̃) n . (6)\nSimilar to the bound in (4), the price for a fast rate is a multiplicative constant in front of the training loss and the mutual information term. The slow-rate bound in (5) was improved in (Haghifam et al., 2020, Thm. 3.4) by combining the samplewise approach from (Bu et al., 2019) with the disintegration approach in (Negrea et al., 2019), whereby the expectation over Z̃, which is implicit in the definition of CMI, is pulled outside of the square root. As we detail in the following proposition, the bound in (Haghifam et al., 2020, Thm. 3.4) can be further tightened by also pulling out the expectation with respect to the conditional distribution PW |Z̃ from the square root. The proof of the resulting bound, which is novel, is deferred to Appendix A.1. Proposition 1. Consider the random-subset setting described in Section 1.2. Then,\nEPWZ̃S [LPZ (W )] ≤ EPWZ̃S [ LZ(S)(W ) ] + 1\nn n∑ i=1 EPWZ̃ [√ 2D(PSi|WZ̃ ||PSi) ] . (7)\nWe recover (Haghifam et al., 2020, Thm. 3.4) by applying Jensen’s inequality to move the expectation with respect to PW |Z̃ inside the square root. Furthermore, we recover (5) by applying Jensen’s inequality once more to move the remaining expectation over PZ̃ and the the empirical average over i inside the square root and by upper-bounding the resulting sum of samplewise CMIs by I(W ;S|Z̃). In (Hellström & Durisi, 2020b), the slow-rate average loss bound (5) was extended to the PACBayesian setting and the single-draw setting through the use of an exponential inequality. Specifically, the following bounds on the test loss LZ(S̄)(W ) are derived: with probability at least 1−δ under PZ̃S ,\nEPW |Z̃S [ LZ(S̄)(W ) ] ≤ EPW |Z̃S [ LZ(S)(W ) ] +\n√ 2\nn\n( D(PW |Z̃S ||PW |Z̃) + log 1\nδ\n) . (8)\nFurthermore, with probability at least 1− δ under PWZ̃S ,\nLZ(S̄)(W ) ≤ LZ(S)(W ) +\n√ 2\nn\n( ı(W,S|Z̃) + log 1\nδ\n) . (9)\nWhile the bounds in (8) and (9) pertain to the test loss instead of the population loss, one can obtain population loss bounds by adding a penalty term to (8) and (9), as discussed in (Hellström & Durisi, 2020b, Thm. 2). 
However, when comparing the bounds to the empirical performance of learning algorithms, the population loss is unknown. Thus, in practice, one has to resort to evaluating a test loss. In Section 3 below, we will derive fast-rate analogues of the tail bounds (8) and (9), again at the price of multiplicative constants." }, { "heading": "3 FAST-RATE RANDOM-SUBSET BOUNDS", "text": "We now present an exponential inequality from which several test loss bounds can be derived, in a similar manner as was done in (Hellström & Durisi, 2020b). The derivation, which echos part of the proof of (Steinke & Zakynthinou, 2020, Thm. 2.(3)), is provided in Appendix A.2. This result and its proof illustrate how to combine the exponential-inequality approach from (Hellström & Durisi, 2020b) with fast-rate derivations, like those presented in (Steinke & Zakynthinou, 2020, Thm. 2.(3)) and (McAllester, 2013, Thm. 2). Theorem 1. Consider the random-subset setting introduced in Section 1.2. LetW ∈ W be distributed according to PW |Z(S) = PW |Z̃S . Also, assume that the joint distribution PWZ̃S = PW |Z̃SPZ̃PS is absolutely continuous with respect to QW |Z̃PZ̃PS for some conditional prior QW |Z̃ . Then, the following holds:\nEPWZ̃S\n[ exp ( n\n3 LZ(S̄)(W )−\n2n\n3 LZ(S)(W )− log dPWZ̃S dQW |Z̃PZ̃S\n)] ≤ 1. (10)\nNote that the exponential function in (10) depends linearly on the population loss. In contrast, the exponential inequality derived in (Hellström & Durisi, 2020b, Thm. 4) to establish slow-rate generalization bounds for the random-subset setting depends quadratically on the population loss (after the parameter λ therein is suitably optimized). This difference explains why Theorem 1 allows for the derivation of fast-rate bounds, whereas (Hellström & Durisi, 2020b, Thm. 4) unavoidably leads to slow-rate bounds. Also note that, since in the random-subset setting W and Z̃ are dependent both before and after any change of measure argument, the proof technique used in (McAllester, 2013, App. A) and, previously in (Seeger, 2002, Thm. 1), to derive (2) cannot be used in the random-subset setting.\nBy simple applications of Jensen’s inequality and Markov’s inequality, the exponential inequality (10) can be used to derive bounds on the population loss or test loss. In particular, as detailed in the proof of Corollary 1 below (see Appendix A.3), it can be used to recover (6), but also to establish novel PAC-Bayesian and single-draw versions of (6). Corollary 1. Consider the setting of Theorem 1. Then, the average population loss is bounded by\nEPWZ̃S [LPZ (W )] ≤ 2EPWZ̃S [ LZ(S)(W ) ] +\n3EPZ̃S [ D(PW |Z̃S ||QW |Z̃) ] n . (11)\nFurthermore, with probability at least 1− δ over PZ̃S , the PAC-Bayesian test loss is bounded by\nEPW |Z̃S [ LZ(S̄)(W ) ] ≤ 2EPW |Z̃S [ LZ(S)(W ) ] +\n3 ( D(PW |Z̃S ||QW |Z̃) + log 1 δ ) n . (12)\nFinally, with probability at least 1− δ over PWZ̃S , the single-draw test loss is bounded by LZ(S̄)(W ) ≤ 2LZ(S)(W ) + 3 ( log dPWZ̃S dQW |Z̃PZ̃S + log 1δ ) n . (13)\nSetting QW |Z̃ = PW |Z̃ in (11), we recover the CMI bound in (Steinke & Zakynthinou, 2020) since EPZ̃S [ D(PW |Z̃S ||PW |Z̃) ] = I(W ;S|Z̃). As illustrated in Corollary 2 below, the bound on the average population loss in (11) can be tightened by replacing the CMI with a sum of samplewise CMIs. The proof of this result, which involves the same argument used to establish Proposition 1, is presented in Appendix A.4.\nCorollary 2. Consider the setting of Theorem 1. 
Then, the average population loss is bounded by\nEPWZ̃S [LPZ (W )] ≤ 2EPWZ̃S [ LZ(S)(W ) ] + n∑ i=1 3I(W ;Si|Z̃) n . (14)\nThe bounds in (12) and (13) are data-dependent, i.e., they depend on the specific instances of Z̃ and S. They can be turned into data-independent bounds that are functions of the average of the information measures appearing in (12) and (13), at the cost of a less benign polynomial dependence on the confidence parameter δ. Alternatively, one can obtain bounds that have a more benign dependence on δ if one allows the bounds to depend on sufficiently high moments of the information measures appearing in (12) and (13), or if one replaces these measures by quantities such as conditional maximal leakage or conditional α-divergence. See (Hellström & Durisi, 2020b) for further discussion.\nWe conclude by noting that for the interpolating case where LZ(S)(W ) = 0, and under the additional assumption that QW |Z̃ = PW |Z̃ , one can obtain a different exponential inequality than the one reported in Theorem 1, which leads to tighter bounds than the ones reported in Corollary 1. Specifically, in these alternative bounds, the factor 3 is replaced with a factor of 1/ log(2) ≈ 1.44. These bounds are presented in Appendix B." }, { "heading": "4 EXPERIMENTS", "text": "To assess the ability of the bounds just discussed to predict the performance of overparameterized NNs, we next present the result of several numerical experiments for different NN architectures. Specifically, we consider fully connected NNs (FCNNs) and convolutional NNs (CNNs). The performance of the networks is evaluated on the MNIST and Fashion-MNIST data sets. The bounds that we will consider are (8), (9), (12) and (13), where we set the loss function to be the classification error defined in Section 1.2.\nThe following procedure is used to evaluate the bounds: from the 2n available training samples Z̃ (from MNIST or Fashion-MNIST), the training set Z(S) is constructed by selecting n training samples uniformly at random. A network is then trained on this data set using a standard SGD procedure, which is described in more detail in Appendix C.2. Let µ1 be the the vector containing the weights of the network after training. The posterior distribution PW |Z̃S is then chosen to be a Gaussian distribution with mean µ1 and covariance matrix equal to σ21Id, where d is the number of parameters in the network. The standard deviation σ1 is chosen as the largest real number, determined to some finite precision (see Appendix C.2), for which the absolute value of the difference between the training loss of the network with weights µ1 and the empirical average of the training loss achieved by 5 NNs with weights randomly drawn fromN (W | µ1, σ21Id) is less than some specified threshold. Unless otherwise specified, we use a threshold of 0.05 for MNIST and 0.10 for Fashion-MNIST. Note that this procedure is performed for a fixed Z(S). Consequently, σ1 depends on Z̃ and S.\nTo select the prior QW |Z̃ , we proceed as follows. We form 10 subsets of Z̃, each of size n. The first\nsubset contains the first n samples in Z̃, the last contains the last n samples in Z̃, and the remaining subsets contain the linearly spaced sequences in between. We then train one NN on each subset and denote the average of the final weights of these networks by µ2. Finally, we choose QW |Z̃ as a Gaussian distribution with mean µ2 and covariance matrix σ22Id. To select σ2, we proceed as follows. 
First, we determine the largest real number σ̃2, again to some finite precision, for which the absolute value of the difference between the training loss of a NN with weights µ2 and the empirical average of the training loss of 5 NNs with weights drawn from N (W | µ2, σ̃22Id) is below the selected threshold. Note that this time the training loss is evaluated over the entire data set Z̃, so that there is no dependence on S. We then use σ̃2 to form a set of 27 candidate values for σ2, from which we pick the one that results in the tightest bound on the test loss. This procedure, the details of which are given in Appendix C.2, typically results in σ2 = σ1. Note that the prior and the posterior distribution satisfy the assumptions needed for the bounds (8), (9), (12) and (13) to hold. Indeed, (µ1, σ1) depend on Z̃ only through Z(S), while (µ2, σ2) are independent of S but depend on Z̃.\nEquipped with these Gaussian distributions, we evaluate the bounds by noting that, for the chosen prior and posterior, the Radon-Nikodym derivatives in (9) and (13) reduce to likelihood ratios, and the KL divergences in (8) and (12) can be evaluated as\nD(PW |Z̃S ||QW |Z̃) = ‖µ1 − µ2‖22\nσ22 + d ( σ21 σ22 + log σ22 σ21 − 1 ) . (15)\nSince the MNIST and Fashion-MNIST data sets are fixed and we are unable to draw several data sets from some underlying data distribution, we evaluate our bounds for these particular instances of Z̃. We do however have control over S, so we run experiments for 10 instances of S and present the resulting mean as well as standard deviation. Note that since we pick the training set uniformly at random from Z̃, we implicitly randomize over the ordering of the elements of Z̃.\nOur results are obtained by setting δ ≈ 0.001 as the confidence parameter. However, since the bounds are optimized over the choice of σ2, we need to use a union bound argument (Dziugaite & Roy, 2017; Dziugaite et al., 2020) to guarantee that the final slow-rate and fast-rate bounds hold for all of these candidates simultaneously. As a consequence, the bounds depicted in the figure hold with probability at least 95%. The test loss and training loss are computed empirically by averaging the performance of 5 NNs whose weights are sampled from N (W | µ1, σ21Id). For the FCNNs, we use the notation WL to denote an architecture consisting of L hidden layers with width W . For the case of CNNs, we consider the modified LeNet-5 architecture used in (Zhou et al., 2019) and (Dziugaite et al., 2020). Detailed descriptions of these architectures are provided in Appendix C.1.\nIn Figure 1, we study the dependence of the bounds on the number of training epochs. The shaded areas around the curves correspond to two standard deviations. The differences between the PACBayesian and the single-draw bounds turn out to be negligible, so we include only the PAC-Bayesian bounds in the figure. The networks are optimized using SGD either with or without momentum. Specifically, in Figures 1a–d, we use SGD without momentum, while in Figures 1e–f, we use SGD with momentum. In Figures 1a–d, we look at the early training epochs, while in Figures 1e–f, we train the networks until a small training loss (on the order of 0.001) is achieved. More details about the training procedures are given in Appendix C.2.\nAs seen in Figures 1a–d, where SGD without momentum is used, the bounds on the test loss are fairly accurate for both architectures, and both MNIST and Fashion-MNIST. 
As previously discussed, the relative ordering of the slow-rate bounds and the fast-rate bounds from a quantitative standpoint depends on the details of the learning setup. In particular, higher values for the training loss and information measures tend to make the slow-rate bounds tighter due to their smaller constant factors. As a consequence, the fast-rate bounds are superior for the MNIST data set, for which low training loss and information measures are achieved, while the slow-rate bounds are tighter for the more challenging Fashion-MNIST data set. Note that, due to the training procedure used, the underlying deterministic NNs upon which Figures 1a–d are based never reach training errors below a few percent.\nTo shed light on the relationship between the results presented in Fig. 1a–d and previously obtained bounds, we compare our bounds on the test loss with those reported in (Dziugaite et al., 2020), which established the best available PAC-Bayesian bounds for the settings we consider. The approach used\ntherein is similar to the random-subset setting considered in this paper, in that the authors make use of a data-dependent prior. The key differences with respect to the framework considered in this paper is that the posterior in Dziugaite et al. (2020) is allowed to depend on the entire data set Z̃, whereas the training loss and prior depend on randomly selected disjoint subsets of Z̃. In contrast, in the random-subset setting considered in this paper, the prior is allowed to depend on the entire data set Z̃, whereas the training loss and posterior depend only on a randomly selected portion Z(S) of Z̃.\nFor the case in which training is performed using SGD, the minimum test loss bounds (averaged over 50 runs) for MNIST reported in (Dziugaite et al., 2020, Fig. 4) are approximately 0.13 for LeNet-5 and 0.18 for the 6002 FCNN. These values are similar to our best bounds, which are 0.15 for LeNet-5 and 0.19 for the 6002 FCNN. For LeNet-5 trained on Fashion-MNIST, our tightest bound on the test loss is 0.35, whereas the corresponding one in (Dziugaite et al., 2020, Fig. 4) is approximately 0.36. Taking error bars into account, our bounds are not clearly distinguishable from those reported in (Dziugaite et al., 2020, Fig. 4). It is important to mention that significantly tighter bounds are reported in (Dziugaite et al., 2020, Fig. 5) for the case in which the PAC-Bayesian bound considered therein is used as a regularizer during the training process. Such a direct optimization of the bound does not appear to be feasible for the random-subset setting considered in this paper.\nNext, we discuss the results presented in Figures 1e–f. As shown in the figure, while our bounds become tighter in the initial phase of training, they lose tightness as training progresses when momentum is used and smaller training errors (on the order of 0.001) are reached for the deterministic NNs. This is similar to what is noted by Dziugaite et al. (2020, p. 12). Specifically, when the underlying deterministic NN therein is trained to achieve very low errors (or equivalently, is trained for many epochs), the PAC-Bayesian bound they consider becomes loose, and the corresponding stochastic NN has a significantly higher test error than the underlying deterministic NN.\nFinally, the difference in behavior of our bounds in Figure 1e and Figure 1f illustrates the role played by the variances σ1 and σ2. In Figure 1e, we set the threshold used to determine σ1 and σ2 to 0.05, which leads to small values for σ1 and σ2. 
In Figure 1f, we use a threshold of 0.15 instead, which allows for larger variances. The results illustrate the intuitive fact that larger variances yield better generalization bounds at the cost of a higher true test error.\nFurther numerical experiments, in which we study how the bounds evolve as a function of the size of the training set, and how they are affected by randomized labels, are reported in Appendix D." }, { "heading": "5 CONCLUSION", "text": "We have studied information-theoretic bounds on the test loss in the random-subset setting, in which the posterior and the training loss depend on a randomly selected subset of the available data set, and the prior is allowed to depend on the entire data set. In particular, we derived new fast-rate bounds for the PAC-Bayesian and single-draw settings. Provided that the information measures appearing in the bounds scale sublinearly with n, these fast-rate bounds have a better asymptotic dependence on n than the slow-rate PAC-Bayesian and single-draw bounds previously reported in (Hellström & Durisi, 2020b), at the price of larger multiplicative constants. We also presented improvements on previously presented bounds on the average loss by using samplewise information measures and disintegration.\nThrough numerical experiments, we show that our novel fast-rate PAC-Bayesian bound, as well as its slow-rate counterpart, result in test-loss bounds for some overparameterized NNs trained through SGD that essentially match the best available bounds in the literature (Dziugaite et al., 2020). Furthermore, the single-draw counterparts of these bounds, which are as tight as the PAC-Bayesian bounds, are applicable also to deterministic NNs trained through SGD and with Gaussian noise added to the final weights. On the negative side, as illustrated in Fig. 1e, the bounds turn out to be loose when applied to NNs trained to achieve very small training errors. Moreover, the additional experiments described in Appendix D reveal that the bounds overestimate the number of training samples needed to guarantee generalization, and that they become vacuous when randomized labels are introduced.\nStill, the results demonstrate the value of the random-subset approach in studying the generalization capabilities of NNs, and show that fast-rate versions of the available information-theoretic bounds can be beneficial in this setting. In particular, the random-subset setting provides a natural way to select data-dependent priors, namely by marginalizing the learning algorithm PW |Z̃S over S, either exactly or approximately. Such data-dependent priors are a key element in obtaining tight information-theoretic generalization bounds (Dziugaite et al., 2020)." }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PROOF OF PROPOSITION 1", "text": "Consider a fixed hypothesis w ∈ W and a supersample z̃ ∈ Z2n. Due to the boundedness of `(·, ·), the random variable ĝeni(w, z̃, Si) = `(w,Zi(S̄i)) − `(w,Zi(Si)) is bounded to [−1, 1] for i = 1, . . . , n, and it has zero mean. Subgaussianity then implies that the following holds for all λ > 0:\nEPSi [exp(λĝeni(w, z̃, Si))] ≤ exp ( λ2\n2\n) . (16)\nNow, let E(w, z̃) ≡ supp(PSi|wz̃) denote the support of PSi|wz̃ , where PSi|wz̃ is shorthand for the distribution PSi|W=w,Z̃=z̃ . Then, with 1E(w,z̃) denoting the indicator function of E(w, z̃),\nEPSi [ 1E(w,z̃) · exp(λĝeni(w, z̃, Si)) ] ≤ exp\n( λ2\n2\n) . (17)\nThrough a change of measure from PSi to PSi|wz̃ (Polyanskiy & Wu, 2019, Prop. 
17.1), we get (after reorganizing terms)\nEPSi|wz̃\n[ exp ( λĝeni(w, z̃, Si)− λ2\n2 − log\ndPSi|wz̃\nPSi\n)] ≤ 1. (18)\nWe now have a disintegrated, samplewise exponential inequality. Next, we use Jensen’s inequality and then minimize over λ to find that\nEPSi|wz̃ [ĝeni(w, z̃, Si)] ≤ minλ>0\nλ2 2 + EPSi|wz̃ [ log dPSi|wz̃ PSi ] λ = √ 2EPSi|wz̃ [ log dPSi|wz̃\nPSi\n] . (19)\nWe now use that EPSi|wz̃ [ log dPSi|wz̃ PSi ] = D(PSi|wz̃ ||PSi) and then take the expectation with respect to PWZ̃ to find that\nEPWZ̃Si [ ĝeni(W, Z̃, Si) ] ≤ EPWZ̃ [√ 2D(PSi|WZ̃ ||PSi) ] . (20)\nThe desired bound then follows because\nEPWZ̃S [ LPZ (W )− LZ(S)(W ) ] = 1\nn n∑ i=1 EPWZ̃Si [ ĝeni(W, Z̃, Si) ] (21)\n≤ 1 n n∑ i=1 EPWZ̃ [√ 2D(PSi|WZ̃ ||PSi) ] . (22)" }, { "heading": "A.2 PROOF OF THEOREM 1", "text": "The proof essentially mimics parts of the derivation of (Steinke & Zakynthinou, 2020, Thm. 2.(3)). For convenience, we begin by proving an exponential inequality for a binary random variable X satisfying P (X = a) = P (X = b) = 1/2 where a, b ∈ [0, 1]. Also, let X̄ = b if X = a and X̄ = a if X = b. Finally, let λ, γ > 0 and c = eλ − 1− λ. Then,\nE [ eλ(X−γX̄) ] ≤ E [ 1 + λ ( X − γX̄ ) + c ( X − γX̄ )2] (23)\n= 1 + λ(1− γ)\n2 (a+ b) +\nc 2 (a− γb)2 + c 2 (b− γa)2 . (24)\nHere, the first inequality follows because ey ≤ 1 + y + cy2/λ2 for all y ≤ λ. Expanding the squares and removing negative terms, we find that\nE [ eλ(X−γX̄) ] ≤ 1 + λ(1− γ)\n2 (a+ b) +\nc(1 + γ2)\n2\n( a2 + b2 ) (25)\n≤ 1 + λ(1− γ) + (eλ − 1− λ)(1 + γ2). (26)\nIn view of (10), we are interested in values of λ and γ such that λ(1−γ)+(eλ−1−λ) ·(1+γ2) ≤ 0, so that the left-hand side of (26) is no larger than 1. Furthermore, it will turn out convenient to select pairs (λ, γ) so that λ is as large as possible and γ is as small as possible. A possible choice is λ = 1/3 and γ = 2.1 Thus, we conclude that\nE [ e 1 3 (X−2X̄) ] ≤ 1. (27)\nNext, we will apply (27) with X = `(w,Zi(S̄i)) and X̄ = `(w,Zi(Si)) for fixed w and z̃. Note that these random variables satisfy the required assumptions on X and X̄ , since the loss function is supported on [0, 1] and the random variables Si are Bernoulli distributed. Let QWZ̃ = QW |Z̃PZ̃ . It then follows that\nEQWZ̃PS [ e n 3 ( LZ(S̄)(W )−2LZ(S)(W ) )] = EQWZ̃ [ n∏ i=1 EPSi [ e 1 3 ( `(W,Zi(S̄i))−2`(W,Zi(Si)) )]] ≤ 1.\n(28) Now let E = supp(PWZ̃S). Then,\nEQWZ̃PS [ 1E · e n 3 ( LZ(S̄)(W )−2LZ(S)(W ) )] ≤ 1. (29)\nThe desired result follows after a change of measure to PWZ̃S (Polyanskiy & Wu, 2019, Prop. 17.1)." }, { "heading": "A.3 PROOF OF COROLLARY 1", "text": "We begin by applying Jensen’s inequality to (10) to move the expectation inside the exponential. We then obtain (11) by simply taking the logarithm of both sides and reorganizing terms.\nTo derive (12), we first apply Jensen’s inequality in (10), this time only with respect only PW |Z̃S , to get\nEPZ̃S\n[ exp ( EPW |Z̃S [ n\n3 LZ(S̄)(W )−\n2n\n3 LZ(S)(W )\n] −D(PW |Z̃S ||QW |Z̃) )] ≤ 1. (30)\n1Another permissible choice is λ = 1/2.98 and γ = 1.795. It turns out that this choice leads to tighter bounds for the setup considered in Section 4. Hence, it will be used in that section.\nWe now use Markov’s inequality in the following form. Let U ∼ PU be a nonnegative random variable satisfying E[U ] ≤ 1. Then,\nPU [U ≤ 1/δ] ≥ 1− E[U ] δ ≥ 1− δ. (31)\nApplying (31) to (30) we find that, with probability at least 1− δ under PZ̃S ,\nexp ( EPW |Z̃S [ n\n3 LZ(S̄)(W )−\n2n\n3 LZ(S)(W )\n] −D(PW |Z̃S ||QW |Z̃) ) ≤ 1 δ . 
(32)\nTaking the logarithm and reorganizing terms, we obtain (12).\nFinally, to derive (13), we apply (31) to (10) immediately to conclude that, with probability at least 1− δ under PWZ̃S ,\nexp\n( n\n3 LZ(S̄)(W )−\n2n\n3 LZ(S)(W )− log dPWZ̃S dQW |Z̃PZ̃S ) ≤ 1 δ . (33)\nThe desired bound (13) follows after taking the logarithm and reorganizing terms." }, { "heading": "A.4 PROOF OF COROLLARY 2", "text": "Consider a fixed w ∈ W and z̃ ∈ Z2n. As shown in Appendix A.2, EPSi [ e 1 3 (`(w,Zi(S̄i))−2`(w,Zi(Si))) ] ≤ 1. (34)\nLet E = supp(PSi|wz̃), where PSi|wz̃ is short for PSi|W=w,Z̃=z̃ . By changing measure we get\nEPSi [ 1E · e 1 3 (`(w,Zi(S̄i))−2`(w,Zi(Si))) ] = EPSi|wz̃ [ e 1 3 (`(w,Zi(S̄i))−2`(w,Zi(Si)))−log dPSi|wz̃ dPSi ] ≤ 1.\n(35) Moving the expectation inside the exponential through the use of Jensen’s inequality and taking the logarithm, we get\nEPSi|wz̃ [ `(w,Zi(S̄i)) ] ≤ 2EPSi|wz̃ [`(w,Zi(Si))] + 3EPSi|wz̃ [ log dPSi|wz̃\ndPSi\n] . (36)\n= 2EPSi|wz̃ [`(w,Zi(Si))] + 3I(W ;Si|Z̃). (37)\nThe desired result now follows by noting that\nEPWZ̃S [LPZ (W )] = EPWZ̃\n[ 1\nn n∑ i=1 EPSi|wz̃ [ `(w,Zi(S̄i))\n]] (38)\nand applying (37) to each term in the sum in (38)." }, { "heading": "B FAST-RATE BOUNDS FOR THE INTERPOLATING CASE", "text": "In this section, we discuss how to tighten the bound in Corollary 1 under the additional assumption that the training loss LZ(S)(W ) is 0 for all W ∼ PW |Z(S) (interpolating assumption), and for the special case QW |Z̃ = PW |Z̃ .\nWe begin by proving the following exponential inequality, the derivation of which is similar to part of the proof of the realizable fast-rate bound from (Steinke & Zakynthinou, 2020).\nProposition 2. Consider the setting of Theorem 1, with the additional assumption that LZ(S)(W ) = 0 for all W ∼ PW |Z(S) and that QW |Z̃ = PW |Z̃ . Then,\nEPWZ̃S [ exp ( n log 2 · LZ(S̄)(W )− ı(W,S|Z̃) )] ≤ 1. (39)\nProof. Let λ, γ > 0. Furthermore, let S′ be independent of W , Z̃, and S, and distributed as S. Then,\nEPWZ̃S [ n∏ i=1 ( 1 2 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) + 1 2 eλ`(W,Zi(Si))−γ`(W,Zi(S̄i)) )] (40)\n=EPWZ̃SPS′ [ n∏ i=1 eλ`(W,Zi(S̄ ′ i))−γ`(W,Zi(S ′ i)) ] = EPWZ̃PS [ n∏ i=1 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) ] . (41)\nLet E = supp(PWZ̃S). It now follows from (41) that\nEPWZ̃PS [ 1E · en(λLZ(S̄)(W )−γLZ(S)(W )) ] ≤ EPWZ̃PS [ n∏ i=1 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) ] (42)\n=EPWZ̃S [ n∏ i=1 ( 1 2 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) + 1 2 eλ`(W,Zi(Si))−γ`(W,Zi(S̄i)) )] . (43)\nWe now change measure to PWZ̃S to conclude that EPWZ̃S [ en(λLZ(S̄)(W )−γLZ(S)(W ))−ı(W,S|Z̃) ] ≤EPWZ̃S [ n∏ i=1 ( 1 2 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) + 1 2 eλ`(W,Zi(Si))−γ`(W,Zi(S̄i)) )] .\n(44)\nWe now use the interpolating assumption, set λ = log 2, and let γ →∞. These steps, together with the assumption that `(W,Zi(S̄i)) ∈ [0, 1], imply that the right-hand side of (44) is no larger than 1. From this, the desired result follows.\nUsing Proposition 2, we can derive bounds that are analogous to those of Corollary 1. We present these bounds below without proof, since they can be established following steps similar to the ones used to prove Corollary 1. Corollary 3. Consider the setting of Proposition 2. Then, the average population loss is bounded by\nEPWZ̃S [LPZ (W )] ≤ I(W ;S|Z̃) n log 2 . (45)\nFurthermore, with probability at least 1− δ over PZ̃S , the PAC-Bayesian population loss is bounded by\nEPW |Z̃S [ LZ(S̄)(W ) ] ≤ D(PW |Z̃S ||PW |Z̃) + log 1 δ\nn log 2 . 
(46)\nFinally, with probability at least 1− δ over PWZ̃S , the single-draw population loss is bounded by\nLZ(S̄)(W ) ≤ ı(W,S|Z̃) + log 1δ\nn log 2 . (47)\nFinally, we present a samplewise bound that tightens Corollary 2 under the interpolating assumption. Its derivation is inspired by the techniques used to establish Proposition 1 and Proposition 2. Corollary 4. Consider the setting of Proposition 2. Then, the average population loss is bounded by\nEPWZ̃S [LPZ (W )] ≤ n∑ i=1 I(W ;Si|Z̃) n log 2 . (48)\nProof. Let λ, γ > 0 and let S′i be independent of W , Z̃, and Si, and distributed as Si. Then, for all i,\nEPWZ̃S\n[( 1\n2 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) +\n1 2 eλ`(W,Zi(Si))−γ`(W,Zi(S̄i))\n)] (49)\n=EPWZ̃SiPS′i\n[ eλ`(W,Zi(S̄ ′ i))−γ`(W,Zi(S ′ i)) ] = EPWZ̃PSi [ eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) ] . (50)\nWe now let E = supp(PWZ̃Si). It follows from (49)–(50) that EPWZ̃PSi [ 1E · eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) ] ≤EPWZ̃S [( 1 2 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) + 1 2 eλ`(W,Zi(Si))−γ`(W,Zi(S̄i)) )] .\n(51)\nBy performing a change of measure from PWZ̃PSi to PWZ̃Si we conclude that EPWZ̃Si [ eλ`(W,Zi(S̄i))−γ`(W,Zi(Si))−ı(W,Si|Z̃) ] ≤EPWZ̃S [( 1 2 eλ`(W,Zi(S̄i))−γ`(W,Zi(Si)) + 1 2 eλ`(W,Zi(Si))−γ`(W,Zi(S̄i)) )] .\n(52)\nHere, ı(W,Si|Z̃) = log dPWZ̃Si\ndPWZ̃PSi . We now use the interpolating assumption, set λ = log 2, and\nlet γ →∞. These steps, together with the assumption that `(·, ·) ∈ [0, 1], imply that the right-hand side of (52) is no larger than 1. Thus,\nEPWZ̃Si [ elog 2·`(W,Zi(S̄i))−ı(W,Si|Z̃) ] ≤ 1. (53)\nNext, we use Jensen’s inequality to move the expectation in (53) inside the exponential. Taking the logarithm and reorganizing terms, we get\nEPWZ̃Si [ `(W,Zi(S̄i)) ] ≤ EPWZ̃Si\n[ ı(W,Si|Z̃)\nlog 2\n] = I(W ;Si|Z̃)\nlog 2 . (54)\nThe result now follows because\nEPWZ̃S [LPZ (W )] = EPWZ̃S\n[ 1\nn n∑ i=1 `(W,Zi(S̄i))\n] ≤\nn∑ i=1 I(W ;Si|Z̃) n log 2 . (55)" }, { "heading": "C EXPERIMENT DETAILS", "text": "Here, we provide a detailed description of the network architectures and training procedures considered in this paper. We also note that, when evaluating the fast-rate bounds (12) and (13), we use the constants 1.975 and 2.98 in place of 2 and 3, respectively. This choice leads to valid bounds, as pointed out in Appendix A.2." }, { "heading": "C.1 NETWORK ARCHITECTURES", "text": "The LeNet-5 architecture used in the numerical results is described in Table 1. This is different from most standard implementations of LeNet-5, but coincides with the architecture used by Zhou et al. (2019) and Dziugaite et al. (2020). It has 431 080 parameters. The fully connected neural network denoted by 6002 consists of an input layer with 784 units, 2 fully connected layers with 600 units and ReLU activations, followed by an output layer with 10 units and softmax activations. It has 837 610 parameters." }, { "heading": "C.2 TRAINING PROCEDURES", "text": "We now provide additional details on the training procedures described in Section 4. The initial weights of all the networks used for each instance of Z(S) were set to the same randomly selected values drawn from a zero-mean normal distribution with standard deviation 0.01. All networks were trained using the cross-entropy loss, optimized using either SGD with momentum and a fixed learning rate or SGD without momentum and a decaying learning rate. First, we describe the details of SGD with momentum. For MNIST, we used a learning rate of 0.001, and for Fashion-MNIST, we used 0.003. In all experiments, the momentum parameter is set to 0.9. 
We used a batch size of 512.\nFor SGD without momentum, we used a decaying learning rate schedule, where the learning rate α for a given epoch E is given by\nα(E) = α0\n1 + γ · bE/E0c . (56)\nHere, α0 is the initial learning rate, γ is the decay rate, and E0 is the number of epochs between each decay. In all experiments, we used α0 = 0.01, γ = 2, and E0 = 20. Again, we used a batch size of 512.\nTo choose σ1, we pick the largest value with one significant digit (i.e., of the form a · 10−b with a ∈ [1 : 9] and b ∈ Z) such that the absolute value of the difference between the training loss on Z(S) of the deterministic network with weights µ1 and empirical average of the training loss of 5 NNs with weights drawn independently from N (W | µ1, σ21Id) was no larger than some specified threshold. When producing the results reported in Figure 2 and Figures 1a–d, we used a threshold of 0.05 for MNIST, while for Fashion-MNIST, we used a threshold of 0.10. In Appendix D, we perform additional experiments with other thresholds. Specifically, for Figure 1e, we use a threshold of 0.05, while we use a threshold of 0.15 for Figure 1f. For the randomized label experiment in Table 2, we use a threshold of 0.10.\nTo find σ2, we use as starting point the same procedure as for determining σ1, but with µ2 in place of µ1 and the training loss evaluated on all of Z̃. Let us call the value found by this procedure σ′2 = a′ · 10−b′ . Then, among the values of the form a · 10−b with a ∈ [1 : 9] and b ∈ {b′ − 1, b′, b′ + 1}, we choose σ2 to be the one that minimizes the bound on the test loss. In all our experiments, this procedure resulted in σ2 = σ1. To guarantee that the final bound holds with a given confidence level, all 27 bounds resulting from all possible choices of a and b need to hold with the same confidence level. Since we consider both slow-rate and fast-rate bounds, a total of 54 bounds need to hold simultaneously. We ensure that this is the case via the union bound. Thus, if each individual bound holds with probability at least 1− δ, the optimized bounds hold with probability at least 1− 54δ. We compute the bounds with δ = 0.05/54, so the optimized bounds hold with 95% confidence." }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "D.1 DEPENDENCE ON THE SIZE OF THE TRAINING SET", "text": "In this section, we study the dependence of the bounds on the size n of the training set. We perform experiments for different values of n by restricting Z̃ to be a 2n-dimensional randomly chosen subset of the set of 6 · 104 training samples available in MNIST and Fashion-MNIST. The training set Z(S) is then formed by selecting n of these samples at random. We then train a network, either LeNet-5 or the 6002 FCNN, on this restricted training set, until the training error is lower than some target error. For MNIST, we use a target training error of 0.05, while we use 0.15 for Fashion-MNIST. The results are shown in Figure 2.\nAs seen in Figure 2, the bounds on the test loss for large values of n are fairly accurate for both of these architectures, especially so for MNIST. However, they are loose for smaller values of n. As previously discussed, the relative ordering of the slow-rate bounds and the fast-rate bounds from a quantitative standpoint depends on the details of the learning setup. In particular, higher values for the training loss and information measures tend to make the slow-rate bounds tighter due to their smaller constant factors." 
}, { "heading": "D.2 RANDOMIZED LABELS", "text": "In order to examine the behavior of our bounds in an overfitting scenario, we consider data sets with partially randomized labels. Specifically, we set the labels of a fixed proportion of both the training and test sets of MNIST uniformly at random, and then perform training using SGD with momentum as described in Appendix C. In order to simplify training with randomized labels, we consider a binarized version of MNIST where the digits 0, . . . , 4 are merged into one class and 5, . . . , 9 into another. The results are shown in Table 2. The slow-rate bound is computed using (8), while the fast-rate bound is based on (12). As usual, the quantitative difference between these bounds and the corresponding single-draw bounds in (9) and (13) is negligible.\nAs shown in Table 2, our bounds become vacuous when randomized labels are used. The fast-rate bound is significantly worse than its slow-rate counterpart, which is to be expected: when the prior and posterior are selected using randomized labels, a larger discrepancy between them arises. This increases the value of the KL divergence in (8) and (12), which, as previously discussed, penalizes the fast-rate bound more. We note, though, that the qualitative behavior of the bounds is in agreement with the empirically evaluated test error: an increased proportion of randomized labels, and thus an increased test error, is accompanied by an increase in the values of our bounds. Furthermore, the slow-rate bound consistently overestimates the test error by a factor of approximately 25.\nTo the best of our knowledge, all bounds available in the literature for overfitting situations such as the one considered in this section are vacuous. The best result can be found in (Dziugaite & Roy, 2017, Tab. 1), where an FCNN with one hidden layer is trained on a binarized version of MNIST with fully randomized labels. Despite directly optimizing the evaluated PAC-Bayesian bound as part of the training procedure, the obtained test error bound of 1.365 is vacuous." } ]
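The experimental pipeline described in Section 1.2, Section 4, and Appendix C.2 combines three ingredients: the Bernoulli mask S that splits the 2n-sample set into training and test halves, the Gaussian KL divergence of equation (15), and a union bound over the 54 candidate bounds. A minimal sketch of these pieces, using the standard closed form of the KL divergence between isotropic Gaussians (which carries an overall factor of 1/2):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def random_subset_split(n):
    """Section 1.2: S_i ~ Bern(1/2) selects Z_i(S_i) = Ztilde_{i + S_i * n}."""
    S = rng.integers(0, 2, size=n)
    train_idx = np.arange(n) + S * n        # indices of Z(S) within Ztilde
    test_idx = np.arange(n) + (1 - S) * n   # the complement Sbar gives the test half
    return train_idx, test_idx

def kl_isotropic_gaussians(mu1, sigma1, mu2, sigma2):
    """Closed-form KL( N(mu1, sigma1^2 I) || N(mu2, sigma2^2 I) ) in nats."""
    d = mu1.size
    return 0.5 * (np.sum((mu1 - mu2) ** 2) / sigma2 ** 2
                  + d * (sigma1 ** 2 / sigma2 ** 2
                         + math.log(sigma2 ** 2 / sigma1 ** 2) - 1.0))

# Appendix C.2: 27 candidate sigma_2 values and two bound families give 54
# bounds; computing each with delta = 0.05 / 54 makes the optimized bound
# hold with overall 95% confidence by the union bound.
delta = 0.05 / 54
```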
2020
null
SP:29a7b851d3edc2176467adc75ba67cc973a11a37
[ "This work proposes a sequence-to-sequence approach for learning the time evolution of PDEs. The method employs a bi-directional LSTM to predict solutions of a PDE-based formulation for a chosen number of time steps. By itself this is an interesting, and important goal, but the method does not seem to contain any novel components apart from demonstrating that LSTMs can be used to learn data from PDEs. The paper only compares to a simple form of PINNs, but not to a variety of other time forecasting algorithms available in the deep learning field (LSTM are just one of many methods used these days, a more state of the art one being e.g. transformers). In addition, the examples only contain single cases with relatively simple model equations." ]
Partial differential equations (PDEs) play a crucial role in studying a vast number of problems in science and engineering. Numerically solving nonlinear and/or high-dimensional PDEs is frequently a challenging task. Inspired by the traditional finite difference and finite element methods and emerging advancements in machine learning, we propose a sequence-to-sequence learning (Seq2Seq) framework called Neural-PDE, which allows one to automatically learn the governing rules of any time-dependent PDE system from existing data by using a bidirectional LSTM encoder, and to predict the solutions for the next n time steps. One critical feature of our proposed framework is that the Neural-PDE is able to simultaneously learn and simulate all variables of interest in a PDE system. We test the Neural-PDE on a range of examples, from one-dimensional PDEs to a multi-dimensional and nonlinear complex fluids model. The results show that the Neural-PDE is capable of learning the initial conditions, boundary conditions and differential operators defining the initial-boundary-value problem of a PDE system without knowledge of the specific form of the PDE system. In our experiments, the Neural-PDE can efficiently extract the dynamics within 20 epochs of training and produce accurate predictions. Furthermore, unlike traditional machine learning approaches for learning PDEs, such as CNNs and MLPs, which require a great quantity of parameters for model precision, the Neural-PDE shares parameters among all time steps, and thus considerably reduces computational complexity and leads to a fast learning algorithm.
[]
[ { "authors": [ "Uri M Ascher", "Steven J Ruuth", "Raymond J Spiteri" ], "title": "Implicit-explicit runge-kutta methods for time-dependent partial differential equations", "venue": "Applied Numerical Mathematics,", "year": 1997 }, { "authors": [ "Fischer Black", "Myron Scholes" ], "title": "The pricing of options and corporate liabilities", "venue": "Journal of political economy,", "year": 1973 }, { "authors": [ "Ricky TQ Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Bernardo Cockburn", "George E Karniadakis", "Chi-Wang Shu" ], "title": "Discontinuous Galerkin methods: theory, computation and applications, volume 11", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Richard Courant", "Kurt Friedrichs", "Hans Lewy" ], "title": "On the partial difference equations of mathematical physics", "venue": "IBM journal of Research and Development,", "year": 1967 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Johannes Forster" ], "title": "Mathematical modeling of complex fluids. Master’s", "venue": "University of Wurzburg,", "year": 2013 }, { "authors": [ "Alex Graves", "Jürgen Schmidhuber" ], "title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures", "venue": "Neural networks,", "year": 2005 }, { "authors": [ "Jean-Luc Guermond", "Peter Minev", "Jie Shen" ], "title": "An overview of projection methods for incompressible flows. Computer methods in applied mechanics and engineering", "venue": null, "year": 2006 }, { "authors": [ "Jiequn Han", "Arnulf Jentzen", "E Weinan" ], "title": "Solving high-dimensional partial differential equations using deep learning", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "F. Hecht" ], "title": "New development in freefem++", "venue": "J. Numer. 
Math.,", "year": 2012 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George E Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara N Sainath" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Zhiheng Huang", "Wei Xu", "Kai Yu" ], "title": "Bidirectional lstm-crf models for sequence tagging", "venue": "arXiv preprint arXiv:1508.01991,", "year": 2015 }, { "authors": [ "Martin Hutzenthaler", "Arnulf Jentzen", "Thomas Kruse", "Tuan Anh Nguyen" ], "title": "A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations, 2019", "venue": null, "year": 2019 }, { "authors": [ "Yunkyong Hyon", "Chun Liu" ], "title": "Energetic variational approach in complex fluids: maximum dissipation principle", "venue": "Discrete & Continuous Dynamical Systems-A,", "year": 2010 }, { "authors": [ "Claes Johnson" ], "title": "Numerical solution of partial differential equations by the finite element method", "venue": "Courier Corporation,", "year": 2012 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Zachary C Lipton", "John Berkowitz", "Charles Elkan" ], "title": "A critical review of recurrent neural networks for sequence learning", "venue": "arXiv preprint arXiv:1506.00019,", "year": 2015 }, { "authors": [ "Geert Litjens", "Thijs Kooi", "Babak Ehteshami Bejnordi", "Arnaud Arindra Adiyoso Setio", "Francesco Ciompi", "Mohsen Ghafoorian", "Jeroen Awm Van Der Laak", "Bram Van Ginneken", "Clara I Sánchez" ], "title": "A survey on deep learning in medical image analysis", "venue": "Medical image analysis,", "year": 2017 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "Pde-net: Learning pdes from data", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tomáš Mikolov", "Stefan Kombrink", "Lukáš Burget", "Jan Černockỳ", "Sanjeev Khudanpur" ], "title": "Extensions of recurrent neural network language model", "venue": "In 2011 IEEE international conference on acoustics, speech and signal processing (ICASSP),", "year": 2011 }, { "authors": [ "Stanley Osher", "James A Sethian" ], "title": "Fronts propagating with curvature-dependent speed: algorithms based on hamilton-jacobi formulations", "venue": "Journal of computational physics,", "year": 1988 }, { "authors": [ "Grace CY Peng", "Mark Alber", "Adrian Buganza Tepole", "William R Cannon", "Suvranu De", "Savador Dura-Bernal", "Krishna Garikipati", "George Karniadakis", "William W Lytton", "Paris Perdikaris" ], 
"title": "Multiscale modeling meets machine learning: What can we learn", "venue": "Archives of Computational Methods in Engineering,", "year": 2020 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George E Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Mike Schuster", "Kuldip K Paliwal" ], "title": "Bidirectional recurrent neural networks", "venue": "IEEE transactions on Signal Processing,", "year": 1997 }, { "authors": [ "Alex Sherstinsky" ], "title": "Fundamentals of recurrent neural network (rnn) and long short-term memory (lstm) network", "venue": "Physica D: Nonlinear Phenomena,", "year": 2020 }, { "authors": [ "Justin Sirignano", "Konstantinos Spiliopoulos" ], "title": "Dgm: A deep learning algorithm for solving partial differential equations", "venue": "Journal of computational physics,", "year": 2018 }, { "authors": [ "James William Thomas" ], "title": "Numerical partial differential equations: finite difference methods, volume 22", "venue": "Springer Science & Business Media,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "The research of time-dependent partial differential equations (PDEs) is regarded as one of the most important disciplines in applied mathematics. PDEs appear ubiquitously in a broad spectrum of fields including physics, biology, chemistry, and finance, to name a few. Despite their fundamental importance, most PDEs can not be solved analytically and have to rely on numerical solving methods. Developing efficient and accurate numerical schemes for solving PDEs, therefore, has been an active research area over the past few decades (Courant et al., 1967; Osher & Sethian, 1988; LeVeque; Cockburn et al., 2012; Thomas, 2013; Johnson, 2012). Still, devising stable and accurate schemes with acceptable computational cost is a difficult task, especially when nonlinear and(or) high-dimensional PDEs are considered. Additionally, PDE models emerged from science and engineering disciplines usually require huge empirical data for model calibration and validation, and determining the multidimensional parameters in such a PDE system poses another challenge (Peng et al., 2020).\nDeep learning is considered to be the state-of-the-art tool in classification and prediction of nonlinear inputs, such as image, text, and speech (Litjens et al., 2017; Devlin et al., 2018; LeCun et al., 1998; Krizhevsky et al., 2012; Hinton et al., 2012). Recently, considerable efforts have been made to employ deep learning tools in designing data-driven methods for solving PDEs (Han et al., 2018; Long et al., 2018; Sirignano & Spiliopoulos, 2018; Raissi et al., 2019). Most of these approaches are based on fully-connected neural networks (FCNNs), convolutional neural networks(CNNs) and multilayer perceptron (MLP). These neural network structures usually require an increment of the layers to improve the predictive accuracy (Raissi et al., 2019), and subsequently lead to a more complicated model due to the additional parameters. Recurrent neural networks (RNNs) are one type of neural network architectures. RNNs predict the next time step value by using the input data from the current\nand previous states and share parameters across all inputs. This idea (Sherstinsky, 2020) of using current and previous step states to calculate the state at the next time step is not unique to RNNs. In fact, it is ubiquitously used in numerical PDEs. Almost all time-stepping numerical methods applied to solve time-dependent PDEs, such as Euler’s, Crank-Nicolson, high-order Taylor and its variance Runge-Kutta (Ascher et al., 1997) time-stepping methods, update numerical solution by utilizing solution from previous steps.\nThis motivates us to think what would happen if we replace the previous step data in the neural network with numerical solution data to PDE supported on grids. It is possible that the neural network behaves like a time-stepping method, for example, forward Euler’s method yields the numerical solution at a new time point as the current state output (Chen et al., 2018). Since the numerical solution on each of the grid point (for finite difference) or grid cell (for finite element) computed at a set of contiguous time points can be treated as neural network input in the form of one time sequence of data, the deep learning framework can be trained to predict any time-dependent PDEs from the time series data supported on some grids if the bidirectional structure is applied (Huang et al., 2015; Schuster & Paliwal, 1997). 
In other words, the supervised training process can be regarded as a practice of the deep learning framework to learn the numerical solution from the input data, by learning the coefficients of the neural network layers.\nLong Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) is a neural network built upon RNNs. Unlike vanilla RNNs, which suffer from losing long term information and a high probability of gradient vanishing or exploding, LSTM has a specifically designed memory cell with a set of new gates such as the input gate and forget gate. Equipped with these new gates, which control when to preserve and pass the information, LSTM is capable of learning long term dependencies without the danger of gradient vanishing or exploding. In the past two decades, LSTM has been widely used in the field of natural language processing (NLP), for tasks such as machine translation, dialogue systems, and question answering systems (Lipton et al., 2015).\nInspired by numerical PDE schemes and the LSTM neural network, we propose a new deep learning framework, denoted as Neural-PDE. It simulates multi-dimensional governing laws, represented by time-dependent PDEs, from time series data generated on some grids, and predicts the next n time steps of data. The Neural-PDE is capable of intelligently processing related data from all spatial grids by using the bidirectional (Schuster & Paliwal, 1997) neural network, and thus guarantees the accuracy of the numerical solution and the feasibility of learning any time-dependent PDEs. The detailed structure of the Neural-PDE and the data normalization are introduced in Section 3.\nThe rest of the paper is organized as follows. Section 2 briefly reviews the finite difference method for solving PDEs. Section 3 contains a detailed description of the design of the Neural-PDE. In Section 4 and Appendix A of the paper, we apply the Neural-PDE to solve four different PDEs, including the 1-dimensional (1D) wave equation, the 2-dimensional (2D) heat equation, and two systems of PDEs: the inviscid Burgers’ equations and a coupled Navier–Stokes–Cahn–Hilliard system, which widely appear in multiscale modeling of complex fluid systems. We demonstrate the robustness of the Neural-PDE, which achieves convergence within 20 epochs with an admissible mean squared error, even when we add Gaussian noise to the input data." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 TIME DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS", "text": "A time-dependent partial differential equation is an equation of the form:\nut = f(x1, · · · , xn, u, ∂u/∂x1, · · · , ∂u/∂xn, ∂²u/∂x1∂x1, · · · , ∂²u/∂x1∂xn, · · · , ∂ⁿu/∂x1 · · · ∂xn), (2.1.1)\nwhere u = u(t, x1, ..., xn) is the unknown, the xi ∈ R are spatial variables, and f is an operator acting on u and its spatial derivatives. For example, consider the parabolic heat equation ut = α²∆u, where u represents the temperature and f is the Laplacian operator ∆. Eq. (2.1.1) can be solved by finite difference methods, which are briefly reviewed below for the self-completeness of the paper." }, { "heading": "2.2 FINITE DIFFERENCE METHOD", "text": "Consider using a finite difference method (FDM) to solve a two-dimensional second-order PDE of the form:\nut = f(x, y, ux, uy, uxx, uyy), (x, y) ∈ Ω ⊂ R², t ∈ R+ ∪ {0}, (2.2.1)\nwith some proper boundary conditions. Let Ω = [xa, xb] × [ya, yb], and\nu^n_{i,j} = u(xi, yj, tn), (2.2.2)\nwhere tn = nδt, 0 ≤ n ≤ N, with δt = T/N for t ∈ [0, T] and some large integer N; xi = iδx, 0 ≤ i ≤ Nx, with δx = (xb − xa)/Nx for x ∈ [xa, xb]; and yj = jδy, 0 ≤ j ≤ Ny, with δy = (yb − ya)/Ny for y ∈ [ya, yb]. Nx and Ny are integers.\nThe central difference method approximates the spatial derivatives as follows (Thomas, 2013):\nux(xi, yj, t) = (1/(2δx)) (u_{i+1,j} − u_{i−1,j}) + O(δx²), (2.2.3)\nuy(xi, yj, t) = (1/(2δy)) (u_{i,j+1} − u_{i,j−1}) + O(δy²), (2.2.4)\nuxx(xi, yj, t) = (1/δx²) (u_{i+1,j} − 2u_{i,j} + u_{i−1,j}) + O(δx²), (2.2.5)\nuyy(xi, yj, t) = (1/δy²) (u_{i,j+1} − 2u_{i,j} + u_{i,j−1}) + O(δy²). (2.2.6)\nTo this end, the explicit time-stepping scheme to update the next step solution u^{n+1} is given by:\nu^{n+1}_{i,j} ≈ U^{n+1}_{i,j} = U^n_{i,j} + δt f(xi, yj, U^n_{i,j}, U^n_{i,j−1}, U^n_{i,j+1}, U^n_{i+1,j}, U^n_{i−1,j}) (2.2.7)\n≡ F(xi, yj, δx, δy, δt, U^n_{i,j}, U^n_{i,j−1}, U^n_{i,j+1}, U^n_{i+1,j}, U^n_{i−1,j}). (2.2.8)\nApparently, the finite difference method (2.2.7) for updating u^{n+1} on a grid point relies on the previous time steps’ solutions, supported on the grid point and its neighbours. The scheme (2.2.7) updates u^{n+1}_{i,j} using four points of u^n values (see Figure 1). Similarly, the finite element method (FEM) approximates the new solution by calculating the corresponding mesh cell coefficient (see Appendix), which is updated by its related nearby coefficients on the mesh. From this perspective, one may regard the numerical schemes for solving time-dependent PDEs as methods catching the information from neighbourhood data of interest." }, { "heading": "3 PROPOSED METHOD", "text": "" }, { "heading": "3.1 MATHEMATICAL MOTIVATION", "text": "A recurrent neural network, including LSTM, is an artificial neural network structure of the form (Lipton et al., 2015):\nht = σ(Whx xt + Whh h_{t−1} + bh) ≡ σa(xt, h_{t−1}) ≡ σb(x0, x1, x2, · · · , xt), (3.1.1)\nwhere xt ∈ R^d is the input data of the tth state and h_{t−1} ∈ R^h denotes the value processed by the hidden layers in the previous state. The output yt of the current state is updated from the current state value ht:\nyt = σ(Why ht + by) (3.1.2)\n≡ σc(ht) ≡ σd(x0, x1, x2, · · · , xt). (3.1.3)\nHere Whx ∈ R^{h×d}, Whh ∈ R^{h×h}, Why ∈ R^{h×h} are matrices of weights, the vectors bh, by ∈ R^h are the bias coefficients, and σ, σa, σb, σc, σd are the corresponding activation and mapping functions.\nWith a proper design of the input and forget gates, LSTM can effectively yield better control over the gradient flow and better preserve useful information from long-range dependencies (Graves & Schmidhuber, 2005).\nNow consider a temporally continuous vector function u ∈ R^n given by an ordinary differential equation of the form:\ndu(t)/dt = g(u(t)). (3.1.4)\nLet u^n = u(t = nδt); a forward Euler method for solving u can easily be derived from Taylor’s theorem, which gives the following first-order accurate approximation of the time derivative:\ndu^n/dt = (u^{n+1} − u^n)/δt + O(δt). (3.1.5)\nThen we have:\ndu/dt = g(u) —(3.1.5)→ u^{n+1} = u^n + δt g(u^n) + O(δt²) → û^{n+1} = f1(û^n) = f1 ◦ f1 ◦ · · · ◦ f1(û⁰) (n + 1 applications of f1). (3.1.6)\nHere û^n ≈ u(nδt) is the numerical approximation and f1(u) ≡ u + δt g(u) : R^n → R^n. Combining equations (3.1.1) and (3.1.6), one may notice that residual networks, recurrent neural networks and also LSTM networks can be regarded as numerical schemes for solving time-dependent differential equations if more layers are added and smaller time steps are taken (Chen et al., 2018).\nThe canonical structure of such recurrent neural networks calculates the current state value from its previous time step value h_{t−1} and the current state input xt. Similarly, in numerical PDEs, the next step data at a grid point is updated from the previous (and current) values on its nearby grid points (see Eq. (2.2.7)).
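To make the analogy in Eq. (3.1.6) concrete: explicit time stepping really is nothing more than repeated application of a single map, as the following minimal NumPy sketch shows (g is an illustrative stand-in for the right-hand side; none of these names come from the actual implementation):

```python
import numpy as np

def euler_rollout(u0, g, dt, n_steps):
    # \hat{u}^{n+1} = f1(\hat{u}^n) with f1(u) = u + dt * g(u), cf. Eq. (3.1.6).
    u = np.asarray(u0, dtype=float)
    for _ in range(n_steps):
        u = u + dt * g(u)          # one "recurrent" state update
    return u

# Example: 1d heat equation u_t = u_xx on a periodic grid.
dx = 0.01
g = lambda u: (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
u_final = euler_rollout(np.sin(2 * np.pi * np.arange(0, 1, dx)),
                        g, dt=2e-5, n_steps=100)
```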
Thus, what if we replace the temporal input h_{t−1} and xt with spatial information? A simple sketch of the upwind method for a 1d example u(x, t):\nut + νux = 0 (3.1.7)\nwill be:\nu^{n+1}_i = u^n_i − ν (δt/δx)(u^n_i − u^n_{i−1}) + O(δx, δt) → û^{n+1}_i = f2(û^n_{i−1}, û^n_i) (3.1.8)\n≡ fθ(fη(xi, h_{i−1}(û))) = fθ,η(û^n_0, û^n_1, · · · , û^n_{i−1}, û^n_i) = v^{n+1}_i, (3.1.9)\nxi = û^n_i, h_{i−1}(û) = σ(û^n_{i−1}, h_{i−2}(û)) ≡ fη(û^n_0, û^n_1, û^n_2, · · · , û^n_{i−1}). (3.1.10)\nHere we let v^{n+1}_i be the prediction of û^{n+1}_i produced by the neural network. We replace the temporal previous state h_{t−1} with the spatial grid value h_{i−1} and input the numerical solution û^n_i ≈ u(iδx, nδt) as the current state value, which indicates that the neural network can be seen as a forward Euler method for equation (3.1.7) (Lu et al., 2018). The function f2(û^n_{i−1}, û^n_i) ≡ û^n_i − ν (δt/δx)(û^n_i − û^n_{i−1}) maps R² → R; the function fθ represents the dynamics of the hidden layers in the decoder with parameters θ, and fη specifies the dynamics of the LSTM layer (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005) in the encoder with parameters η. The function fθ,η simulates the dynamics of the Neural-PDE with parameters θ and η. By applying a bidirectional neural network, all grid data are transferred, which enables the LSTM to simulate the PDE as:\nv^{n+1}_i = fθ(fη(h_{i+1}(û), û^n_i, h_{i−1}(û))), (3.1.11)\nh_{i+1}(û) ≡ fη(û^n_{i+1}, û^n_{i+2}, û^n_{i+3}, · · · , û^n_k). (3.1.12)\nFor a time-dependent PDE, if we map all our grid data into an input matrix which contains the information of δx and δt, then the neural network will regress such coefficients as constants and will learn and filter the physical rules from all the k mesh grid data as:\nv^{n+1}_i = fθ,η(û^n_0, û^n_1, û^n_2, · · · , û^n_k). (3.1.13)\nThe LSTM neural network is designed to overcome the vanishing gradient issue through its hidden layers; therefore we use such a recurrent structure to increase the stability of the numerical approach in deep learning. The highly nonlinear function fθ,η simulates the dynamics of the updating rules for u^{n+1}_i, which works in a way similar to a finite difference method (Section 2.2) or a finite element method.\n3.2 NEURAL-PDE\nIn particular, we use the bidirectional LSTM (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005) to better retain the state information from data on grid points which are neighbours in the mesh but far away in the input matrix.\nThe right frame of Figure 3 shows the overall design of the Neural-PDE. Denote the time series data at the collocation points as a^N_1, a^N_2, · · · , a^N_k, with a^N_i = [û⁰_i, û¹_i, · · · , û^N_i] at the ith point. The superscript represents different time points. The Neural-PDE takes the past states {a^N_1, a^N_2, · · · , a^N_k} of all collocation points, and outputs the predicted future states {b^M_1, b^M_2, · · · , b^M_k}, where b^M_i = [v^{N+1}_i, v^{N+2}_i, · · · , v^{N+M}_i] is the Neural-PDE prediction for the ith collocation point at time points from N + 1 to N + M.
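One plausible Keras realization of this design is sketched below; here the bidirectional recurrence is taken over the grid-point dimension, and the layer sizes follow the configuration reported in Section 4 (two bidirectional LSTMs with 20 units each and a dense output layer with 10 units). This is a sketch of one reading of the architecture, not necessarily the exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N, M = 30, 10  # lengths of the input and predicted time series (cf. Section 4)

# Input: a (K, N) matrix, one row of N past values per grid point; the
# bidirectional recurrence runs over the K grid points so that every
# prediction can draw on information from the whole mesh.
model = models.Sequential([
    layers.Bidirectional(layers.LSTM(20, return_sequences=True),
                         input_shape=(None, N)),                   # encoder
    layers.Bidirectional(layers.LSTM(20, return_sequences=True)),  # decoder
    layers.TimeDistributed(layers.Dense(M)),  # v^{N+1}, ..., v^{N+M} per point
])
model.compile(optimizer="adam", loss="mse")   # MSE loss as in Eq. (3.2.3)
```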
The data from time point 0 to N form the training data set.\nThe Neural-PDE is an encoder-decoder style sequence model that first maps the input data to a low dimensional latent space:\nhi = LSTM→(ai) ⊕ LSTM←(ai), (3.2.1)\nwhere ⊕ denotes concatenation and hi is the latent embedding of point ai under the environment.\nOne then decodes with another bi-LSTM followed by a dense layer:\nvi = (LSTM→(hi) ⊕ LSTM←(hi)) · W, (3.2.2)\nwhere W is the learnable weight matrix of the dense layer.\nDuring the training process, the mean squared error (MSE) loss L is used, as we typically do not know the specific form of the PDE:\nL = Σ_{t=N+1}^{N+M} Σ_{i=1}^{k} ||û^t_i − v^t_i||². (3.2.3)" }, { "heading": "3.3 DATA INITIALIZATION AND GRID POINT RESHAPE", "text": "In order to feed data into our sequence model framework, we map the PDE solution data onto a K × N matrix, where K ∈ Z+ is the number of grid points and N ∈ Z+ is the length of the time series data on each grid point. There is no constraint on the input order of the grid point data in the matrix because of the bidirectional structure of the Neural-PDE. For example, a 2d heat equation at some time t is reshaped into a 1d vector (see Fig. 2). The matrix is then formed accordingly.\nFor an n-dimensional time-dependent partial differential equation with K collocation points, the input and output data for t ∈ (0, T) will be of the form:\nA(K,N) = [a^N_0; · · · ; a^N_ℓ; · · · ; a^N_K], whose ℓth row is a^N_ℓ = [û⁰_ℓ, û¹_ℓ, · · · , û^n_ℓ, · · · , û^N_ℓ], (3.3.1)\nB(K,M) = [b^M_0; · · · ; b^M_ℓ; · · · ; b^M_K], whose ℓth row is b^M_ℓ = [v^{N+1}_ℓ, v^{N+2}_ℓ, · · · , v^{N+m}_ℓ, · · · , v^{N+M}_ℓ]. (3.3.2)\nHere N = T/δt, each row ℓ represents the time series data at the ℓth mesh grid point, and M is the time length of the predicted data.\nBy adding the bidirectional LSTM encoder to the Neural-PDE, it will automatically extract the information from the time series data as:\nB(K,M) = PDESolver(A(K,N)) = PDESolver(a^N_0, a^N_1, · · · , a^N_i, · · · , a^N_K). (3.3.3)" }, { "heading": "4 COMPUTER EXPERIMENTS", "text": "Since the Neural-PDE is a sequence-to-sequence learning framework, it allows one to predict over any time period from the given data, and one may test the Neural-PDE with different permutations of training and prediction time periods to assess its efficiency, robustness and accuracy. In the following examples, the whole dataset is randomly split into 80% for training and 20% for testing. We predict the next tp ∈ [31 × δt, 40 × δt] PDE solution by using the previous ttr ∈ [0, 30 × δt] data as:\nB(K, 10) = PDESolver(A(K, 30)). (4.0.1)\nTable 1 summarizes the experimental results of the Neural-PDE model on 4 different PDEs, which achieve extremely small MSEs from ∼ 10−5 to ∼ 10−7. Table 2 shows the comparison of our proposed Neural-PDE with the state-of-the-art method of Physics-Informed Neural Networks (PINN) (Raissi et al., 2019) on two PDEs (the 1d Allen-Cahn and the 1d Burgers’ equation). The Neural-PDE is able to outperform PINN while having far fewer parameters: PINN contains 4 hidden layers with 200 neurons per layer, whereas the Neural-PDE only consists of 3 layers (2 bi-LSTM layers with 20 neurons per layer and 1 dense output layer with 10 neurons).\nEXAMPLE: INVISCID BURGERS’ EQUATION\nInviscid Burgers’ equation is a classical nonlinear PDE in fluid dynamics.
In this example, we consider an inviscid Burgers’ equation which has the following hyperbolic form:\n∂u/∂t + u ∂u/∂x + v ∂u/∂y = 0, ∂v/∂t + u ∂v/∂x + v ∂v/∂y = 0, (4.0.2)\nΩ = [0, 1] × [0, 1], t ∈ [0, 1], (4.0.3)\nand with initial and boundary conditions:\nu(0.25 ≤ x ≤ 0.75, 0.25 ≤ y ≤ 0.75, t = 0) = 0.9, (4.0.4)\nv(0.25 ≤ x ≤ 0.75, 0.25 ≤ y ≤ 0.75, t = 0) = 0.5, (4.0.5)\nu(0, y, t) = u(1, y, t) = v(x, 0, t) = v(x, 1, t) = 0. (4.0.6)\nThe inviscid Burgers’ equation is hard to deal with in numerical PDEs due to the discontinuities (shock waves) in its solutions. We use an upwind finite difference scheme to create the training data and put the velocities u, v into the input matrix. Let δx = δy = 10−2 and δt = 10−3; our empirical results (see Figure 4) show that the Neural-PDE is able to learn the shock waves, the boundary conditions and the rules of the equation, and to predict u and v simultaneously with an overall MSE of 1.4018 × 10−5. The heat maps of the exact solution and the predicted solution are shown in Figure 5." }, { "heading": "EXAMPLE: MULTISCALE MODELING: COUPLED CAHN–HILLIARD–NAVIER–STOKES SYSTEM", "text": "Finally, let us consider the following 2d Cahn–Hilliard–Navier–Stokes system, widely used for modeling complex fluids:\nut + u · ∇u = −∇p + ν∆u − φ∇µ, (4.0.7)\nφt + ∇ · (uφ) = M∆µ, (4.0.8)\nµ = λ(−∆φ + (φ/η²)(φ² − 1)), (4.0.9)\n∇ · u = 0. (4.0.10)
One key innovation of our method is that the time marching method from the numerical PDEs is applied in the deep learning framework, and the neural network is trained to explore the accurate numerical solutions for prediction.\nThe state-of-the-art researches have shown the promising power of deep learning in solving highdimensional nonlinear problems in engineering, biology and finance with efficiency in computation and accuracy in prediction. However, there are still unresolved issues in applying deep learning in PDEs. For instance, the stability and convergence of the numerical algorithms have been rigorously studied by applied mathematicians. Due to the high nonlinearity of the neural network system and the curse of dimensionality (Hutzenthaler et al., 2019), theorems guiding stability and convergence of solutions predicted by the neural network are to be revealed.\nLastly, it would be helpful and interesting if one can theoretically characterize a numerical scheme from the neural network coefficients and learn the forms or mechanics from the scheme and prediction. We leave these questions for further study." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 FINITE ELEMENT METHOD", "text": "Finite element method (FEM) is a powerful numerical method in solving PDEs. Consider a 1D wave equation of u(x, t):\nutt − v2uxx = f, x ∈ [a, b] ≡ Ω ⊂ R, t ∈ R+ ∪ {0} , (A.1.1) ux(a, t) = ux(b, t) = 0 . (A.1.2)\nThe function u is approximated by a function uh :\nu(x, t) ≈ uh(x, t) = N∑ i=1 ai(t)ψi(x) (A.1.3)\n(A.1.4) where ψi ∈ V , is the basis functions of some FEM space V , and ani denotes the coefficients. N denotes the degrees of freedom. Multiply the equation with an arbitrary test function ψj and integral over the whole domain we have:∫\nΩ\nuttψj dx+ v 2 ∫ Ω ∇u∇ψj dx = ∫ Ω fψj dx (A.1.5)\n(A.1.6) and approximate u(x, t) by uh:\nN∑ i ∂2ai(t) ∂t2 ∫ Ω\nψiψj dx︸ ︷︷ ︸ Mi,j\n+v2 N∑ i ai(t) ∫ Ω\n∇ψi∇ψj dx︸ ︷︷ ︸ Ai,j =\n∫ Ω\nfψj︸ ︷︷ ︸ b dx , (A.1.7)\n≡MTatt + v2ATa = b . (A.1.8) Here M is the mass matrix and A is the stiffness matrix, a = (a1, .., aN )t is a N × 1 vector of the coefficients at time t. The central difference method for time discretization indicates that (Johnson, 2012):\nan+1 = 2an − an−1 + M−1(b− v2ATan) , (A.1.9) un+1 ≈ un+1h = N∑ i an+1i ψi(x) . (A.1.10)" }, { "heading": "A.2 LONG SHORT-TERM MEMORY", "text": "Long Short-Term Memory Networks (LSTM) (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005) are a class of artificial recurrent neural network (RNN) architecture that is commonly used for processing sequence data and can overcome the gradient vanishing issue in RNN. Similar to most RNNs (Mikolov et al., 2011), LSTM takes a sequence {x1,x2, · · · ,xt} as input and learns hidden vectors {h1,h2, · · · ,ht} for each corresponding input. In order to better retain long distance information, LSTM cells are specifically designed to update the hidden vectors. 
The computation process of the forward pass for each LSTM cell is defined as follows:\nit = σ(W (x) i xt + W (h) i ht−1 + W (c) i ct−1 + bi) ,\nft = σ(W (x) f xt + W (h) f ht−1 + W (c) f ct−1 + bf ) ,\nct = ftct−1 + it tanh(W (x) c xt + W (h) c ht−1 + bc) ,\not = σ(W (x) o xt + W (h) o ht−1 + W (c) o ct + bo),\nht = ot tanh(ct) ,\nwhere σ is the logistic sigmoid function, Ws are weight matrices, bs are bias vectors, and subscripts i, f , o and c denote the input gate, forget gate, output gate and cell vectors respectively, all of which have the same size as hidden vector h.\nThis LSTM structure is used in the paper to simulate the numerical solutions of partial differential equations." }, { "heading": "A.3 EXAMPLES", "text": "" }, { "heading": "A.3.1 WAVE EQUATION", "text": "Consider the 1d wave equation:\nutt = cuxx, x ∈ [0, 1], t ∈ [0, 2] , (A.3.1) u(x, 0) = sin(4πx) (A.3.2) u(0, t) = u(1, t) (A.3.3)\nLet c = 116π2 and use the analytical solution given by the characteristics for the training and testing data:\nu(x, t) = 1\n2 (sin(4πx+ t) + sin(4πx− t)) (A.3.4)" }, { "heading": "A.3.2 HEAT EQUATION", "text": "The heat equation describes how the motion or diffusion of a heat flow evolves over time. The Black–Scholes model (Black & Scholes, 1973) is also developed based on the physical laws behind the heat equation. Rather than the 1D case that maps the data into a matrix (??) with its original spatial locations, the high dimensional PDEs grids are mapped into matrix without regularization of the position, and the experimental results show that Neural-PDE is able to capture the valuable features regardless of the order of the mesh grids in the matrix. Let’s start with a 2D heat equation as follows:\nut = uxx + uyy, (A.3.5)\nu(x, y, 0) = { 0.9, if (x− 1)2 + (y − 1)2 < 0.25 0.1, otherwise (A.3.6)\nΩ = [0, 2]× [0, 2], t ∈ [0, 0.15] (A.3.7)" } ]
2020
null
SP:797b07cd8142a35333037bb573db0dfe5dde65ac
[ "In this paper, the authors develop a data selection scheme aimed to minimize a notion of Bayes excess risk for overparametrized linear models. The excess Bayes risk is the expected squared error between the prediction and the target. The authors note that solutions such as V-optimality exist for the underparametrized cases (linear regression), and offer extensions to ridge regression. After the development of a greedy schemes and a tentative extension to deep learning models, the authors show that their selection scheme can outperform random selection on MNIST with a specific model." ]
The impressive performance exhibited by modern machine learning models hinges on the ability to train such models on very large amounts of labeled data. However, since access to large volumes of labeled data is often limited or expensive, it is desirable to alleviate this bottleneck by carefully curating the training set. Optimal experimental design is a well-established paradigm for selecting data points to be labeled so as to maximally inform the learning process. Unfortunately, classical theory on optimal experimental design focuses on selecting examples in order to learn underparameterized (and thus, non-interpolative) models, while modern machine learning models such as deep neural networks are overparameterized, and oftentimes are trained to be interpolative. As such, classical experimental design methods are not applicable in many modern learning setups. Indeed, the predictive performance of underparameterized models tends to be variance dominated, so classical experimental design focuses on variance reduction, while the predictive performance of overparameterized models can also be, as is shown in this paper, bias dominated or of mixed nature. In this paper we propose a design strategy that is well suited for overparameterized regression and interpolation, and we demonstrate the applicability of our method in the context of deep learning by proposing a new algorithm for single shot deep active learning.
[]
[ { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Russ R Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems, pp. 8141–8150,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang", "Dingli Yu" ], "title": "Harnessing the power of infinitely wide deep nets on small-data tasks", "venue": "arXiv preprint arXiv:1910.01663,", "year": 2019 }, { "authors": [ "Jordan T Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": "arXiv preprint arXiv:1906.03671,", "year": 1906 }, { "authors": [ "Haim Avron", "Christos Boutsidis" ], "title": "Faster subset selection for matrices and applications", "venue": "SIAM Journal on Matrix Analysis and Applications,", "year": 2013 }, { "authors": [ "André Bardow" ], "title": "Optimal experimental design of ill-posed problems: The METER approach", "venue": "Computers & Chemical Engineering,", "year": 2008 }, { "authors": [ "Peter L. Bartlett", "Philip M. Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Mikhail Belkin", "Siyuan Ma", "Soumik Mandal" ], "title": "To understand deep learning we need to understand kernel learning", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Ji Xu" ], "title": "Two models of double descent for weak features", "venue": "arXiv preprint arXiv:1903.07571,", "year": 1903 }, { "authors": [ "Christos Boutsidis", "Michael W. 
Mahoney", "Petros Drineas" ], "title": "An improved approximation algorithm for the column subset selection problem", "venue": "In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2009 }, { "authors": [ "Kathryn Chaloner", "Isabella Verdinelli" ], "title": "Bayesian experimental design: A review", "venue": "Statistical Science,", "year": 1995 }, { "authors": [ "Luiz Chamon", "Alejandro Ribeiro" ], "title": "Approximate supermodularity bounds for experimental design", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "arXiv preprint arXiv:1703.02910,", "year": 2017 }, { "authors": [ "Yonatan Geifman", "Ran El-Yaniv" ], "title": "Deep active learning over the long tail", "venue": "arXiv preprint arXiv:1711.00941,", "year": 2017 }, { "authors": [ "Quanquan Gu", "Tong Zhang", "Jiawei Han", "Chris Ding" ], "title": "Selective labeling via error bound minimization", "venue": "Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Eldad Haber", "Lior Horesh", "Luis Tenorio" ], "title": "Numerical methods for experimental design of largescale linear ill-posed inverse problems", "venue": "Inverse Problems,", "year": 2008 }, { "authors": [ "Eldad Haber", "Zhuojun Magnant", "Christian Lucero", "Luis Tenorio" ], "title": "Numerical methods for A-optimal designs with a sparsity constraint for ill-posed inverse problems", "venue": "Computational Optimization and Applications,", "year": 2012 }, { "authors": [ "Steven CH Hoi", "Rong Jin", "Jianke Zhu", "Michael R Lyu" ], "title": "Batch mode active learning and its application to medical image classification", "venue": "In Proceedings of the 23rd International Conference on Machine Learning,", "year": 2006 }, { "authors": [ "Lior Horesh", "Eldad Haber", "Luis Tenorio" ], "title": "Optimal experimental design for the large-scale nonlinear ill-posed problem of impedance imaging", "venue": "Large-Scale Inverse Problems and Quantification of Uncertainty,", "year": 2010 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Mina Karzand", "Robert D Nowak" ], "title": "Maximin active learning in overparameterized model classes", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 2020 }, { "authors": [ "Dmitry Kobak", "Jonathan Lomond", "Benoit Sanchez" ], "title": "Optimal ridge penalty for real-world highdimensional data can be zero or negative due to the implicit ridge regularization", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "MNIST handwritten digit database", "venue": "ATT Labs [Online]. 
Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel Schoenholz", "Yasaman Bahri", "Roman Novak", "Jascha SohlDickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Marco Loog", "Tom Viering", "Alexander Mey" ], "title": "Minimizers of the empirical risk and risk monotonicity", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Preetum Nakkiran" ], "title": "More data can hurt for linear regression: Sample-wise double descent", "venue": "arXiv preprint arXiv:1912.07242,", "year": 1912 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Preetum Nakkiran", "Prayaag Venkat", "Sham Kakade", "Tengyu Ma" ], "title": "Optimal regularization can mitigate double descent", "venue": "arXiv preprint arXiv:2003.01897,", "year": 2020 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Jiri Hron", "Jaehoon Lee", "Alexander A. Alemi", "Jascha Sohl-Dickstein", "Samuel S. Schoenholz" ], "title": "Neural tangents: Fast and easy infinite neural networks in Python", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Robert Pinsler", "Jonathan Gordon", "Eric Nalisnick", "José Miguel Hernández-Lobato" ], "title": "Bayesian batch active learning as sparse subset approximation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Luc Pronzato", "Andrej Pázman" ], "title": "Design of experiments in nonlinear models", "venue": "Lecture Notes in Statistics,", "year": 2013 }, { "authors": [ "Friedrich Pukelsheim" ], "title": "Optimal design of experiments", "venue": "SIAM, 2006", "year": 2006 }, { "authors": [ "R Tyrrell Rockafellar", "Roger J-B Wets" ], "title": "Variational analysis, volume 317", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jamshid Sourati", "Ali Gholipour", "Jennifer G Dy", "Sila Kurugol", "Simon K Warfield" ], "title": "Active deep learning with Fisher information for patch-wise semantic segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support", "venue": null, "year": 2018 }, { "authors": [ "Yazhou Yang", "Marco Loog" ], "title": "Single shot active learning using pseudo annotators", "venue": "Pattern Recognition,", "year": 2019 }, { "authors": [ "Kai Yu", "Jinbo Bi", "Volker Tresp" ], "title": "Active learning via transductive experimental design", "venue": "In Proceedings of the 23rd International Conference on Machine Learning, pp. 1081–1088", "year": 2006 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The impressive performance exhibited by modern machine learning models hinges on the ability to train the aforementioned models on a very large amounts of labeled data. In practice, in many real world scenarios, even when raw data exists aplenty, acquiring labels might prove challenging and/or expensive. This severely limits the ability to deploy machine learning capabilities in real world applications. This bottleneck has been recognized early on, and methods to alleviate it have been suggested. Most relevant for our work is the large body of research on active learning or optimal experimental design, which aims at selecting data point to be labeled so to maximally inform the learning process. Disappointedly, active learning techniques seem to deliver mostly lukewarm benefits in the context of deep learning.\nOne possible reason why experimental design has so far failed to make an impact in the context of deep learning is that such models are overparameterized, and oftentimes are trained to be interpolative (Zhang et al., 2017), i.e., they are trained so that a perfect fit of the training data is found. This raises a conundrum: the classical perspective on statistical learning theory is that overfitting should be avoided since there is a tradeoff between the fit and complexity of the model. This conundrum is exemplified by the double descent phenomena (Belkin et al., 2019b; Bartlett et al., 2020), namely when fixing the model size and increasing the amount of training data, the predictive performance initially goes down, and then starts to go up, exploding when the amount of training data approaches the model complexity, and then starts to descend again. This runs counter to statistical intuition which says that more data implies better learning. Indeed, when using interpolative models, more data can hurt (Nakkiran et al., 2020a)! This phenomena is exemplified in the curve labeled “Random Selection” in Figure 1. Figure 1 explores the predictive performance of various designs when learning a linear regression model and varying the amount of training data with responses.\nThe fact that more data can hurt further motivates experimental design in the interpolative regime. Presumably, if data is carefully curated, more data should never hurt. Unfortunately, classical optimal experimental design focuses on the underparameterized (and thus, noninterpolative) case. As such, the theory reported in the literature is often not applicable in the interpolative regime. As our analysis shows (see Section 3), the prediction error of interpolative models can either be bias dominated (the first descent phase, i.e., when training size is very small compared to the number of parameters), variance dominated (near equality of size and parameters) or of mixed nature. However, properly trained underparameterized models tend to have prediction error which is variance dominated, so classical experimental design focuses on variance reduction. As such, naively using\nclassical optimality criteria, such as V-optimality (the one most relevant for generalization error) or others, in the context of interpolation, tends to produce poor results when prediction error is bias dominated or of mixed nature. This is exemplified in the curve labeled “Classical OED” in Figure 1.\nThe goal of this paper is to understand these regimes, and to propose an experimental design strategy that is well suited for overparameterized models. 
Like many recent works that attempt to understand the double descent phenomenon by analyzing underdetermined linear regression, we too use a simple linear regression model in our analysis of experimental design in the overparameterized case (however, we also consider kernel ridge regression, not only linear interpolative models). We believe that understanding experimental design in the overparameterized linear regression case is a prelude to designing effective design algorithms for deep learning. Indeed, recent theoretical results showed a deep connection between deep learning and kernel learning via the so-called Neural Tangent Kernel (Jacot et al., 2018; Arora et al., 2019a; Lee et al., 2019). Based on this connection, and as a proof-of-concept, we propose a new algorithm for single shot deep active learning.\nLet us now summarize our contributions:\n• We analyze the prediction error of learning overparameterized linear models for a given fixed design, revealing three possible regimes that call for different design criteria: bias dominated, variance dominated, and mixed nature. We also reveal an interesting connection between overparameterized experimental design and the column subset selection problem (Boutsidis et al., 2009), transductive experimental design (Yu et al., 2006), and coresets (Sener & Savarese, 2018). We also extend our approach to kernel ridge regression.\n• We propose a novel greedy algorithm for finding designs for overparameterized linear models. As exemplified in the curve labeled “Overparameterized OED”, our algorithm is sometimes able to mitigate the double descent phenomenon, while still performing better than classical OED (though no formal proof of this fact is provided).\n• We show how our algorithm can also be applied to kernel ridge regression, and report experiments which show that when the number of parameters is in a sense infinite, our algorithm is able to find designs that are better than the state of the art.\n• We propose a new algorithm for single shot deep active learning, a scarcely treated problem so far, and demonstrate its effectiveness on MNIST.\nRelated Work. The phenomena of benign overfitting and double descent were first recognized in DNNs (Zhang et al., 2017), and later discussed and analyzed in the context of linear models (Zhang et al., 2017; Belkin et al., 2018; 2019a;b; Bartlett et al., 2020). Recently there has also been growing interest in the related phenomenon of “more data can hurt” (Nakkiran et al., 2020a; Nakkiran, 2019; Nakkiran et al., 2020b; Loog et al., 2019). A complementary work discussed the need to consider zero or negative regularization coefficients for large real life linear models (Kobak et al., 2020).\nExperimental design is a well-established paradigm in statistics, extensively covered in the literature for the linear case (Pukelsheim, 2006) and the nonlinear case (Pronzato & Pázman, 2013). Its application to pool-based active learning with batch acquisitions was explored by Yu et al. (2006) for linear models and by Hoi et al. (2006) for logistic regression. It was also proposed in the context of deep learning (Sourati et al., 2018). Another related line of work is recent work by Haber and Horesh on experimental design for ill-posed inverse problems (Haber et al., 2008; 2012; Horesh et al., 2010).
Active learning in the context of overparameterized learning was explored by Karzand & Nowak (2020); however, their approach differs from ours significantly since it is based on artificially completing the labels using a minimax approach.\nIn the context of Laplacian regularized Least Squares (LapRLS), which is a generalization of ridge regression, Gu et al. (2012) showed rigorously that the criterion of Yu et al. (2006) is justified as a bound for both the bias and variance components of the expected error. We further show that this bound is in some sense tight only if the parameter norm is one and the noise variance equals the l2 penalty coefficient. In addition, we postulate and show experimentally that in the overparameterized case using a bias-dominant criterion is preferable. Another case in which the bias term does not vanish is when the model is misspecified. For linear and generalized linear models this case has been tackled with reweighing of the loss function.\nA popular modern approach for pool-based active learning with batch acquisition is coresets (Sener & Savarese, 2018; Geifman & El-Yaniv, 2017; Ash et al., 2019; Pinsler et al., 2019). This approach has been used in the context of active learning for DNNs." }, { "heading": "2 UNDERPARAMETERIZED V-OPTIMAL EXPERIMENTAL DESIGN", "text": "Consider a noisy linear response model y = xᵀw + ε, where ε ∼ N(0, σ²) and w ∈ Rᵈ, and assume we are given some data points x1, . . . , xn, for which we obtained independent responses yi = xᵢᵀw + εᵢ. Consider the underparameterized case, i.e. n ≥ d, and furthermore assume that the set {x1, . . . , xn} contains at least d independent vectors. The best linear unbiased estimator ŵ of w according to the Gauss–Markov theorem is given by ŵ = arg min_w ‖Xw − y‖₂² = X⁺y, where X ∈ Rⁿˣᵈ is the matrix whose rows are x1, . . . , xn, y = [y1 . . . yn]ᵀ ∈ Rⁿ, and X⁺ is the Moore–Penrose pseudoinverse of X. It is well known that ŵ − w is a normal random vector with zero mean and covariance matrix σ²M⁻¹, where M = XᵀX is the Fisher information matrix. This implies that ŷ(x) − y(x) is also a normal variable with zero mean and variance equal to σ²xᵀM⁻¹x. Assume also that x comes from a distribution ρ. With that we can further define the excess risk R(ŵ) = E_{x∼ρ}[(xᵀw − xᵀŵ)²] and its expectation:\nE[R(ŵ)] = E_{x∼ρ}[Var[y(x) − ŷ(x)]] = E_{x∼ρ}[σ²xᵀM⁻¹x] = Tr(σ²M⁻¹Cρ), (1)\nwhere Cρ is the uncentered second moment matrix of ρ: Cρ := E_{x∼ρ}[xxᵀ].\nEq. (1) motivates the so-called V-optimal design criterion: select the dataset x1, . . . , xn so that ϕ(M) := Tr(M⁻¹Cρ) is minimized (if we do not have access to Cρ, it is possible to estimate it by drawing samples from ρ). In doing so, we are trying to minimize the expected (with respect to the noise ε) average (with respect to the data x) prediction variance, since the risk is composed solely of it (due to the fact that the estimator is unbiased). As we shall see, this is in contrast with the overparameterized case, in which the estimator is biased.\nV-optimality is only one instance of the various statistical criteria used in experimental design. In general experimental design, the focus is on minimizing a preselected criterion ϕ(M) (Pukelsheim, 2006). For example, in D-optimal design ϕ(M) = det(M⁻¹) and in A-optimal design ϕ(M) = Tr(M⁻¹). However, since minimizing the V-optimality criterion corresponds to minimizing the risk, it is more appropriate when assessing the predictive performance of machine learning models."
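For concreteness, the classical criterion in Eq. (1) is straightforward to evaluate numerically. The following minimal NumPy sketch (names are illustrative) computes Tr(M⁻¹Cρ) for a candidate design, with Cρ estimated from a pool of unlabeled points:

```python
import numpy as np

def v_criterion(X, pool):
    # Classical V-optimality Tr(M^{-1} C_rho) for a design X (rows = points).
    # C_rho is estimated empirically from unlabeled samples (rows of `pool`).
    # Only meaningful in the underparameterized case, where M = X^T X is invertible.
    M = X.T @ X
    C = pool.T @ pool / pool.shape[0]      # empirical second-moment matrix
    return np.trace(np.linalg.solve(M, C))
```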
}, { "heading": "3 OVERPARAMETERIZED EXPERIMENTAL DESIGN CRITERIA", "text": "In this section we derive an expression for the risk in the overparameterized case, i.e. like Eq. (1) but also for the case that n ≤ d (our expressions also hold for n > d). This, in turn, leads to an\nexperimental design criteria analogous to V-optimality, but relevant for overparamterized modeling as well. We design a novel algorithm based on this criteria in subsequent sections." }, { "heading": "3.1 OVERPARAMETERIZED REGRESSION AND INTERPOLATION", "text": "When n ≥ d there is a natural candidate for ŵ: the best unbiased linear estimator X+y1. However, when d > n there is no longer a unique minimizer of ‖Xw − y‖22 as there is an infinite amount of interpolating w’s, i.e. w’s such that Xw = y (the last statement makes the mild additional assumption that X has full row rank). One natural strategy for dealing with the non-uniqueness is to consider the minimum norm interpolator:\nŵ := arg min ‖w‖22 s.t. Xw = y\nIt is still the case that ŵ = X+y. Another option for dealing with non-uniqueness of the minimizer is to add a ridge term, i.e., add and additive penalty λ‖w‖22. Let:\nŵλ := arg min ‖Xw − y‖22 + λ‖w‖22 One can show that\nŵλ = X + λ y (2) where for λ ≥ 0 we define X+λ := ( XTX + λId )+ XT (see also Bardow (2008)). Note that Eq. (2) holds both for the overparameterized (d ≥ n) and underparameterized (d < n) case. Proposition 1. The function λ 7→ X+λ is continuous for all λ ≥ 0.\nThe proof, like all of our proofs, is delegated to the appendix. Thus, we also have that the minimum norm interpolator ŵ is equal to ŵ0, and that λ 7→ ŵλ is continuous. This implies that the various expressions for the expected risk of ŵλ hold also when λ = 0. So, henceforth we analyze the expected risk of ŵλ and the results also apply for ŵ.\n3.2 EXPECTED RISK OF ŵλ\nThe following proposition gives an expression for the expected risk of the regularized estimator ŵλ. Note that it holds both for the overparameterized (d ≥ n) and underparameterized (d < n) case. Proposition 2. We have\nE [R(ŵλ)] = ‖C 1/2 ρ ( I−M+λM ) w‖22︸ ︷︷ ︸\nbias\n+σ2Tr ( CρM +2 λ M )\n︸ ︷︷ ︸ variance\nwhere Mλ := XTX + λId = M + λId. The expectation is with respect to the training noise .\nThe last proposition motivates the following design criterion, which can be viewed as a generalization of classical V-optimality:\nϕλ(M) := ‖C 1/2 ρ ( I−M+λM ) w‖22 + σ2Tr ( CρM +2 λ M ) .\nFor λ = 0 the expression simplifies to the following expression:\nϕ0 (M) = ‖C 1/2 ρ (I−PM)w‖22 + σ2Tr ( CρM + )\nwhere PM = M+M is the projection on the row space of X. Note that when n ≥ d and X has full column rank, ϕ0(M) reduces to the variance of underparameterized linear regression, so minimizing ϕλ(M) is indeed a generalization of the V-optimality criterion.\nNote the bias-variance tradeoff in ϕλ(M). When the bias term is much larger than the variance, something we should expect for small n, then it make sense for the design algorithm to be bias oriented. When the variance is larger, something we should expect for n ≈ d or n ≥ d, then the design algorithm should be variance oriented. It is also possible to have mixed nature in which both bias and variance are of the same order.\n1In practice, when n is only mildly bigger than d it is usually better to regularize the problem." }, { "heading": "3.3 PRACTICAL CRITERION", "text": "As is, ϕλ is problematic as an experimental design criterion since it depends both on w and on Cρ. We discuss how to handle an unknown Cρ in Subsection 3.5. 
Here we discuss how to handle an unknown w. Note that obviously w is unknown: it is exactly what we want to approximate! If we have a good guess w̃ for the true value of w, then we can replace w with w̃ in ϕλ. However, in many cases such an approximation is not available. Instead, we suggest to replace the bias component with an upper bound: ‖C1/2ρ (I − M+λ M)w‖22 ≤ ‖w‖22 · ‖C1/2ρ (I − M+λ M)‖2F.\nLet us now define a new design criterion which has an additional parameter t ≥ 0:\nϕ̄λ,t(M) = ‖C1/2ρ (I − M+λ M)‖2F [bias bound, divided by ‖w‖22] + tTr(Cρ M+2λ M) [variance, divided by ‖w‖22].\nThe parameter t captures an a-priori assumption on the tradeoff between bias and variance: if we have t = σ2/‖w‖22, then ϕλ(M) ≤ ‖w‖22 · ϕ̄λ,t(M). Thus, minimizing ϕ̄λ,t(M) corresponds to minimizing an upper bound of ϕλ, if t is set correctly.\nAnother interpretation of ϕ̄λ,t(M) is as follows. If we assume that w ∼ N(0, γ2Id), then\nEw[ϕλ(M)] = γ2‖C1/2ρ (I − M+λ M)‖2F + σ2Tr(Cρ M+2λ M)\nso if we set t = σ2/γ2 then γ2ϕ̄λ,t(M) = Ew[ϕλ(M)], and minimizing ϕ̄λ,t(M) corresponds to minimizing the expectation (over w) of the expected risk, if t is set correctly. Again, the parameter t captures an a-priori assumption on the tradeoff between bias and variance. Remark 1. One alternative strategy for dealing with the fact that w is unknown is to consider a sequential setup where batches are acquired incrementally based on increasingly refined approximations of w. Such a strategy falls under the heading of Sequential Experimental Design. In this paper, we focus on single shot experimental design, i.e. examples are chosen to be labeled once. We leave sequential experimental design to future research. Although we decided to focus on the single shot scenario for simplicity, it actually captures important real-life scenarios." }, { "heading": "3.4 COMPARISON TO OTHER GENERALIZED V-OPTIMALITY CRITERIA", "text": "Consider the case of λ = 0. Note that we can write: ϕ̄0,t(M) = ‖C1/2ρ (I − PM)‖2F + tTr(Cρ M+).\nRecall that the classical V-optimal experimental design criterion is Tr(CρM−1), which is only applicable if n ≥ d (otherwise, M is not invertible). Indeed, if n ≥ d and M is invertible, then PM = Id and ϕ̄0,t(M) is equal to Tr(CρM−1) up to a constant factor. However, M is not invertible if n < d, and the expression Tr(CρM−1) does not make sense. One naive generalization of classical V-optimality for n < d would be to simply replace the inverse with the pseudoinverse, i.e. Tr(CρM+). This corresponds to minimizing only the variance term, i.e. taking t → ∞. This is consistent with classical experimental design, which focuses on variance reduction, and is appropriate when the risk is variance dominated.\nAnother generalization of V-optimality can be obtained by replacing M with its regularized (and invertible) version Mµ = M + µId for some chosen µ > 0, obtaining Tr(CρM−1µ). This is exactly the strategy employed in transductive experimental design (Yu et al., 2006), and it also emerges in a Bayesian setup (Chaloner & Verdinelli, 1995). One can try to eliminate the parameter µ by taking the limit of the minimizers when µ → 0. The following proposition shows that this is actually equivalent to taking t = 0. Proposition 3.
For a compact domain Ω ⊂ Rd×d of symmetric positive semidefinite matrices:\nlimµ→0 argminM∈Ω Tr(CρM−1µ) ⊆ argminM∈Ω Tr(Cρ(I − PM)).\nWe see that the aforementioned generalizations of V-optimality correspond to either disregarding the bias term (t = ∞) or disregarding the variance term (t = 0). However, using ϕ̄0,t(M) allows much better control over the bias-variance tradeoff (see Figure 1).\nLet us now consider the case of λ > 0. We now show that the regularized criterion Tr(CρM−1µ) used in transductive experimental design (see Proposition 3) with µ = λ corresponds to also using t = λ.\nProposition 4. For any matrix space Ω and λ > 0: argminX∈Ω Tr(CρM−1λ) = argminX∈Ω ϕ̄λ,λ(M)\nSo, transductive experimental design corresponds to a specific choice of bias-variance tradeoff. Another interesting relation with transductive experimental design is given by the next proposition, which is a small modification of Theorem 1 due to Gu et al. (2012).\nProposition 5. For any λ > 0 and t ≥ 0: ϕ̄λ,t(M) ≤ (λ + t)Tr(CρM−1λ)\nIn the absence of a decent model of the noise, which is a typical situation in machine learning, Prop. 5 suggests minimizing only Tr(CρM−1λ), without the need to set t. However, this approach may be suboptimal in the overparameterized regime. This approach implicitly considers t = λ (see Prop. 4), which in a bias dominated regime can put too much emphasis on minimizing the variance. A sequential approach to experimental design can lead to better modeling of the noise, thereby assisting in dynamically setting t during acquisition-learning cycles. However, in a single shot regime, noise estimation is difficult. Arguably, there exist better values for t than the default rule-of-thumb t = λ. In particular, we conjecture that t = 0 is a better rule-of-thumb than t = λ for severely overparameterized regimes, as it suppresses the potential damage of choosing too large a λ, and it is reasonable also if λ is small (since we are anyway in a bias dominated regime), so we can focus on minimizing the bias only. In the experiments section we show an experiment that supports this conjecture. Notice that t = ∞ corresponds to minimizing the variance, while t = 0 corresponds to minimizing the bias.\n3.5 APPROXIMATING Cρ\nOur criteria so far depend on Cρ. Oftentimes Cρ is unknown. However, it can be approximated using unlabeled data. Suppose we have m unlabeled points (i.e. drawn from ρ), and suppose we write them as the rows of V ∈ Rm×d. Then E[m−1VTV] = Cρ. Thus, we can write\nmϕλ(M) ≈ ψλ(M) := ‖V(Id − M+λ M)w‖22 + σ2Tr(V M+2λ M VT), λ ≥ 0,\nand use ψλ(M) instead of ϕλ(M). For minimum norm interpolation we have ψ0(M) = ‖V(Id − PM)w‖22 + σ2Tr(V M+ VT).\nAgain, let us turn this into a practical design criterion by introducing an additional parameter t:\nψ̄λ,t(M) := ‖V(Id − M+λ M)‖2F + tTr(V M+2λ M VT). (3)" }, { "heading": "4 POOL-BASED OVERPARAMETERIZED EXPERIMENTAL DESIGN", "text": "In the previous section we defined design criteria ϕ̄λ,t and ψ̄λ,t that are appropriate for overparameterized linear regression. While one can envision a situation in which we are free to choose X so as to minimize the design criterion, in the much more realistic pool-based active learning setting we assume that we are given in advance a large pool of unlabeled data x1, . . . , xm. The training set is chosen to be a subset of the pool. This subset is then labeled, and learning is performed (a sketch that evaluates ψ̄λ,t for a candidate subset of the pool is given below).
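A minimal NumPy sketch evaluating the pool-based criterion ψ̄λ,t of Eq. (3) for a candidate subset S of pool rows (names are ours):

import numpy as np

def psi_bar(V, S, lam, t):
    # psi_bar_{lam,t}(M) = ||V (I - M_lam^+ M)||_F^2 + t * Tr(V M_lam^{+2} M V^T)
    # for the design X = V[S, :], i.e. M = V[S].T @ V[S].
    d = V.shape[1]
    M = V[S].T @ V[S]
    M_lam_pinv = np.linalg.pinv(M + lam * np.eye(d))
    bias = np.linalg.norm(V @ (np.eye(d) - M_lam_pinv @ M), 'fro') ** 2
    variance = t * np.trace(V @ M_lam_pinv @ M_lam_pinv @ M @ V.T)
    return bias + variance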
The goal of pool-based experimental design algorithms is to choose the subset to be labeled.\nWe formalize the pool-based setup as follows. Recall that to approximate Cρ we assumed we have a pool of unlabeled data written as the rows of V ∈ Rm×d. We assume that V serves also as the pool of samples from which X is selected. For a matrix A and index sets S ⊆ [n], T ⊆ [d], let AS,T be the matrix obtained by restricting to the rows whose index is in S and the columns whose index is in T. If : appears instead of an index set, that denotes the full index set corresponding to that dimension. Our goal is to select a subset S of cardinality n such that ψ̄λ,t(VTS,:VS,:) is minimized (i.e., setting X = VS,:). Formally, we pose the following problem:\nProblem 1. (Pool-based Overparameterized V-Optimal Design) Given a pool of unlabeled examples V ∈ Rm×d, a regularization parameter λ ≥ 0, a bias-variance tradeoff parameter t ≥ 0, and a design size n, find a minimizer of\nminS⊆[m], |S|=n ψ̄λ,t(VTS,:VS,:).\nProblem 1 is a generalization of the Column Subset Selection Problem (CSSP) (Boutsidis et al., 2009). In the CSSP, we are given a matrix U ∈ Rd×m and a target number of columns n, and our goal is to select a subset T which is a minimizer of\nminT⊆[m], |T|=n ‖(Id − U:,T U+:,T)U‖2F\nWhen λ = 0 and t = 0, Problem 1 reduces to the CSSP for U = VT. The λ = t = 0 case is also somewhat related to the coreset approach for active learning (Sener & Savarese, 2018; Pinsler et al., 2019; Ash et al., 2019; Geifman & El-Yaniv, 2017). See Appendix B." }, { "heading": "5 OPTIMIZATION ALGORITHM", "text": "In this section we propose an algorithm for overparameterized experimental design. Our algorithm is based on greedy minimization of a kernelized version of ψ̄λ,t(VTS,:VS,:). Thus, before presenting our algorithm, we show how to handle feature spaces defined by a kernel.\nKernelization. If |S| ≤ d and VS,: has full row rank we have (VS,:)+λ = VTS,:(VS,:VTS,: + λI|S|)−1, which allows us to write\nψ̄λ,t(VTS,:VS,:) = Tr(V[I − 2VTS,:(VS,:VTS,: + λI|S|)−1VS,:]VT)\n+ Tr(VVTS,:(VS,:VTS,: + λI|S|)−1VS,:VTS,:(VS,:VTS,: + λI|S|)−1VS,:VT)\n+ tTr(VVTS,:(VS,:VTS,: + λI|S|)−2VS,:VT)\nLet now K := VVT ∈ Rm×m. Then VS,:VTS,: = KS,S and VVTS,: = K:,S. Since Tr(K) is constant, minimizing ψ̄λ,t(VTS,:VS,:) is equivalent to minimizing\nJλ,t(S) := Tr(K:,S[(KS,S + λI|S|)−1(−2I|S| + KS,S(KS,S + λI|S|)−1) + t(KS,S + λI|S|)−2]KT:,S). (4)\nFor λ = 0 we have a simpler form: J0,t(S) = Tr(K:,S[−K−1S,S + tK−2S,S]KT:,S).\nInterestingly, when λ = 0 and t = 0, minimizing J0,0(S) is equivalent to maximizing the trace of the Nystrom approximation of K. Another case for which Eq. (4) simplifies is t = λ (this equation was already derived in Yu et al. (2006)):\nJλ,λ(S) = Tr(−K:,S(KS,S + λI|S|)−1KT:,S).\nEq. (4) allows us, via the kernel trick, to perform experimental design for learning of nonlinear models defined using high dimensional feature maps. Denote our unlabeled pool of data by z1, . . . , zm ∈ RD, and suppose we are using a feature map φ : RD → H, where H is some Hilbert space (e.g., H = Rd), i.e. the regression function is y(z) = 〈φ(z), w〉H. We can then envision the pool of data to be defined by xj = φ(zj), j = 1, . . . , m. If we assume we have a kernel function k : RD × RD → R such that k(x, z) = 〈φ(x), φ(z)〉H, then Jλ,t(S) can be computed without actually forming x1, . . . , xm, since entries of K can be computed via k (a sketch of the λ = 0 criterion and a naive greedy loop is given below).
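The following is a minimal sketch of the λ = 0 criterion together with a naive greedy loop (names are ours; each candidate is scored from scratch, so this is far slower than the O(m²j)-per-iteration updates developed below and in Appendix C, but it selects the same design):

import numpy as np

def J0t(K, S, t):
    # J_{0,t}(S) = Tr( K[:,S] (-K[S,S]^{-1} + t*K[S,S]^{-2}) K[:,S]^T ),
    # Eq. (4) specialized to lam = 0, with A = K[S,S]^{-1}.
    A = np.linalg.inv(K[np.ix_(S, S)])
    KS = K[:, S]
    return np.trace(KS @ (-A + t * (A @ A)) @ KS.T)

def greedy_design(K, n, t=0.0):
    # Greedy minimization of J_{0,t}: add the index that lowers the score most.
    S = []
    for _ in range(n):
        candidates = [i for i in range(K.shape[0]) if i not in S]
        S.append(min(candidates, key=lambda i: J0t(K, S + [i], t)))
    return S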
If H is the Reproducing Kernel Hilbert Space of k, then this is exactly the setting that corresponds to kernel ridge regression (possibly with a zero ridge term).\nGreedy Algorithm. We now propose our algorithm for overparameterized experimental design, which is based on greedy minimization of Jλ,t(S). Greedy algorithms have already been shown to be effective for classical experimental design (Yu et al., 2006; Avron & Boutsidis, 2013; Chamon & Ribeiro, 2017), and it is reasonable to assume this carries over to the overparameterized case.\nOur greedy algorithm proceeds as follows. We start with S(0) = ∅, and proceed in iterations. At iteration j, given the selected samples S(j−1) ⊂ [m], the greedy algorithm finds the index i(j) ∈ [m] − S(j−1) that minimizes Jλ,t(S(j−1) ∪ {i(j)}). We set S(j) ← S(j−1) ∪ {i(j)}. We continue iterating until S(j) reaches its target size and/or Jλ,t(S) is small enough.\nThe cost of iteration j in a naive implementation is O((m − j)(mj2 + j3)). Through careful matrix algebra, the cost of iteration j can be reduced to O((m − j)(mj + j2)) = O(m2j) (since j ≤ m). The cost of finding a design of size n is then O(m2(n2 + D)), assuming the entire kernel matrix K is formed at the start and a single evaluation of k takes O(D). Details are delegated to Appendix C." }, { "heading": "6 SINGLE SHOT DEEP ACTIVE LEARNING", "text": "There are a few ways in which our proposed experimental design algorithm can be used in the context of deep learning. For example, one can consider a sequential setting where current labeled data are used to create a linear approximation via the Fisher information matrix at the point of minimum loss (Sourati et al., 2018). However, such a strategy falls under the heading of Sequential Experimental Design, and, as we previously stated, in this paper we focus on single shot active learning, i.e. no labeled data is given either before or during acquisition (Yang & Loog, 2019).\nIn order to design an algorithm for deep active learning, we leverage a recent breakthrough in the theoretical analysis of deep learning - the Neural Tangent Kernel (NTK) (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019a). A rigorous exposition of the NTK is beyond the scope of this paper, but a short and heuristic explanation is sufficient for our needs.\nConsider a DNN, and suppose the weights of the various layers can be represented in a vector θ ∈ Rd. Given a specific θ, let fθ(·) denote the function instantiated by that network when the weights are set to θ. The crucial observation is the following: if the network is wide enough (width in convolutional layers refers to the number of output channels), we use a quadratic loss function (i.e., l(fθ(x), y) = 1/2(fθ(x) − y)2), and the initial weights θ0 are initialized randomly in a standard way, then when training the DNN using gradient descent, the vector of parameters θ stays almost fixed. Thus, when we consider θ1, θ2, . . . formed by training, a first-order Taylor approximation gives:\nfθk(x) ≈ fθ0(x) + ∇θfθ0(x)T(θk − θ0)\nInformally speaking, the approximation becomes an equality in the infinite width limit.
The Taylor approximation implies that if we further assume that θ0 is such that fθ0(x) = 0, the learned prediction function of the DNN is well approximated by the solution of a kernel regression problem with the (finite) Neural Tangent Kernel, defined as\nkf,θ0(x, z) := ∇θfθ0(x)T∇θfθ0(z)\nWe remark that there are a few simple tricks to fulfill the requirement that fθ0(x) = 0.\nIt has also been shown that under a certain initialization distribution, when the width goes to infinity, the NTK kf,θ0 converges in probability to a deterministic kernel kf - the infinite NTK. Thus, in a sense, instead of training a DNN on a finite width network, we can take the width to infinity and solve a kernel regression problem instead.\nAlthough it is unclear whether the infinite NTK can be an effective alternative to DNNs in the context of inference, one can postulate that it can be used for deep active learning. That is, in order to select examples to be labeled, use an experimental design algorithm for kernel learning applied to the corresponding NTK. Specifically, for single shot deep active learning, we propose to apply the algorithm presented in the previous section to the infinite NTK. In the next section we present preliminary experiments with this algorithm. We leave theoretical analysis to future research." }, { "heading": "7 EMPIRICAL EVALUATION", "text": "Transductive vs ψ̄λ,0 Criterion (i.e., variance-oriented vs. bias-oriented designs) ψ̄λ,0 and ψ̄λ,λ are simplified versions of the ψ̄λ,t criterion. Our conjecture is that in the overparameterized regime ψ̄λ,0 is preferable, at least for relatively large λ. Table 1 empirically supports our conjecture. In this experiment, we performed an experimental design task on 112 classification datasets from the UCI database (similar to the list that was used by Arora et al. (2019b)). Learning is performed using kernel ridge regression with a standard RBF kernel. We tried different values of λ and checked which criterion leads to a smaller classification error on a test set when selecting 50 samples. Each entry in Table 1 counts how many times ψ̄λ,λ won, ψ̄λ,0 won, or the error was the same. We consider the errors equal when the difference is less than 5%.\nDeep Active Learning Here we report preliminary experiments with the proposed algorithm for single shot deep active learning (Section 6). Additional experiments are reported in the appendix. We used the MNIST dataset, and used the square loss for training. As for the network architecture, we used a version of LeNet5 (LeCun et al., 1998) that is widened by a factor of 8. We refer to this network as “Wide-LeNet5”.\nThe setup is as follows. We use Google’s open source neural tangents library (Novak et al., 2020) to compute the Gram matrix of the infinite NTK using 59,940 training samples (we did not use the full 60,000 training samples due to batching related technical issues). We then used the algorithm proposed in Section 5 to incrementally select greedy designs of up to 800 samples, where we set the parameters to λ = t = 0. We then trained the original neural network with different design sizes, each design with five different random initial parameters. Learning was conducted using SGD, with a fixed learning rate of 0.1, batch size of 128, and no weight decay. Instead of counting epochs, we simply capped the number of SGD iterations to be equivalent to 20 epochs of the full training set.
We computed the accuracy of the model predictions on 9963 test-set samples (again, due to technical issues related to batching).\nFigure 2 reports the mean and standard deviation (over the parameter initializations) of the final accuracy. We see a consistent advantage in terms of accuracy for designs selected via our algorithm, though as expected the advantage shrinks as the training size increases. Notice that, compared to the accuracy of our design with 400 training samples, random selection required as many as 600 samples for Wide-LeNet5 to achieve the same accuracy!\nTwo remarks are in order. First, to prevent overfitting and reduce computational load, at each iteration of the greedy algorithm we computed the score only on a subset of 2000 samples from the pool. Second, to keep the experiment simple, we refrained from using tricks that ensure fθ0 = 0." }, { "heading": "A PROOFS", "text": "A.1 PROOF OF PROPOSITION 1\nProof. We prove the case of d ≥ n (for X ∈ Rn×d). The proof for d < n is similar. It is enough to show that limλ→0 X+λ = X+. For a scalar γ let γ+ := γ−1 if γ ≠ 0, and γ+ := 0 if γ = 0.\nLet X = UΣVT be the SVD of X, where Σ holds the singular values σ1, σ2, . . . of X on its main diagonal and zeros elsewhere. We have X+ = VΣ+UT, where Σ+ holds σ+1, σ+2, . . . on its main diagonal and zeros elsewhere. On the other hand, simple matrix algebra shows that\nX+λ = VΣλUT (5)\nwhere Σλ holds (σ2i + λ)+σi on its main diagonal and zeros elsewhere.\nNow clearly for every i,\nlimλ→0+ (σ2i + λ)+σi = σ+i\nSo the limit of the diagonal matrix in Eq. (5) when λ → 0+ is Σ+. Since matrix product is a linear, and thus continuous, function, the proposition follows.\nA.2 PROOF OF PROPOSITION 2\nProof. Let us write ε := [ε1 . . . εn]T, so\ny = Xw + ε.\nThus, ŵλ = X+λ y = X+λ Xw + X+λ ε = M+λ Mw + X+λ ε\nand xTw − xTŵλ = xT(Id − M+λ M)w − xTX+λ ε\nFor brevity we denote Pλ⊥X = Id − M+λ M. Note that this is not really a projection, but rather (informally) a “soft projection”. So:\n(xTw − xTŵλ)2 = wTPλ⊥X(xxT)Pλ⊥Xw − 2wTPλ⊥X(xxT)X+λ ε + εT(X+λ)T(xxT)X+λ ε\nFinally, using E[ε] = 0 and E[εεT] = σ2In,\nE[R(ŵλ)] = Ex,ε[(xTw − xTŵλ)2] = Eε[Ex[(xTw − xTŵλ)2 | ε]]\n= Eε[wTPλ⊥X Cρ Pλ⊥X w − 2wTPλ⊥X Cρ X+λ ε + εT(X+λ)T Cρ X+λ ε]\n= wTPλ⊥X Cρ Pλ⊥X w + σ2Tr((X+λ)T Cρ X+λ)\n= ‖C1/2ρ Pλ⊥X w‖22 + σ2Tr(Cρ X+λ(X+λ)T)\n= ‖C1/2ρ (I − M+λ M)w‖22 + σ2Tr(Cρ M+2λ M)\nA.3 PROOF OF PROPOSITION 3\nBefore proving Proposition 3 we need the following definition and theorem.\nDefinition 1. For a family of sets {Aλ}λ∈R, A ⊂ Rd, we write limλ→λ̄ Aλ = A if w ∈ A if and only if there exists a sequence λn → λ̄ and a sequence wn → w where wn ∈ Aλn for sufficiently large n.\nTheorem 1. (A restricted version of Theorem 1.17 in Rockafellar & Wets (2009)) Consider f : Ω × Ψ → R, where Ω ⊆ Rd and Ψ ⊆ R are compact and f is continuous. Then\nlimλ→λ̄ argminw f(w, λ) ⊆ argminw f(w, λ̄).\nProof. Suppose w̄ ∈ limλ→λ̄ argminw f(w, λ). This implies that there exists λn → λ̄ such that wn ∈ argminw f(w, λn) and wn → w̄. From the continuity of f we have that f(wn, λn) → f(w̄, λ̄). Now suppose for the sake of contradiction that w̄ ∉ argminw f(w, λ̄). So there is u such that f(u, λ̄) < f(w̄, λ̄). From the continuity of f in λ there is n0 such that for all n > n0, f(u, λn) < f(w̄, λ̄). Then, from the continuity of f in w and wn → w̄, for sufficiently large n, f(wn, λn) > f(u, λn), which contradicts wn ∈ argminw f(w, λn).\nWe are now ready to prove Proposition 3.\nProof.
Consider the function f(M, µ), defined over Ω × R≥0 (where R≥0 denotes the set of non-negative real numbers) by\nf(M, µ) = Tr(µCρ(M + µI)−1) for µ > 0, and f(M, 0) = Tr(Cρ(I − M+M)).\nNote that this function is well-defined since Ω is a set of positive semidefinite matrices.\nWe now show that f is continuous. For µ > 0 it is clearly continuous for every M, so we focus on the case µ = 0 for an arbitrary M. Consider a sequence R>0 ∋ µn → 0 (where R>0 is the set of positive reals) and Ω ∋ Mn → M. Since Ω is compact, M ∈ Ω. Let us write a spectral decomposition of Mn (recall that Ω is a set of symmetric matrices)\nMn = UnΛnUTn\nwhere Λn is diagonal with non-negative diagonal elements (recall that Ω is a set of positive semidefinite matrices). Let M = UΛUT be a spectral decomposition of M. Without loss of generality we may assume that Un → U and Λn → Λ. Now note that\n(Mn + µnI)−1Mn = Un(Λn + µnI)−1ΛnUTn\nOne can easily show that (Λn + µnI)−1Λn → sign(Λ), where sign is taken entry-wise, which implies that (Mn + µnI)−1Mn → U sign(Λ)UT since matrix multiplication is continuous. Next, note that M+M = UΛ+ΛUT = U sign(Λ)UT, so (Mn + µnI)−1Mn → M+M. The Woodbury formula implies that\nµnCρ(Mn + µnI)−1 = Cρ(I − (Mn + µnI)−1Mn)\nso the continuity of the trace operator implies that\nTr(µnCρ(Mn + µnI)−1) = Tr(Cρ(I − (Mn + µnI)−1Mn)) → Tr(Cρ(I − M+M))\nwhich shows that f is continuous.\nTheorem 1 now implies the claim, since for µ > 0 we have\nargminM∈Ω Tr(CρM−1µ) = argminM∈Ω Tr(µCρM−1µ).\nA.4 PROOF OF PROPOSITION 4\nProof. Let A = (I − (M + λId)−1M)2 + λ(M + λId)−2M, so ϕ̄λ,λ(M) = Tr(CρA). We now have for λ > 0:\nA = (I − (M + λId)−1(M + λId) + λ(M + λId)−1)2 + λ(M + λId)−2M\n= λ2(M + λId)−2 + λ(M + λId)−2M = λ(M + λId)−2(M + λId) = λ(M + λI)−1\nso:\nTr(CρA) = λTr(Cρ(M + λI)−1).\nSince λ > 0 it doesn’t affect the minimizer." }, { "heading": "B RELATION TO CORESETS", "text": "The idea in the coreset approach for active learning is to find an S such that\nC(S) = |(1/m)∑mi=1 l(xi, yi | S) − (1/|S|)∑i∈S l(xi, yi | S)|\nis minimized. In the above, l(x, y | S) is a loss function, and the conditioning on S denotes that the parameters of the loss function are the ones obtained when training only using indices selected in S. For linear regression the conditioning on S is not relevant (since the parameters do not affect the loss). The motivation for minimizing C(S) is that the expected test loss can be broken into the generalization loss on the entire dataset (which is fixed), the training loss (which is 0 in the presence of overparameterization) and the coreset loss.\nOne popular approach to active learning using coresets is to find a coverset. A δ-coverset of a set of points A is a set of points B such that for every x ∈ A there exists a y ∈ B such that ‖x − y‖2 ≤ δ (other metrics can be used as well).
Sener & Savarese (2018) showed that under suitable Lipschitz and boundedness conditions, if {xi}i∈S is a δ-coverset of {xi}i∈[m], then\nC(S) ≤ O(δ + m−1/2)\nwhich motivates finding an S that minimizes δS, where δS denotes the minimal δ for which {xi}i∈S is a δ-coverset of {xi}i∈[m].\nSince, for an x in the training set (which is a row of V), ‖x(Id − PM)‖22 with M = VTS,:VS,: is the minimal squared distance from x to the span of {xi}i∈S, and as such is always smaller than the squared distance between x and its closest point in {xi}i∈S, it is easy to show that\nn−1ψ̄0,0(VTS,:VS,:) ≤ δ2S.\nThus, minimizing δS can be viewed as minimizing an upper bound on the bias term when λ = 0.\nUnder the setup of the experiment in Section 7 we tried to replace our design with the k-centers algorithm, which is often used as an approximate solution for the problem of finding an S that minimizes δS. However, the results we got were much worse than a random design, probably due to the problem of outliers. We did not try more sophisticated versions of the k-center algorithm that tackle the problem of outliers." }, { "heading": "C DETAILS ON THE ALGORITHM", "text": "We discuss the case of λ = 0. The case of λ > 0 requires some more careful matrix algebra, so we omit the details.\nLet us define Aj := K−1S(j),S(j), Bj := KT:,S(j)K:,S(j)\nand note that Jλ,t(S(j)) = −Tr(Bj(Aj − tA2j)). We also denote by Ãj and B̃j the matrices obtained from Aj and Bj (respectively) by adding a zero row and column.\nOur goal is to efficiently compute Jλ,t(S(j−1) ∪ {i}) for any i ∈ [m] − S(j−1), so as to find i(j) and form S(j). We assume that at the start of iteration j we already have in memory Aj−1 and Bj−1. We show later how to efficiently update Aj and Bj once we have found i(j). For brevity, let us denote\nS(j)i := S(j−1) ∪ {i}, Aji := K−1S(j)i,S(j)i, Bji := KT:,S(j)iK:,S(j)i\nLet us also define\nCj−1 := B̃j−1Ãj−1, Dj−1 := B̃j−1Ã2j−1, Ej−1 := Ã2j−1\nAgain, we assume that at the start of iteration j we already have in memory Cj−1, Dj−1 and Ej−1, and show how to efficiently update these.\nLet Wji be the 2 × 2 block matrix\nWji := [ 0j−1 , KT:,S(j−1)K:,i ; KT:,iK:,S(j−1) , KT:,iK:,i ]\nwhose top-left block is the zero matrix, and note that\nBji = B̃j−1 + Wji.\nAlso important is the fact that Wji has rank 2 and that finding the factors takes O(mj), discounting the cost of computing columns of K. Next, let us denote\nrji = 1 / (Kii − KTS(j),iAj−1KS(j),i)\nand\nQji := rji · [ Aj−1KS(j),iKTS(j),iAj−1 , −Aj−1KS(j),i ; −KTS(j),iAj−1 , 1 ]\nA well known identity regarding the Schur complement implies that\nAji = Ãj−1 + Qji\nAlso important is the fact that Qji has rank 2 and that finding the factors takes O(j2), discounting the cost of computing entries of K.\nSo Jλ,t(S(j)i) = −Tr(Bji(Aji − tA2ji)) = −Tr((B̃j−1 + Wji)(Ãj−1 + Qji − t(Ãj−1 + Qji)2))\n= −Tr((B̃j−1 + Wji)(Ãj−1 + Qji)) + tTr((B̃j−1 + Wji)(Ã2j−1 + Q2ji + Ãj−1Qji + QjiÃj−1))\n= −Tr(Cj−1 + B̃j−1Qji + Wji(Ãj−1 + Qji))\n+ tTr(Dj−1 + B̃j−1(Ãj−1Qji + QjiÃj−1 + Q2ji))\n+ tTr(Wji(Ej−1 + Q2ji + Ãj−1Qji + QjiÃj−1))\nNow, Cj−1 is already in memory so Tr(Cj−1) can be computed in O(j); Qji has rank 2 and B̃j−1 is in memory, so Tr(B̃j−1Qji) can be computed in O(j2); and Wji has rank 2 and Ãj−1 is in memory, so Tr(Wji(Ãj−1 + Qji)) can be computed in O(j2). Using a similar rationale, all the other terms of Jλ,t(S(j)i) can also be computed in O(j) or O(j2), and overall Jλ,t(S(j)i) can be computed in O(j2).
Thus, scanning for i(j) takes O((m − j)j2).\nOnce i(j) has been identified, we set S(j) = S(j)i(j), Aj = Aji(j) = Ãj−1 + Qji(j) and Bj = Bji(j) = B̃j−1 + Wji(j). The last two can be computed in O(j2) once we form Qji(j) and Wji(j). Computing the factors of these matrices takes O(mj). As for updating Cj−1, we have\nCj = C̃j−1 + B̃j−1Qji(j) + Wji(j)Ãj−1 + Wji(j)Qji(j)\nwhere C̃j−1 is obtained from Cj−1 by adding a zero row and column. Since Cj−1 is in memory and both Qji(j) and Wji(j) have rank O(1), we can compute Cj in O(j2). Similar reasoning can be used to show that Dj and Ej can also be computed in O(j2).\nOverall, the cost of iteration j is O((m − j)(mj + j2)) = O(m2j) (since j ≤ m). The cost of finding a design of size n is O(m2(n2 + D)), assuming the entire kernel matrix K is formed at the start and a single evaluation of k takes O(D)." }, { "heading": "D EXPERIMENTAL PARAMETERS EXPLORATION AND COMPARISON TO TRANSDUCTIVE EXPERIMENTAL DESIGN", "text": "In this subsection we report a set of experiments on a kernel ridge regression setup (though in one experiment we set the ridge term to 0, so we are using interpolation). We use the MNIST handwriting dataset (LeCun et al., 2010), where the regression target response was computed by applying the one-hot function to the labels 0-9. Nevertheless, we still measure the MSE, and do not use the learnt models as classifiers. We use the RBF kernel k(x, z) = exp(−γ‖x − z‖22) with parameter γ = 1/784. From the dataset, we used the standard test set of 10000 images and randomly selected another 10000 images from the rest of the 60000 images as a pool. We used our proposed greedy algorithm to select training sets of sizes 1 to 100. We use two values of λ: λ = 0 (interpolation), and λ = 0.752. The optimal λ according to cross validation was the smallest we checked, so we just used λ = 0. However, in some cases having λ > 0 is desirable from a computational perspective, e.g. it caps the condition number of the kernel matrix, making the linear system easier to solve. Furthermore, in real world scenarios, oftentimes we do not have any data before we start to acquire labels, and if we do, it is not always distributed as in the test data, so computing the optimal λ can be challenging.\nResults are reported in Figure 3. The left panel shows the results for λ = 0. We report results for t = 0 and t = 0.5. The choice of t = 0 worked better. Kernel models with the RBF kernel are highly overparameterized (the hypothesis space is infinite dimensional), so we expect the MSE to be bias dominated, in which case a small t (or t = 0) might work best. Recall that the option of λ = t = 0 is equivalent to the Column Subset Selection Problem, is the limit case of transductive experimental design (Yu et al., 2006), and can be related to the coreset approach (specifically Sener & Savarese (2018)).\nThe case of λ = 0.752 is reported in the right panel of Figure 3. We tried t = 0 and t = λ = 0.752. Here too, using a purely bias oriented objective (i.e., t = 0) worked better. Note that this is in contrast with classical OED, which uses variance oriented objectives. The choice of t = λ worked well, but not optimally. In general, in the reported experiments, and other experiments conducted but not reported, it seems that the choice of t = λ, which is, as we have shown in this paper, equivalent to transductive experimental design, usually works well, but is not optimal."
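For reference, a minimal sketch of the transductive criterion compared above (the t = λ case, cf. Proposition 4; names are ours):

import numpy as np

def J_transductive(K, S, lam):
    # J_{lam,lam}(S) = Tr( -K[:,S] (K[S,S] + lam*I)^{-1} K[:,S]^T ),
    # the criterion of transductive experimental design (Yu et al., 2006).
    KS = K[:, S]
    A = np.linalg.inv(K[np.ix_(S, S)] + lam * np.eye(len(S)))
    return -np.trace(KS @ A @ KS.T)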
}, { "heading": "E EXPERIMENTAL SETUP FOR RESULT REPORTED IN FIGURE 1", "text": "First, w ∈ R100 was sampled randomly from N (0, I) . Then a pool (the set from which we later choose the design) of 500 samples and a test set of 100 samples were randomly generated according to x ∼ N (0,Σ), ∼ N (0, σ2I) and y = xTw + , where Σ ∈ R100×100 is diagonal with Σii = exp(−2.5i/100), and σ = 0.2. We then created three incremental designs (training sets) of size 120 according to three different methods:\n• Random design - at each iteration we randomly choose the next training sample from the remaining pool. • Classical OED (variance oriented) - at each iteration we choose the next training sample\nfrom the remaining pool with a greedy step that minimizes the variance term in Eq. (3). • Overparameterized OED - at each iteration we chose the next training sample from the\nremaining pool with a greedy step that minimizes Eq. (3), with λ = 0 and t = σ2 .\nWith the addition of each new training sample we computed the new MSE achieved on the test set with minimum norm linear regression." }, { "heading": "F EXPERIMENT: SINGLE SHOT ACTIVE LEARNING FOR NARROW NETWORKS", "text": "In Figure 4 we compare the result of our method on LeNet5 with the result of our method on Wide-LeNet5. We see that while the result on\nthe wide version are generally better, both for random designs and our design, our method brings a consistent advantage over random design. In both the narrow and the wide versions it requires about 600 training samples for the random design to achieve the accuracy achieved using our algorithm with only 400 training samples!\nThe parameters used by our algorithm to select the design are λ = t = 0. For the network training we used SGD with batch size 128, leaning rate 0.1 and no regularization. The SGD number iterations is equivalent to 20 epochs of the full trainning set." }, { "heading": "G SEQUENTIAL VS SINGLE SHOT ACTIVE LEARNING", "text": "While in this work focus on the single shot active learning, an interesting question is how does it compare to sequential active learning. In sequential active learning we alternate between a model improving step and a step of new labels acquisition,. This obviously gives an advantage to sequential active learning over single shot active learning, as the latter is a restricted instance of the former.\nAs we still do not have a sequential version of our algorithm to compare with, we chose to experimentally compare our single shot algorithm with the classical method of uncertainty sampling (Bartlett et al., 2020). This method has proved to be relatively efficient for neural networks (Gal et al., 2017). Uncertainty sampling based active learning requires computing the uncertainty of the updated model regarding each sample in the pool. As such, this approach is sequential by nature.\nUsually uncertainty sampling is derived in connection to the cross entropy since in that case the network output after the softmax layer can be interpreted as a probability estimation of y = i given x, which we symbolize as pi(x). The uncertainty score (in one common version) is then given by\n1−max i∈[L] pi(x).\nBecause we use the square lose, we need to make some adaptation for the way of pi(x) is computed. Considering the fact that the square loss is an outcome of a maximum likelihood model that given x assumes y ∼ N (f(x), IL), it make sense to use\npi(x) = (2π) −L2 e− 1 2‖yi−f(x)‖ 2 2 ,\nwhere yi is the onehot vector of i.\nFigure 5. 
shows a comparison between the accuracy achieved with our single shot algorithm and with sequential active learning on MNIST with LeNet5. The acquisition batch size of the sequential active learning was set to 100. Our algorithm ran with λ = t = 0. For the network training we used SGD with batch size 128, learning rate 0.1 and no l2 regularization. The number of SGD iterations is equivalent to 20 epochs of the full training set.\nInitially, our selection procedure shows a clear advantage. However, once the training set grows large enough, the benefit of a sequential setup starts to kick in, and the sequential algorithm starts to show superior results. This experiment motivates further development of a sequential version of our algorithm (a small sketch of the adapted uncertainty score is given below)." } ]
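A minimal sketch of the adapted uncertainty score used in this comparison (names are ours; f_x is assumed to be the L-dimensional network output for a single pool point):

import numpy as np

def uncertainty_score(f_x, L):
    # p_i(x) = (2*pi)^(-L/2) * exp(-0.5 * ||y_i - f(x)||^2), y_i the one-hot of i;
    # the score 1 - max_i p_i(x) is highest when the output is far from every
    # one-hot target, i.e. when the model is most uncertain.
    Y = np.eye(L)
    d2 = np.sum((Y - f_x) ** 2, axis=1)   # ||y_i - f(x)||^2 for each class i
    p = (2 * np.pi) ** (-L / 2) * np.exp(-0.5 * d2)
    return 1.0 - p.max()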
2020
null
SP:4989f7703e106a20401cec0a5058d440720b0379
[ "This paper proposes a novel algorithm for offline policy optimization. The main idea is to prevent overestimation bias by regularizing against the variance of the importance weighted value estimate. There are two key modifications: (1) using an importance weight from the stationary distribution and (2) using Fenchel duality to introduce a min-max problem to avoid double sampling when estimating the gradient of the variance regularization term. The theory section motivates the use of variance regularization and the experiments show improvements over BCQ when adding the proposed variance regularization algorithm. " ]
Learning policies from fixed offline datasets is a key challenge for scaling up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound on the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing algorithms.
[]
[ { "authors": [ "Prashanth L. A", "Michael C. Fu" ], "title": "Risk-sensitive reinforcement learning: A constrained optimization", "venue": "viewpoint. CoRR,", "year": 2018 }, { "authors": [ "Prashanth L. A", "Mohammad Ghavamzadeh" ], "title": "Variance-constrained actor-critic algorithms for discounted and average reward mdps", "venue": "Mach. Learn.,", "year": 2016 }, { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "An optimistic perspective on offline reinforcement learning", "venue": "In NeurIPS Deep Reinforcement Learning Workshop,", "year": 2019 }, { "authors": [ "Leemon Baird" ], "title": "Residual algorithms: Reinforcement learning with function approximation", "venue": "Proceedings of the Twelfth International Conference on Machine Learning,", "year": 1995 }, { "authors": [ "Lorenzo Bisi", "Luca Sabbioni", "Edoardo Vittori", "Matteo Papini", "Marcello Restelli" ], "title": "Risk-averse trust region optimization for reward-volatility reduction", "venue": "CoRR, abs/1912.03193,", "year": 2019 }, { "authors": [ "Stephen Boyd", "Lieven Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2004 }, { "authors": [ "Dotan Di Castro", "Aviv Tamar", "Shie Mannor" ], "title": "Policy gradients with variance related risk criteria", "venue": "In Proceedings of the 29th International Conference on Machine Learning,", "year": 2012 }, { "authors": [ "Yinlam Chow", "Mohammad Ghavamzadeh" ], "title": "Algorithms for cvar optimization in mdps", "venue": "In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Yinlam Chow", "Mohammad Ghavamzadeh", "Lucas Janson", "Marco Pavone" ], "title": "Risk-constrained reinforcement learning with percentile risk criteria", "venue": "J. Mach. Learn. Res.,", "year": 2017 }, { "authors": [ "Bo Dai", "Albert Shaw", "Lihong Li", "Lin Xiao", "Niao He", "Zhen Liu", "Jianshu Chen", "Le Song" ], "title": "SBEED: convergent reinforcement learning with nonlinear function approximation", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dongsheng Ding", "Xiaohan Wei", "Zhuoran Yang", "Zhaoran Wang", "Mihailo R. 
Jovanovic" ], "title": "Provably efficient safe exploration via primal-dual policy optimization", "venue": "CoRR, abs/2003.00534,", "year": 2020 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Ofir Nachum", "George Tucker", "Sergey Levine" ], "title": "D4RL: datasets for deep data-driven reinforcement learning", "venue": "CoRR, abs/2004.07219,", "year": 2020 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Javier Garcı́a", "Fern", "o Fernández" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Peter Henderson", "Riashat Islam", "Philip Bachman", "Joelle Pineau", "Doina Precup", "David Meger" ], "title": "Deep reinforcement learning that matters. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence", "venue": null, "year": 2018 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Sham M. Kakade", "John Langford" ], "title": "Approximately optimal approximate reinforcement learning", "venue": "In Machine Learning, Proceedings of the Nineteenth International Conference (ICML 2002),", "year": 2002 }, { "authors": [ "Ilya Kostrikov", "Ofir Nachum", "Jonathan Tompson" ], "title": "Imitation learning via off-policy distribution matching", "venue": "CoRR, abs/1912.05032,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.04779,", "year": 2020 }, { "authors": [ "Sascha Lange", "Thomas Gabel", "Martin A. Riedmiller" ], "title": "Batch reinforcement learning", "venue": "In Reinforcement Learning,", "year": 2012 }, { "authors": [ "Hoang Minh Le", "Cameron Voloshin", "Yisong Yue" ], "title": "Batch policy learning under constraints", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "J. Mach. Learn. 
Res.,", "year": 2016 }, { "authors": [ "Lihong Li", "Rémi Munos", "Csaba Szepesvári" ], "title": "Toward minimax off-policy value estimation", "venue": "In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Qiang Liu", "Lihong Li", "Ziyang Tang", "Dengyong Zhou" ], "title": "Breaking the curse of horizon: Infinitehorizon off-policy estimation", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Alberto Maria Metelli", "Matteo Papini", "Francesco Faccio", "Marcello Restelli" ], "title": "Policy optimization via importance sampling", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ofir Nachum", "Bo Dai" ], "title": "Reinforcement learning via fenchel-rockafellar duality", "venue": "CoRR, abs/2001.01866,", "year": 2020 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ofir Nachum", "Bo Dai", "Ilya Kostrikov", "Yinlam Chow", "Lihong Li", "Dale Schuurmans" ], "title": "Algaedice: Policy gradient from arbitrary experience", "venue": "CoRR, abs/1912.02074,", "year": 2019 }, { "authors": [ "XuanLong Nguyen", "Martin J. Wainwright", "Michael I. Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Trans. Information Theory,", "year": 2010 }, { "authors": [ "Theodore J. Perkins", "Andrew G. Barto" ], "title": "Lyapunov design for safe reinforcement learning", "venue": "J. Mach. Learn. Res.,", "year": 2003 }, { "authors": [ "Doina Precup", "Richard S. Sutton", "Satinder P. Singh" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "In Proceedings of the Seventeenth International Conference on Machine Learning (ICML", "year": 2000 }, { "authors": [ "Doina Precup", "Richard S. Sutton", "Sanjoy Dasgupta" ], "title": "Off-policy temporal difference learning with function approximation", "venue": "Proceedings of the Eighteenth International Conference on Machine Learning (ICML", "year": 2001 }, { "authors": [ "Martin L. Puterman" ], "title": "Markov Decision Processes: Discrete Stochastic Dynamic Programming", "venue": null, "year": 1994 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael I. Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015,", "year": 2015 }, { "authors": [ "Samarth Sinha", "Jiaming Song", "Animesh Garg", "Stefano Ermon" ], "title": "Experience replay with likelihoodfree importance weights", "venue": "arXiv preprint arXiv:2006.13169,", "year": 2020 }, { "authors": [ "Matthew J. 
Sobel" ], "title": "The variance of discounted markov decision processes", "venue": "Journal of Applied Probability,", "year": 1982 }, { "authors": [ "James C. Spall" ], "title": "Multivariate stochastic approximation using a simultaneous perturbation gradient approximation", "venue": "IEEE TRANSACTIONS ON AUTOMATIC CONTROL,", "year": 1992 }, { "authors": [ "Richard S. Sutton", "David A. McAllester", "Satinder P. Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in Neural Information Processing Systems 12, [NIPS Conference,", "year": 1999 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Batch learning from logged bandit feedback through counterfactual risk minimization", "venue": "J. Mach. Learn. Res.,", "year": 2015 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Counterfactual risk minimization: Learning from logged bandit feedback", "venue": "In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015,", "year": 2015 }, { "authors": [ "Chen Tessler", "Daniel J. Mankowitz", "Shie Mannor" ], "title": "Reward constrained policy optimization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Philip S. Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High-confidence off-policy evaluation", "venue": "In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30,", "year": 2015 }, { "authors": [ "Philip S. Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High confidence policy improvement", "venue": "Proceedings of the 32nd International Conference on Machine Learning, ICML 2015,", "year": 2015 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Ahmed Touati", "Amy Zhang", "Joelle Pineau", "Pascal Vincent" ], "title": "Stable policy optimization via off-policy divergence regularization", "venue": "Proceedings of the Thirty-Sixth Conference on Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Masatoshi Uehara", "Nan Jiang" ], "title": "Minimax weight and q-function learning for off-policy evaluation", "venue": "CoRR, abs/1910.12809,", "year": 2019 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": "CoRR, abs/1911.11361,", "year": 2019 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": "CoRR, abs/1911.11361,", "year": 2019 }, { "authors": [ "Ruiyi Zhang", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Gendice: Generalized offline estimation of stationary values", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Shangtong Zhang", "Wendelin Boehmer", "Shimon Whiteson" ], "title": "Generalized off-policy actorcritic", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "2019 Kumar et al", "2019 Agarwal et al", "Kumar" ], "title": "2020) have tried to address this, by avoiding overestimation of Q-values, which leads to the extraplation error when bootstrapping value 
function estimates. This leads to offline RL agents generalizing poorly for unseen regions of the dataset. Additionally, due to the distribution mismatch, value function estimates", "venue": "Recent works (Fujimoto et al.,", "year": 2020 }, { "authors": [ "2018 rithms (Haarnoja et al", "2016 Lillicrap et al", "Fujimoto" ], "title": "2018) may fail without online interactions with the environment. In this work, we address the later problem to minimize variance of value function estimates through variance related risk constraints", "venue": "B APPENDIX : PER-STEP VERSUS EPISODIC VARIANCE OF RETURNS Following from (Castro et al.,", "year": 2012 }, { "authors": [ "Sinha" ], "title": "2020) to compute the state-action likelihood ratio, where they use a binary a classifier to classify samples between an on-policy and off-policy distribution. The proposed classifier, φ, is trained on the following objective, and takes as input the state-action tuples (s, a) to return a probability score that the state-action distribution is", "venue": "Empirical Likelihood Ratio", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Offline batch reinforcement learning (RL) algoithms are key towards scaling up RL for real world applications, such as robotics (Levine et al., 2016) and medical problems . This is because offline RL provides the appealing ability for agents to learn from fixed datasets, similar to supervised learning, avoiding continual interaction with the environment, which could be problematic for safety and feasibility reasons. However, significant mismatch between the fixed collected data and the policy that the agent is considering can lead to high variance of value function estimates, a problem encountered by most off-policy RL algorithms (Precup et al., 2000). A complementary problem is that the value function can become overly optimistic in areas of state space that are outside the visited batch, leading the agent in data regions where its behavior is poor Fujimoto et al. (2019). Recently there has been some progress in offline RL (Kumar et al., 2019; Wu et al., 2019b; Fujimoto et al., 2019), trying to tackle both of these problems.\nIn this work, we study the problem of offline policy optimization with variance minimization. To avoid overly optimistic value function estimates, we propose to learn value functions under variance constraints, leading to a pessimistic estimation, which can significantly help offline RL algorithms, especially under large distribution mismatch. We propose a framework for variance minimization in offline RL, such that the obtained estimates can be used to regularize the value function and enable more stable learning under different off-policy distributions.\nWe develop a novel approach for variance regularized offline actor-critic algorithms, which we call Offline Variance Regularizer (OVR). The key idea of OVR is to constrain the policy improvement step via variance regularized value function estimates. Our algorithmic framework avoids the double sampling issue that arises when computing gradients of variance estimates, by instead considering the variance of stationary distribution corrections with per-step rewards, and using the Fenchel transformation (Boyd & Vandenberghe, 2004) to formulate a minimax optimization objective. This allows minimizing variance constraints by instead optimizing dual variables, resulting in simply an augmented reward objective for variance regularized value functions.\nWe show that even with variance constraints, we can ensure policy improvement guarantees, where the regularized value function leads to a lower bound on the true value function, which mitigates the usual overestimation problems in batch RL The use of Fenchel duality in computing the variance allows us to avoid double sampling, which has been a major bottleneck in scaling up variance-constrained\nactor-critic algorithms in prior work A. & Ghavamzadeh (2016); A. & Fu (2018). Practically, our algorithm is easy to implement, since it simply involves augmenting the rewards with the dual variables only, such that the regularized value function can be implemented on top of any existing offline policy optimization algorithms. We evaluate our algorithm on existing offline benchmark tasks based on continuous control domains. Our empirical results demonstrate that the proposed variance regularization approach is particularly useful when the batch dataset is gathered at random, or when it is very different from the data distributions encountered during training." 
}, { "heading": "2 PRELIMINARIES AND BACKGROUND", "text": "We consider an infinite horizon MDP as (S,A,P, γ) where S is the set of states, A is the set of actions, P is the transition dynamics and γ is the discount factor. The goal of reinforcement learning is to maximize the expected return J (π) = Es∼dβ [V π(s)], where V π(s) is the value function V π(s) = E[ ∑∞ t=0 γ\ntr(st, at) | s0 = s], and β is the initial state distribution. Considering parameterized policies πθ(a|s), the goal is maximize the returns by following the policy gradient (Sutton et al., 1999), based on the performance metric defined as :\nJ(πθ) = Es0∼ρ,a0∼π(s0) [ Qπθ (s0, a0) ] = E(s,a)∼dπθ (s,a) [ r(s, a) ] (1)\nwhere Qπ(s, a) is the state-action value function, since V π(s) = ∑ a π(a|s)Qπ(s, a). The policy optimization objective can be equivalently written in terms of the normalized discounted occupancy measure under the current policy πθ, where dπ(s, a) is the state-action occupancy measure, such that the normalized state-action visitation distribution under policy π is defined as : dπ(s, a) = (1 − γ) ∑∞ t=0 γ\ntP (st = s, at = a|s0 ∼ β, a ∼ π(s0)). The equality in equation 1 holds and can be equivalently written based on the linear programming (LP) formulation in RL (see (Puterman, 1994; Nachum & Dai, 2020) for more details). In this work, we consider the off-policy learning problem under a fixed dataset D which contains s, a, r, s′ tuples under a known behaviour policy µ(a|s). Under the off-policy setting, importance sampling (Precup et al., 2000) is often used to reweight the trajectory under the behaviour data collecting policy, such as to get unbiased estimates of the expected returns. At each time step, the importance sampling correction π(at|st)µ(at|st) is used to compute the expected return under the entire trajectory as\nJ(π) = (1 − γ)E(s,a)∼dµ(s,a)[ ∑T t=0 γ tr(st, at) (∏T t=1 π(at|st) µ(at|st ) ]. Recent works (Fujimoto et al., 2019) have demonstrated that instead of importance sampling corrections, maximizing value functions directly for deterministic or reparameterized policy gradients (Lillicrap et al., 2016; Fujimoto et al., 2018) allows learning under fixed datasets, by addressing the over-estimation problem, by maximizing the objectives of the form maxθ Es∼D [ Qπθ (s, πθ(s) ] ." }, { "heading": "3 VARIANCE REGULARIZATION VIA DUALITY IN OFFLINE POLICY OPTIMIZATION", "text": "In this section, we first present our approach based on variance of stationary distribution corrections, compared to importance re-weighting of episodic returns in section 3.1. We then present a derivation of our approach based on Fenchel duality on the variance, to avoid the double sampling issue, leading to a variance regularized offline optimization objective in section 3.2. Finally, we present our algorithm in 1, where the proposed regularizer can be used in any existing offline RL algorithm." }, { "heading": "3.1 VARIANCE OF REWARDS WITH STATIONARY DISTRIBUTION CORRECTIONS", "text": "In this work, we consider the variance of rewards under occupancy measures in offline policy optimization. Let us denote the returns as Dπ = ∑T t=0 γ\ntr(st, at), such that the value function is V π = Eπ[Dπ]. The 1-step importance sampling ratio is ρt = π(at|st)µ(at|st) , and the T-steps ratio can be denoted ρ1:T = ∏T t=1 ρt. Considering per-decision importance sampling (PDIS) (Precup et al.,\n2000), the returns can be similarly written as Dπ = ∑T t=0 γ\ntrtρ0:t. 
The variance of episodic returns, which we denote by V_P(π), with off-policy importance sampling corrections can be written as: V_P(π) = E_{s∼β, a∼µ(·|s), s′∼P(·|s,a)}[(D^π(s, a) − J(π))^2].\nInstead of importance sampling, several recent works have proposed marginalized importance sampling with stationary state-action distribution corrections (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2020; Uehara & Jiang, 2019), which can lead to lower variance estimators at the cost of introducing bias. Denoting the stationary distribution ratios as ω(s, a) = d_π(s, a)/d_µ(s, a), the returns can be written as W^π(s, a) = ω(s, a) r(s, a). The variance of marginalized IS is:\nV_D(π) = E_{(s,a)∼d_µ(s,a)}[(W^π(s, a) − J(π))^2] = E_{(s,a)∼d_µ(s,a)}[W^π(s, a)^2] − E_{(s,a)∼d_µ(s,a)}[W^π(s, a)]^2 (2)\nOur key contribution is to consider the variance of marginalized IS, V_D(π), itself as a risk constraint in the offline batch optimization setting. We show that constraining the offline policy optimization objective with the variance of marginalized IS, and using the Fenchel-Legendre transformation on V_D(π), can help avoid the well-known double sampling issue in variance risk constrained RL (for more details on how to compute the gradient of the variance term, see appendix B). We emphasize that the variance here is solely based on returns with occupancy measures, and we do not consider the variance due to the inherent stochasticity of the MDP dynamics." }, { "heading": "3.2 VARIANCE REGULARIZED OFFLINE MAX-RETURN OBJECTIVE", "text": "We consider the variance regularized off-policy max return objective with stationary distribution corrections ω_{π/D} (which we denote ω for short for clarity) in the offline fixed dataset D setting:\nmax_{πθ} J(πθ) := E_{s∼D}[Q^{πθ}(s, πθ(s))] − λ V_D(ω, πθ) (3)\nwhere λ ≥ 0 allows for the trade-off between offline policy optimization and variance regularization (or equivalently variance risk minimization). The max-return objective under Q^{πθ}(s, a) has been considered in prior works in offline policy optimization (Fujimoto et al., 2019; Kumar et al., 2019). We show that this form of regularizer encourages variance minimization in offline policy optimization, especially when there is a large data distribution mismatch between the fixed dataset D and the induced data distribution under policy πθ." }, { "heading": "3.3 VARIANCE REGULARIZATION VIA FENCHEL DUALITY", "text": "At first, equation 3 seems difficult to optimize, especially for minimizing the variance regularization w.r.t. θ. This is because finding the gradient of V(ω, πθ) would lead to the double sampling issue, since it contains the square of an expectation term. The key contribution of OVR is to use the Fenchel duality trick on the second term of the variance expression in equation 2, for regularizing the policy optimization objective with the variance of marginalized importance sampling. Applying Fenchel duality, x^2 = max_y (2xy − y^2), to the second term of the variance expression, we can transform the variance minimization problem into an equivalent maximization problem, by introducing the dual variables ν(s, a).
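Before the conjugate is derived, a quick numerical sanity check of the Fenchel identity x^2 = max_y(2xy − y^2) invoked above; this snippet only verifies the algebraic trick on an arbitrary value of x and is not part of the method itself.

```python
import numpy as np

x = 3.7
ys = np.linspace(-20, 20, 200001)
vals = 2 * x * ys - ys ** 2          # objective of the inner maximization over y
print(vals.max(), x ** 2)            # both are approximately 13.69
print(ys[vals.argmax()], x)          # the maximizer y* coincides with x
```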
We have the Fenchel conjugate of the variance term as:\nV(ω, πθ) = max_ν { −(1/2) ν(s, a)^2 + ν(s, a) ω(s, a) r(s, a) + E_{(s,a)∼d_D}[ω(s, a) r(s, a)^2] } = max_ν E_{(s,a)∼d_D}[ −(1/2) ν(s, a)^2 + ν(s, a) ω(s, a) r(s, a) + ω(s, a) r(s, a)^2 ] (4)\nRegularizing the policy optimization objective with the variance under the Fenchel transformation, we therefore have the overall max-min optimization objective, explicitly written as:\nmax_θ min_ν J(πθ, ν) := E_{s∼D}[Q^{πθ}(s, πθ(s))] − λ E_{(s,a)∼d_D}[(−(1/2) ν^2 + ν·ω·r + ω·r^2)(s, a)] (5)" }, { "heading": "3.4 AUGMENTED REWARD OBJECTIVE WITH VARIANCE REGULARIZATION", "text": "In this section, we explain the key steps that lead to the policy improvement step being an augmented variance regularized reward objective. The variance minimization step involves estimating the stationary distribution ratio (Nachum et al., 2019a), and then simply computing the closed form solution for the dual variables. Fixing the dual variables ν, to update πθ, note that this leads to a standard maximum return objective in the dual form, which can be equivalently solved in the primal form, using augmented rewards. This is because we can write the above in the dual form as:\nJ(πθ, ν, ω) := E_{(s,a)∼d_D(s,a)}[ω(s, a)·r(s, a) − λ(−(1/2) ν^2 + ν·ω·r + ω·r^2)(s, a)] = E_{(s,a)∼d_D(s,a)}[ω(s, a)·(r − λ·ν·r − λ·r^2)(s, a) + (λ/2) ν(s, a)^2] = E_{(s,a)∼d_D(s,a)}[ω(s, a)·r̃(s, a) + (λ/2) ν(s, a)^2] (6)\nwhere we denote the augmented rewards as:\nr̃(s, a) ≡ [r − λ·ν·r − λ·r^2](s, a) (7)\nThe policy improvement step can either be achieved by directly solving equation 6 or by considering the primal form of the objective with respect to Q^{πθ}(s, πθ) as in (Fujimoto et al., 2019; Kumar et al., 2019). However, solving equation 6 directly can be troublesome, since the policy gradient step involves finding the gradient w.r.t. ω(s, a) = d_{πθ}(s, a)/d_D(s, a) too, where the distribution ratio depends on d_{πθ}(s, a). This means that the gradient w.r.t. θ would require finding the gradient w.r.t. the normalized discounted occupancy measure, i.e., ∇θ d_{πθ}(s). Instead, it is therefore easier to consider the augmented reward objective, using r̃(s, a) as in equation 7 in any existing offline policy optimization algorithm, where we have the variance regularized value function Q̃^{πθ}(s, a).\nNote that as highlighted in (Sobel, 1982), the variance of returns follows a Bellman-like equation. Following this, (Bisi et al., 2019) also pointed to a Bellman-like solution for the variance w.r.t. occupancy measures. Considering the variance of the form in equation 2, and the Bellman-like equation for variance, we can write the variance recursively as a Bellman equation:\nV^π_D(s, a) = (r(s, a) − J(π))^2 + γ E_{s′∼P, a′∼π′(·|s′)}[V^π_D(s′, a′)] (8)\nSince in our objective we augment the policy improvement step with the variance regularization term, we can write the augmented value function as Q^π_λ(s, a) := Q^π(s, a) − λ V^π_D(s, a). This suggests we can modify existing policy optimization algorithms with augmented rewards on the value function.\nRemark: Applying the Fenchel transformation to the variance regularized objective, however, at first glance seems to make the augmented rewards dependent on the policy itself, since r̃(s, a) depends on the dual variables ν(s, a) as well. This can make the rewards non-stationary, so that the policy maximization step cannot be solved directly via the maximum return objective.
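As a concrete illustration of equation 7, the following minimal sketch computes the augmented reward for a batch of transitions; the arrays and the value of λ are illustrative, and the apparent circularity between ν and r̃ raised in the remark is resolved by the alternating scheme discussed next.

```python
import numpy as np

def augmented_reward(r, nu, lam):
    """Equation 7: r_tilde = r - lam * nu * r - lam * r^2 (elementwise)."""
    return r - lam * nu * r - lam * r ** 2

# Toy batch of per-step rewards and dual variables (illustrative values).
r = np.array([1.0, 0.5, -0.2])
nu = np.array([0.8, 0.3, -0.1])
print(augmented_reward(r, nu, lam=0.1))
```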
However, as we discuss next, the dual variables for minimizing the variance term have a closed form solution ν(s, a), and thereby do not lead to any non-stationarity in the rewards, due to the alternating minimization and maximization steps.\nVariance Minimization Step: Fixing the policy πθ, the dual variables ν can be obtained using the closed form solution given by ν(s, a) = ω(s, a)·r̃(s, a). Note that directly optimizing for the target policies using batch data, however, requires a fixed point estimate of the stationary distribution corrections, which can be achieved using existing algorithms (Nachum et al., 2019a; Liu et al., 2018). Solving the optimization objective additionally requires estimating the state-action distribution ratio, ω(s, a) = d_π(s, a)/d_D(s, a). Recently, several works have proposed estimating the stationary distribution ratio, mostly for the off-policy evaluation case in the infinite horizon setting (Zhang et al., 2020; Uehara & Jiang, 2019). We include a detailed discussion of this in appendix E.4.\nAlgorithm: Our proposed variance regularization approach with returns under stationary distribution corrections for offline optimization can be built on top of any existing batch off-policy optimization algorithm. We summarize our contributions in Algorithm 1. Implementing our algorithm requires estimating the state-action distribution ratio, followed by the closed form estimate of the dual variable ν. The augmented stationary reward with the dual variables can then be used to compute the regularized value function Q^π_λ(s, a). The policy improvement step involves maximizing the variance regularized value function, e.g., with BCQ (Fujimoto et al., 2019)." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "In this section, we provide theoretical analysis of offline policy optimization algorithms in terms of policy improvement guarantees under a fixed dataset D. Following that, we demonstrate that using the variance regularizer leads to a lower bound for our policy optimization objective, which yields a pessimistic exploitation approach for offline algorithms.\nAlgorithm 1 Offline Variance Regularizer\nInitialize critic Qφ, policy πθ, network ωψ and regularization weighting λ; learning rate η\nfor t = 1 to T do\nEstimate the distribution ratio ωψ(s, a) using any existing DICE algorithm\nEstimate the dual variable ν(s, a) = ωψ(s, a)·r̃(s, a)\nCalculate the augmented rewards r̃(s, a) using equation 7\nPolicy improvement step using any offline policy optimization algorithm with augmented rewards r̃(s, a): θ_t = θ_{t−1} + η ∇θ J(θ, φ, ψ, ν)\nend for" }, { "heading": "4.1 VARIANCE OF MARGINALIZED IMPORTANCE SAMPLING AND IMPORTANCE SAMPLING", "text": "We first show in lemma 1 that the variance of rewards under stationary distribution corrections can similarly be upper bounded based on the variance of importance sampling corrections. We emphasize that in the off-policy setting under distribution corrections, the variance is due to the estimation of the density ratio, compared to the importance sampling corrections.\nLemma 1. The following inequality holds between the variance of per-step rewards under stationary distribution corrections, denoted by V_D(π), and the variance of episodic returns with importance sampling corrections V_P(π):\nV_P(π) ≤ V_D(π)/(1 − γ)^2 (9)\nThe proof for this, and discussions on the variance of episodic returns compared to per-step rewards under occupancy measures, is provided in appendix B.1."
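The following is a schematic NumPy skeleton mirroring the structure of Algorithm 1 above. Here `estimate_ratio` and `policy_update` are placeholders for an off-the-shelf DICE estimator and an offline optimizer such as BCQ, and the initialization of r̃ with ν = 0 is our assumption for breaking the ν and r̃ interdependence; this is a sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def ovr_training_loop(batch, estimate_ratio, policy_update, lam=0.1, T=100):
    """Schematic version of Algorithm 1 (Offline Variance Regularizer).

    batch          : dict with arrays 's', 'a', 'r' from the fixed dataset D
    estimate_ratio : (s, a) -> omega(s, a); any DICE-style estimator (placeholder)
    policy_update  : (s, a, r_aug) -> policy; any offline RL update (placeholder)
    """
    r, policy = batch['r'], None
    r_aug = r - lam * r ** 2                              # eq. 7 with nu = 0 (assumed init)
    for t in range(T):
        omega = estimate_ratio(batch['s'], batch['a'])    # distribution ratio step
        nu = omega * r_aug                                # closed-form dual of Algorithm 1
        r_aug = r - lam * nu * r - lam * r ** 2           # augmented rewards, eq. 7
        policy = policy_update(batch['s'], batch['a'], r_aug)  # policy improvement
    return policy

# Toy usage with stub components (illustrative only):
rng = np.random.default_rng(1)
batch = {'s': rng.normal(size=(32, 4)), 'a': rng.normal(size=(32, 2)),
         'r': rng.normal(size=32)}
ratio = lambda s, a: np.ones(len(s))
update = lambda s, a, r_aug: ('policy', float(r_aug.mean()))
print(ovr_training_loop(batch, ratio, update, T=3))
```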
}, { "heading": "4.2 POLICY IMPROVEMENT BOUND UNDER VARIANCE REGULARIZATION", "text": "In this section, we establish performance improvement guarantees (Kakade & Langford, 2002) for variance regularized value function for policy optimization. Let us first recall that the performance improvement can be written in terms of the total variation DTV divergence between state distributions (Touati et al., 2020) (for more discussions on the performance bounds, see appendix C) Lemma 2. For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions dπ′ and dπ\nJ(π′) ≥ Lπ(π′)− πDTV(dπ′ ||dπ) (10)\nwhere π = maxs |Ea∼π′(·|s)[Aπ(s, a)]|, and Lπ(π′) = J(π) + Es∼dπ,a∼π′ [Aπ(s, a)]. For detailed proof and discussions, see appendix C. Instead of considering the divergence between state visitation distributions, consider having access to both state-action samples generated from the environment. To avoid importance sampling corrections we can further considers the bound on the objective based on state-action visitation distributions, where we have an upper bound following from (Nguyen et al., 2010) : DTV(dπ′(s)||dπ(s)) ≤ DTV(dπ′(s, a)||dπ(s, a)). Following Pinsker’s inequality, we have: J(π′) ≥ J(π)+Es∼dπ(s),a∼π′(|s) [ Aπ(s, a) ] − πE(s,a)∼dπ(s,a) [√ DKL(dπ′(s, a)||dπ(s, a)) ] (11)\nFurthermore, we can exploit the relation between KL, total variation (TV) and variance through the variational representation of divergence measures. Recall that the total divergence between P and Q distributions is given by : DTV(p, q) = 12 ∑ x |p(x)−q(x)|. We can use the variational representation of the divergence measure. Denoting dπ(s, a) = βπ′(s, a), we have\nDTV(βπ′ ||βπ) = supf :S×A→R [ E(s,a)∼βπ′ [f(s, a)]− E(s,a)∼β(s,a)[φ ∗ ◦ f(s, a)] ]\n(12)\nwhere φ∗ is the convex conjugate of φ and f is the dual function class based on the variational representation of the divergence. Similar relations with the variational representations of f-divergences have also been considered in (Nachum et al., 2019b; Touati et al., 2020). We can finally obtain a bound for the policy improvement following this relation, in terms of the per-step variance: Theorem 1. For all policies π and π′, and the corresponding state-action visitation distributions dπ′ and dπ , we can obtain the performance improvement bound in terms of the variance of rewards under state-action occupancy measures.\nJ(π′)− J(π) ≥ Es∼dπ(s),a∼π′(a|s)[A π(s, a)]− Var(s,a)∼dπ(s,a)\n[ f(s, a) ] (13)\nwhere f(s, a) is the dual function class from the variational representation of variance.\nProof. For detailed proof, see appendix C.1." }, { "heading": "4.3 LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION", "text": "In this section, we show that augmenting the policy optimization objective with a variance regularizer leads to a lower bound to the original optimization objectiven J(πθ). Following from (Metelli et al., 2018), we first note that the variance of marginalized importance weighting with distribution corrections can be written in terms of the α−Renyi divergence. Let p and q be two probability measures, such that the Renyi divergence is Fα = 1α log ∑ x q(x) ( p(x) q(x) )α . When α = 1, this leads to the well-known KL divergence F1(p||q) = FKL(p||q). Let us denote the state-action occupancy measures under π and dataset D as dπ and dD. The variance of state-action distribution ratios is Var(s,a)∼dD(s,a)[ωπ/D(s, a)]. 
When α = 2 for the Renyi divergence, we have:\nVar_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)] = F_2(d_π || d_D) − 1 (14)\nFollowing from (Metelli et al., 2018), and extending results from importance sampling ρ to marginalized importance sampling ω_{π/D}, we provide the following result that bounds the variance of the approximated density ratio ω̂_{π/D} in terms of the Renyi divergence:\nLemma 3. Assume that the rewards of the MDP are bounded by a finite constant, ||r||_∞ ≤ R_max. Given random variable samples (s, a) ∼ d_D(s, a) from dataset D, for any N > 0, the variance of marginalized importance weighting can be upper bounded as:\nVar_{(s,a)∼d_D(s,a)}[ω̂_{π/D}(s, a)] ≤ (1/N) ||r||_∞^2 F_2(d_π || d_D) (15)\nSee appendix D.1 for more details. Following this, our goal is to derive a lower bound objective for our off-policy optimization problem. Concentration inequalities have previously been studied for both off-policy evaluation (Thomas et al., 2015a) and optimization (Thomas et al., 2015b). In our case, we can adapt the concentration bound derived from Cantelli’s inequality and derive the following result based on the variance of marginalized importance sampling. Under state-action distribution corrections, we have the following lower bound to the off-policy policy optimization objective with stationary state-action distribution corrections:\nTheorem 2. Given state-action occupancy measures d_π and d_D, and assuming bounded reward functions, for any 0 < δ ≤ 1 and N > 0, we have with probability at least 1 − δ that:\nJ(π) ≥ E_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)·r(s, a)] − √(((1 − δ)/δ) Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)·r(s, a)]) (16)\nEquation 16 shows the lower bound policy optimization objective under risk-sensitive variance constraints. The key point of our derivation in equation 16 of theorem 2 is that, given off-policy batch data collected with a behaviour policy µ(a|s), we are indeed optimizing a lower bound to the policy optimization objective, which is regularized with a variance term to minimize the variance in batch off-policy learning." }, { "heading": "5 EXPERIMENTAL RESULTS ON BENCHMARK OFFLINE CONTROL TASKS", "text": "Experimental Setup: We demonstrate the significance of the variance regularizer on a range of continuous control domains (Todorov et al., 2012) based on fixed offline datasets from (Fu et al., 2020), which is a standard benchmark for offline algorithms. To demonstrate the significance of our variance regularizer OVR, we mainly use it on top of the BCQ algorithm and compare it with other existing baselines, using the benchmark D4RL (Fu et al., 2020) offline datasets for different tasks and off-policy distributions. Experimental results are given in table 1.\nPerformance on Optimal and Medium Quality Datasets: We first evaluate the performance of OVR when the dataset consists of optimal and mediocre logging policy data. We collected the dataset using a fully (expert) or partially (medium) trained SAC policy. We build our algorithm OVR on top of BCQ, denoted by BCQ + VAR. Note that the OVR algorithm can also be agnostic to the behaviour policy for computing the distribution ratio (Nachum et al., 2019a) and the variance. We observe that even though performance is only marginally improved with OVR under the expert setting, since the demonstrations are themselves optimal, we can achieve significant improvements under the medium dataset regime. This is because OVR plays a more important role when there is larger variance due to distribution mismatch between the data logging and target policy distributions.
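Returning briefly to Theorem 2, here is a small sketch of how its empirical lower bound can be evaluated from batch samples; the arrays standing in for the ratio estimates and rewards are illustrative assumptions.

```python
import numpy as np

def theorem2_lower_bound(omega, r, delta=0.1):
    """Empirical version of eq. 16:
    J(pi) >= mean(omega * r) - sqrt(((1 - delta) / delta) * Var(omega * r))."""
    w = omega * r
    return w.mean() - np.sqrt((1 - delta) / delta * w.var())

rng = np.random.default_rng(0)
omega = rng.lognormal(sigma=0.3, size=1000)   # illustrative ratio estimates
r = rng.normal(loc=1.0, size=1000)            # illustrative batch rewards
print(theorem2_lower_bound(omega, r))
```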
Experimental results are shown in the first two columns of figure 1.\nPerformance on Random and Mixed Datasets: We then evaluate the performance on random datasets, i.e., the worst-case setup where the data logging policy is a random policy, as shown in the last two columns of figure 1. As expected, we observe no improvements at all, and even existing baselines such as BCQ (Fujimoto et al., 2019) can work poorly under the random dataset setting. When we collect data using a mixture of random and mediocre policies, denoted by mixed, the performance is again improved for OVR on top of BCQ, especially for the Hopper and Walker control domains. We provide additional experimental results and ablation studies in appendix E.1." }, { "heading": "6 RELATED WORKS", "text": "We now discuss related works in offline RL, for evaluation and optimization, and their relations to variance and risk sensitive algorithms. We include more discussion of related works in appendix A.1. In off-policy evaluation, per-step importance sampling (Precup et al., 2000; 2001) has previously been used for off-policy evaluation function estimators. However, this leads to high variance estimators, and recent works proposed using marginalized importance sampling, estimating stationary state-action distribution ratios (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2019), to reduce variance at the cost of additional bias. In this work, we build on the variance of marginalized IS, to develop a variance risk sensitive offline policy optimization algorithm. This is in contrast to prior works on variance constrained online actor-critic (A. & Ghavamzadeh, 2016; Chow et al., 2017; Castro et al., 2012) and relates to constrained policy optimization methods (Achiam et al., 2017; Tessler et al., 2019).\nFor offline policy optimization, several works have recently addressed the overestimation problem in batch RL (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019b), including the very recently proposed Conservative Q-Learning (CQL) algorithm (Kumar et al., 2020). Our work was done in parallel to CQL, due to which we do not include it as a baseline in our experiments. CQL learns a value function which is guaranteed to lower-bound the true value function. This helps prevent value over-estimation for out-of-distribution (OOD) actions, which is an important issue in offline RL. We note that our approach is orthogonal to CQL, in that CQL introduces a regularizer on the state-action value function Q^π(s, a) based on the Bellman error (the first two terms in equation 2 of CQL), while we introduce a variance regularizer on the stationary state distribution d_π(s). Since the value of a policy can be expressed in two ways, either through Q^π(s, a) or through occupancy measures d_π(s), both CQL and our paper are essentially motivated by the same objective of optimizing a lower bound on J(θ), but through different regularizers. Our work can also be considered similar to AlgaeDICE (Nachum et al., 2019b), since we introduce a variance regularizer based on the distribution corrections, instead of minimizing the f-divergence between stationary distributions as in AlgaeDICE. Both our work and AlgaeDICE consider the dual form of the policy optimization objective in the batch setting, where, similar to the Fenchel duality trick on our variance term, AlgaeDICE instead uses the variational form, followed by the change of variables trick, inspired from (Nachum et al., 2019a), to handle their divergence measure."
}, { "heading": "7 DISCUSSION AND CONCLUSION", "text": "We proposed a new framework for offline policy optimization with variance regularization called OVR, to tackle high variance issues due to distribution mismatch in offline policy optimization. Our work provides a practically feasible variance constrained actor-critic algorithm that avoids double sampling issues in prior variance risk sensitive algorithms (Castro et al., 2012; A. & Ghavamzadeh, 2016). The presented variance regularizer leads to a lower bound to the true offline optimization objective, thus leading to pessimistic value function estimates, avoiding both high variance and overestimation problems in offline RL. Experimentally, we evaluate the significance of OVR on standard benchmark offline datasets, with different data logging off-policy distributions, and show that OVR plays a more significant role when there is large variance due to distribution mismatch. While we only provide a variance related risk sensitive approach for offline RL, for future work, it would be interesting other risk sensitive approaches (Chow & Ghavamzadeh, 2014; Chow et al., 2017) and examine its significance in batch RL. We hope our proposed variance regularization framework would provide new opportunities for developing practically robust risk sensitive offline algorithms." }, { "heading": "A APPENDIX : ADDITIONAL DISCUSSIONS", "text": "" }, { "heading": "A.1 EXTENDED RELATED WORK", "text": "Other related works : Several other prior works have previously considered the batch RL setting (Lange et al., 2012) for off-policy evaluation, counterfactual risk minimization (Swaminathan & Joachims, 2015a;b), learning value based methods such as DQN (Agarwal et al., 2019), and others (Kumar et al., 2019; Wu et al., 2019b). Recently, batch off-policy optimization has also been introduced to reduce the exploitation error (Fujimoto et al., 2019) and for regularizing with arbitrary behaviour policies (Wu et al., 2019b). However, due to the per-step importance sampling corrections on episodic returns (Precup et al., 2000), off-policy batch RL methods is challenging. In this work, we instead consider marginalized importance sampling corrections and correct for the stationary stateaction distributions (Nachum et al., 2019a; Uehara & Jiang, 2019; Zhang et al., 2020). Additionally, under the framework of Constrained MDPs (Altman & Asingleutility, 1999), risk-sensitive and constrained actor-critic algorithms have been proposed previously (Chow et al., 2017; Chow & Ghavamzadeh, 2014; Achiam et al., 2017). However, these works come with their own demerits, as they mostly require minimizing the risk (ie, variance) term, where finding the gradient of the variance term often leads a double sampling issue (Baird, 1995). We avoid this by instead using Fenchel duality (Boyd & Vandenberghe, 2004), inspired from recent works (Nachum & Dai, 2020; Dai et al., 2018) and cast risk constrained actor-critic as a max-min optimization problem. Our work is closely related to (Bisi et al., 2019), which also consider per-step variance of returns, w.r.t state occupancy measures in the on-policy setting, while we instead consider the batch off-policy optimization setting with per-step rewards w.r.t stationary distribution corrections.\nConstrained optimization has previously been studied in in reinforcement learning for batch policy learning (Le et al., 2019), and optimization (Achiam et al., 2017), mostly under the framework of constrained MDPs (Altman & Asingleutility, 1999). 
In such frameworks, the cumulative return objective is augmented with a set of constraints, for safe exploration (Garcı́a et al., 2015; Perkins & Barto, 2003; Ding et al., 2020) or to reduce risk measures (Chow et al., 2017; A. & Fu, 2018; Castro et al., 2012). Batch learning algorithms (Lange et al., 2012) have been considered previously for counterfactual risk minimization and generalization (Swaminathan & Joachims, 2015a;b) and policy evaluation (Thomas et al., 2015a; Li et al., 2015), although little has been done for constrained offline policy based optimization. This raises the question of how we can learn policies in RL from fixed offline data, similar to supervised or unsupervised learning." }, { "heading": "A.2 WHAT MAKES OFFLINE OFF-POLICY OPTIMIZATION DIFFICULT?", "text": "Offline RL optimization algorithms often suffer from distribution mismatch issues, since the underlying data distribution in the batch data may be quite different from the induced distribution under target policies. Recent works (Fujimoto et al., 2019; Kumar et al., 2019; Agarwal et al., 2019; Kumar et al., 2020) have tried to address this by avoiding overestimation of Q-values, which leads to the extrapolation error when bootstrapping value function estimates. This leads to offline RL agents generalizing poorly in unseen regions of the dataset. Additionally, due to the distribution mismatch, value function estimates can also have large variance, due to which existing online off-policy algorithms (Haarnoja et al., 2018; Lillicrap et al., 2016; Fujimoto et al., 2018) may fail without online interactions with the environment. In this work, we address the latter problem: minimizing the variance of value function estimates through variance related risk constraints." }, { "heading": "B APPENDIX : PER-STEP VERSUS EPISODIC VARIANCE OF RETURNS", "text": "Following from (Castro et al., 2012; A. & Ghavamzadeh, 2016), let us denote the returns with importance sampling corrections in the off-policy learning setting as:\nD^π(s, a) = ∑_{t=0}^T γ^t r(s_t, a_t) (∏_{t=1}^T π(a_t | s_t)/µ(a_t | s_t)) | s_0 = s, a_0 = a, τ ∼ µ (17)\nFrom this definition in equation 17, the action-value function, with off-policy trajectory-wise importance correction, is Q^π(s, a) = E_{(s,a)∼d_µ(s,a)}[D^π(s, a)], and similarly the value function can be defined as V^π(s) = E_{s∼d_µ(s)}[D^π(s)]. For the trajectory-wise importance corrections, we can define the variance of the returns, similar to (A. & Fu, 2018), as:\nV_P(π) = E_{(s,a)∼d_µ(s,a)}[D^π(s, a)^2] − E_{(s,a)∼d_µ(s,a)}[D^π(s, a)]^2 (18)\nwhere, note that as in (Sobel, 1982), equation 18 also follows a Bellman-like equation, although due to the lack of monotonicity as required for dynamic programming (DP), such measures cannot be directly optimized by standard DP algorithms (A. & Fu, 2018).\nIn contrast, if we consider the variance of returns with stationary distribution corrections (Nachum et al., 2019a; Liu et al., 2018), rather than the product of importance sampling ratios, the variance term involves weighting the rewards with the distribution ratio ω_{π/µ}. Typically, the distribution ratio is approximated using a separate function class (Uehara & Jiang, 2019), such that the per-step corrected return can be written as:\nW^π(s, a) = ω_{π/D}(s, a)·r(s, a) | s = s, a ∼ π(·|s), (s, a) ∼ d_D(s, a) (19)\nwhere we denote D as the data distribution in the fixed dataset, collected by either a known or unknown behaviour policy.
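For concreteness, a short sketch contrasting the two estimators of equations 17 and 19 on toy logged data; all inputs are illustrative, and the variance comparison between the two quantities is developed immediately below.

```python
import numpy as np

def trajectory_is_return(rewards, ratios, gamma=0.99):
    """Eq. 17: D^pi = (sum_t gamma^t r_t) * prod_t pi(a_t|s_t) / mu(a_t|s_t)."""
    discounts = gamma ** np.arange(len(rewards))
    return np.sum(discounts * rewards) * np.prod(ratios)

def per_step_corrected_rewards(rewards, omega):
    """Eq. 19: W^pi(s, a) = omega(s, a) * r(s, a), one term per visited pair."""
    return omega * rewards

rng = np.random.default_rng(0)
rewards = rng.normal(size=100)
ratios = rng.lognormal(sigma=0.2, size=100)   # per-step pi/mu ratios (toy)
omega = rng.lognormal(sigma=0.2, size=100)    # stationary ratios (toy)
print(trajectory_is_return(rewards, ratios))
print(per_step_corrected_rewards(rewards, omega)[:5])
```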
The variance of returns under occupancy measures is therefore given by:\nV_D(π) = E_{(s,a)∼d_D(s,a)}[W^π(s, a)^2] − E_{(s,a)∼d_D(s,a)}[W^π(s, a)]^2 (20)\nwhere, note that the variance expression in equation 20 depends on the square of the per-step rewards with distribution correction ratios. We denote this as the dual form of the variance of returns, in contrast to the primal form of the variance of expected returns (Sobel, 1982).\nNote that even though the variance term under episodic per-step importance sampling corrections in equation 18 is analogous to the variance with stationary distribution corrections in equation 20, following from (Bisi et al., 2019) and considering per-step corrections, we will show that the variance with distribution corrections indeed upper bounds the variance of importance sampling corrections. This is an important relationship, since constraining the policy improvement step under variance constraints with occupancy measures therefore allows us to obtain a lower bound to the offline optimization objective, similar to (Kumar et al., 2020)." }, { "heading": "B.1 PROOF OF LEMMA 1 : VARIANCE INEQUALITY", "text": "Following from (Bisi et al., 2019), we show that the variance of per-step rewards under occupancy measures, denoted by V_D(π), upper bounds the variance of episodic returns V_P(π):\nV_P(π) ≤ V_D(π)/(1 − γ)^2 (21)\nProof. The proof of Lemma 1, following from (Bisi et al., 2019), is as follows. Denoting the returns, as above, but for the on-policy case with trajectories under π, as D^π(s, a) = ∑_{t=0}^∞ γ^t r(s_t, a_t), and denoting the return objective as J(π) = E_{s_0∼ρ, a_t∼π(·|s_t), s′∼P}[D^π(s, a)], the variance of episodic returns can be written as:\nV_P(π) = E_{(s,a)∼d_π(s,a)}[(D^π(s, a) − J(π)/(1 − γ))^2] (22)\n= E_{(s,a)∼d_π(s,a)}[D^π(s, a)^2] + J(π)^2/(1 − γ)^2 − (2J(π)/(1 − γ)) E_{(s,a)∼d_π(s,a)}[D^π(s, a)] (23)\n= E_{(s,a)∼d_π(s,a)}[D^π(s, a)^2] − J(π)^2/(1 − γ)^2 (24)\nSimilarly, denoting returns under occupancy measures as W^π(s, a) = d_π(s, a) r(s, a), and the return under occupancy measures, equivalently written as J(π) = E_{(s,a)∼d_π(s,a)}[r(s, a)] based on the primal and dual forms of the objective (Uehara & Jiang, 2019; Nachum & Dai, 2020), we can equivalently write the variance as:\nV_D(π) = E_{(s,a)∼d_π(s,a)}[(r(s, a) − J(π))^2] (25)\n= E_{(s,a)∼d_π(s,a)}[r(s, a)^2] + J(π)^2 − 2J(π) E_{(s,a)∼d_π(s,a)}[r(s, a)] (26)\n= E_{(s,a)∼d_π(s,a)}[r(s, a)^2] − J(π)^2 (27)\nFollowing from equations 22 and 25, we therefore have the following inequality:\n(1 − γ)^2 E_{s_0∼ρ, a∼π}[D^π(s, a)^2] ≤ (1 − γ)^2 E_{s_0∼ρ, a∼π}[(∑_{t=0}^∞ γ^t)(∑_{t=0}^∞ γ^t r(s_t, a_t)^2)] (28)\n= (1 − γ) E_{s_0∼ρ, a∼π}[∑_{t=0}^∞ γ^t r(s_t, a_t)^2] (29)\n= E_{(s,a)∼d_π(s,a)}[r(s, a)^2] (30)\nwhere the first line follows from the Cauchy-Schwarz inequality. This concludes the proof.\nWe can further extend lemma 1 for off-policy returns under stationary distribution corrections (i.e., marginalized importance sampling), compared to importance sampling. Recall that we denote the variance under stationary distribution corrections as:\nV_D(π) = E_{(s,a)∼d_D(s,a)}[(ω_{π/D}(s, a)·r(s, a) − J(π))^2] (31)\n= E_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)^2·r(s, a)^2] − J(π)^2 (32)\nwhere J(π) = E_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)·r(s, a)]. We denote the episodic returns with importance sampling corrections as D^π = ∑_{t=0}^T γ^t r_t ρ_{0:t}.
The variance, as denoted earlier, is given by:\nV_P(π) = E_{(s,a)∼d_π(s,a)}[D^π(s, a)^2] − J(π)^2/(1 − γ)^2 (33)\nWe therefore have the following inequality:\n(1 − γ)^2 E_{s_0∼ρ, a∼π}[D^π(s, a)^2] ≤ (1 − γ)^2 E_{s_0∼ρ, a∼π}[(∑_{t=0}^T γ^t)(∑_{t=0}^T γ^t r(s_t, a_t)^2)(∏_{t=0}^T π(a_t|s_t)/µ_D(a_t|s_t))^2] = (1 − γ) E_{s_0∼ρ, a∼π}[∑_{t=0}^∞ γ^t r(s_t, a_t)^2 (∏_{t=0}^T π(a_t|s_t)/µ_D(a_t|s_t))^2] (34)\n= E_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)^2·r(s, a)^2] (35)\nwhich shows that lemma 1 also holds for off-policy returns with stationary distribution corrections." }, { "heading": "B.2 DOUBLE SAMPLING FOR COMPUTING GRADIENTS OF VARIANCE", "text": "The gradient of the variance term often leads to the double sampling issue, thereby making it impractical to use. This issue has also been pointed out by several other works (A. & Ghavamzadeh, 2016; Castro et al., 2012; Chow et al., 2017), since the variance involves the square of the objective function itself. Recall that we have:\nV_D(θ) = E_{(s,a)∼d_D}[ω_{π/D}(s, a)·r(s, a)^2] − {E_{(s,a)∼d_D}[ω_{π/D}(s, a)·r(s, a)]}^2 (36)\nThe gradient of the variance term is therefore:\n∇θ V_D(θ) = ∇θ E_{(s,a)∼d_D}[ω_{π/D}(s, a)·r(s, a)^2] − 2·{E_{(s,a)∼d_D}[ω_{π/D}(s, a)·r(s, a)]}·∇θ {E_{(s,a)∼d_D}[ω_{π/D}(s, a)·r(s, a)]} (37)\nwhere equation 37 requires multiple samples to compute the expectations in the second term. To see why this is true, let us denote J(θ) = E_{d_D(s,a)}[IS(ω, πθ)], where we write IS(ω, πθ) = ω_{π/D}(s, a)·r(s, a) as the corrected per-step return in short form. The variance of the returns with the stationary state-action distribution corrections can therefore be written as:\nV_D(θ) = E_{d_D(s,a)}[IS(ω, πθ)^2] − E_{d_D(s,a)}[IS(ω, πθ)]^2 (38)\nwhere we label the first term (a) and the second term (b). We derive the gradient of each of the terms (a) and (b) in equation 38 below. First, we find the gradient of the second moment term w.r.t. θ:\n∇θ E_{d_D(s,a)}[IS(ω, πθ)^2] = ∇θ ∑_{s,a} d_D(s, a) IS(ω, πθ)^2 = ∑_{s,a} d_D(s, a) ∇θ IS(ω, πθ)^2\n= ∑_{s,a} d_D(s, a)·2·IS(ω, πθ)·IS(ω, πθ)·∇θ log πθ(a | s)\n= 2·∑_{s,a} d_D(s, a) IS(ω, πθ)^2 ∇θ log πθ(a | s)\n= 2·E_{d_D(s,a)}[IS(ω, πθ)^2·∇θ log πθ(a | s)] (39)\nEquation 39 interestingly shows that the gradient of the second moment w.r.t. πθ has a form similar to the policy gradient term, except that the critic estimate in this case is given by the importance corrected return, since IS(ω, πθ) = ω_{π/D}(s, a)·r(s, a). We further find the gradient of term (b) from equation 38 w.r.t. θ:\n∇θ E_{d_D(s,a)}[IS(ω, πθ)]^2 = ∇θ J(θ)^2 = 2·J(θ)·E_{d_D(s,a)}[ω_{π/D}·{∇θ log πθ(a | s)·Q^π(s, a)}] (40)\nOverall, the expression for the gradient of the variance term is therefore:\n∇θ V_D(θ) = 2·E_{d_D(s,a)}[IS(ω, πθ)^2·∇θ log πθ(a | s)] − 2·J(θ)·E_{d_D(s,a)}[ω_{π/D}·{∇θ log πθ(a | s)·Q^π(s, a)}] (41)\nThe variance gradient in equation 41 is difficult to estimate in practice, since it involves both the gradient of the objective and the objective J(θ) itself. This is known to have the double sampling issue (Baird, 1995), which requires separate independent rollouts. Previously, (Castro et al., 2012) tackled the gradient of the variance term using simultaneous perturbation stochastic approximation (SPSA) (Spall, 1992), where we can keep running estimates of both the return and the variance term, and use a two time scale algorithm for computing the gradient of the variance regularizer with per-step importance sampling corrections."
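The bias that motivates B.2 is easy to see numerically: the plug-in single-batch estimate of (E[X])^2, which appears inside the variance gradient, is biased upward by the variance of the sample mean, while a product over two independent batches is unbiased. A toy illustration with assumed Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 1.0
n, trials = 10, 20000
single, double = [], []
for _ in range(trials):
    x = rng.normal(true_mean, 1.0, size=n)
    y = rng.normal(true_mean, 1.0, size=n)   # independent second batch
    single.append(x.mean() ** 2)             # plug-in estimate of (E[X])^2
    double.append(x.mean() * y.mean())       # double-sampling estimate
print(np.mean(single))   # ~ 1.1 : biased upward by Var(sample mean) = 1/n
print(np.mean(double))   # ~ 1.0 : unbiased, but needs two independent batches
```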
}, { "heading": "B.3 ALTERNATIVE DERIVATION : VARIANCE REGULARIZATION VIA FENCHEL DUALITY", "text": "In the derivation of our algorithm, we applied the Fenchel duality trick to the second term of the variance expression 25. An alternative way to derive the proposed algorithm would be to see what happens if we apply the Fenchel duality trick to both terms of the variance expression. This might be useful since equation 41 requires evaluating both the gradient terms and the actual objective J(θ), due to the analytical expression of the form ∇θJ(θ) · J(θ), hence suffering from a double sampling issue. In general, the Fenchel duality is given by :\nx2 = max y (2xy − y2) (42) and applying Fenchel duality to both the terms, since they both involve squared terms, we get :\nEdD(s,a) [ IS(ω, πθ)2 ] ≡ EdD(s,a) [ max y { 2 · IS(ω, πθ) · y(s, a)− y(s, a)2 }] = 2 ·max\ny\n{ EdD(s,a) [ IS(ω, πθ) · y(s, a) ] − EdD(s,a) [ y(s, a)2 ]} (43) Similarly, applying Fenchel duality to the second (b) term we have :\nEdD(s,a) [ IS(ω, πθ) ]2 = max\nν\n{ 2 · EdD(s,a) [ IS(ω, πθ) · ν(s, a) ] − ν2 } (44)\nOverall, we therefore have the variance term, after applying Fenchel duality as follows, leading to an overall objective in the form maxymaxν VD(θ), which we can use as our variance regularizer\nVD(θ) = 2 ·max y\n{ EdD(s,a) [ IS(ω, πθ) · y(s, a) ] − EdD(s,a) [ y(s, a)2 ]}\n−max ν\n{ 2 · EdD(s,a) [ IS(ω, πθ) · ν(s, a) ] − ν2 } (45)\nUsing the variance of stationary distribution correction returns as a regularizer, we can find the gradient of the variance term w.r.t θ as follows, where the gradient terms dependent on the dual variables y and ν are 0.\n∇θVD(θ) = 2 · ∇θEdD(s,a) [ IS(ω, πθ) · y(s, a) ] − 0− 2 · ∇θEdD(s,a) [ IS(ω, πθ) · ν(s, a) ] + 0\n= 2·EdD(s,a) [ IS(ω, πθ)·y(s, a)·∇θ log πθ(a | s) ] −2·EdD(s,a) [ IS(ω, πθ)·ν(s, a)·∇θ log πθ(a | s) ]\n= 2 · EdD(s,a) [ IS(ω, πθ) · ∇θ log πθ(a | s) · { y(s, a)− ν(s, a) }] (46)\nNote that from equation 46, the two terms in the gradient is almost equivalent, and the difference comes only from the difference between the two dual variables y(s, a) and ν(s, a). Note that our variance term also requires separately maximizing the dual variables, both of which has the following closed form updates :\n∇νVD(θ) = −2 · ∇νEdD(s,a) [ IS(ω, πθ) · ν(s, a) ] +∇νν2 = 0 (47)\nSolving which exactly, leads to the closed form solution ν(s, a) = EdD(s,a) [ IS(ω, πθ) ] . Similarly,\nwe can also solve exactly using a closed form solution for the dual variables y, such that : ∇yVD(θ) = 2 · ∇yEdD(s,a) [ IS(ω, πθ) · y(s, a) ] − 2 · ∇yEdD(s,a) [ y(s, a)2 ] = 0 (48)\nSolving which exactly also leads to the closed form solution, such that y(s, a) = 12 · IS(ω, πθ) = 1 2 · dπ(s,a) dµ(s,a)\n· r(s, a). Note that the exact solutions for the two dual variables are similar to each other, where ν(s, a) is the expectation of the returns with stationary distribution corrections, whereas y(s, a) is only the return from a single rollout." }, { "heading": "C APPENDIX : MONOTONIC PERFORMANCE IMPROVEMENT GUARANTEES", "text": "UNDER VARIANCE REGULARIZATION\nWe provide theoretical analysis and performance improvements bounds for our proposed variance constrained policy optimization approach. Following from (Kakade & Langford, 2002; Schulman et al., 2015; Achiam et al., 2017), we extend existing performance improvement guarantees based on the stationary state-action distributions instead of only considering the divergence between the current policy and old policy. 
We show that existing conservative updates in algorithms (Schulman et al., 2015) can be considered for both the state visitation distributions and the action distributions, as similarly pointed out by (Achiam et al., 2017). We can then adapt this for the variance constraints instead of the divergence constraints. According to the performance difference lemma (Kakade & Langford, 2002), we have, for all policies π and π′:\nJ(π′) − J(π) = E_{s∼d_{π′}, a∼π′}[A^π(s, a)] (49)\nwhich implies that when we maximize 49, it will lead to an improved policy π′ with policy improvement guarantees over the previous policy π. We can write the advantage function with variance augmented value functions as:\nA^π_λ = Q^π_λ(s, a) − V^π_λ(s) = E_{s′∼P}[r(s, a) − λ(r(s, a) − J(π))^2 + γ V^π_λ(s′) − V^π_λ(s)]\nHowever, equation 49 is often difficult to maximize directly, since it additionally requires samples from π′ and d_{π′}, and often a surrogate objective is instead proposed by (Kakade & Langford, 2002). Following (Schulman et al., 2015), we can therefore obtain a bound for the performance difference based on the variance regularized advantage function:\nJ(π′) ≥ J(π) + E_{s∼d_π(s), a∼π′(a|s)}[A^π_λ(s, a)] (50)\nwhere we have the augmented rewards for the advantage function, and, by following Fenchel duality for the variance, can avoid policy dependent reward functions. Otherwise, we have the augmented rewards for value functions as r̃(s, a) = r(s, a) − λ(r(s, a) − J(π))^2. This, however, suggests that the performance difference does not hold without proper assumptions (Bisi et al., 2019). We can therefore obtain a monotonic improvement guarantee by considering the KL divergence between policies:\nL_π(π′) = J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] (51)\nwhich ignores the changes in the state distribution d_{π′} due to the improved policy π′. (Schulman et al., 2015) optimizes the surrogate objective L_π(π′) while ensuring that the new policy π′ stays close to the current policy π, by imposing a KL constraint (E_{s∼d_π}[D_KL(π′(·|s) || π(·|s))] ≤ δ). The performance difference bound, based on the constraint between π and π′ as in TRPO (Schulman et al., 2015), is given by:\nLemma 4. The performance difference lemma in (Schulman et al., 2015), where α = D^max_TV = max_s D_TV(π, π′):\nJ(π′) ≥ L_π(π′) − (4εγ/(1 − γ)^2)(D^max_TV(π′ || π))^2 (52)\nwhere ε = max_{s,a} |A^π(s, a)|.\nThe performance improvement bound in (Schulman et al., 2015) can further be written in terms of the KL divergence, by following the relationship between total variation (TV) and KL, which follows from Pinsker’s inequality, D_TV(p || q)^2 ≤ D_KL(p || q), to get the following improvement bound:\nJ(π′) ≥ L_π(π′) − (4εγ/(1 − γ)^2) D_KL(π′ || π) (53)\nWe have a performance difference bound in terms of the state distribution shift between d_{π′} and d_π. This justifies that L_π(π′) is a sensible lower bound to J(π′) as long as there is a bounded total variation distance between d_{π′} and d_π, which ensures that the policies π′ and π stay close to each other. Finally, following from (Achiam et al., 2017), we obtain the following lower bound, which satisfies policy improvement guarantees:\nJ(π′) ≥ L_π(π′) − (2γε^π/(1 − γ)) E_{s∼d_π}[D_TV(π′(·|s) || π(·|s))] (54)\nEquations 53 and 54 assume that there is no state distribution shift between π′ and π. However, if we explicitly assume state distribution changes, d_{π′} and d_π due to π′ and π respectively, then we have the following performance improvement bound:\nLemma 5.
For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions d_{π′} and d_π:\nJ(π′) ≥ L_π(π′) − ε^π D_TV(d_{π′} || d_π) (55)\nwhere ε^π = max_s |E_{a∼π′(·|s)}[A^π(s, a)]|,\nwhich can be further written in terms of the surrogate objective L_π(π′) as:\nJ(π′) ≥ J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − ε^π D_TV(d_{π′} || d_π) = L_π(π′) − ε^π D_TV(d_{π′} || d_π) (56)" }, { "heading": "C.1 PROOF OF THEOREM 1 : POLICY IMPROVEMENT BOUND WITH VARIANCE REGULARIZATION", "text": "Proof. We provide the derivation for theorem 1. Recall that for all policies π′ and π, and corresponding state visitation distributions d_{π′} and d_π, we can obtain the performance improvement bound in terms of the variance of state-action distribution corrections:\nJ(π′) − J(π) ≥ E_{s∼d_π, a∼π′}[A^π(s, a)] − Var_{s∼d_π, a∼π}[f(s, a)] (57)\nwhere f(s, a) is the dual function class for the divergence between d_{π′}(s, a) and d_π(s, a). Following from Pinsker’s inequality, the performance difference lemma written in terms of the state visitation distributions can be given by:\nJ(π′) ≥ L_π(π′) − ε^π D_TV(d_{π′} || d_π) ≥ J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − ε^π D_TV(d_{π′} || d_π) ≥ J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − ε^π √(D_KL(d_{π′} || d_π)) (58)\nFollowing from (Schulman et al., 2015), we can alternatively write this as follows, where we further apply the variational form of TV:\nJ(π′) ≥ J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − C·E_{s∼d_π}[D_TV(d_{π′} || d_π)^2]\n= J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − C·E_{s∼d_π}[(max_f {E_{s∼d_{π′}, a∼π}[f(s, a)] − E_{s∼d_π, a∼π}[f(s, a)]})^2]\n≥ J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − C·max_f E_{s∼d_π}[(E_{s∼d_{π′}, a∼π}[f(s, a)] − E_{s∼d_π, a∼π}[f(s, a)])^2]\n= J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − C·max_f {(E_{s∼d_π, a∼π}[f(s, a)] − E_{s∼d_π, a∼π}[E_{s∼d_π, a∼π}[f(s, a)]])^2}\n= J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)] − C·max_f Var_{s∼d_π, a∼π}[f(s, a)] (59)\nTherefore, the policy improvement bound depends on maximizing the variational representation f(s, a) of the f-divergence to guarantee improvement from J(π) to J(π′). This leads to the stated result in theorem 1." }, { "heading": "D APPENDIX : LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION", "text": "" }, { "heading": "D.1 PROOF OF LEMMA 3", "text": "Recalling lemma 3, the proof follows from (Metelli et al., 2018). We extend this to marginalized importance weighting, and include it here for completeness. Note that, compared to importance weighting, which leads to an unbiased estimator as in (Metelli et al., 2018), correcting for the state-action occupancy measures leads to a biased estimator, due to the approximation ω̂_{π/D}. However, for our analysis we only require a lower bound objective, and therefore we do not provide a bias-variance analysis as in off-policy evaluation.\nVar_{(s,a)∼d_D(s,a)}[ω̂_{π/D}] ≤ (1/N) ||r||_∞^2 F_2(d_π || d_D) (60)\nProof. Assuming that state-action samples are drawn i.i.d. from the dataset D, we can write:\nVar_{(s,a)∼d_D(s,a)}[ω̂_{π/D}(s, a)] ≤ (1/N) Var_{(s_1,a_1)∼d_D(s,a)}[(d_π(s_1, a_1)/d_D(s_1, a_1))·r(s_1, a_1)]\n≤ (1/N) E_{(s_1,a_1)∼d_D(s,a)}[((d_π(s_1, a_1)/d_D(s_1, a_1))·r(s_1, a_1))^2]\n≤ (1/N) ||r||_∞^2 E_{(s_1,a_1)∼d_D(s,a)}[(d_π(s_1, a_1)/d_D(s_1, a_1))^2] = (1/N) ||r||_∞^2 F_2(d_π || d_D) (61)" }, { "heading": "D.2 PROOF OF THEOREM 2:", "text": "First, let us recall the stated theorem 2. By constraining the off-policy optimization problem with variance constraints, we have the following lower bound to the optimization objective with stationary state-action distribution corrections:\nJ(π) ≥ E_{(s,a)∼d_D(s,a)}[(d_π(s, a)/d_D(s, a))·r(s, a)] − √(((1 − δ)/δ) Var_{(s,a)∼d_D(s,a)}[(d_π(s, a)/d_D(s, a))·r(s, a)]) (62)\nProof.
The proof for the lower bound objective can be obtained as follows. We first define a relationship between the variance and the α-divergence with α = 2, as also similarly noted in (Metelli et al., 2018). Given batch samples D, and denoting the state-action distribution correction by ω_{π/D}(s, a), we can write from lemma 3:\nVar_{(s,a)∼d_D(s,a)}[ω̂_{π/D}] ≤ (1/N) ||r||_∞^2 F_2(d_π || d_D) (63)\nwhere the per-step estimator with state-action distribution corrections is given by ω_{π/D}(s, a)·r(s, a). Here, the reward function r(s, a) is a bounded function, and for any N > 0 the variance of the per-step reward estimator with distribution corrections can be upper bounded by the Renyi divergence (α = 2). Finally, following from (Metelli et al., 2018) and using Cantelli’s inequality, we have, with probability at least 1 − δ, where 0 < δ < 1:\nPr(ω_{π/D} − J(π) ≥ λ) ≤ 1/(1 + λ^2/Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)·r(s, a)]) (64)\nand by setting δ = 1/(1 + λ^2/Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)·r(s, a)]), we get that, with probability at least 1 − δ:\nJ(π) = E_{(s,a)∼d_π(s,a)}[r(s, a)] ≥ E_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)·r(s, a)] − √(((1 − δ)/δ) Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)·r(s, a)]) (65)\nwhere we can further replace the variance term with the α = 2 Renyi divergence to conclude the proof of the above theorem. We can further write the lower bound for the α-Renyi divergence, following the relation between variance and Renyi divergence for α = 2, as:\nJ(π) = E_{(s,a)∼d_π(s,a)}[r(s, a)] ≥ E_{(s,a)∼d_D(s,a)}[(d_π(s, a)/d_D(s, a))·r(s, a)] − ||r||_∞ √((1 − δ) d_2(d_π || d_D)/(δN))\nThis hints at the similarity between our proposed variance regularized objective and that of other related works, including AlgaeDICE (Nachum et al., 2019b), which uses an f-divergence D_f(d_π || d_D) between stationary distributions." }, { "heading": "E APPENDIX : ADDITIONAL EXPERIMENTAL RESULTS", "text": "" }, { "heading": "E.1 EXPERIMENTAL ABLATION STUDIES", "text": "In this section, we present additional results using state-action experience replay weightings on existing offline algorithms, and analyse the significance of our variance regularizer on likelihood corrected offline algorithms. Denoting ω(s, a) as the importance weighting of state-action occupancy measures based on samples in the experience replay buffer, we can modify existing offline algorithms to account for state-action distribution ratios.\nThe ablation experimental results using the Hopper control benchmark are summarized in figure 2. The same base BCQ algorithm is used with a modified objective for BCQ (Fujimoto et al., 2019), where the results for applying off-policy importance weights are denoted as “BCQ+I.W.”. We employ the same technique to obtain ω(s, a) for both the baseline and for adding variance regularization as described. The results suggest that adding the proposed per-step variance regularization scheme significantly outperforms simply importance weighting the expected rewards for off-policy policy learning." }, { "heading": "E.2 EXPERIMENTAL RESULTS IN CORRUPTED NOISE SETTINGS", "text": "We additionally consider a setting where the batch data is collected from a noisy environment, i.e., a setting with corrupted rewards, r → r + ε, where ε ∼ N(0, 1). Experimental results are presented in figures 1 and 3. From our results, we note that using OVR on top of BCQ (Fujimoto et al., 2019), we can achieve significantly better performance with variance minimization, especially when the agent is given sub-optimal demonstrations.
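Returning to the concentration step in the proof of Theorem 2 above, here is a quick Monte-Carlo check of the one-sided Cantelli inequality P(X − E[X] ≥ λ) ≤ σ^2/(σ^2 + λ^2) that underlies equation 64; the exponential distribution below is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)   # mean 1, variance 1
for lam in [0.5, 1.0, 2.0]:
    empirical = np.mean(x - 1.0 >= lam)          # one-sided tail probability
    cantelli = 1.0 / (1.0 + lam ** 2)            # Cantelli bound with sigma^2 = 1
    print(f"lam={lam}: P = {empirical:.4f} <= bound = {cantelli:.4f}")
```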
We denote these as medium (when the dataset was collected by a half trained SAC policy) or a mixed behaviour logging setting (when the data logging policy is a mixture of random and SAC policies). This is also useful for practical scalability, since data collection from an expert policy is often expensive. We add noise to the dataset to examine the significance of OVR under a noisy, corrupted dataset setting." }, { "heading": "E.3 EXPERIMENTAL RESULTS ON SAFETY BENCHMARK TASKS", "text": "Safety Benchmarks for Variance as Risk: We additionally consider safety benchmarks for control tasks, to analyse the significance of the variance regularizer as a risk constraint in offline policy optimization algorithms. Our results are summarized in table 3." }, { "heading": "E.4 DISCUSSIONS ON OFFLINE OFF-POLICY OPTIMIZATION WITH STATE-ACTION DISTRIBUTION RATIOS", "text": "In this section, we include several alternatives by which we can compute the stationary state-action distribution ratio, borrowing from recent works (Uehara & Jiang, 2019; Nachum et al., 2019a).\nOff-Policy Optimization with Minimax Weight Learning (MWL): We discuss other possible ways of optimizing the batch off-policy optimization objective while also estimating the state-action density ratio. Following from (Uehara & Jiang, 2019), we further modify the off-policy optimization part of the objective J(θ) in L(θ, λ) as a min-max objective, consisting of weight learning ω_{π/D}\nTable 3: Results on the Safety-Gym environments (Ray et al.). We report the mean and S.D. of episodic returns and costs over five random seeds and 1 million timesteps. The goal of the agent is to maximize the episodic return, while minimizing the cost incurred.\nPointGoal1: BCQ reward 43.1 ± 0.3, cost 137.0 ± 3.6; BCQ+OVR reward 44.2 ± 0.3, cost 127.1 ± 4.0. PointGoal2: BCQ reward 32.7 ± 0.7, cost 468.2 ± 9.1; BCQ+OVR reward 33.2 ± 0.7, cost 453.9 ± 7.3.\nPointButton1: BCQ reward 30.9 ± 2.2, cost 330.8 ± 8.3; BCQ+OVR reward 30.7 ± 2.3, cost 321.5 ± 6.8. PointButton2: BCQ reward 18.1 ± 1.1, cost 321.6 ± 4.1; BCQ+OVR reward 19.6 ± 1.0, cost 305.7 ± 6.1.\nand optimizing the resulting objective J(θ, ω). We further propose an overall policy optimization objective, where a single objective can be used for estimating the distribution ratio, evaluating the critic and optimizing the resulting objective. We can write the off-policy optimization objective with its equivalent starting state formulation, such that we have:\nE_{d_D(s,a)}[ω_{πθ/D}(s, a)·r(s, a)] = (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[Q^π(s_0, a_0)] (66)\nFurthermore, following the Bellman equation, we expect to have E[r(s, a)] = E[Q^π(s, a) − γ Q^π(s′, a′)]:\nE_{d_D(s,a)}[ω_{πθ/D}(s, a)·{Q^π(s, a) − γ Q^π(s′, a′)}] = (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[Q^π(s_0, a_0)] (67)\nWe can therefore write the overall objective as:\nJ(ω, πθ, Q) = E_{d_D(s,a)}[ω_{πθ/D}(s, a)·{Q^π(s, a) − γ Q^π(s′, a′)}] − (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[Q^π(s_0, a_0)] (68)\nThis is similar to the MWL objective in (Uehara & Jiang, 2019), except that we instead consider the bias reduced estimator, such that accurate estimates of Q or ω will lead to reduced bias in the value function estimation. Furthermore, note that in the first part of the objective J(πθ, ω, Q)^2, we can further use entropy regularization for smoothing the objective, since instead of Q^π(s′, a′) in the target, we can replace it with a log-sum-exp and consider the conjugate of the entropy regularization term, similar to SBEED (Dai et al., 2018).
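A minimal sketch of evaluating the empirical MWL-style residual objective of equation 68 from batch tuples; omega, the Q-values and the start-state values are tabular stand-ins, so this is an assumption-laden illustration rather than the estimator of Uehara & Jiang (2019).

```python
import numpy as np

def mwl_objective(omega, q, q_next, q_init, gamma=0.99):
    """Empirical version of eq. 68:
    E_D[omega(s,a) * (Q(s,a) - gamma * Q(s',a'))] - (1 - gamma) * E_beta[Q(s0,a0)].

    omega, q, q_next are arrays over batch transitions; q_init over start states.
    """
    residual = omega * (q - gamma * q_next)
    return residual.mean() - (1 - gamma) * q_init.mean()

rng = np.random.default_rng(0)
omega = rng.lognormal(sigma=0.2, size=256)      # illustrative ratio estimates
q, q_next = rng.normal(size=256), rng.normal(size=256)
q_init = rng.normal(size=64)
print(mwl_objective(omega, q, q_next, q_init))
```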
This would therefore give the first part of the objective as an overall min-max optimization problem:\nJ(ω, πθ) = E_{d_µ(s,a)}[ω_{πθ/D}(s, a)·{r(s, a) + γ Q^π(s′, a′) + τ log π(a | s) − Q^π(s, a)}] + (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[Q^π(s_0, a_0)] (69)\nsuch that, from our overall constrained optimization objective for maximizing θ, we have turned it into a min-max objective for estimating the density ratios, estimating the value function and maximizing the policies:\nω*_{π/D}, Q*, π* = argmin_{ω,Q} argmax_π J(πθ, ω, Q)^2 (70)\nwhere the fixed point solution for the density ratio can be solved by minimizing the objective:\nω*_{π/D} = argmin_ω L(ω_{π/D}, Q)^2 = E_{d_µ(s,a)}[{γ ω(s, a)·Q^π(s′, a′) − ω(s, a) Q^π(s, a)} + (1 − γ) E_{β(s,a)}[Q^π(s_0, a_0)]] (71)\nDualDICE: In contrast to MWL (Uehara & Jiang, 2019), DualDICE (Nachum et al., 2019a) introduces dual variables through the change of variables trick, and minimizes the Bellman residual of the dual variables ν(s, a) to estimate the ratio, such that:\nν*(s, a) − B^π ν*(s, a) = ω_{π/D}(s, a) (72)\nthe solution to which can be achieved by optimizing the following objective:\nmin_ν L(ν) = (1/2) E_{d_D}[(ν − B^π ν)(s, a)^2] − (1 − γ) E_{s_0,a_0∼β(s,a)}[ν(s_0, a_0)] (73)\nMinimizing Divergence for Density Ratio Estimation: The distribution ratio can be estimated using an objective similar to GANs (Goodfellow et al., 2014; Ho & Ermon, 2016), as also similarly proposed in (Kostrikov et al., 2019):\nmax_h G(h) = E_{(s,a)∼d_D}[log h(s, a)] + E_{(s,a)∼d_π}[log(1 − h(s, a))] (74)\nwhere h is the discriminator class, discriminating between samples from d_D and d_π. The optimal discriminator satisfies:\nlog h*(s, a) − log(1 − h*(s, a)) = log(d_D(s, a)/d_π(s, a)) (75)\nThe optimal solution of the discriminator is therefore equivalent to minimizing the divergence between d_π and d_D, since the KL divergence is given by:\n−D_KL(d_π || d_D) = E_{(s,a)∼d_π}[log(d_D(s, a)/d_π(s, a))] (76)\nAdditionally, using the Donsker-Varadhan representation, we can further write the KL divergence term as:\n−D_KL(d_π || d_D) = min_x log E_{(s,a)∼d_D}[exp(x(s, a))] − E_{(s,a)∼d_π}[x(s, a)] (77)\nsuch that now, instead of the discriminator class h, we learn the function class x, the optimal solution to which is equivalent to the distribution ratio plus a constant:\nx*(s, a) = log(d_π(s, a)/d_D(s, a)) (78)\nHowever, note that both the GAN-like objective in equation 74 and the DV representation of the KL divergence in equation 77 require access to samples from both d_π and d_D. In our problem setting, however, we only have access to batch samples from d_D. To remove the dependency on having access to both sets of samples, we can use the change of variables trick, x(s, a) = ν(s, a) − B^π ν(s, a), to write the DV representation of the KL divergence as:\n−D_KL(d_π || d_D) = min_ν log E_{(s,a)∼d_D}[exp(ν(s, a) − B^π ν(s, a))] − E_{(s,a)∼d_π}[ν(s, a) − B^π ν(s, a)] (79)\nwhere the second expectation can be written as an expectation over initial states, following from DualDICE, such that we have:\n−D_KL(d_π || d_D) = min_ν log E_{(s,a)∼d_D}[exp(ν(s, a) − B^π ν(s, a))] − (1 − γ) E_{(s,a)∼β_0(s,a)}[ν(s_0, a_0)] (80)\nby minimizing the above objective w.r.t. ν, which requires only samples from the fixed batch data d_D and the starting state distribution. The solution to the optimal density ratio is therefore given by:\nx*(s, a) = ν*(s, a) − B^π ν*(s, a) = log(d_π(s, a)/d_D(s, a)) = log ω*(s, a) (81)\nEmpirical Likelihood Ratio: We can follow Sinha et al.
(2020) to compute the state-action likelihood ratio, where they use a binary classifier to classify samples between an on-policy and an off-policy distribution. The proposed classifier, φ, is trained on the following objective, and takes as input the state-action tuples (s, a) to return a probability score that the state-action distribution is from the target policy. The objective for φ can be formulated as:\nL_cls = max_φ −E_{s,a∼D}[log(φ(s, a))] + E_{s∼D}[log(φ(s, π(s)))] (82)\nwhere (s, a) ∼ D are samples from the behaviour policy, and (s, π(s)) are samples from the target policy. The density ratio estimates for a given (s, a) ∼ D are simply ω(s, a) = σ(φ(s, a)), as in Sinha et al. (2020). We then use these ω(s, a) for density ratio corrections for the target policy in equation ??." } ]
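To make the classifier-based estimate of equation 82 concrete, the following sketch trains a logistic-regression stand-in for φ on behaviour pairs (s, a) versus target pairs (s, π(s)) and reads off ω(s, a) = σ(φ(s, a)) as in the text; the choice of scikit-learn's classifier and the synthetic features are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy feature vectors for (s, a) pairs from the behaviour data (label 0)
# and (s, pi(s)) pairs from the target policy (label 1); illustrative only.
behaviour = rng.normal(loc=0.0, size=(500, 4))
target = rng.normal(loc=0.5, size=(500, 4))
X = np.vstack([behaviour, target])
y = np.concatenate([np.zeros(500), np.ones(500)])

clf = LogisticRegression().fit(X, y)        # stand-in for the classifier phi of eq. 82
omega = clf.predict_proba(behaviour)[:, 1]  # omega(s,a) = sigma(phi(s,a)), as in the text
# (the standard density-ratio identity would instead use the odds p / (1 - p))
print(omega[:5])
```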
2,020
null
SP:4e77d43eb99688600f6c2115e1882e0b1e11a751
[ "This paper proposed a novel method which to quantify the reliability of DNN-driven hypotheses in a statistical hypothesis testing framework. Naive statistical testings are not appropriate for the DNN-driven hypotheses, where the hypotheses are selected by looking at the data(i.e. The selection bias exists). To address this problem, the authors developed a novel homotopy method under the Selective-Inference(SI) framework, which can derive the exact sampling distribution of the DNN-driven hypotheses. In this paper, the authors mainly focus on DNNs which consist of affine operations, max-operations, and piecewise-linear activation. As described by Lee et al. (2016), the main idea of SI is to make the inference conditional on the selection event. Specifically to the DNN-driven hypotheses, the authors proposed a novel method that consists of two steps, 1) Adding extra conditioning to make the problem traceable. 2) Combining multiple over-conditioning cases by homotopy method to solve the over-conditioning problem. The experimental results on both synthetic and real-world datasets illustrate the proposed method can successfully control the FP error rate." ]
In the past few years, various approaches have been developed to explain and interpret deep neural network (DNN) representations, but it has been pointed out that these representations are sometimes unstable and not reproducible. In this paper, we interpret these representations as hypotheses driven by DNN (called DNN-driven hypotheses) and propose a method to quantify the reliability of these hypotheses in a statistical hypothesis testing framework. To this end, we introduce the Selective Inference (SI) framework, which has received much attention in the past few years as a new statistical inference framework for data-driven hypotheses. The basic idea of SI is to make conditional inferences on the selected hypotheses under the condition that they are selected. In order to use the SI framework for DNN representations, we develop a new SI algorithm based on the homotopy method, which enables us to derive the exact (non-asymptotic) conditional sampling distribution of the DNN-driven hypotheses. In this paper, we demonstrate the proposed method in computer vision tasks as practical examples. We conduct experiments on both synthetic and real-world datasets, through which we offer evidence that our proposed method can successfully control the false positive rate, has decent performance in terms of computational efficiency, and provides good results in practical applications.
[]
[ { "authors": [ "Sebastian Bach", "Alexander Binder", "Grégoire Montavon", "Frederick Klauschen", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "PloS one,", "year": 2015 }, { "authors": [ "François Bachoc", "Hannes Leeb", "Benedikt M" ], "title": "Pötscher. Valid confidence intervals for postmodel-selection predictors", "venue": "arXiv preprint arXiv:1412.4605,", "year": 2014 }, { "authors": [ "François Bachoc", "Gilles Blanchard", "Pierre Neuvial" ], "title": "On the post selection inference constant under restricted isometry properties", "venue": "Electronic Journal of Statistics,", "year": 2018 }, { "authors": [ "Shuxiao Chen", "Jacob Bien" ], "title": "Valid inference corrected for outlier removal", "venue": "Journal of Computational and Graphical Statistics,", "year": 2019 }, { "authors": [ "Yunjin Choi", "Jonathan Taylor", "Robert Tibshirani" ], "title": "Selecting the number of principal components: Estimation of the true rank of a noisy matrix", "venue": "The Annals of Statistics,", "year": 2017 }, { "authors": [ "Ann-Kathrin Dombrowski", "Maximillian Alber", "Christopher Anders", "Marcel Ackermann", "KlausRobert Müller", "Pan Kessel" ], "title": "Explanations can be manipulated and geometry is to blame", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning", "venue": "arXiv preprint arXiv:1702.08608,", "year": 2017 }, { "authors": [ "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Inverting visual representations with convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Vo Nguyen Le Duy", "Ichiro Takeuchi" ], "title": "Parametric programming approach for powerful lasso selective inference without conditioning on signs", "venue": "arXiv preprint arXiv:2004.09749,", "year": 2020 }, { "authors": [ "Vo Nguyen Le Duy", "Hiroki Toda", "Ryota Sugiyama", "Ichiro Takeuchi" ], "title": "Computing valid p-value for optimal changepoint by selective inference using dynamic programming", "venue": "arXiv preprint arXiv:2002.09132,", "year": 2020 }, { "authors": [ "William Fithian", "Dennis Sun", "Jonathan Taylor" ], "title": "Optimal inference after model selection", "venue": "arXiv preprint arXiv:1410.2597,", "year": 2014 }, { "authors": [ "William Fithian", "Jonathan Taylor", "Robert Tibshirani", "Ryan Tibshirani" ], "title": "Selective sequential model selection", "venue": "arXiv preprint arXiv:1512.02565,", "year": 2015 }, { "authors": [ "Ruth C Fong", "Andrea Vedaldi" ], "title": "Interpretable explanations of black boxes by meaningful perturbation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Amirata Ghorbani", "Abubakar Abid", "James Zou" ], "title": "Interpretation of neural networks is fragile", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Megan L Head", "Luke Holman", "Rob Lanfear", "Andrew T Kahn", "Michael D Jennions" ], "title": "The extent and consequences of p-hacking in science", "venue": "PLoS Biol,", "year": 2015 }, { "authors": [ "Juyeon Heo", "Sunghwan Joo", "Taesup Moon" ], "title": "Fooling neural network interpretations via adversarial model manipulation", "venue": "In Advances 
in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sangwon Hyun", "Kevin Lin", "Max G’Sell", "Ryan J Tibshirani" ], "title": "Post-selection inference for changepoint detection algorithms with application to copy number variation data", "venue": "arXiv preprint arXiv:1812.03644,", "year": 2018 }, { "authors": [ "John PA Ioannidis" ], "title": "Why most published research findings are false", "venue": "PLoS medicine,", "year": 2005 }, { "authors": [ "Pieter-Jan Kindermans", "Sara Hooker", "Julius Adebayo", "Maximilian Alber", "Kristof T Schütt", "Sven Dähne", "Dumitru Erhan", "Been Kim" ], "title": "The (un) reliability of saliency methods", "venue": "arXiv preprint arXiv:1711.00867,", "year": 2017 }, { "authors": [ "Jason D Lee", "Dennis L Sun", "Yuekai Sun", "Jonathan E Taylor" ], "title": "Exact post-selection inference, with application to the lasso", "venue": "The Annals of Statistics,", "year": 2016 }, { "authors": [ "Keli Liu", "Jelena Markovic", "Robert Tibshirani" ], "title": "More powerful post-selection inference, with application to the lasso", "venue": "arXiv preprint arXiv:1801.09037,", "year": 2018 }, { "authors": [ "Joshua R Loftus" ], "title": "Selective inference after cross-validation", "venue": "arXiv preprint arXiv:1511.08866,", "year": 2015 }, { "authors": [ "Joshua R Loftus", "Jonathan E Taylor" ], "title": "A significance test for forward stepwise model selection", "venue": "arXiv preprint arXiv:1405.3920,", "year": 2014 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "David Alvarez Melis", "Tommi Jaakkola" ], "title": "Towards robust interpretability with self-explaining neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Snigdha Panigrahi", "Jonathan Taylor", "Asaf Weinstein" ], "title": "Bayesian post-selection inference in the linear model", "venue": "arXiv preprint arXiv:1605.08824,", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": " why should i trust you?” explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "Shinya Suzumura", "Kazuya Nakagawa", "Yuta Umezu", "Koji Tsuda", "Ichiro Takeuchi" ], "title": "Selective inference for sparse high-order interaction models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Kosuke Tanizaki", "Noriaki Hashimoto", "Yu 
Inatsu", "Hidekata Hontani", "Ichiro Takeuchi" ], "title": "Computing valid p-values for image segmentation by selective inference", "venue": null, "year": 2020 }, { "authors": [ "Xiaoying Tian", "Jonathan Taylor" ], "title": "Selective inference with a randomized response", "venue": "The Annals of Statistics,", "year": 2018 }, { "authors": [ "Ryan J Tibshirani", "Jonathan Taylor", "Richard Lockhart", "Robert Tibshirani" ], "title": "Exact post-selection inference for sequential regression procedures", "venue": "Journal of the American Statistical Association,", "year": 2016 }, { "authors": [ "Ronald L Wasserstein", "Nicole A Lazar" ], "title": "The asa statement on p-values: context, process, and purpose, 2016", "venue": null, "year": 2016 }, { "authors": [ "Fan Yang", "Rina Foygel Barber", "Prateek Jain", "John Lafferty" ], "title": "Selective inference for group-sparse linear models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Xinyang Zhang", "Ningfei Wang", "Hua Shen", "Shouling Ji", "Xiapu Luo", "Ting Wang" ], "title": "Interpretable deep learning under fire", "venue": "In 29th {USENIX} Security Symposium ({USENIX} Security", "year": 2020 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The remarkable predictive performance of deep neural networks (DNNs) stems from their ability to learn appropriate representations from data. In order to understand the decision-making process of DNNs, it is thus important to be able to explain and interpret DNN representations. For example, in image classification tasks, knowing the attention region from the DNN representation allows us to understand the reason for the classification. In the past few years, several methods have been developed to explain and interpret DNN representations (Ribeiro et al., 2016; Bach et al., 2015; Doshi-Velez & Kim, 2017; Lundberg & Lee, 2017; Zhou et al., 2016; Selvaraju et al., 2017); however, some of them have turned out to be unstable and not reproducible (Kindermans et al., 2017; Ghorbani et al., 2019; Melis & Jaakkola, 2018; Zhang et al., 2020; Dombrowski et al., 2019; Heo et al., 2019). Therefore, it is crucially important to develop a method to quantify the reliability of DNN representations.\nIn this paper, we interpret these representations as hypotheses that are driven by DNN (called DNN-driven hypotheses) and employ a statistical hypothesis testing framework to quantify the reliability of DNN representations. For example, in an image classification task, the reliability of an attention region can be quantified based on the statistical significance of the difference between the attention region and the rest of the image. Unfortunately, however, traditional statistical tests cannot be applied to this problem because the hypothesis (attention region in the above example) itself is selected by the data. A traditional statistical test is valid only when the hypothesis is non-random. Roughly speaking, if a hypothesis is selected by the data, the hypothesis will over-fit to the data and the bias needs to be corrected when assessing the reliability of the hypothesis.\nOur main contribution in this paper is to introduce the Selective Inference (SI) approach for testing the reliability of DNN representations. The basic idea of SI is to perform statistical inference under the condition that the hypothesis is selected. The SI approach has been demonstrated to be effective in the context of feature selection such as Lasso. In this paper, in order to introduce SI for DNN representations, we develop a novel SI algorithm based on the homotopy method, which enables us to derive the exact (non-asymptotic) conditional sampling distribution of the DNN-driven hypothesis. We use the p-value as a criterion to quantify the reliability of a DNN representation. In the literature, p-values are often misinterpreted, and various sources of misinterpretation have been discussed (Wasserstein & Lazar, 2016). In this paper, by using SI, we address one of the sources of misinterpreted p-values: the p-values are biased when the hypothesis is selected after looking at the data (often called double-dipping or data dredging). We believe our approach is a first significant step toward providing valid p-values for assessing the reliability of DNN representations. Figure 1 shows an example that illustrates the importance of our method.\nRelated works. Several recent approaches have been developed to visualize and understand a trained DNN. Many of these post-hoc approaches (Mahendran & Vedaldi, 2015; Zeiler & Fergus, 2014; Dosovitskiy & Brox, 2016; Simonyan et al., 2013) have focused on developing visualization tools for the activation maps and/or the filter weights within trained networks.
Others have aimed to identify the discriminative regions in an input image, given a trained network (Selvaraju et al., 2017; Fong & Vedaldi, 2017; Zhou et al., 2016; Lundberg & Lee, 2017). In parallel, some recent studies have showed that many popular methods for explanation and interpretation are not stable with respect to the perturbation or the adversarial attack on the input data and the model (Kindermans et al., 2017; Ghorbani et al., 2019; Melis & Jaakkola, 2018; Zhang et al., 2020; Dombrowski et al., 2019; Heo et al., 2019). However, there are no previous studies that quantitatively evaluate the stability and reproducibility of DNN representations with a rigorous statistical inference framework.\nIn the past few years, SI has been actively studied for inference on the features of linear models selected by several feature selection methods, e.g., Lasso (Lee et al., 2016; Liu et al., 2018; Duy & Takeuchi, 2020). The basic idea of SI is to make inference conditional on the selection event, which allows us to derive the exact (non-asymptotic) sampling distribution of the test statistic. Besides, SI has also been applied to various problems (Bachoc et al., 2014; Fithian et al., 2015; Choi et al., 2017; Tian et al., 2018; Chen & Bien, 2019; Hyun et al., 2018; Bachoc et al., 2018; Loftus & Taylor, 2014; Loftus, 2015; Panigrahi et al., 2016; Tibshirani et al., 2016; Yang et al., 2016; Suzumura et al., 2017; Duy et al., 2020). However, to the best of our knowledge, there is no existing study that provides SI for DNNs, which is technically challenging. This study is partly motivated by Tanizaki et al. (2020) where the authors provide a framework to compute p-values for image segmentation results provided by graph cut and threshold-based segmentation algorithms. As we demonstrate in this paper, our method can be also used to assess the reliability of DNN-based segmentation results.\nContribution. To our knowledge, this is the first study that provides an exact (non-asymptotic) inference method for statistically quantifying the reliability of data-driven hypotheses that are discovered from DNN representation. We propose a novel SI homotopy method, inspired by Duy & Takeuchi (2020), for conducting powerful and efficient SI for DNN representations. We conduct experiments on both synthetic and real-world datasets, through which we offer evidence that our proposed method can successfully control the false positive rate, has decent performance in terms of computational efficiency, and provides good results in practical applications. We provide our implementation in the supplementary document and it will be released when this paper is published." }, { "heading": "2 PROBLEM STATEMENT", "text": "To formulate the problem, we denote an image with n pixels corrupted with Gaussian noise as X = (X1, ..., Xn)\n> = µ+ ε, ε ∼ N(0,Σ), (1) where µ ∈ Rn is an unknown mean pixel intensity vector and ε ∈ Rn is a vector of Normally distributed noise with the covariance matrix Σ that is known or able to be estimated from external data. We note that we do not assume that the pixel intensities in an image follow Normal distribution in Equation (1). Instead, we only assume that the vector of noises added to the true pixel values follows a multivariate Normal distribution. For an image X and a trained DNN, the main target is to identify an attention region (discriminative/informative region) in the input image X based on a DNN representation. 
A pixel is assigned to the attention region if its corresponding value in the representation layer is greater than a pre-defined threshold. We denote the set of pixels ofX divided into attention region and non-attention region as C+X and C − X , respectively. Definition 1. We define A(X) as the event that the result of dividing pixels of image X into two sets of pixels C+X and C − X is obtained by applying a DNN onX , i.e.,\nA(X) = {C+X , C − X}. (2)\nQuantifying the statistical significance of DNN-driven hypotheses. Given an observed image xobs ∈ Rn sampled from the model (1), we can obtain C+\nxobs and C− xobs by applying DNN on xobs.\nLet us consider a score ∆ that represents the degree to which the attention region differs from the non-attention region. In general, we can define any score as long as it is written in the form ∆ = η>xobs. For example, we can define ∆ as the difference in average pixel values between the attention region and the non-attention region, i.e.,\n∆ = mC+ xobs −mC− xobs =\n1 |C+ xobs | ∑\ni∈C+ xobs\nxobsi − 1 |C− xobs | ∑\ni∈C− xobs\nxobsi = η >xobs,\nwhere η = 1|C+ xobs |1 n C+ xobs − 1|C− xobs |1 n C− xobs , and 1nC ∈ Rn is a vector whose elements belonging to a set C are 1, and 0 otherwise. If the value of |∆| is sufficiently large, the difference between C+\nxobs and C− xobs is significant and\nthe attention region is reliable. To quantify the statistical significance, we consider a statistical hypothesis testing with the following null hypothesis H0 and alternative hypothesis H1:\nH0 : µC+ xobs = µC− xobs vs. H1 : µC+ xobs 6= µC− xobs , (3)\nwhere µC+ xobs and µC− xobs are the true means of the pixel values in the attention region and nonattention region, respectively. Given a significance level α (e.g., 0.05), we reject H0 if the p-value is smaller than α, which indicates the attention region differs from the non-attention region. Otherwise, we cannot say that the difference is significant.\nIn a standard (naive) statistical test, the hypotheses in (3) are assumed to be fixed, i.e., non-random. Then, the naive (two-sided) p-value is simply given as\npnaive = PH0 ( |η>X| ≥ |∆| ) = PH0 ( |η>X| ≥ |η>xobs| ) . (4)\nHowever, since the hypotheses in (3) are actually not fixed in advance, the naive p-value is not valid in the sense that, if we reject H0 with a significance level α, the false detection rate (type-I error) cannot be controlled at level α, which indicates that pnaive is unreliable. This is due to the fact that the hypotheses (the attention region) in (3) are selected by looking at the data (the input image), and thus selection bias exists. This selection bias is sometimes called data dredging, data snooping or p-hacking (Ioannidis, 2005; Head et al., 2015).\nSelective inference (SI) for computing valid p-values. The basic idea of SI is to make inference conditional on the selection event, which allows us to derive the exact (non-asymptotic) sampling distribution of the test statistic η>X in an attempt to avoid the selection bias. Thus, we employ the following conditional p-value\npselective = PH0 ( |η>X| ≥ |η>xobs| | A(X) = A(xobs), q(X) = q(xobs) ) , (5)\nwhere q(X) = (In − cη>)X with c = Ση(η>Ση)−1. The first condition A(X) = A(xobs) indicates the event that the result of dividing pixels into an attention region and non-attention region for a random image X is the same as that of the observed image xobs, i.e., C+X = C + xobs\nand C−X = C − xobs\n. 
The second condition q(X) = q(xobs) indicates the component that is independent of the test statistic forX is the same as the one for xobs. The q(X) corresponds to the component z in the seminal SI paper of Lee et al. (2016) (Sec 5, Eq 5.2 and Theorem 5.2). The p-value in (5), which is called selective type I error or selective p-values in the SI literature (Fithian et al., 2014), is valid in the sense that PH0(pselective < α) = α,∀α ∈ [0, 1], i.e., the false detection rate is theoretically controlled at level α indicating the selective p-value is reliable.\nTo calculate the selective p-value in (5), we need to identify the conditional data space. Let us define the set of x ∈ Rn that satisfies the conditions in (5) as\nX = {x ∈ Rn | A(x) = A(xobs), q(x) = q(xobs)}. (6) According to the second condition, the data in X are restricted to a line (Sec 6 in Liu et al. (2018), and Fithian et al. (2014)). Therefore, the set X can be re-written, using a scalar parameter z ∈ R, as\nX = {x(z) = a+ bz | z ∈ Z}, (7) where a = q(xobs), b = Ση`(η>` Ση`) −1, and\nZ = { z ∈ R | A(x(z)) = A(xobs) } . (8)\nNow, let us consider a random variable Z ∈ R and its observation zobs ∈ R that satisfyX = a+bZ and xobs = a+ bzobs. Then, the selective p-value in (5) is re-written as\npselective = PH0 ( |η>X| ≥ |η>xobs| |X ∈ X ) = PH0 ( |Z| ≥ |zobs| | Z ∈ Z ) . (9)\nSince the variable Z ∼ N(0,η>Ση) under the null hypothesis, the law of Z | Z ∈ Z follows a truncated Normal distribution. Once the truncation region Z is identified, the selective p-value (9) can be computed as\npselective = F Z 0,η>Ση(−|z obs|) + 1− FZ0,η>Ση(|z obs|), (10)\nwhere F Em,s2 is the c.d.f. of the truncated normal distribution with mean m, variance s 2 and truncation region E . Therefore, the most important task is to identify Z .\nExtension of the problem setup to hypothesis driven from DNN-based image segmentation. We interpret the hypothesis driven from image segmentation result as the one obtained from the representation at output layer instead of internal representation. Our problem setup is general and can be directly applied to this case. For example, we can consider the attention region as the object region and the non-attention region as the background region. Then, we can conduct SI to quantify the significance of the difference between object and background regions. We note that we consider the case where the image is segmented into two regions—object and background—to simplify the problem and notations. The extension to more than two regions is straightforward." }, { "heading": "3 PROPOSED METHOD", "text": "As we discussed in §2, to calculate the selective p-value, the truncation region Z in Equation (8) must be identified. To constructZ , we have to 1) computeA(x(z)) for all z ∈ R, and 2) identify the set of intervals of z on whichA(x(z)) = A(xobs). However, it seems intractable to obtainA(x(z)) for infinitely many values of z ∈ R. Our first idea to develop SI for DNN is that we additionally condition on some extra event to make the problem tractable. We now focus on a class of DNNs whose activation functions (AFs) are piecewise-linear, e.g., ReLU, Leaky ReLU (the extension to general AFs is discussed later). Then, we consider additionally conditioning on the selected piece of each piecewise-linear AF in the DNN.\nDefinition 2. 
Let sj(x) be “the selected piece” of a piecewise-linear AF at the j-th unit in a DNN for a given input image x, and let s(x) be the set of sj(x) for all the nodes in a DNN .\nFor example, for a ReLU activation function, sj(x) takes either 0 or 1 depending on whether the input to the j-th unit is located at the flat part (inactive) or the linear part (active) of the ReLU function. Using the notion of selected pieces s(x), instead of computing the selective p-value in (9), we consider the following over-conditioning (oc) conditional p-value\npocselective = PH0 ( |Z| ≥ |zobs| | Z ∈ Zoc ) , (11)\nwhere Zoc = { z ∈ R | A(x(z)) = A(xobs), s(x(z)) = s(xobs) } . However, such an overconditioning in SI leads to the loss of statistical power (Lee et al., 2016).\nOur second idea is to develop a homotopy method to resolve the over-conditioning problem, i.e., remove the conditioning of s(x(z)) = s(xobs). With the homotopy method, we can efficiently compute A(x(z)) in a finite number of operations without the need of considering infinitely many values of z ∈ R, which is subsequently used to obtain truncation region Z in (8). The main idea is to compute a finite number of breakpoints at which one node of the network is going to change its status from active to inactive or vice versa. This concept is similar to the regularization path of Lasso where we can compute a finite number of breakpoints at which the active set changes.\nTo this end, we introduce a two-step iterative approach generally described as follows (see Fig. 2):\n• Step 1 (over-conditioning step). Considering over-conditioning case by additionally conditioning on the selected pieces of all the hidden nodes in the DNN.\n• Step 2 (homotopy step). Combining multiple over-conditioning cases by homotopy method to obtain A(x(z)) for all z ∈ R." }, { "heading": "3.1 STEP1: OVER-CONDITIONING STEP", "text": "We now show that by conditioning on the selected pieces s(xobs) of all the hidden nodes, we can write the selection event of the DNN as a set of linear inequalities.\nLemma 1. Consider a class of DNN which consists of affine operations and piecewise-linear AFs. Then, the over-conditioning region is written as\nZoc = {z ∈ R | Θ(s(x obs))x(z) ≤ ψ(s(x obs))}\nfor a matrix Θ(s(x obs)) and a vector ψ(s(x obs)) which depend only on the selected pieces s(xobs).\nProof. For the class of DNN, by fixing the selected pieces of all the piecewise-linear AFs, the input to each AF is represented by an affine function of an image x. Therefore, the condition for selecting\na piece in a piecewise-linear AF, sj(x(z)) = sj(xobs), is written as a linear inequality w.r.t. x(z). Similarly, the value of each unit in the representation layer is also written as an affine function of x(z). Since the attention region is selected if the value is greater than a threshold, the choice of attention region A(x(z)) = A(xobs) is characterized by a set of linear inequalities w.r.t. x(z).\nFurthermore, let us consider max-operation, an operation to select the max one from a finite number of candidates. A max-operation is characterized by a set of comparison operators, i.e., inequalities. Let us consider a DNN which contains max-operators, and denote s̃(x) be the set of selected candidates of all the max-operators for an input image x.\nCorollary 1. Consider a class of DNN which consists of affine operations, max-operations and piecewise-linear AFs. Then, a region Z̃oc defined as Z̃oc := {z ∈ Zoc | s̃(x(z)) = s̃(xobs)} is characterized by a set of linear inequalities w.r.t. 
x(z).\nThe proof is shown in Appendix A.1.\nRemark 1. In this work, we mainly focus on the trained DNN where the activation functions used at hidden layers are piecewise linear, e.g., ReLU, Leaky ReLU, which is commonly used in CNN. Otherwise, if there is any specific demand to use non-piecewise linear functions such as sigmoid or tanh at hidden layers, we can apply some piecewise-linear approximation approach to these functions. We provided examples about the approximation for this case in Appendix A.5.\nRemark 2. Most of the basic operations in a trained neural network are written as affine operations. In the traditional neural network, the multiplication results between the weight matrix and the output of the previous layer and its summation with bias vector is affine operation. In a CNN, the main convolution operation is obviously an affine operation. Upsampling operation is also affine.\nRemark 3. Although the max-pooling operation is not an affine operation, it can be written as a set of linear inequalities. For instance, v1 = max{v1, v2, v3} can be written as a set {e>1 v ≤ e>2 v, e > 1 v ≤ e>3 v}, where v = (v1, v2, v3)> and ei is a standard basis vector with a 1 at position i.\nRemark 4. In Remark 1, we mentioned that we need to perform piecewise linear approximation for non-piecewise linear activations. However, if these functions are used at output layer, we do not need to perform the approximation task because we can define the set of linear inequalities based on the values before doing activation. See the next example for the case of sigmoid function.\nExample 1. Let us consider a 3-layer neural network with n input nodes, h hidden nodes and n ouput nodes. Let W (1) ∈ Rh×n and w(1) ∈ Rh respectively be the weight matrix and bias vector between input layer and hidden layer, and W (2) ∈ Rn×h and w(2) ∈ Rn respectively be the weight matrix and bias vector between hidden layer and output layer. The activation function at hidden layer is ReLU, and we use sigmoid function at output layer. At the hidden layer, for any node j ∈ [h], the selection event is written as{\nW (1) j,: x+ w (1) j ≥ 0, if the output of ReLU function at jth node ≥ 0, W (1) j,: x+ w (1) j < 0, otherwise.\nLet a(1) ∈ Rh and s(1) ∈ Rh be the vectors in which a(1)j∈[h] = 1, s (1) j∈[h] = 1 if the output of ReLU function at the jth node≥ 0, and a(1)j = 0, s (1) j = −1 otherwise. Then we have the linear inequality system Θ1x ≤ ψ1 where Θ1 = (−s(1)1 W (1) 1,: , ...,−s (1) h W (1) h,: ) > and ψ1 = (s (1) 1 w (1) 1 , ..., s (1) h w (1) h ) >. Next, for any output node o ∈ [n], the selection event—a linear inequality—is written as{ W (2) o,: ((W (1)x+w(1)) ◦ a(1)) + w(2)o ≥ 0, if the output of sigmoid function at oth node ≥ 0.5,\nW (2) o,: ((W (1)x+w(1)) ◦ a(1)) + w(2)o < 0, otherwise,\nwhere ◦ is the element-wise product. Similar to the hidden layer, we can also construct the linear inequality system Θ2x ≤ ψ2 at the output layer. Finally, the whole linear inequality system is written as\nΘx ≤ ψ = (Θ1 Θ2)> x ≤ (ψ1 ψ2)> . (12)\nAlgorithm 1 compute solution path Input: a, b, [zmin, zmax] 1: Initialization: t = 1, zt = zmin, T = zt 2: while zt < zmax do 3: Obtain A(x(zt)) by applying a trained DNN to x(zt) = a+ bzt 4: Compute the next breakpoint zt+1← Equation (13). Then assign T = T ∪ {zt+1}, and t = t+ 1 5: end while\nOutput: {A(x(zt)}zt∈T" }, { "heading": "3.2 STEP 2: HOMOTOPY STEP", "text": "We now introduce a homotopy method to compute A(x(z)) based on over-conditioning step. Lemma 2. Consider a real value zt. 
By applying a trained DNN to x(zt), we obtain a set of linear inequalities Θ(s(x(zt)))x(zt) ≤ ψ(s(x(zt))). Then, the next breakpoint zt+1 > zt at which the status of one node is going to be changed from active to inactive or vice versa, i.e., the sign of one linear inequality is going to be changed, is calculated by\nzt+1 = min k:(Θ(s(x(zt)))b)k>0\nψ (s(x(zt))) k − (Θ(s(x(zt)))a)k\n(Θ(s(x(zt)))b)k . (13)\nThe proof is shown in Appendix A.2. Algorithm 1 shows our solution to efficiently identify A(x(z)). In this algorithm, multiple breakpoints z1 < z2 < ... < z|T | are computed one by one. Each breakpoint zt, t ∈ [|T |], indicates a point at which the sign of one linear inequality is changed, i.e., the status of one node in the network is going to change from active to inactive or vice versa. By identifying all these breakpoints {zt}t∈[|T |], the solution path is given by A(x(z)) = A(x(zt)) if z ∈ [zt, zt+1], t ∈ [|T |]. For the choice of [zmin, zmax], see Appendix A.3." }, { "heading": "4 EXPERIMENT", "text": "We highlight the main results. Several additional results and details can be found in Appendix A.6.\nNumerical Experiments. We demonstrate the performances of two versions of the proposed method: proposed-method (homotopy) and proposed-method-oc. The p-values in these two versions were computed by (5) and (11), respectively. Besides, we also compared the proposed methods with the naive p-value in (4) and the permutation test. The details of permutation test procedure is described in Appendix A.6. To test the FPR control, we generated 120 null images x = (x1, ..., xn) in which xi ∈ [n] ∼ N(0, 1) for each n ∈ {64, 256, 1024, 4096}. To test the power, we generated images x = (x1, ..., xn) with n = 256 for each true average difference in the underlying model µC+x − µC−x = ∆µ ∈ {0.5, 1.0, 1.5, 2.0}. For each case, we ran 120 trials. We chose the significance level α = 0.05. For more information about the setup as well as the the structure of a neural network, see the experimental setup paragraph in Appendix A.6. The results of FPR control are shown in the first part of Fig. 3. The proposed methods could successfully control the FPR under α = 0.05 while the naive method can not. Since the naive method fails to control FPR, we did not consider the power anymore. In the second part of Fig. 3, we see that the over-conditioning option has lower power than the homotopy method. It is because the truncation region in proposed-methodoc is shorter than the one in proposed-method (homotopy), which is demonstrated in the third part of Fig. 3. The last part of Fig. 3 shows the reason why the proposed homotopy method is efficient. With the homotopy method, we only need to consider the number of encountered intervals on the line along the direction of test statistic which is almost linearly increasing in practice.\nReal-data examples. We performed comparison on real-world brain image dataset, which includes 939 images with tumor and 941 images without tumor. We first compared our method with permutation test in terms of FPR control. The results are shown in Table 1. Since the permutation test could not control the FPR properly, we did not compare the power. The comparisons between naive p-value and selective p-value are shown in Figs. 4, 5, 6 and 7. The naive p-value was still small\neven when the image has no tumor region, which indicates that the naive p-values cannot be used for quantifying the reliability of DNN-driven hypotheses. 
The proposed method could successfully identify false positive detections as well as true positive detections." }, { "heading": "5 CONCLUSION", "text": "We proposed a novel method to conduct statistical inference on the significance of data-driven hypotheses derived from neural network representations, based on the concept of selective inference. In the context of explainable DNN or interpretable DNN, we are primarily interested in the reliability of the trained network when given new inputs (not training inputs). Therefore, the validity of our proposed method does not depend on how the DNN is trained.\nWith regard to generality, the proposed method can be applied to any kind of network as long as the network operation is characterized by a set of linear inequalities (or approximated by piecewise-linear functions) because all the algorithms and theories in §2 and §3 depend only on the properties of each component and not on the entire structure of the network.\nWe believe that this paper provides a significant step toward reliable artificial intelligence (AI) and opens several directions for statistically evaluating the reliability of DNN representation-driven hypotheses. Although it is not necessary to account for the impact of training in this paper because the validity of our proposed method does not depend on how the DNN is trained, defining a new problem setup and providing a solution for the case in which the training process needs to be considered is a potential direction. Moreover, widening the practical applicability of the proposed method in other fields such as NLP and signal processing would also represent a valuable contribution." } ]
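To make the two key computational steps of this paper concrete — the breakpoint recursion of Algorithm 1 / Eq. (13) and the truncated-normal selective p-value of Eq. (10) — here is a minimal sketch. It assumes a user-supplied `apply_dnn` callback standing in for the trained network, which returns the selection A(x) together with the over-conditioned linear system (Θ, ψ); all names and tolerances are illustrative.

```python
import numpy as np
from scipy.stats import norm

def solution_path(a, b, z_min, z_max, apply_dnn):
    """Algorithm 1: walk along the line x(z) = a + b*z. `apply_dnn(x)` is
    assumed to return (A, Theta, psi): the selection A(x) and the linear
    inequalities Theta x <= psi of the current over-conditioned region."""
    z, segments = z_min, []
    while z < z_max:
        A, Theta, psi = apply_dnn(a + b * z)
        tb, ta = Theta @ b, Theta @ a
        pos = tb > 1e-12
        # Next breakpoint, Eq. (13): smallest z at which one inequality flips.
        z_next = np.min((psi[pos] - ta[pos]) / tb[pos]) if pos.any() else z_max
        segments.append((z, min(z_next, z_max), A))
        z = z_next + 1e-9   # step just past the breakpoint
    return segments

def selective_p_value(z_obs, sigma, truncation):
    """Two-sided p-value of Eq. (10): Z ~ N(0, sigma^2) truncated to the
    union of intervals where A(x(z)) matches the observed selection."""
    def mass(lo, hi):
        return norm.cdf(hi / sigma) - norm.cdf(lo / sigma)
    total = sum(mass(lo, hi) for lo, hi in truncation)
    tail = sum(mass(max(lo, abs(z_obs)), hi)
               for lo, hi in truncation if hi > abs(z_obs))
    tail += sum(mass(lo, min(hi, -abs(z_obs)))
                for lo, hi in truncation if lo < -abs(z_obs))
    return tail / total
```

In this sketch, the truncation region Z would be assembled as `[(lo, hi) for lo, hi, A in segments if A == A_obs]`, i.e., the union of line segments on which the selection agrees with the observed one.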
2020
null
SP:8a32dfc80f31fd3da97e15ce98193144d03836b5
[ "This paper proposes a variant of the GTD2 algorithm by adding an additional regularization term to the objective function, and the new algorithm is named Gradient-DD (GDD). The regularization ensures that the value function does not change drastically between consecutive iterations. The authors show that the update rule of GDD can be written as a difference equation and aim to further show convergence via a Lyapunov-based analysis. A simulation study is provided to compare the proposed GDD algorithm with TD, ETD, and GTD. " ]
Off-policy algorithms, in which a behavior policy differs from the target policy and is used to gain experience for learning, have proven to be of great practical value in reinforcement learning. However, even for simple convex problems such as linear value function approximation, these algorithms are not guaranteed to be stable. To address this, alternative algorithms that are provably convergent in such cases have been introduced, the most well known being gradient descent temporal difference (GTD) learning. This algorithm and others like it, however, tend to converge much more slowly than conventional temporal difference learning. In this paper we propose gradient descent temporal difference-difference (Gradient-DD) learning in order to improve GTD learning by introducing second-order differences in successive parameter updates. We investigate this algorithm in the framework of linear value function approximation, analytically showing its improvement over GTD learning. Studying the model empirically on the random walk and Boyan-chain prediction tasks, we find substantial improvement over GTD learning and, in several cases, better performance even than conventional TD learning.
[]
[ { "authors": [ "K. Atkinson", "W. Han", "D. Stewart" ], "title": "Numerical Solution of Ordinary Differential Equations", "venue": "JOHN WILEY & SONS,", "year": 2008 }, { "authors": [ "L.C. Baird" ], "title": "Residual algorithms: Reinforcement learning with function approximation", "venue": "In Proceedings of the 12 th International Conference on Machine Learning,", "year": 1995 }, { "authors": [ "V.S. Borkar", "S.P. Meyn" ], "title": "The ODE method for convergence of stochastic approximation and reinforcement learning", "venue": "SIAM Journal on Control and Optimization,", "year": 2000 }, { "authors": [ "Justin A. Boyan" ], "title": "Technical update: least-squares temporal difference learning", "venue": "Machine Learning,", "year": 2002 }, { "authors": [ "S.S. Du", "J. Chen", "L. Li", "L. Xiao", "D. Zhou" ], "title": "Stochastic variance reduction methods for policy evaluation", "venue": "In Proceedings of the 34 th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "M. Geist", "B. Scherrer" ], "title": "Off-policy learning with eligibility traces: A survey", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "S. Ghiassian", "A. Patterson", "S. Garg", "D. Gupta", "A. White", "M. White" ], "title": "Gradient temporaldifference learning with regularized corrections", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "L. Hackman" ], "title": "Faster gradient-TD algorithms", "venue": "Master’s thesis, University of Alberta, Edmonton,", "year": 2012 }, { "authors": [ "B. Liu", "S. Mahadevan", "J. Liu" ], "title": "Regularized off-policy TD-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "B. Liu", "J. Liu", "M. Ghavamzadeh", "S. Mahadevan", "M. Petrik" ], "title": "Finite-sample analysis of proximal gradient TD algorithms", "venue": "In Proceedings of the 31st International Conference on Uncertainty in Artificial Intelligence,", "year": 2015 }, { "authors": [ "B. Liu", "J. Liu", "M. Ghavamzadeh", "S. Mahadevan", "M. Petrik" ], "title": "Proximal gradient temporal difference learning algorithms", "venue": "In The 25th International Conference on Arti cial Intelligence", "year": 2016 }, { "authors": [ "H.R. Maei" ], "title": "Gradient temporal-difference learning algorithms", "venue": "PhD thesis, University of Alberta, Edmonton,", "year": 2011 }, { "authors": [ "H.R. Maei", "R.S. Sutton" ], "title": "GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces", "venue": "In Proceedings of the 3rd Conference on Artificial General Intelligence,", "year": 2010 }, { "authors": [ "A.R. Mahmood", "H. van Hasselt", "R.S. Sutton" ], "title": "Weighted importance sampling for off-policy learning with linear function approximation", "venue": "In Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "J. Peters", "K. Mülling", "Y. Altün" ], "title": "Relative entropy policy search", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "D. Precup", "R.S. Sutton", "S. Singh" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "In Proceedings of the 17 th International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael I. 
Jordan", "Pieter Abbeel" ], "title": "Trust region policy optimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "R.S. Sutton", "Cs. Szepesvári", "H.R. Maei" ], "title": "A convergent O(n) algorithm for off-policy temporal difference learning with linear function approximation", "venue": "In Advances in Neural Information Processing Systems", "year": 2009 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": "The MIT Press, second edition edition,", "year": 2018 }, { "authors": [ "R.S. Sutton", "M. Mahmood", "A.R. amd White" ], "title": "An emphatic approach to the problem of off-policy temporal difference learning", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "R.S. Sutton", "H.R. Maei", "D. Precup", "S. Bhatnagar", "D. Silver", "Cs. Szepesvári", "E. Wiewiora" ], "title": "Fast gradient-descent methods for temporal-difference learning with linear function approximation", "venue": "In Proceedings of the 26th International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "A. White", "M. White" ], "title": "Investigating practical linear temporal difference learning", "venue": "In International Conference on Autonomous Agents and Multi-Agent Systems,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning. However, because off-policy methods learn a value function for a target policy given data due to a different behavior policy, they often exhibit greater variance in parameter updates. When applied to problems involving function approximation, off-policy methods are slower to converge than on-policy methods and may even diverge (Baird, 1995; Sutton & Barto, 2018).\nTwo general approaches have been investigated to address the challenge of developing stable and effective off-policy temporal-difference algorithms. One approach is to use importance sampling methods to warp the update distribution back to the on-policy distribution (Precup et al., 2000; Mahmood et al., 2014). This approach is useful for decreasing the variance of parameter updates, but it does not address stability issues. The second main approach to addressing the challenge of off-policy learning is to develop true gradient descent-based methods that are guaranteed to be stable regardless of the update distribution. Sutton et al. (2009a;b) proposed the first off-policy gradientdescent-based temporal difference (GTD and GTD2, respectively) algorithms. These algorithms are guaranteed to be stable, with computational complexity scaling linearly with the size of the function approximator. Empirically, however, their convergence is much slower than conventional temporal difference (TD) learning, limiting their practical utility (Ghiassian et al., 2020; White & White, 2016). Building on this work, extensions to the GTD family of algorithms (see Ghiassian et al. (2018) for a review) have allowed for incorporating eligibility traces (Maei & Sutton, 2010; Geist & Scherrer, 2014), non-linear function approximation such as with a neural network (Maei, 2011), and reformulation of the optimization as a saddle point problem (Liu et al., 2015; Du et al., 2017). However, due to their slow convergence, none of these stable off-policy methods are commonly used in practice.\nIn this work, we introduce a new gradient descent algorithm for temporal difference learning with linear value function approximation. This algorithm, which we call gradient descent temporal difference-difference (Gradient-DD) learning, is an acceleration technique that employs second-\norder differences in successive parameter updates. The basic idea of Gradient-DD is to modify the error objective function by additionally considering the prediction error obtained in last time step, then to derive a gradient-descent algorithm based on this modified objective function. In addition to exploiting the Bellman equation to get the solution, this modified error objective function avoids drastic changes in the value function estimate by encouraging local search around the current estimate. Algorithmically, the Gradient-DD approach only adds an additional term to the update rule of the GTD2 method, and the extra computational cost is negligible. We show mathematically that applying this method significantly improves the convergence rate relative to the GTD2 method for linear function approximation. This result is supported by numerical experiments, which also show that Gradient-DD obtains better convergence in many cases than conventional TD learning." 
}, { "heading": "1.1 RELATED WORK", "text": "In related approaches to ours, some previous studies have attempted to improve Gradient-TD algorithms by adding regularization terms to the objective function. Liu et al. (2012) have used l1 regularization on weights to learn sparse representations of value functions, and Ghiassian et al. (2020) have used l2 regularization on weights. Unlike these references, our approach modifies the error objective function by regularizing the evaluation error obtained in the most recent time step. With this modification, our method provides a learning rule that contains second-order differences in successive parameter updates.\nOur approach is similar to trust region policy optimization (Peters & Schaal, 2008; Schulman et al., 2015) or relative entropy policy search (Peters et al., 2010), which penalize large changes to the policy during learning. In these methods, constrained optimization is used to update the policy by imposing a constraint on some measure of distance between the new policy and the old policy. Here, however, our aim is to look for the optimal value function, and the regularization term uses the previous value function estimate to avoid drastic changes in the updating process." }, { "heading": "2 GRADIENT DESCENT METHOD FOR OFF-POLICY TEMPORAL DIFFERENCE LEARNING", "text": "" }, { "heading": "2.1 PROBLEM DEFINITION AND BACKGROUND", "text": "In this section, we formalize the problem of learning the value function for a given policy under the Markov Decision Process (MDP) framework. In this framework, the agent interacts with the environment over a sequence of discrete time steps, t = 1, 2, . . .. At each time step the agent observes a partial summary of the state st ∈ S and selects an action at ∈ A. In response, the environment emits a reward rt ∈ R and transitions the agent to its next state st+1 ∈ S. The state and action sets are finite. State transitions are stochastic and dependent on the immediately preceding state and action. Rewards are stochastic and dependent on the preceding state and action, as well as on the next state. The process generating the agent’s actions is termed the behavior policy. In off-policy learning, this behavior policy is in general different from the target policy π : S → A. The objective is to learn an approximation to the state-value function under the target policy in a particular environment:\nV (s) = Eπ [ ∞∑ t=1 γt−1rt|s1 = s ] , (1)\nwhere γ ∈ [0, 1) is the discount rate. In problems for which the state space is large, it is practical to approximate the value function. In this paper we consider linear function approximation, where states are mapped to feature vectors with fewer components than the number of states. Specifically, for each state s ∈ S there is a corresponding feature vector x(s) ∈ Rp, with p ≤ |S|, such that the approximate value function is given by\nVw(s) := w >x(s). (2)\nThe goal is then to learn the parameters w such that Vw(s) ≈ V (s)." }, { "heading": "2.2 GRADIENT TEMPORAL DIFFERENCE LEARNING", "text": "A major breakthrough for the study of the convergence properties of MDP systems came with the introduction of the GTD and GTD2 learning algorithms (Sutton et al., 2009a;b). We begin by briefly recapitulating the GTD algorithms, which we will then extend in the following sections.
To begin, we introduce the Bellman operator B such that the true value function V ∈ R|S| satisfies the Bellman equation:\nV = R + γPV =: BV,\nwhere R is the reward vector with components E(rn+1|sn = s), and P is a matrix of state transition probabilities. In temporal difference methods, an appropriate objective function should minimize the difference between the approximate value function and the solution to the Bellman equation.\nHaving defined the Bellman operator, we next introduce the projection operator Π, which takes any value function V and projects it to the nearest value function within the space of approximate value functions of the form (2). Letting X be the matrix whose rows are x(s), the approximate value function can be expressed as Vw = Xw. We will also assume that there exists a limiting probability distribution such that ds = limn→∞ p(sn = s) (or, in the episodic case, ds is the proportion of time steps spent in state s). The projection operator is then given by\nΠ = X(X>DX)−1X>D,\nwhere the matrix D is diagonal, with diagonal elements ds.\nThe natural measure of how closely the approximation Vw satisfies the Bellman equation is the mean-squared Bellman error:\nMSBE(w) = ‖Vw −BVw‖2D, (3) where the norm is weighted by D, such that ‖V‖2D = V>DV. However, because the Bellman operator follows the underlying state dynamics of the Markov chain, irrespective of the structure of the linear function approximator, BVw will typically not be representable as Vw for any w. An alternative objective function, therefore, is the mean squared projected Bellman error (MSPBE), which we define as\nJ(w) = ‖Vw −ΠBVw‖2D. (4) Following (Sutton et al., 2009b), our objective is to minimize this error measure. As usual in stochastic gradient descent, the weights at each time step are then updated by ∆w = −α∇wJ(w), where α > 0, and\n−1 2 ∇wJ(w) =− E[(γxn+1 − xn)x>n ][E(xnx>n )]−1E(δnxn)\n≈− E[(γxn+1 − xn)x>n ]η. (5) For notational simplicity, we have denoted the feature vector associated with sn as xn = x(sn). We have also introduced the temporal difference error δn = rn + (γxn+1 − xn)>wn, as well as η, a linear predictor to approximate [E(xnx>n )]\n−1E(δnxn). Because the factors in Eqn. (5) can be directly sampled, the resulting updates in each step are\nδn =rn + (γxn+1 − xn)>wn ηn+1 =ηn + βn(δn − x>n ηn)xn wn+1 =wn − αn(γxn+1 − xn)(x>n ηn). (6)\nThese updates define the GTD2 learning algorithm, which we will build upon in the following section." }, { "heading": "3 GRADIENT DESCENT TEMPORAL DIFFERENCE-DIFFERENCE LEARNING", "text": "In order to improve the GTD2 algorithm described above, in this section we modify the objective function via additionally considering the approximation error Vw−Vwn−1 given the previous time step n− 1. Specifically, we modify Eqn. (4) as follows:\nJGDD(w|wn−1) = J(w) + κ‖Vw −Vwn−1‖2D, (7)\nFigure 1: Schematic diagram of Gradient-DD learning with w ∈ R2. Rather than updating w directly along the gradient of the MSPBE (arrow), the update rule selects wn that minimizes the MSPBE while satisfying the constraint ‖Vw −Vwn−1‖2D ≤ µ (shaded ellipse).\nwhere κ ≥ 0 is a parameter of the regularization. Minimizing Eqn. (7) is equivalent to the following optimization\narg min w\nJ(w) s.t. ‖Vw −Vwn−1‖2D ≤ µ (8)\nwhere µ > 0 is a parameter which becomes large when κ is small, so that the MSPBE objective is recovered as µ→∞, equivalent to κ→ 0 in Eqn. (7). We show in the Appendix that for any µ > 0, there exist κ ≥ 0 such that the solution of Eqn. (7) and that of Eqn. (8) are the same. 
Eqns. (7) and (8) represent a tradeoff between minimizing the MSPBE error and preventing the estimated value function from changing too drastically. Rather than simply minimizing the optimal prediction from the projected Bellman equation, the agent makes use of the most recent update to look for the solution. Figure 1 gives a schematic view of the effect of the regularization. Rather than directly following the direction of the MSPBE gradient, the update chooses a w that minimizes the MSPBE while following the constraint that the estimated value function should not change too greatly. In effect, the regularization term encourages searching around the estimate at previous time step, especially when the state space is large.\nWith these considerations in mind, the negative gradient of JGDD(w|wn−1) is\n− 1 2 ∇wJGDD(w|wn−1)\n=− E[(γxn+1 − xn)x>n ][E(xnx>n )]−1E(δnxn)− κE[(x>nwn − x>nwn−1)xn] ≈− E[(γxn+1 − xn)x>n ]ηn − κE[(x>nwn − x>nwn−1)xn]. (9)\nBecause the terms in Eqn. (9) can be directly sampled, the stochastic gradient descent updates are given by\nδn =rn + (γxn+1 − xn)>wn ηn+1 =ηn + βn(δn − x>n ηn)xn wn+1 =wn − κn(x>nwn − x>nwn−1)xn − αn(γxn+1 − xn)(x>n ηn). (10)\nThese update equations define the Gradient-DD method, in which the GTD2 update equations (6) are generalized by including a second-order update term in the third update equation, where this term originates from the squared bias term in the objective (7). In the following sections, we shall analytically and numerically investigate the convergence and performance of Gradient-DD learning." }, { "heading": "4 IMPROVED CONVERGENCE RATE", "text": "In this section we analyze the convergence rate of Gradient-DD learning. Note that the second-order update in the last line in Eqn. (10) can be rewritten as a system of first-order difference equations:\n(I + κnxnx > n )(wn+1 −wn) =κnxnx>n (un+1 − un)− αn(γxn+1 − xn)(x>n ηn);\nun+1 =wn+1 −wn. (11)\nLet βn = ζαn, ζ > 0. We consider constant step sizes in the updates, i.e., κn = κ and αn = α. Denote Hn = [\n0 0 0 xnx > n\n] and Gn = [ √ ζxnx > n xn(xn − γxn+1)>\n−(xn − γxn+1)x>n 0\n] . We rewrite\nthe update rules of two iterations in Eqn. (11) as a single iteration in a combined parameter vector with 2n components, ρn = (η > n / √ ζ,w>n )\n>, and a new reward-related vector with 2n components, gn+1 = (rnx > n ,0 >)>, as follows:\nρn+1 =ρn − κHn(ρn − ρn−1) + √ ζα(Gnρn + gn+1), (12)\nDenoting ψn+1 = α −1(ρn+1 − ρn), Eqn. (12) is rewritten as[\nρn+1 − ρn ψn+1 −ψn\n] =α [ I + κHn −καHn\nI −αI ]−1 [ −√ζ(Gnρn − gn+1) ψn ] =α [ − √ ζGn −κHn\n− √ ζα−1Gn −α−1(I + κHn) ] [ ρn ψn ] + α [ √ ζgn+1√ ζα−1gn+1 ] ,\n(13) where the second step is from [\nI + κHn −καHn I −αI\n]−1 = [ I −κHn\nα−1I −α−1(I + κHn)\n] . De-\nnote Jn = [ − √ ζGn −κHn\n− √ ζα−1Gn −α−1(I + κHn)\n] . Eqn. (13) tells us that Jn is the update matrix of\nthe Gradient-DD algorithm. (Note that Gn is the update matrix of the GTD2 algorithm.) Therefore, assuming the stochastic approximation in Eqn. (13) goes to the solution of an associated ordinary differential equation (ODE) under some regularity conditions (a convergence property is provided in the appendix by following Borkar & Meyn (2000)), we can analyze the improved convergence rate of Gradient-DD learning by comparing the eigenvalues of the matrices E(Gn) denoted by G, and\nE(Jn) denoted by J (Atkinson et al., 2008). Obviously, J = [ − √ ζG −κH\n− √ ζα−1G −α−1(I + κH)\n] ,\nwhere H = E(Hn). To simplify, we consider the case that the matrix E(xnx>n ) = I. 
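For concreteness, the following is a minimal sketch of the per-transition updates in Eqn. (10); setting κ = 0 removes the second-order difference term and recovers the GTD2 updates of Eqn. (6). The function signature and the bookkeeping of (w_n, w_{n−1}) are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def gradient_dd_step(w, w_prev, eta, x, r, x_next, alpha, beta, kappa, gamma=1.0):
    """One transition's worth of the Gradient-DD updates, Eqn. (10).
    With kappa = 0 this reduces to the GTD2 updates of Eqn. (6)."""
    delta = r + (gamma * x_next - x).dot(w)              # TD error
    eta_new = eta + beta * (delta - x.dot(eta)) * x      # auxiliary weights
    w_new = (w
             - kappa * (x.dot(w) - x.dot(w_prev)) * x    # second-order difference term
             - alpha * (gamma * x_next - x) * x.dot(eta))
    return w_new, w, eta_new   # i.e., (w_{n+1}, w_n, eta_{n+1})
```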
Let λG be a real eigenvalue of the matrix √ ζG. (Note that G is defined here with opposite sign relative to G in Maei (2011).) From Maei (2011), the eigenvalues of the matrix −G are strictly negative. In other words, λG > 0. Let λ be an eigenvalue of the matrix J, i.e. a solution to the equation\n|λI− J| =(λ+ λG)(λ+ α−1) + κα−1λ = λ2 + [α−1(1 + κ) + λG]λ+ α−1λG = 0. (14) The smaller eigenvalues λm of the pair solutions to Eqn. (14) are\nλm < −λG, where details of the above derivations are given in the appendix. This explains the enhanced speed of convergence in Gradient-DD learning. We shall illustrate this enhanced speed of convergence in numerical experiments in Section 5.\nAdditionally, we also show a convergence property of Gradient-DD under constant step sizes by applying the ordinary differential equation method of stochastic approximation (Borkar & Meyn, 2000). Let the TD fixed point be w∗, such that Vw∗ = ΠBVw∗ . Under some conditions, we prove that, for any > 0, there exists b1 < ∞ such that lim sup\nn→∞ P (‖wn − w∗‖ > ) ≤ b1α.\nDetails are provided in the appendix. For tapered step sizes, which would be necessary to obtain an even stronger convergence proof, the analysis framework in Borkar & Meyn (2000) does not apply into the Gradient-DD algorithm. Although theoretical investigation of the convergence under tapered step sizes is a question to be studied, we find empirically in numerical experiments that the algorithm does in fact converge with tapered step sizes and even obtains much better performance in this case than with fixed step sizes." }, { "heading": "5 EMPIRICAL STUDY", "text": "In this section, we assess the practical utility of the Gradient-DD method in numerical experiments. To validate performance of Gradient-DD learning, we compare Gradient-DD learning with GTD2\nlearning, TDC learning (TD with gradient correction (Sutton et al., 2009b)), TD learning, and Emphatic TD learning (Sutton & Mahmood, 2016) in tabular representation using a random-walk task and in linear representation using the Boyan-chain task. For each method and each task, we performed a scan over the step sizes αn and the parameter κ so that the comprehensive performance of the different algorithms can be compared. We considered two choices of step size sequence {αn}:\n• (Case 1) αn is constant, i.e., αn = α0. • (Case 2) The learning rate αn is tapered according to the schedule αn = α0(103 +\n1)/(103 + n).\nWe set the κ = cα0 where c = 1, 2, 4. Additionally, we also allow κ dependent on n and consider Case 3: αn is tapered as in Case 2, but κn = cαn. In order to simplify presentation, the results of Case 3 are reported in the Appendix. To begin, we set βn = αn, then later allow for βn = ζαn under ζ ∈ {1/4, 1/2, 1, 2} in order to investigate the effect of the two-timescale approach of the Gradient-based TD algorithms on Gradient-DD. In all cases, we set γ = 1." }, { "heading": "5.1 RANDOM WALK TASK", "text": "As a first test of Gradient-DD learning, we conducted a simple random walk task (Sutton & Barto, 2018) with tabular representation of the value function. The random walk task has a linear arrangement ofm states plus an absorbing terminal state at each end. Thus there arem+2 sequential states, S0, S1, · · · , Sm, Sm+1, where m = 20, 50, or 100. Every walk begins in the center state. At each step, the walk moves to a neighboring state, either to the right or to the left with equal probability. If either edge state (S0 or Sm+1) is entered, the walk terminates. 
A walk's outcome is defined to be $r = 0$ at $S_0$ and $r = 1$ at $S_{m+1}$. Our aim is to learn the value of each state $V(s)$, where the true values are $(1, \cdots, m)/(m+1)$. In all cases the approximate value function is initialized to the intermediate value $V_0(s) = 0.5$. In order to investigate the effect of the initialization, we also initialize $V_0(s) = 0$ and report the results in Figure 7 of the Appendix, where the performance is very similar to that with the initialization $V_0(s) = 0.5$.

We first compare the methods by plotting the empirical RMS error from the final episode during training as a function of the step size $\alpha$ in Figure 2, where 5000 episodes are used. From the figure, we can make several observations. (1) Emphatic TD works well but is sensitive to $\alpha$. It prefers a very small $\alpha$ even in the tapering case, and this preference becomes stronger as the state space grows in size. (2) Gradient-DD works well and is robust to $\alpha$, as is conventional TD learning. (3) TDC performs similarly to the GTD2 method, but requires a slightly larger $\alpha$ than GTD2. (4) Gradient-DD performs similarly to conventional TD learning and better than the GTD2 method. This advantage is consistent across different settings. (5) The range of $\alpha$ leading to effective learning for Gradient-DD is roughly similar to that for GTD2.

Next we look closely at the performance during training, which we show in Figure 3, where each method and parameter setting was run for 5000 episodes. From the observations in Figure 2, in order to facilitate comparison of these methods, we set $\alpha_0 = 0.1$ for 10 spaces, $\alpha_0 = 0.2$ for 20 spaces, and $\alpha_0 = 0.5$ for 50 spaces. Because Emphatic TD requires the step size $\alpha$ to be especially small, as shown in Figure 2, the plotted values of $\alpha_0$ for Emphatic TD are tuned relative to the values used in the algorithm defined in Sutton & Mahmood (2016), where the step sizes of Emphatic TD, $\alpha_0^{(ETD)}$, are chosen from $\{0.5\%, 0.1\%, 0.05\%, 0.01\%\}$ by the smallest area under the performance curve. Additionally, we also tune $\alpha_0$ for TDC, because TDC requires a slightly larger $\alpha_n$ than GTD2, as shown in Figure 2. The step sizes for TDC are set as $\alpha_n^{(TDC)} = a\alpha_n$, where $a$ is chosen from $\{1, 1.5, 2, 3\}$ by the smallest area under the performance curve.

From the results shown in Figure 3a, we draw several observations. (1) For all conditions tested, Gradient-DD converges much more rapidly than GTD2 and TDC. The results indicate that Gradient-DD even converges faster than TD learning in some cases, though it is not as fast in the beginning episodes. (2) The advantage of Gradient-DD learning over other methods grows as the state space increases in size. (3) Gradient-DD learning is robust to the choice of $c$, which controls the size $\kappa$ of the second-order update, as long as $c$ is not too large. (Empirically $c = 2$ is a good choice.) (4) Gradient-DD has consistent and good performance under both the constant step size setting and the tapered step size setting. In summary, compared with GTD2 learning and the other methods, Gradient-DD learning in this task leads to improved learning with good convergence.

In addition to investigating the effects of the learning rate, the size of the state space, and the magnitude of the regularization parameter, we also investigated the effect of using distinct values for the two learning rates, $\alpha_n$ and $\beta_n$. To do this, we set $\beta_n = \zeta\alpha_n$ with $\zeta \in \{1/4, 1/2, 1, 2\}$ and report the results in Figure 8 of the appendix. The results show that comparably good performance of Gradient-DD is obtained under these various $\beta_n$ settings.
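For replication, the random-walk task of this subsection can be simulated in a few lines. The sketch below is our own illustration of the environment only, not of the learning algorithms; the exact center-state index is our assumption.

```python
import numpy as np

def random_walk_episode(m, rng):
    """One episode of the m-state random walk: states S_1..S_m are
    non-terminal, S_0 and S_{m+1} are absorbing; the outcome is r = 0
    at S_0 and r = 1 at S_{m+1} (true values are k/(m+1))."""
    s = (m + 1) // 2                 # every walk begins in the center state
    visited = []
    while 0 < s < m + 1:
        visited.append(s)
        s += rng.choice([-1, 1])     # left or right with equal probability
    return visited, float(s == m + 1)

rng = np.random.default_rng(0)
states, outcome = random_walk_episode(20, rng)
```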
" }, { "heading": "5.2 BOYAN-CHAIN TASK", "text": "We next investigate Gradient-DD learning on the Boyan-chain problem, which is a standard task for testing linear value-function approximation (Boyan, 2002). In this task we allow for $4p-3$ states, with $p = 20$, each of which is represented by a $p$-dimensional feature vector. The $p$-dimensional representation for every fourth state from the start is $[1, 0, \cdots, 0]$ for state $s_1$, $[0, 1, 0, \cdots, 0]$ for $s_5$, $\cdots$, and $[0, 0, \cdots, 0, 1]$ for the terminal state $s_{4p-3}$. The representations for the remaining states are obtained by linearly interpolating between these. The optimal coefficients of the feature vector are $(-4(p-1), -4(p-2), \cdots, 0)/5$. Simulations with $p = 50$ and $100$ give similar results to those from the random walk task, and hence are not shown here. In each state, except for the last one before the end, there are two possible transitions: move forward one step or move forward two steps, each with probability 0.5. Both transitions lead to reward -0.3. The last state before the end has only one transition, moving forward to the terminal state with reward -0.2. As in the random-walk task, the $\alpha_0$ used in Emphatic TD is tuned from $\{0.5\%, 0.2\%, 0.1\%, 0.05\%\}$. We report the results in Figure 4, which leads to conclusions similar to those already drawn from Figure 3. (1) Gradient-DD has much faster convergence than GTD2 and TDC, and generally converges to better values despite being somewhat slower than TD learning in the beginning episodes. (2) Gradient-DD is competitive with Emphatic TD. The improvement over other methods grows as the state space becomes larger. (3) As $\kappa$ increases, the performance of Gradient-DD improves. Additionally, the performance of Gradient-DD is robust to changes in $\kappa$ as long as $\kappa$ is not very large. Empirically, a good choice is to set $\kappa = \alpha$ or $2\alpha$. (4) Comparing the performance with constant step size versus that with tapered step size, the Gradient-DD method performs better with tapered step size than with constant step size." }, { "heading": "5.3 BAIRD’S COUNTEREXAMPLE", "text": "We also verify the performance of Gradient-DD on Baird's off-policy counterexample (Baird, 1995), for which TD learning famously diverges. We consider three cases: 7-state, 100-state and 500-state. We set $\alpha = 0.02$ (but $\alpha = 10^{-5}$ for ETD), $\beta = \alpha$ and $\gamma = 0.99$. We set $\kappa = 0.2$ for GDD1, $\kappa = 0.4$ for GDD2 and $\kappa = 0.8$ for GDD3. The initial parameter values are $(1, \cdots, 1, 10, 1)^\top$. We measure the performance by the empirical RMS error as a function of sweeps, and report the results in Figure 5. The figure demonstrates that Gradient-DD works as well on this well-known counterexample as GTD2 does, and even works better than GTD2 in the 100-state case. We also observe that the performance improvement of Gradient-DD increases as the state space increases. We note that, because the linear approximation leaves a residual error in the value estimation due to the projection error, the RMS errors in this task do not go to zero. Interestingly, Gradient-DD reduces this residual error as the size of the state space increases." }, { "heading": "6 CONCLUSION AND DISCUSSION", "text": "In this work, we have proposed Gradient-DD learning, a new gradient descent-based TD learning algorithm. The algorithm is based on a modification of the projected Bellman error objective function for value function approximation by introducing a second-order difference term.
The algorithm significantly improves upon existing methods for gradient-based TD learning, obtaining better convergence performance than conventional linear TD learning.

Since GTD learning was originally proposed, the Gradient-TD family of algorithms has been extended to incorporate eligibility traces and learn optimal policies (Maei & Sutton, 2010; Geist & Scherrer, 2014), as well as for application to neural networks (Maei, 2011). Additionally, many variants of the vanilla Gradient-TD methods have been proposed, including HTD (Hackman, 2012) and Proximal Gradient-TD (Liu et al., 2016). Because Gradient-DD modifies the objective of GTD2 only by adding a squared-bias term, it may be extended and combined with these other methods, potentially broadening its utility for more complicated tasks.

In this work we have focused on value function prediction in the two simple cases of tabular representations and linear approximation. An especially interesting direction for future study will be the application of Gradient-DD learning to tasks requiring more complex representations, including neural network implementations. Such approaches are especially useful in cases where state spaces are large, and indeed we have found in our results that Gradient-DD seems to confer the greatest advantage over other methods in such cases. Intuitively, we expect that this is because the difference between the optimal update direction and that chosen by gradient descent becomes greater in higher-dimensional spaces (cf. Fig. 1). This performance benefit in large state spaces suggests that Gradient-DD may be of practical use for these more challenging cases." }, { "heading": "6.1 ON THE EQUIVALENCE OF EQNS. (7) & (8)", "text": "The Karush-Kuhn-Tucker conditions of Eqn. (8) are the following system of equations:
$$\frac{d}{dw}J(w) + \kappa \frac{d}{dw}\left(\|V_w - V_{w_{n-1}}\|_D^2 - \mu\right) = 0; \quad \kappa\left(\|V_w - V_{w_{n-1}}\|_D^2 - \mu\right) = 0; \quad \|V_w - V_{w_{n-1}}\|_D^2 \le \mu; \quad \kappa \ge 0.$$
These equations are equivalent to
$$\frac{d}{dw}J(w) + \kappa \frac{d}{dw}\|V_w - V_{w_{n-1}}\|_D^2 = 0 \text{ and } \kappa > 0, \quad \text{if } \|V_w - V_{w_{n-1}}\|_D^2 = \mu;$$
$$\frac{d}{dw}J(w) = 0 \text{ and } \kappa = 0, \quad \text{if } \|V_w - V_{w_{n-1}}\|_D^2 < \mu.$$
Thus, for any $\mu > 0$, there exists a $\kappa \ge 0$ such that $\frac{d}{dw}J(w) + \kappa \frac{d}{dw}\|V_w - V_{w_{n-1}}\|_D^2 = 0$." }, { "heading": "6.2 EIGENVALUES OF J", "text": "Let $\lambda$ be an eigenvalue of the matrix $J$. We have that
$$|\lambda I - J| = \begin{vmatrix} \lambda I + \sqrt{\zeta} G & \kappa H \\ \sqrt{\zeta}\alpha^{-1} G & \lambda I + \alpha^{-1}(I + \kappa H) \end{vmatrix} = \begin{vmatrix} \lambda I + \sqrt{\zeta} G & \kappa H \\ -\lambda\alpha^{-1} I & \lambda I + \alpha^{-1} I \end{vmatrix} = \begin{vmatrix} \lambda I + \sqrt{\zeta} G & \kappa H \\ 0 & \lambda I + \alpha^{-1} I + \kappa\alpha^{-1}\lambda(\lambda I + \sqrt{\zeta} G)^{-1} H \end{vmatrix} = |(\lambda I + \sqrt{\zeta} G)(\lambda I + \alpha^{-1} I) + \kappa\alpha^{-1}\lambda H|.$$
From the assumption $\mathbb{E}(x_n x_n^\top) = I$ and the definition of $H$, some eigenvalues $\lambda$ of the matrix $J$ are solutions to $|\lambda I - J| = (\lambda + \lambda_G)(\lambda + \alpha^{-1}) = 0$, and the other eigenvalues of $J$ are solutions to
$$|\lambda I - J| = (\lambda + \lambda_G)(\lambda + \alpha^{-1}) + \kappa\alpha^{-1}\lambda = \lambda^2 + [\alpha^{-1}(1+\kappa) + \lambda_G]\lambda + \alpha^{-1}\lambda_G = 0.$$
Note that $\lambda_G > 0$. The pair of solutions to the equation above is
$$\lambda = -\tfrac{1}{2}[\alpha^{-1}(1+\kappa) + \lambda_G] \pm \tfrac{1}{2}\sqrt{[\alpha^{-1}(1+\kappa) + \lambda_G]^2 - 4\alpha^{-1}\lambda_G} = -\tfrac{1}{2}[\alpha^{-1}(1+\kappa) + \lambda_G] \pm \tfrac{1}{2}\sqrt{[\alpha^{-1}(1+\kappa) - \lambda_G]^2 + 4\alpha^{-1}\lambda_G\kappa}.$$
Thus, the smaller eigenvalue of each pair is
$$\lambda_m = -\tfrac{1}{2}[\alpha^{-1}(1+\kappa) + \lambda_G] - \tfrac{1}{2}\sqrt{[\alpha^{-1}(1+\kappa) - \lambda_G]^2 + 4\alpha^{-1}\lambda_G\kappa} < -\tfrac{1}{2}[\alpha^{-1}(1+\kappa) + \lambda_G] - \tfrac{1}{2}\sqrt{[\alpha^{-1}(1+\kappa) - \lambda_G]^2},$$
where the inequality follows from $\lambda_G > 0$.
When $\alpha^{-1}(1+\kappa) - \lambda_G > 0$, then
$$\lambda_m < -\tfrac{1}{2}[\alpha^{-1}(1+\kappa) + \lambda_G] - \tfrac{1}{2}(\alpha^{-1}(1+\kappa) - \lambda_G) = -\alpha^{-1}(1+\kappa) < -\lambda_G.$$
When $\alpha^{-1}(1+\kappa) - \lambda_G \le 0$, then
$$\lambda_m < -\tfrac{1}{2}[\alpha^{-1}(1+\kappa) + \lambda_G] + \tfrac{1}{2}(\alpha^{-1}(1+\kappa) - \lambda_G) = -\lambda_G.$$
In both cases, $\lambda_m < -\lambda_G$." }, { "heading": "CONVERGENCE WITH CONSTANT STEP SIZES", "text": "At last, we apply the ODE method of stochastic approximation to obtain the convergence performance.

Theorem 1. Consider the update rules (10) with constant step sizes $\kappa$, $\alpha$ and $\beta$ satisfying $\kappa \ge 0$, $\beta = \zeta\alpha$ with $\zeta > 0$, $\alpha \in (0,1)$ and $\beta > 0$. Let the TD fixed point be $w^*$, such that $V_{w^*} = \Pi B V_{w^*}$. Suppose that (A1) $(x_n, r_n, x_{n+1})$ is an i.i.d. sequence with uniformly bounded second moments, and (A2) $\mathbb{E}[(x_n - \gamma x_{n+1})x_n^\top]$ and $\mathbb{E}(x_n x_n^\top)$ are non-singular. Then for any $\epsilon > 0$, there exists $b_1 < \infty$ such that
$$\limsup_{n\to\infty} P(\|w_n - w^*\| > \epsilon) \le b_1\alpha.$$

Proof. Given the constant step sizes in the conditions, we denote $\kappa_n = \kappa$ and $\alpha_n = \alpha$. Thus, Eqn. (12) equals
$$(I + \kappa H_n)(\rho_{n+1} - \rho_n) - \kappa H_n(\rho_{n+1} - 2\rho_n + \rho_{n-1}) = -\sqrt{\zeta}\alpha(G_n\rho_n - g_{n+1}). \quad (A.1)$$
Denoting $\psi_{n+1} = \alpha^{-1}(\rho_{n+1} - \rho_n)$, Eqn. (A.1) is rewritten as
$$\begin{bmatrix} \rho_{n+1} - \rho_n \\ \psi_{n+1} - \psi_n \end{bmatrix} = \alpha \begin{bmatrix} I + \kappa H_n & -\kappa\alpha H_n \\ I & -\alpha I \end{bmatrix}^{-1} \begin{bmatrix} -\sqrt{\zeta}(G_n\rho_n - g_{n+1}) \\ \psi_n \end{bmatrix} = \alpha \begin{bmatrix} -\sqrt{\zeta} G_n & -\kappa H_n \\ -\sqrt{\zeta}\alpha^{-1} G_n & -\alpha^{-1}(I + \kappa H_n) \end{bmatrix} \begin{bmatrix} \rho_n \\ \psi_n \end{bmatrix} + \alpha \begin{bmatrix} \sqrt{\zeta}\, g_{n+1} \\ \sqrt{\zeta}\alpha^{-1} g_{n+1} \end{bmatrix}, \quad (A.2)$$
where the second step follows from
$$\begin{bmatrix} I + \kappa H_n & -\kappa\alpha H_n \\ I & -\alpha I \end{bmatrix}^{-1} = \begin{bmatrix} I & -\kappa H_n \\ \alpha^{-1} I & -\alpha^{-1}(I + \kappa H_n) \end{bmatrix}.$$
Denoting $G = \mathbb{E}(G_n)$, $g = \mathbb{E}(g_n)$ and $H = \mathbb{E}(H_n)$, the TD fixed point of Eqn. (A.1) is given by
$$-G\rho + g = 0. \quad (A.3)$$
To prove the theorem, we apply the ordinary differential equation approach to stochastic approximation (Theorem 2.3 of Borkar & Meyn (2000), restated as Lemma 1 below) to Eqn. (A.2). Note that Sutton et al. (2009a) and Sutton et al. (2009b) also applied Theorem 2.3 of Borkar & Meyn (2000) in using the gradient-descent method for temporal-difference learning to obtain their convergence results. To simplify notation, denote
$$J_n = \begin{bmatrix} -\sqrt{\zeta} G_n & -\kappa H_n \\ -\sqrt{\zeta}\alpha^{-1} G_n & -\alpha^{-1}(I + \kappa H_n) \end{bmatrix}, \quad J = \begin{bmatrix} -\sqrt{\zeta} G & -\kappa H \\ -\sqrt{\zeta}\alpha^{-1} G & -\alpha^{-1}(I + \kappa H) \end{bmatrix}, \quad y_n = \begin{bmatrix} \rho_n \\ \psi_n \end{bmatrix}, \quad h_n = \begin{bmatrix} \sqrt{\zeta}\, g_{n+1} \\ \sqrt{\zeta}\alpha^{-1} g_{n+1} \end{bmatrix}, \quad h = \begin{bmatrix} \sqrt{\zeta}\, g \\ \sqrt{\zeta}\alpha^{-1} g \end{bmatrix}.$$
Eqn. (A.2) is rewritten as
$$y_{n+1} = y_n + \alpha(f(y_n) + h + M_{n+1}), \quad (A.4)$$
where $f(y_n) = J y_n$ and $M_{n+1} = (J_n - J)y_n + h_n - h$. Now we verify conditions (c1)-(c4) of Lemma 1. First, Condition (c1) is satisfied under the assumption of constant step sizes. Second, $f(y)$ is Lipschitz and $f_\infty(y) = Gy$. Following Sutton et al. (2009a), Assumption A2 implies that the real parts of all the eigenvalues of $G$ are positive. Therefore, Condition (c2) is satisfied.

Since $\mathbb{E}(M_{n+1}|\mathcal{F}_n) = 0$, where $\mathcal{F}_n = \sigma(y_i, M_i, i \le n)$, the sequence $\{M_n, \mathcal{F}_n\}$ is a martingale difference sequence. To verify the bound $\mathbb{E}(\|M_{n+1}\|^2|\mathcal{F}_n) \le c_0(1 + \|y_n\|^2)$, note that
$$\|M_{n+1}\|^2 \le 2(\|J_n - J\|^2\|y_n\|^2 + \|h_n - h\|^2). \quad (A.5)$$
From Assumption A1 and Eqn. (A.5), it follows that there are constants $c_j$ and $c_h$ such that $\mathbb{E}(\|J_n - J\|^2|\mathcal{F}_n) \le c_j$ and $\mathbb{E}(\|h_{n+1} - h\|^2) \le c_h$. Thus, Condition (c3) is satisfied.

Finally, Condition (c4) is satisfied by noting that $y^* = G^{-1}g$ is the unique globally asymptotically stable equilibrium.

Theorem 1 bounds the estimation error of $w$ in probability. Note that the convergence of Gradient-DD learning provided in Theorem 1 is a somewhat weaker result than the statement that $w_n \to w^*$ with probability 1 as $n \to \infty$. The technical reason for this is the condition on the step sizes. In Theorem 1, we consider the case of constant step sizes, with $\alpha_n = \alpha$ and $\kappa_n = \kappa$. This restriction is imposed so that Eqn. (12) can be written as a system of first-order difference equations, which cannot be done rigorously when step sizes are tapered as in Sutton et al. (2009b).
As shown below, however, we find empirically in numerical experiments that the algorithm does in fact converge with tapered step sizes, and even obtains much better performance in that case than with fixed step sizes." }, { "heading": "AN ODE RESULT ON STOCHASTIC APPROXIMATION", "text": "We introduce an ODE result on stochastic approximation in the following lemma, and prove Theorem 1 by applying this result.

Lemma 1 (Theorem 2.3 of Borkar & Meyn (2000)). Consider the stochastic approximation algorithm described by the $d$-dimensional recursion
$$y_{n+1} = y_n + \alpha_n[f(y_n) + M_{n+1}].$$
Suppose the following conditions hold: (c1) the sequence $\{\alpha_n\}$ satisfies, for some constants $0 < \underline{\alpha} < \bar{\alpha} < 1$, $\underline{\alpha} < \alpha_n < \bar{\alpha}$; (c2) the function $f$ is Lipschitz, and there exists a function $f_\infty$ such that $\lim_{r\to\infty} f_r(y) = f_\infty(y)$, where the scaled function $f_r: \mathbb{R}^d \to \mathbb{R}^d$ is given by $f_r(y) = f(ry)/r$; furthermore, the ODE $\dot{y} = f_\infty(y)$ has the origin as a globally asymptotically stable equilibrium; (c3) the sequence $\{M_n, \mathcal{F}_n\}$, with $\mathcal{F}_n = \sigma(y_i, M_i, i \le n)$, is a martingale difference sequence, and for some $c_0 < \infty$ and any initial condition $y_0$, $\mathbb{E}(\|M_{n+1}\|^2|\mathcal{F}_n) \le c_0(1 + \|y_n\|^2)$; (c4) the ODE $\dot{y}(t) = f(y(t))$ has a unique globally asymptotically stable equilibrium $y^*$. Then for any $\epsilon > 0$, there exists $b_1 < \infty$ such that
$$\limsup_{n\to\infty} P(\|y_n - y^*\| > \epsilon) \le b_1\bar{\alpha}.$$" }, { "heading": "6.3 ADDITIONAL EMPIRICAL RESULTS", "text": "" } ]
2020
null
SP:dcb62a0cc1b03e9ea24b2ed167f14255d9386f95
[ "This paper presents a methodology for incorporating factor-graphs into model-based and model-free RL methods. The work starts by assuming access to a correct factor graph showing the relationship between individual state factors, actions, and rewards. The authors propose to make use of this factor graph by using a Factored Neural Network - which is similar to the standard feed-forward MLP networks that would typically be used to parameterize a policy or Q-function - except that it masks out connections between input and output nodes that are not connected in the factor graph. Presumably this results in a sparser neural network, which can lead to faster learning and better sample complexity. The authors demonstrate how these factored NNs can be incorporated with model-based MCTS as well as model-free DQN and PPO. In short - the algorithm remains unchanged and the only substitution seems to be the Factored NN rather than a fully-connected NN. Experiments are performed on Multi-Cartpole (simultaneous control over several cartpoles), Taxi, BitFlip, and PyBullet's Ant, Half-Cheetah, and Humanoid. Each of the factored algorithms is compared with the un-factored equivalent, and increased sample efficiency of learning is noted for the factored variants. The authors provide the manually-defined factor-graphs used for each of these environments in the Appendix." ]
We propose a simple class of deep reinforcement learning (RL) methods, called FactoredRL, that can leverage factored environment structures to improve the sample efficiency of existing model-based and model-free RL algorithms. In tabular and linear approximation settings, the factored Markov decision process literature has shown exponential improvements in sample efficiency by leveraging factored environment structures. We extend this to deep RL algorithms that use neural networks. For model-based algorithms, we use the factored structure to inform the state transition network architecture and for model-free algorithms we use the factored structure to inform the Q network or the policy network architecture. We demonstrate that doing this significantly improves sample efficiency in both discrete and continuous state-action space settings.
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Bharathan Balaji", "Jordan Bell-Masterson", "Enes Bilgin", "Andreas Damianou", "Pablo Moreno Garcia", "Arpit Jain", "Runfei Luo", "Alvaro Maggiar", "Balakrishnan Narayanaswamy", "Chun Ye" ], "title": "Orl: Reinforcement learning benchmarks for online stochastic optimization problems", "venue": "arXiv preprint arXiv:1911.10641,", "year": 2019 }, { "authors": [ "Jonathan Bassen", "Bharathan Balaji", "Michael Schaarschmidt", "Candace Thille", "Jay Painter", "Dawn Zimmaro", "Alex Games", "Ethan Fast", "John C Mitchell" ], "title": "Reinforcement learning for the adaptive scheduling of educational activities", "venue": "In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems,", "year": 2020 }, { "authors": [ "Craig Boutilier", "Richard Dearden", "Moises Goldszmidt" ], "title": "Exploiting structure in policy construction", "venue": "In IJCAI,", "year": 1995 }, { "authors": [ "Sandeep Chinchali", "Pan Hu", "Tianshu Chu", "Manu Sharma", "Manu Bansal", "Rakesh Misra", "Marco Pavone", "Sachin Katti" ], "title": "Cellular network traffic scheduling with deep reinforcement learning", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Erwin Coumans", "Yunfei Bai" ], "title": "Pybullet, a python module for physics simulation for games, robotics and machine learning", "venue": "GitHub repository,", "year": 2016 }, { "authors": [ "Janos Csirik", "David S Johnson", "Claire Kenyon", "James B Orlin", "Peter W Shor", "Richard R Weber" ], "title": "On the sum-of-squares algorithm for bin packing", "venue": "Journal of the ACM (JACM),", "year": 2006 }, { "authors": [ "Hao Cui", "Roni Khardon" ], "title": "Online symbolic gradient-based optimization for factored action mdps", "venue": "In IJCAI,", "year": 2016 }, { "authors": [ "Hao Cui", "Roni Khardon", "Alan Fern", "Prasad Tadepalli" ], "title": "Factored mcts for large scale stochastic planning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Thomas Dean", "Keiji Kanazawa" ], "title": "A model for reasoning about persistence and causation", "venue": "Computational intelligence,", "year": 1989 }, { "authors": [ "Francesco Fraternali", "Bharathan Balaji", "Dhiman Sengupta", "Dezhi Hong", "Rajesh K Gupta" ], "title": "Ember: energy management of batteryless event detection sensors with deep reinforcement learning", "venue": "In Proceedings of the 18th Conference on Embedded Networked Sensor Systems,", "year": 2020 }, { "authors": [ "Ilaria Giannoccaro", "Pierpaolo Pontrandolfo" ], "title": "Inventory management in supply chains: a reinforcement learning approach", "venue": "International Journal of Production Economics,", "year": 2002 }, { "authors": [ "Joren Gijsbrechts", "Robert N Boute", "Jan A Van Mieghem", "Dennis Zhang" ], "title": "Can deep reinforcement learning improve inventory management? 
performance on dual sourcing, lost sales and multiechelon problems", "venue": "Performance on Dual Sourcing, Lost Sales and Multi-Echelon Problems (July", "year": 2019 }, { "authors": [ "Carlos Guestrin", "Daphne Koller", "Ronald Parr" ], "title": "Multiagent planning with factored mdps", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Carlos Guestrin", "Daphne Koller", "Ronald Parr", "Shobha Venkataraman" ], "title": "Efficient solution algorithms for factored mdps", "venue": "Journal of Artificial Intelligence Research,", "year": 2003 }, { "authors": [ "Varun Gupta", "Ana Radovanovic" ], "title": "Online stochastic bin packing", "venue": "arXiv preprint arXiv:1211.2687,", "year": 2012 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Victoria J Hodge", "Richard Hawkins", "Rob Alexander" ], "title": "Deep reinforcement learning for drone navigation using sensor data", "venue": "Neural Computing and Applications,", "year": 2020 }, { "authors": [ "Zhengyao Jiang", "Dixing Xu", "Jinjun Liang" ], "title": "A deep reinforcement learning framework for the financial portfolio management problem", "venue": "arXiv preprint arXiv:1706.10059,", "year": 2017 }, { "authors": [ "Michael Kearns", "Daphne Koller" ], "title": "Efficient reinforcement learning in factored mdps", "venue": "In Proceedings of the 16th International Joint Conference on Artificial Intelligence - Volume 2,", "year": 1999 }, { "authors": [ "Michael Kearns", "Satinder Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Levente Kocsis", "Csaba Szepesvári" ], "title": "Bandit based monte-carlo planning", "venue": "In European conference on machine learning,", "year": 2006 }, { "authors": [ "Finnian Lattimore", "Tor Lattimore", "Mark D Reid" ], "title": "Causal bandits: Learning good interventions via causal inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Changjian Li", "Krzysztof Czarnecki" ], "title": "Urban driving with multi-objective deep reinforcement learning", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 359–367. 
International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2019 }, { "authors": [ "Eric Liang", "Richard Liaw", "Robert Nishihara", "Philipp Moritz", "Roy Fox", "Ken Goldberg", "Joseph Gonzalez", "Michael Jordan", "Ion Stoica" ], "title": "Rllib: Abstractions for distributed reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander Gaunt" ], "title": "Constrained graph variational autoencoders for molecule design", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Chaochao Lu", "Bernhard Schölkopf", "José Miguel Hernández-Lobato" ], "title": "Deconfounding reinforcement learning in observational settings", "venue": "arXiv preprint arXiv:1812.10576,", "year": 2018 }, { "authors": [ "Hongzi Mao", "Parimarjan Negi", "Akshay Narayan", "Hanrui Wang", "Jiacheng Yang", "Haonan Wang", "Ryan Marcus", "Mehrdad Khani Shirkoohi", "Songtao He", "Vikram Nathan" ], "title": "Park: An open platform for learning-augmented computer systems", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Afshin Oroojlooyjadid", "MohammadReza Nazari", "Lawrence Snyder", "Martin Takáč" ], "title": "A deep qnetwork for the beer game: A deep reinforcement learning algorithm to solve inventory optimization problems", "venue": "arXiv preprint arXiv:1708.05924,", "year": 2017 }, { "authors": [ "Ian Osband", "Benjamin Van Roy" ], "title": "Near-optimal reinforcement learning in factored mdps", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Nazneen N Sultana", "Hardik Meisheri", "Vinita Baniwal", "Somjit Nath", "Balaraman Ravindran", "Harshad Khadilkar" ], "title": "Reinforcement learning for multi-product multi-node inventory management in supply chains", "venue": null, "year": 2006 }, { "authors": [ "Oriol Vinyals", "Timo Ewalds", "Sergey Bartunov", "Petko Georgiev", "Alexander Sasha Vezhnevets", "Michelle Yeo", "Alireza Makhzani", "Heinrich Küttler", "John Agapiou", "Julian Schrittwieser" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "Tingwu Wang", "Renjie Liao", "Jimmy Ba", "Sanja Fidler" ], "title": "Nervenet: Learning structured policy with graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 
}, { "authors": [ "Tianshu Wei", "Yanzhi Wang", "Qi Zhu" ], "title": "Deep reinforcement learning for building hvac control", "venue": "In Proceedings of the 54th Annual Design Automation Conference 2017,", "year": 2017 }, { "authors": [ "Jason D Williams", "Geoffrey Zweig" ], "title": "End-to-end lstm-based dialog control optimized with supervised and reinforcement learning", "venue": "arXiv preprint arXiv:1606.01269,", "year": 2016 }, { "authors": [ "Jason D Williams", "Kavosh Asadi", "Geoffrey Zweig" ], "title": "Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning", "venue": "arXiv preprint arXiv:1702.03274,", "year": 2017 }, { "authors": [ "Cathy Wu", "Aboudy Kreidieh", "Kanaad Parvate", "Eugene Vinitsky", "Alexandre M Bayen" ], "title": "Flow: Architecture and benchmarking for reinforcement learning in traffic control", "venue": "arXiv preprint arXiv:1710.05465,", "year": 2017 }, { "authors": [ "Cathy Wu", "Aravind Rajeswaran", "Yan Duan", "Vikash Kumar", "Alexandre M Bayen", "Sham Kakade", "Igor Mordatch", "Pieter Abbeel" ], "title": "Variance reduction for policy gradient with action-dependent factorized baselines", "venue": "arXiv preprint arXiv:1803.07246,", "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Ziping Xu", "Ambuj Tewari" ], "title": "Near-optimal reinforcement learning in factored mdps: Oracle-efficient algorithms for the non-episodic setting", "venue": "arXiv preprint arXiv:2002.02302,", "year": 2020 }, { "authors": [ "Yunan Ye", "Hengzhi Pei", "Boxin Wang", "Pin-Yu Chen", "Yada Zhu", "Ju Xiao", "Bo Li" ], "title": "Reinforcement-learning based portfolio management with augmented asset movement prediction states", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Liang Yu", "Yi Sun", "Zhanbo Xu", "Chao Shen", "Dong Yue", "Tao Jiang", "Xiaohong Guan" ], "title": "Multi-agent deep reinforcement learning for hvac control in commercial buildings", "venue": "IEEE Transactions on Smart Grid,", "year": 2020 }, { "authors": [ "Junzhe Zhang", "Elias Bareinboim" ], "title": "Near-optimal reinforcement learning in dynamic treatment regimes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jie Zhou", "Ganqu Cui", "Zhengyan Zhang", "Cheng Yang", "Zhiyuan Liu", "Lifeng Wang", "Changcheng Li", "Maosong Sun" ], "title": "Graph neural networks: A review of methods and applications", "venue": "arXiv preprint arXiv:1812.08434,", "year": 2018 }, { "authors": [ "Paul Zipkin" ], "title": "Old and new methods for lost-sales inventory systems", "venue": "Operations Research,", "year": 2008 } ]
[ { "heading": null, "text": "We propose a simple class of deep reinforcement learning (RL) methods, called FactoredRL, that can leverage factored environment structures to improve the sample efficiency of existing model-based and model-free RL algorithms. In tabular and linear approximation settings, the factored Markov decision process literature has shown exponential improvements in sample efficiency by leveraging factored environment structures. We extend this to deep RL algorithms that use neural networks. For model-based algorithms, we use the factored structure to inform the state transition network architecture and for model-free algorithms we use the factored structure to inform the Q network or the policy network architecture. We demonstrate that doing this significantly improves sample efficiency in both discrete and continuous state-action space settings." }, { "heading": "1 INTRODUCTION", "text": "In many domains, the structure of the Markov Decision Process (MDP) is known at the time of problem formulation. For example, in inventory management, we know the structure of the state transition: how inventory flows from a vendor, to a warehouse, to a customer (Giannoccaro & Pontrandolfo, 2002; Oroojlooyjadid et al., 2017). In portfolio management, we know that a certain asset changes only when the agent buys or sells a corresponding item (Jiang et al., 2017). Similar structural information is available in vehicle routing, robotics, computing, and many others. Our work stems from the observation that we can exploit the known structure of a given MDP to learn a good policy. We build on the Factored MDP literature (Boutilier et al., 1995; Osband & Van Roy, 2014; Kearns & Singh, 2002; Cui & Khardon, 2016), and propose a factored graph to represent known relationships between states, actions and rewards in a given problem. We use the factored graphs to inform the structure of the neural networks used in deep reinforcement learning (RL) algorithms to improve their sample efficiency. We give literature references and example factor graphs for real world applications in Appendix A.\nConsider a motivational example, where the goal of the agent is to balance multiple independent cartpoles simultaneously, with each cartpole defined as per OpenAI gym (G. Brockman & Zaremba, 2016). The agent can take a ‘left’ or ‘right’ action on each cartpole, and the state includes the position and velocity of each cart and each pole. We refer to this as the Multi-CartPole problem.\nBoth model-based and model-free algorithms treat the state-action space as a single entity, which makes exploration combinatorially complex. As a consequence, the sample efficiency of RL algorithms degrades exponentially with the number of cartpoles, despite the problem remaining conceptually simple for a human. By allowing the agent access to the problem’s factored structure (i.e. each action affects only one cartpole), we bypass the need to learn about each action’s relationship with the entire state, and instead only need to learn about each action’s relationship with its single, related cartpole.\nWe show how to integrate knowledge of the factored graph into both model-based and model-free deep RL algorithms, and thereby improve sample efficiency. In all cases, we first write down a factored graph as an adjacency matrix, representing the relationships between state, action, and reward. 
From this adjacency matrix, we then define a Factored Neural Network (Factored NN), which uses input and output masking to reflect the structure of the factored graph.

Finally, we show how to integrate this Factored NN into existing deep RL algorithms. For model-based RL, we use the Factored NN to learn decomposed state transitions, and then integrate this state transition model with Monte Carlo Tree Search (MCTS) (Kocsis & Szepesvári, 2006). For model-free RL, we use the Factored NN to learn a decomposed Q-function, and then integrate it with DQN (Mnih et al., 2015). Also for model-free RL, we use the Factored NN to learn a decomposed policy function, and then integrate it with PPO (Schulman et al., 2017). In all three cases, we demonstrate empirically that these FactoredRL methods (Factored MCTS, DQN, and PPO) are able to achieve better sample efficiency than their vanilla implementations on a range of environments." }, { "heading": "2 RELATED WORK", "text": "Several methods that exploit the structural information of a problem have been proposed in the Factored MDP literature. Kearns & Koller (1999) propose a method to conduct model-based RL with a Dynamic Bayesian Network (DBN) (Dean & Kanazawa, 1989) and learn its parameters based on an extension of the Explicit Explore or Exploit (E3) algorithm (Kearns & Singh, 2002). Guestrin et al. (2003) propose a linear-program and a dynamic-program based algorithm to learn linear value functions in Factored MDPs, and extend them to multi-agent settings (Guestrin et al., 2002). They exploit the context-specific and additive structures in Factored MDPs that capture the locality of influence of specific states and actions. We use the same structures in our proposed algorithms. Cui & Khardon (2016) propose a symbolic representation of Factored MDPs. Osband & Van Roy (2014) propose posterior sampling and upper confidence bound based algorithms and prove that they are near-optimal. They show that the sample efficiency of the algorithm scales polynomially with the number of parameters that encode the factored MDP, which may be exponentially smaller than the full state-action space. Xu & Tewari (2020) extend the results to non-episodic settings, and Lattimore et al. (2016) show similar results for contextual bandits. The algorithms proposed in these prior works assume a tabular (Cui et al., 2015; Geißer et al.) or linear setting (Guestrin et al., 2003), or require symbolic expressions (Cui & Khardon, 2016). We extend these ideas to deep RL algorithms by incorporating the structural information into the neural network.

Li & Czarnecki (2019) propose a factored DQN algorithm for urban driving applications. Our proposed algorithms are similar, but we extend the ideas to model-based algorithms like MCTS (Kocsis & Szepesvári, 2006) and model-free on-policy algorithms like PPO (Schulman et al., 2017). We also evaluate our algorithms on a variety of environments which encompass discrete and continuous state-action spaces. The Factored NN we propose is closely related to Graph Neural Networks (Scarselli et al., 2008; Zhou et al., 2018), which are deep learning based methods that operate on the graph domain and have been applied to areas such as network analysis (Kipf & Welling, 2016), molecule design (Liu et al., 2018) and computer vision (Xu et al., 2018). Instead of explicitly embedding the neighbors of all the nodes with neural networks, we use a single neural network with masking.

NerveNet Wang et al. 
(2018) addresses the expressiveness of structure in an MDP, similar to our work. They focus on robotics applications and demonstrate state-action factorization with PPO. In our work, we additionally demonstrate state-transition and state-reward factorization in MCTS and DQN respectively. In addition, they propose imposing structure with Graph Neural Networks; in contrast, we propose using input and output masking without modifying the neural architecture.

Working Memory Graphs (Loynd et al., 2020) use Transformer networks for modeling both factored observations and dependencies across time steps. However, they only evaluate their method in a grid world with a single discrete action. In contrast, we demonstrate our methods on multiple environments and algorithms, with factorization in state-transition, state-action and state-reward relationships. In addition, our factored network is a simple extension of the existing network used to solve a problem, whereas they impose a complex network architecture.

Action masking has been used effectively to improve RL performance in multiple works (Williams & Zweig, 2016; Williams et al., 2017; Vinyals et al., 2017). We use a similar trick when applying our Factored NN to policy networks in model-free RL; however, we use both an action mask and a state mask to incorporate factored structure into policy networks. Our state transition networks for model-based RL also impose masks on both input and output, corresponding to the current state-action and the next state respectively. Wu et al. (2018) introduce an action-dependent baseline in actor-critic algorithms, where a separate advantage function is learned for each action. Their method also exploits structure available in the action space. Our method of incorporating structure is orthogonal, as we modify the policy network in actor-critic methods.

There is also a relationship between our work and the emerging intersection of reinforcement learning and causal inference, as factored graphs are a super-set of causal graphs in the MDP setting. Lu et al. (2018) use the backdoor criterion of causal inference and variational autoencoders. Zhang & Bareinboim (2019) propose a near-optimal algorithm that takes advantage of causal inference in non-Markovian dynamic treatment regimes. Both works assume there exist unobserved confounders in the environment. We instead tackle a different problem where there are no unobserved confounders, and show that there are still benefits to leveraging structural information." }, { "heading": "3 TERMINOLOGY", "text": "We briefly describe the terminology used in this paper. We use Directed Acyclic Graphs (DAGs) to represent relationships between variables. DAGs consist of nodes and edges, where the nodes correspond to random variables $X = (X_1, ..., X_d)$, and a directed edge from variable $X_i$ to $X_j$ represents that $X_i$ has an effect on $X_j$ ($X_i$ is also called a parent of $X_j$).
Under Markov conditions, the joint distribution of the variables can be factored as $p(X_{1:d}) = \prod_{i=1}^d p(X_i \mid \mathrm{PA}(X_i))$.

Consider a general Markov Decision Process (MDP) defined by $(S, A, P, R, \rho_0, \gamma)$, where $S$ and $A$ denote the state and action spaces respectively, $P$ denotes the transition probability, $R$ represents the reward function, and $\rho_0$ and $\gamma$ represent the initial state distribution and the discount factor respectively.

In the classic RL setting, one typically assumes that each state $S^k_{t+1}$ depends on all previous states and actions, i.e., $\mathrm{PA}(S^k_{t+1}) = \{\{S^k_t\}_{k=1}^{|S|}, \{A^k_t\}_{k=1}^{|A|}\}$, where $|\cdot|$ denotes the cardinality of the space and $\mathrm{PA}$ denotes the parents of a node in a Bayesian network. However, in many scenarios, one component of the action, $A^k_t$, may only cause part of the state space, $\{S^k_t\}_{k \in C_k}$, to change, where $C_k$ is the index set of the related states of the $k$th component of the action. In other words, the parents of each state may only be a subset of the actions and previous states, i.e., $\mathrm{PA}(S^k_{t+1}) \subsetneq \{\{S^k_t\}_{k=1}^{|S|}, \{A^k_t\}_{k=1}^{|A|}\}$. Simplifying the conditional dependencies helps to construct a more accurate model, enabling us to better decompose the dynamics and reduce the complexity of the learning tasks. We assume the factored structure of the environment does not change over time." }, { "heading": "4 FACTORED NEURAL NETWORK", "text": "We introduce Factored Neural Networks (Factored NN), a generic method for using knowledge from a factored graph to improve neural network predictions. The Factored NN works as follows: we start with a factored graph represented as an adjacency matrix that tells us which of our inputs influence which of our outputs. Then, we predict each output one at a time while masking all the inputs that are irrelevant for the particular output according to our factored graph. We refer to the unmodified neural network as the Ordinary NN.

Figure 1 gives an example. From the factored graph on the left, we observe that output $o_1$ only depends on input $i_1$, while output $o_2$ depends on both inputs. An Ordinary NN takes $(i_1, i_2)$ as input and outputs $(o_1, o_2)$ in one go. The Factored NN instead predicts $o_1$ and $o_2$ separately using knowledge of the factored graph. When predicting $o_1$, it masks out $i_2$ and only considers the relevant input $i_1$. When predicting $o_2$, it does not mask any inputs. Then $o_1$ and $o_2$ are combined into one vector, so that the output form of the Factored NN is the same as with an Ordinary NN and backpropagation can be done as normal.
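As a concrete illustration of this masking scheme, consider the following minimal PyTorch sketch. It is ours rather than the authors' released code; the names and sizes are illustrative, and whether weights are shared across output blocks is a design choice not specified above.

```python
import torch
import torch.nn as nn

class FactoredNN(nn.Module):
    """Minimal Factored NN: predict each output from a masked input copy.

    adjacency[k, j] = 1 iff input j is a parent of output k in the
    factored graph (e.g. Figure 1: o1 depends only on i1, o2 on both)."""
    def __init__(self, n_inputs, adjacency, hidden=32):
        super().__init__()
        self.register_buffer("adjacency",
                             torch.as_tensor(adjacency, dtype=torch.float32))
        n_outputs = self.adjacency.shape[0]
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(n_inputs, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_outputs))

    def forward(self, x):
        # mask irrelevant inputs for each output, then reassemble the vector
        outs = [head(x * mask) for head, mask in zip(self.heads, self.adjacency)]
        return torch.cat(outs, dim=-1)  # same shape as an ordinary NN's output

# Figure 1 example: o1 <- i1 only; o2 <- (i1, i2)
net = FactoredNN(n_inputs=2, adjacency=[[1.0, 0.0], [1.0, 1.0]])
y = net(torch.randn(4, 2))  # batch of 4, output shape (4, 2)
```

Because the masked outputs are reassembled into one vector, the usual loss and backpropagation apply unchanged; the same construction can serve as a state transition model by taking the concatenated (state, action) as input and the next state as output.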
Below, we show how to use the Factored NN in both model-based and model-free RL algorithms, using the same underlying factored structure but varying which elements of $(S, A, R)$ to take as input/output depending on the algorithm. Using the Multi-Cartpole environment as an example, Figure 2 illustrates how a factored graph informs the Factored NN for learning decomposed state transitions, decomposed reward functions, or decomposed policy functions. The following sections discuss the applications of these to MCTS, DQN, and PPO respectively.

The factored structure of an environment has to be manually specified. While this may seem challenging for well-established benchmarks, for a real-life application we still need to define the MDP with states, actions and rewards, and adding factorization information is then relatively easy for a domain expert familiar with the details of the problem. Appendix A shows example factored graphs for a few real-world applications.

Figure 2: (a) Factored Graph; (b) Factored MCTS; (c) Factored DQN; (d) Factored PPO." }, { "heading": "5 FACTORED MCTS", "text": "In model-based RL, an environment model uses the current state and chosen actions to predict the next state, the reward, and whether the episode is done or not. The more accurate the environment model, the better it can plan and the higher the rewards it can achieve; ways of improving the accuracy of environment models are therefore of interest to all model-based RL algorithms.

In order to learn an environment model more efficiently, we can construct a Factored NN that predicts the next state given the current state and action, according to the underlying factored structure of the problem. Taking Multi-Cartpole with the factored graph displayed in Figure 2 as an example, the transition probability can be factored as $p(s_{t+1}|s_t, a_t) = \prod_{k=1}^d p(s^k_{t+1}|s^k_t, a^k_t)$, where $d$ is the number of cartpoles, and $s^k_t$ and $a^k_t$ represent the state vector and the action taken for the $k$th cartpole. This efficiently reduces the complexity from a modeling perspective. The Factored NN takes $(s_t, a_t)$ as input, decomposes it accordingly, and returns $s_{t+1}$ as output. Figure 2b gives a graphical representation of Factored MCTS.

We can fold this Factored NN into model-based RL algorithms anywhere we use an environment model. In this work, we demonstrate using Monte Carlo Tree Search (MCTS) (Kocsis & Szepesvári, 2006) with a learned model. We implement MCTS by iterating between: 1) learning the parameters of the environment model with a gradient-based approach from existing observations; and 2) acting in the world by rolling out samples from the environment model and picking the best action using tree search." }, { "heading": "6 FACTORED DQN", "text": "Model-free algorithms do not use an environment model, but rather directly learn a Q-value or policy. We can still use a Factored NN to obtain better sample efficiency, simply by specifying the relevant parts of $(S, A, R)$ as input/output. In the case of DQN, we need to learn a Q-value given the current state and action, and update it with
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\left[R(s_t, a_t) + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)\right]. \quad (1)$$
When the state and action space is high-dimensional, estimating the Q-value becomes computationally expensive. Adhering to the underlying factored structure, we can decompose the Q-value with a Factored NN. Taking Multi-Cartpole as an illustrating example, the total reward is the summation of the individual rewards: $R(s_t, a_t) = \sum_{k=1}^d R(s^k_t, a^k_t)$, where $R(s, a)$ is the reward function. We can break down the Q-value in the same way, since the factored structure does not change across the episode: $Q(s_t, a_t) = \sum_{k=1}^d Q(s^k_t, a^k_t)$. The Factored NN takes the state-action pair $(s_t, a_t)$ as the input, decomposes it into an individual state-action pair $(s^k_t, a^k_t)$ for each cartpole, predicts the individual Q-value $Q(s^k_t, a^k_t)$, and combines them into the final $Q(s_t, a_t)$, which can then be updated with (1). Figure 2c gives an illustration of Factored DQN.
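A minimal sketch of this additive Q decomposition could look as follows; it is our illustration in PyTorch with hypothetical names, not the authors' implementation. Each sub-network scores one component's actions from that component's state slice, and only the summed Q-value enters the DQN loss.

```python
import torch
import torch.nn as nn

class FactoredQNetwork(nn.Module):
    """Sketch of Q(s, a) = sum_k Q_k(s^k, a^k) over d components.

    slice_dims[k] is the size of component k's state slice; each head
    outputs one Q-value per local action of that component."""
    def __init__(self, slice_dims, n_local_actions, hidden=64):
        super().__init__()
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_local_actions))
            for d in slice_dims)

    def forward(self, state_slices, actions):
        # state_slices: list of (batch, slice_dims[k]) tensors
        # actions: (batch, d) int64 tensor of local action indices
        q = 0.0
        for k, net in enumerate(self.subnets):
            q_k = net(state_slices[k])  # (batch, n_local_actions)
            q = q + q_k.gather(1, actions[:, k:k+1]).squeeze(1)
        return q  # joint Q-value, trained with the usual DQN targets

    def greedy_value(self, state_slices):
        # additivity lets the max over the exponential joint action space
        # decompose into a sum of per-component maxima
        return sum(net(s).max(dim=1).values
                   for net, s in zip(self.subnets, state_slices))
```

Note that only the total reward is needed to form the learning target, matching the constraint that per-component rewards are not observed, and that the greedy action can be found without enumerating the exponential joint action space.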
" }, { "heading": "7 FACTORED PPO", "text": "Finally, we can also integrate the Factored NN into model-free algorithms that directly do policy optimization. In this work, we show how to do this with PPO (Schulman et al., 2017), an actor-critic algorithm in which a policy network determines the action based on the state, and a value network predicts the episode return from the current state.

The policy network $\pi(a_t|s_t)$ directly optimizes the best action $a_t$ given the current state $s_t$. The factored structure can be used to reduce the complexity of a problem by decomposing the conditional distribution $\pi(a_t|s_t)$ accordingly. We can then apply the Factored NN to the policy network, by mapping only the structurally related states to the actions. In the Multi-Cartpole example, $\pi(a_t|s_t) = \prod_{k=1}^d \pi(a^k_t|s^k_t)$. The Factored NN takes the entire state $s_t$ as input, decomposes it into each individual state $s^k_t$, and predicts the corresponding action $a^k_t$ for each cartpole $k$. See Figure 2d for an illustration.
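A masked policy head in the spirit of Figure 2d might look like the following sketch; this is our own hedged illustration in PyTorch, with hypothetical names, rather than the authors' code.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class FactoredPolicy(nn.Module):
    """Sketch of a factored policy: pi(a|s) = prod_k pi_k(a_k | s^k).

    masks[k] zeroes the state entries unrelated to action component k
    (e.g. each cartpole's action sees only that cartpole's 4 variables)."""
    def __init__(self, state_dim, masks, n_local_actions, hidden=64):
        super().__init__()
        self.register_buffer("masks", torch.as_tensor(masks, dtype=torch.float32))
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                          nn.Linear(hidden, n_local_actions))
            for _ in range(self.masks.shape[0]))

    def forward(self, s):
        # one independent categorical distribution per action component;
        # the joint log-prob used by PPO is the sum over components
        return [Categorical(logits=head(s * m))
                for head, m in zip(self.heads, self.masks)]
```

PPO itself is unchanged: the joint log-probability passed to the clipped objective is simply the sum of the per-component log-probabilities.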
" }, { "heading": "8 EXPERIMENTS", "text": "We show experimental results for Factored MCTS, DQN, and PPO on a variety of simulation environments. In all experiments, we first define a factored graph representing the relationships among a given problem's $(S, A, R)$, then leverage that graph in a Factored NN to learn a policy with either MCTS, DQN, or PPO. The results below compare these FactoredRL algorithms with their vanilla counterparts. All experiments use the same hyper-parameters for the Factored NN and the Ordinary NN; they are reported in Appendix C. Each experiment is run on 5 different seeds." }, { "heading": "8.1 FACTORED MCTS EXPERIMENTS", "text": "We experiment with Factored MCTS on two environments: Multi-Cartpole and Taxi. We chose these environments because we can easily decompose their state transitions, and they involve discrete actions, which MCTS requires.

Multi-Cartpole Experiments: We first test using a Factored NN with MCTS on the Multi-Cartpole environment. In this environment, the agent balances multiple independent cartpoles at once, where a single cartpole is defined as per OpenAI gym (G. Brockman & Zaremba, 2016).

The state size is 4 multiplied by the number of cartpoles, as each cartpole has a state of size 4 representing the cart's position and velocity and the pole's angle and angular velocity. The action taken by the agent is a binary vector representing the direction of force applied to each cartpole. The reward given to the agent is the sum of the rewards for each cartpole, receiving 1/(Number of Cartpoles) if a cartpole is upright, and 0 if not.

The factored structure for this environment that we leverage in the Factored NN is that the state transitions for each cartpole are independent.

The results are displayed in Figure 3a and b. For both cases we consider, i.e. 4 and 8 cartpoles, incorporating the factored structure into the problem via the Factored NN leads to superior model prediction error and environment reward. In terms of sample efficiency, we find that the Factored NN achieves the final score of the Ordinary NN in 25% and 10% of the time for 4 and 8 cartpoles respectively.

Taxi Experiments: We also test Factored MCTS on a simplified version of the Taxi environment (G. Brockman & Zaremba, 2016). In this environment there is a taxi and a passenger, and the goal is for the taxi to pick up the passenger and drop him off at a specified location.¹

The state is given by the locations of the taxi, the passenger, and the target. The action taken by the agent is a discrete action to move up, down, left, or right, or to pick up or drop off the passenger. A positive reward is given if the agent drops off the passenger in the correct location; a negative reward is given every timestep and if the taxi tries to drop off or pick up a passenger illegally.

The factored structure for this environment that we leverage in the Factored NN is that the destination never changes and the locations of the taxi and the passenger depend only on a subset of actions.

The results are displayed in Figure 3c. As with the Multi-Cartpole experiments, incorporating the factored structure into the problem via the Factored NN leads to superior model prediction error and environment reward. In terms of sample efficiency, we find that the Factored NN achieves the final score of the Ordinary NN in 60% of the time.

¹To make the environment solvable in a reasonable amount of time using MCTS, we simplified the problem slightly by having the taxi always begin with the customer onboard and within a certain distance of its destination." }, { "heading": "8.2 FACTORED DQN EXPERIMENTS", "text": "We experiment with Factored DQN on two environments: BitFlip and Multi-Cartpole. We chose these environments because the factored structure in these two environments allows us to decompose Q-values (which we cannot do in the Taxi environment), and DQN requires discrete actions.

BitFlip Experiments: Our BitFlip problem formulation is inspired by the example introduced by Andrychowicz et al. (2017). In this environment, the agent tries to flip bits (0 or 1) in a vector to match the values of a target vector of bits.

The state is given by two sets of $n$ bits, where one set consists of the $n$ current bits and the other consists of the $n$ target bits (a state of $2n$ bits). The action taken by the agent at each timestep is a binary vector $a$ of size $n$, where $a_i = 0$ means no flip and $a_i = 1$ means a flip of the $i$-th current bit ($2^n$ possible actions). The reward given to the agent is $-1$ for each flipped bit and a positive reward of 2 for each flipped bit matched to a target bit. The environment is episodic, and it ends if it reaches the maximum number of time steps ($= 3n$) or if the current bits are the same as the target bits.
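For replication, the BitFlip dynamics just described fit in a few lines. The sketch below is ours and mirrors the description above; whether the initial bit vector is random is not specified, so we assume it is.

```python
import numpy as np

class BitFlipEnv:
    """Sketch (ours) of the BitFlip task described above."""
    def __init__(self, n, rng):
        self.n, self.rng = n, rng

    def reset(self):
        self.t = 0
        self.bits = self.rng.integers(0, 2, self.n)    # assumed random start
        self.target = self.rng.integers(0, 2, self.n)
        return np.concatenate([self.bits, self.target])  # state of 2n bits

    def step(self, action):  # action: binary vector, 1 = flip bit i
        self.t += 1
        self.bits = np.where(action == 1, 1 - self.bits, self.bits)
        matched_flips = np.sum((action == 1) & (self.bits == self.target))
        reward = -np.sum(action) + 2.0 * matched_flips   # -1 per flip, +2 per match
        done = self.t >= 3 * self.n or np.array_equal(self.bits, self.target)
        return np.concatenate([self.bits, self.target]), reward, done
```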
The Factored NN uses the independence of the reward corresponding to each bit to estimate the respective Q-value of each bit flip; the final Q-value is the sum of the Q-values for each bit. Note that only the sum of the independent rewards is available during training, which differentiates the problem from solving each sub-problem independently.

The results are displayed in Figure 4a. For all cases we consider, i.e., 2, 4, and 8 bits, incorporating the factored structure into the problem via the Factored NN leads to superior environment reward. In terms of sample efficiency, the Factored NN achieves the asymptotic performance of the Ordinary NN in 10% of the time in the 8-bit case. The results for 2 and 4 bits are presented in Appendix D.

Multi-Cartpole Experiments: We also test Factored DQN on the Multi-Cartpole environment, with the same problem statement described in Section 8.1. The Factored NN leverages the information that the reward for one cartpole is independent of the other cartpoles. Again, note that only the sum of the independent rewards is available during training.

The results are displayed in Figures 4b and 4c, for environments with 4 and 8 cartpoles, respectively. Factored DQN is superior in terms of mean episodic rewards. The performance gap grows as we increase the number of cartpoles, because the number of outputs of the Q-function grows exponentially. The Factored NN reduces the complexity by estimating the Q-value of each cartpole independently. In terms of sample efficiency, the Factored NN achieves asymptotic performance within 300,000 timesteps, while the Ordinary NN suffers from the high-dimensional state-action pairs and does not improve its performance within 1 million timesteps." }, { "heading": "8.3 FACTORED PPO EXPERIMENTS", "text": "We experiment with Factored PPO on five environments: BitFlip, Multi-Cartpole, Half-Cheetah, Ant, and Humanoid. We chose these environments to evaluate on both discrete and continuous state-action spaces. We do not evaluate on Taxi because its single set of actions is not separable by the factored structure of a policy network.

BitFlip & Multi-Cartpole Experiments: We first test using a Factored NN with PPO on the BitFlip and Multi-Cartpole environments. These have the same setup described in Sections 8.1 and 8.2.

For BitFlip, we map the action of flipping a bit to the state of the corresponding bit; the rest of the state is masked out. For Multi-Cartpole, we map the action of controlling each cartpole to the state of the corresponding cartpole, masking out the states of the other cartpoles.

We show a subset of the results in Figure 5, and the rest are given in Appendix E. For both environments, Factored PPO outperforms its vanilla implementation with respect to environment reward over the training period. In terms of sample efficiency, the Factored NN achieves the final score of the Ordinary NN in 46% of the time for BitFlip with 8 bits, and in 32% and 22% of the time for 4 and 8 cartpoles respectively.

Robotics Experiments: We also test Factored PPO on three continuous control robotics environments, Ant, Half-Cheetah, and Humanoid, with a horizon of 1000 steps, as defined in PyBullet (Coumans & Bai, 2016).² In these environments, the agent controls the robot joints, and its goal is to walk upright.

The state for each of these environments is slightly different, but in general it consists of joint angles and angular velocities as well as global state such as the robot position and contact forces with the ground. The action taken by the agent controls each joint torque. HalfCheetah, Ant and Humanoid have 6, 8 and 17 joints respectively. The reward given to the agent is positive if the robot is upright and moving; there is a negative reward for using electricity and if the robot falls or tangles its legs. The Factored NN leverages the structure that the state of each robot joint maps only to the respective joint's actions.

The results are displayed in Figure 6, which shows the training curves for the baseline PPO and Factored PPO. In terms of sample efficiency, Factored PPO reaches the final episode reward of baseline PPO in 35%, 61% and 64% of the time for HalfCheetah, Ant and Humanoid respectively. We observe similar results when we remove the critic in PPO, and report detailed results in Appendix E.

²The open source PyBullet environments are slightly different from the MuJoCo counterparts; the converged episode rewards differ. PyBullet emulates physics accurately, and is as challenging to solve as MuJoCo." }, { "heading": "9 CONCLUSION AND DISCUSSION", "text": "We demonstrate how to exploit factored structural knowledge in both model-based and model-free RL. We leverage this factored structure through a Factored Neural Network (Factored NN), which uses input and output masking to reflect the factored graph. 
In model-based RL, we show how to use a Factored NN to learn a state transition model, which can then be integrated into algorithms like MCTS. In model-free RL, we show how to use a Factored NN to learn a factored Q-function which can be integrated into DQN, and we also show how to use a Factored NN to learn a factored policy function for use in PPO. Our method can be easily generalized to other model-based and model-free algorithms with the same idea. We have tested our FactoredRL approach on both continuous and discrete state-action spaces including bitflip, cartpole, taxi, and robotics environments, showing improved sample efficiency relative to vanilla algorithms.\nThe contribution of this work is in tying together factored graphs and deep reinforcement learning, showing that factored structure can improve the performance of RL algorithms. The theoretical properties of Factored NN, scalability to high dimensional inputs such as images, and alternative methods for incorporating factored structure into RL are interesting directions of future work. Critically, in this work we take as given access to an accurate factored graph, specified manually ex ante. In many environments, doing so is either time consuming, inaccurate, or outright infeasible. In future work, we will focus on factored graph discovery in online RL settings, where the agent’s actions affect the distribution of data and therefore the quality of the learned graph." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A EXAMPLES OF FACTORED GRAPHS IN REAL WORLD APPLICATIONS", "text": "There are many real world RL applications where we can leverage the knowledge of a factored graph. Application domains include supply chain (Sultana et al., 2020), portfolio management (Ye et al., 2020), industrial control (Wei et al., 2017), robotics (Haarnoja et al., 2018), cellular networks (Chinchali et al., 2018), online education (Bassen et al., 2020), Internet of Things (Fraternali et al., 2020) and many more (Wu et al., 2017; Mao et al., 2019). Below we include a few examples of RL applications where we can employ our factored RL approach." }, { "heading": "A.1 OPERATIONS RESEARCH", "text": "Many problems in operations research, and especially supply chain, are natural environments for the use of factored graphs. This is because the historically dominant techniques, Linear and Dynamic Programming, require us to articulate problem structure in the form of a model to be solved, and this problem structure can easily be expressed in factored graphs as well. Two canonical problems with wide-ranging industrial applications can demonstrate this. First, there is the Bin Packing problem (Csirik et al., 2006; Gupta & Radovanovic, 2012; Balaji et al., 2019), which has industrial counterparts in truck packing and virtual machine packing, among others. In an RL formulation, the state is given by the amount filled in each bin and the dimensions of a new item to be packed, and the action is to place the new item in one of the bins, with a reward function for minimizing waste. The factored graph structure is key in this problem in the following way: the state of each bin in timestep t+1 is a function only of the same bin in timestep t and the action at time t, meaning that RL can ignore spurious relationships between bin levels in predicting the state transition.
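To illustrate this independence structure, a minimal sketch follows (our own illustration; the fill-level dynamics are a simplification, not the paper's environment code):

```python
import jax.numpy as jnp

# Simplified bin packing transition: each bin's next fill level depends only
# on its own current level and the chosen action, never on the other bins.
def factored_bin_transition(bin_levels, item_size, action):
    # bin_levels: (n_bins,) current fill levels; action: index of chosen bin.
    chosen = jnp.zeros_like(bin_levels).at[action].set(1.0)
    return bin_levels + chosen * item_size  # bin i at t+1 = f(bin i at t, a_t)
```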
Figure 7 depicts the Bin Packing factored graph.\nSecond, there is the Newsvendor problem (Balaji et al., 2019; Zipkin, 2008; Gijsbrechts et al., 2019), which has a common industrial counterpart in purchase ordering decisions (e.g. from a retailer to a vendor). In an RL formulation, the state is given by the economic parameters of the item to be ordered, the inventory on-hand, and the inventory to-arrive from the vendor in timesteps t+1, t+2, etc.; the action is how much of the item to order, and the reward is a function of inventory on-hand and customer demand. The factored graph structure in this problem is especially helpful for simplifying the dynamics of the inventory to-arrive, since this is a linear pipeline from the vendor to the agent, i.e. the inventory that will arrive at t+1 is the same as the inventory that, in the previous period, was due to arrive at t+2. Figure 8 depicts the Newsvendor factored graph." }, { "heading": "A.2 ROBOTICS", "text": "In robotics, the state may be high-dimensional, but the subspaces of the state may evolve independently of others, and only depend on a low dimensional subspace of the previous state. We have included examples of Ant, Half-Cheetah, and Humanoid in the paper, with the factored graph given in Appendix B.4, where the transition dynamics of a robot’s arms may be reasonably assumed to be independent of the transition dynamics of its legs. As a similar example, we can use factored graphs for drone control with deep RL (Hodge et al., 2020); Figure 9 depicts the Drone Control factored graph." }, { "heading": "A.3 INDUSTRIAL CONTROL", "text": "It is common in industrial control to have several components that work together to accomplish a goal. For example, in HVAC (short for heating, ventilation and air conditioning) control, the building is divided into zones, each of which is controlled by a damper, fan and heating element (Yu et al., 2020). All the zones need to work in concert with a central air handler unit that supplies cold air. The state in this problem is the temperature of the individual zones, the supply air temperature and the weather conditions. The action is to set the controls of each zone. The reward is to ensure thermal comfort with minimal energy use. A state-action factored graph can be used to inform the RL agent that the control of each zone is independent of the others. The reward function can also be factorized, as the thermal comfort in each zone is measured independently. Figure 10 depicts the HVAC factored graph." }, { "heading": "A.4 PORTFOLIO MANAGEMENT", "text": "These problems (e.g. Ye et al. (2020)) generally have states of the form [stock of asset A, stock of asset B, . . . ] and action spaces of the form [buy/sell of asset A, buy/sell of asset B, . . . ]. In this scenario we have important prior knowledge about the factored graph; for example, we know that the sub-action of buying/selling asset A will not influence the sub-state stock of asset B. The method would allow us to incorporate this prior knowledge into our RL agent and improve performance.
Figure 11 depicts the Portfolio Management factored graph." }, { "heading": "B FACTORED GRAPHS OF ENVIRONMENTS USED IN EVALUATION", "text": "" }, { "heading": "B.1 MULTIPLE-CARTPOLE", "text": "Let p^i_t, v^i_t and θ^i_t represent the position, velocity and angle of cart i at time t, and let A^i_t, R^i_t represent the force and reward for cart i at time t; then the factored graph for this environment can be represented by a graph whose per-cartpole nodes [p^i_t, v^i_t, θ^i_t], A^i_t and R^i_{t+1} connect only within cartpole i across times t and t+1, together with a joint reward R_{t+1} (graph figure omitted)." }, { "heading": "B.2 BITFLIP", "text": "Define S^i_t to be the bit at position i at time t, define A^i_t to be the action of whether to flip the i-th bit at time t, and let R^i_t represent whether the i-th bit equals the i-th bit of the target bits. Then the factored graph for this environment can be represented by a graph whose per-bit nodes S^i_t, A^i_t and R^i_{t+1} connect only within bit i across times t and t+1, together with a joint reward R_{t+1} (graph figure omitted)." }, { "heading": "B.3 TAXI", "text": "Define p^taxi_t, p^dest_t, p^pass_t to be the locations of the taxi, target destination and passenger at time t, respectively, and let a^move_t, a^pick_t, a^drop_t be the actions of moving (up, down, left, right), picking up and dropping off passengers at time t. Then the factored graph for this environment can be represented by a graph over the nodes p^pass_t, p^dest_t, p^taxi_t, a^move_t, a^pick_t, a^drop_t and their t+1 counterparts (graph figure omitted)." }, { "heading": "B.4 ROBOTICS", "text": "Define s^global_t to be the global features of the robot (e.g., position, contact force) and define s^i_t to be the state of joint i at time t. The action for each joint is denoted by a^i_t. Here we show only 3 actions for simplicity; the graph scales similarly as we add more joints (e.g. 17 joints for the humanoid case). The factored graph for this environment can be represented by a graph over the nodes s^0_t, s^1_t, s^2_t, s^global_t, a^0_t, a^1_t, a^2_t and their t+1 counterparts (graph figure omitted)." }, { "heading": "C HYPERPARAMETERS", "text": "Below we provide the hyperparameters used for each experiment.\nFor PPO, we used the Ray RLlib library (Liang et al., 2018). We use the default hyper-parameters in the library unless specified below. We use the hyper-parameter variable names as used in the library for ease of replicability. If some of the hyper-parameter names are unclear, please refer to the RLlib documentation for details." }, { "heading": "D ADDITIONAL DQN EXPERIMENTS", "text": "We compared Factored DQN with DQN on the BitFlip and Multi-Cartpole environments across various numbers of bits and cartpoles. We find that Factored DQN consistently outperforms DQN and that the performance gap is larger when the environment is harder." }, { "heading": "E ADDITIONAL PPO EXPERIMENTS", "text": "We also compared PPO with a Factored NN versus with an Ordinary NN on the BitFlip and Multi-Cartpole environments, with the results shown below. We find that the Factored NN algorithms perform better and that the difference increases as the environment becomes more complex.\nIn addition, we report PPO results when we disable the generalized advantage estimation (GAE; Schulman et al., 2015) (Critic) and when we disable the critic altogether (without Critic)." } ]
2020
FACTOREDRL: LEVERAGING FACTORED GRAPHS FOR DEEP REINFORCEMENT LEARNING
SP:ad7eb2bcb3a83153f140e5e8bfaa8b76110e62ab
[ "It is a very poorly written paper. The basic idea of finding a way to avoid waiting for the full forward pass is not new. Multiple research papers have been published, ranging from the extreme of using stale weights to some form of sub-network backprop as a proxy for the full network. This paper proposes no new idea for local updates. Prior work has all suffered from one or both of these two limitations: a) a poor experimental framework, or b) not being able to meet the accuracy bar set by backprop. This work suffers from both: a very poorly described experimental basis, and failing to come even close to the backprop accuracy target with any decent speedup claim. The former is my biggest concern. Section 6 starts with 'Here we show that performance gains of local parallelism can be realized on real hardware' with near-zero description of any 'real' hardware, except a footnote on '1000 IPUs on a chip'. " ]
Deep learning models trained on large data sets have been widely successful in both vision and language domains. As state-of-the-art deep learning architectures have continued to grow in parameter count so have the compute budgets and times required to train them, increasing the need for compute-efficient methods that parallelize training. Two common approaches to parallelize the training of deep networks have been data and model parallelism. While useful, data and model parallelism suffer from diminishing returns in terms of compute efficiency for large batch sizes. In this paper, we investigate how to continue scaling compute efficiently beyond the point of diminishing returns for large batches through local parallelism, a framework which parallelizes training of individual layers in deep networks by replacing global backpropagation with truncated layer-wise backpropagation. Local parallelism enables fully asynchronous layer-wise parallelism with a low memory footprint, and requires little communication overhead compared with model parallelism. We show results in both vision and language domains across a diverse set of architectures, and find that local parallelism is particularly effective in the high-compute regime.
[]
[ { "authors": [ "David H. Ackley", "Geoffrey E. Hinton", "Terrence J. Sejnowski" ], "title": "A learning algorithm for Boltzmann machines", "venue": "Cognitive Science,", "year": 1985 }, { "authors": [ "Eugene Belilovsky", "Michael Eickenberg", "Edouard Oyallon" ], "title": "Greedy layerwise learning can scale to ImageNet", "venue": "In 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Eugene Belilovsky", "Michael Eickenberg", "Edouard Oyallon" ], "title": "Decoupled greedy learning of CNNs", "venue": "arXiv preprint arXiv:1901.08164 [cs.LG],", "year": 2019 }, { "authors": [ "Tal Ben-Nun", "Torsten Hoefler" ], "title": "Demystifying parallel and distributed deep learning: An in-depth concurrency analysis", "venue": "arXiv preprint arXiv:1802.09941 [cs.LG],", "year": 2018 }, { "authors": [ "Samy Bengio", "Yoshua Bengio", "Jocelyn Cloutier", "Jan Gecsei" ], "title": "On the optimization of a synaptic learning rule", "venue": "In Preprints Conf. Optimality in Artificial and Biological Neural Networks,", "year": 1992 }, { "authors": [ "Yoshua Bengio", "Samy Bengio", "Jocelyn Cloutier" ], "title": "Learning a Synaptic Learning Rule", "venue": "University of Montreal,", "year": 1990 }, { "authors": [ "Christopher Berner", "Greg Brockman", "Brooke Chan", "Vicki Cheung", "Przemysław Debiak", "Christy Dennison", "David Farhi", "Quirin Fischer", "Shariq Hashme", "Chris Hesse" ], "title": "Dota 2 with large scale deep reinforcement learning", "venue": "arXiv preprint arXiv:1912.06680,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": "arXiv preprint arXiv:2005.14165 [cs.CL],", "year": 2020 }, { "authors": [ "Ciprian Chelba", "Tomas Mikolov", "Mike Schuster", "Qi Ge", "Thorsten Brants", "Phillipp Koehn", "Tony Robinson" ], "title": "One billion word benchmark for measuring progress in statistical language modeling", "venue": "arXiv preprint arXiv:1312.3005,", "year": 2013 }, { "authors": [ "Jianmin Chen", "Xinghao Pan", "Rajat Monga", "Samy Bengio", "Rafal Jozefowicz" ], "title": "Revisiting distributed synchronous SGD", "venue": "arXiv preprint arXiv:1604.00981 [cs.LG],", "year": 2016 }, { "authors": [ "Tianqi Chen", "Mu Li", "Yutian Li", "Min Lin", "Naiyan Wang", "Minjie Wang", "Tianjun Xiao", "Bing Xu", "Chiyuan Zhang", "Zheng Zhang" ], "title": "Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1512.01274,", "year": 2015 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709 [cs.LG],", "year": 2020 }, { "authors": [ "Michiel Coesmans", "John T. Weber", "Chris I. 
De Zeeuw", "Christian Hansel" ], "title": "Bidirectional parallel fiber plasticity in the cerebellum under climbing fiber control", "venue": null, "year": 2004 }, { "authors": [ "Ronan Collobert", "Koray Kavukcuoglu", "Clement Farabet" ], "title": "Torch7: A matlab-like environment for machine learning", "venue": "In BigLearn, NIPS 2011 Workshop,", "year": 2011 }, { "authors": [ "Dipankar Das", "Sasikanth Avancha", "Dheevatsa Mudigere", "Karthikeyan Vaidynathan", "Srinivas Sridharan", "Dhiraj Kalamkar", "Bharat Kaul", "Pradeep Dubey" ], "title": "Distributed deep learning using synchronous stochastic gradient descent", "venue": "[cs.DC],", "year": 2016 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V. Le", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc’aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang" ], "title": "Large scale distributed deep networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR", "year": 2009 }, { "authors": [ "Li Deng", "Dong Yu", "John Platt" ], "title": "Scalable stacking and learning for building deep architectures", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP", "year": 2012 }, { "authors": [ "Flax Developers" ], "title": "Flax: A neural network library for JAX designed for flexibility, 2020", "venue": "URL https://github.com/google-research/flax/tree/prerelease", "year": 2020 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": "[cs.CV],", "year": 2017 }, { "authors": [ "Leopold Grinberg", "John J. Hopfield", "Dmitry Krotov" ], "title": "Local unsupervised learning for image analysis", "venue": "arXiv preprint arXiv:1908.08993 [cs.CV],", "year": 2019 }, { "authors": [ "Keren Gu", "Sam Greydanus", "Luke Metz", "Niru Maheswaranathan", "Jascha Sohl-Dickstein" ], "title": "Metalearning biologically plausible semi-supervised update rules", "venue": "bioRxiv,", "year": 2019 }, { "authors": [ "Lei Guan", "Wotao Yin", "Dongsheng Li", "Xicheng Lu" ], "title": "Xpipe: Efficient pipeline model parallelism for multi-gpu dnn training, 2019", "venue": null, "year": 2019 }, { "authors": [ "Aaron Harlap", "Deepak Narayanan", "Amar Phanishayee", "Vivek Seshadri", "Nikhil Devanur", "Greg Ganger", "Phil Gibbons" ], "title": "PipeDream: Fast and efficient pipeline parallel DNN training", "venue": "[cs.DC],", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016),", "year": 2016 }, { "authors": [ "Donald O. 
Hebb" ], "title": "The organization of behavior; a neuropsychological theory", "venue": null, "year": 1949 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "Tom Hennigan", "Trevor Cai", "Tamara Norman", "Igor" ], "title": "Babuschkin. Haiku: Sonnet for JAX, 2020", "venue": "URL http://github.com/deepmind/dm-haiku", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531 [stat.ML],", "year": 2015 }, { "authors": [ "John J. Hopfield" ], "title": "Neural networks and physical systems with emergent collective computational abilities", "venue": "Proceedings of the National Academy of Sciences,", "year": 1982 }, { "authors": [ "Yanping Huang", "Youlong Cheng", "Ankur Bapna", "Orhan Firat", "Mia Xu Chen", "Dehao Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V. Le", "Yonghui Wu", "Zhifeng Chen" ], "title": "GPipe: Easy scaling with micro-batch pipeline parallelism", "venue": "[cs.CV],", "year": 2018 }, { "authors": [ "Yanping Huang", "Youlong Cheng", "Ankur Bapna", "Orhan Firat", "Dehao Chen", "Mia Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V Le", "Yonghui Wu" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Raphael Hunger" ], "title": "Floating Point Operations in Matrix-vector Calculus", "venue": "Munich University of Technology, Inst. for Circuit Theory and Signal,", "year": 2005 }, { "authors": [ "Zhouyuan Huo", "Bin Gu", "Heng Huang" ], "title": "Training neural networks using features replay", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Zhouyuan Huo", "Bin Gu", "Qian Yang", "Heng Huang" ], "title": "Decoupled parallel backpropagation with convergence guarantee", "venue": "arXiv preprint arXiv:1804.10574 [cs.LG],", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Eugene M. Izhikevich", "Niraj S. 
Desai" ], "title": "Relating STDP to BCM", "venue": "Neural Computation,", "year": 2003 }, { "authors": [ "Max Jaderberg", "Wojciech Marian Czarnecki", "Simon Osindero", "Oriol Vinyals", "Alex Graves", "David Silver", "Koray Kavukcuoglu" ], "title": "Decoupled neural interfaces using synthetic gradients", "venue": "In 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "[cs.CV],", "year": 2014 }, { "authors": [ "Zhe Jia", "Blake Tillman", "Marco Maggioni", "Daniele Paolo Scarpazza" ], "title": "Dissecting the graphcore ipu architecture via microbenchmarking, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B Brown", "Benjamin Chess", "Rewon Child", "Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling laws for neural language models", "venue": "arXiv preprint arXiv:2001.08361 [cs.LG],", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "CIFAR-10 and CIFAR-100 datasets", "venue": "URl: https://www. cs. toronto. edu/kriz/cifar. html,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "ImageNet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Dmitry Krotov", "John J. Hopfield" ], "title": "Unsupervised learning by competing hidden units", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Dmitry Lepikhin", "HyoukJoong Lee", "Yuanzhong Xu", "Dehao Chen", "Orhan Firat", "Yanping Huang", "Maxim Krikun", "Noam Shazeer", "Zhifeng Chen" ], "title": "Gshard: Scaling giant models with conditional computation and automatic sharding", "venue": null, "year": 2006 }, { "authors": [ "Sindy Löwe", "Peter O’Connor", "Bastiaan Veeling" ], "title": "Putting an end to end-to-end: Gradient-isolated learning of representations", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Dominic Masters", "Carlo Luschi" ], "title": "Revisiting small batch training for deep neural networks", "venue": "arXiv preprint arXiv:1804.07612,", "year": 2018 }, { "authors": [ "Sam McCandlish", "Jared Kaplan", "Dario Amodei", "OpenAI Dota Team" ], "title": "An empirical model of large-batch training", "venue": "arXiv preprint arXiv:1812.06162 [cs.LG],", "year": 2018 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Brian Cheung", "Jascha Sohl-Dickstein" ], "title": "Meta-learning update rules for unsupervised representation learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Jeremy Nixon", "Daniel Freeman", "Jascha Sohl-Dickstein" ], "title": "Understanding and correcting pathologies in the training of learned optimizers", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Ruoxi Sun", "C Daniel Freeman", "Ben Poole", "Jascha SohlDickstein" ], "title": "Using a thousand optimization tasks to learn hyperparameter search strategies", "venue": null, "year": 2002 }, { "authors": [ "Feng Niu", "Benjamin Recht", "Christopher Re", "Stephen J. 
Wright" ], "title": "Hogwild!: A lock-free approach to parallelizing stochastic gradient descent", "venue": "arXiv preprint arXiv:1106.5730 [math.OC],", "year": 2011 }, { "authors": [ "Erkki Oja" ], "title": "Simplified neuron model as a principal component analyzer", "venue": "Journal of Mathematical Biology,", "year": 1982 }, { "authors": [ "Alexander Ororbia", "Ankur Mali", "C Lee Giles", "Daniel Kifer" ], "title": "Continual learning of recurrent neural networks by locally aligning distributed representations", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Alain Petrowski", "Gerard Dreyfus", "Claude Girault" ], "title": "Performance analysis of a pipelined backpropagation parallel algorithm", "venue": "IEEE Transactions on Neural Networks,", "year": 1993 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners. 2019", "venue": "OpenAI Blog", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683 [cs.LG],", "year": 2019 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning internal representations by error propagation", "venue": "Technical report, California Univ San Diego La Jolla Inst for Cognitive Science,", "year": 1985 }, { "authors": [ "Chaitanya K. Ryali", "John J. Hopfield", "Leopold Grinberg", "Dmitry Krotov" ], "title": "Bio-inspired hashing for unsupervised similarity search", "venue": "arXiv preprint arXiv:2001.04907 [cs.LG],", "year": 2020 }, { "authors": [ "Terence D. Sanger" ], "title": "Optimal unsupervised learning in a single-layer linear feedforward neural network", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "John Schulman", "Nicolas Heess", "Theophane Weber", "Pieter Abbeel" ], "title": "Gradient estimation using stochastic computation graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Alexander Sergeev", "Mike Del" ], "title": "Balso. Horovod: fast and easy distributed deep learning in TensorFlow", "venue": "arXiv preprint arXiv:1802.05799,", "year": 2018 }, { "authors": [ "Christopher J. Shallue", "Jaehoon Lee", "Joseph Antognini", "Jascha Sohl-Dickstein", "Roy Frostig", "George E. 
Dahl" ], "title": "Measuring the effects of data parallelism on neural network training", "venue": "[cs.LG],", "year": 2018 }, { "authors": [ "Noam Shazeer", "Youlong Cheng", "Niki Parmar", "Dustin Tran", "Ashish Vaswani", "Penporn Koanantakool", "Peter Hawkins", "HyoukJoong Lee", "Mingsheng Hong", "Cliff Young" ], "title": "Mesh-tensorflow: Deep learning for supercomputers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Junyoung Chung", "Michael Mathieu", "Max Jaderberg", "Wojciech M Czarnecki", "Andrew Dudzik", "Aja Huang", "Petko Georgiev", "Richard Powell" ], "title": "Alphastar: Mastering the real-time strategy game starcraft", "venue": "ii. DeepMind blog,", "year": 2019 }, { "authors": [ "Yuwen Xiong", "Mengye Ren", "Raquel Urtasun" ], "title": "LoCo: Local contrastive representation learning", "venue": "arXiv preprint arXiv:2008.01342 [cs.LG],", "year": 2020 }, { "authors": [ "Bowen Yang", "Jian Zhang", "Jonathan Li", "Christopher Ré", "Christopher R. Aberger", "Christopher De Sa" ], "title": "Pipemare: Asynchronous pipeline parallel dnn training, 2020", "venue": null, "year": 2020 }, { "authors": [ "Xiru Zhang", "Michael Mckenna", "Jill P. Mesirov", "David L. Waltz" ], "title": "An efficient implementation of the back-propagation algorithm on the connection machine CM-2", "venue": "In Advances in Neural Information Processing Systems", "year": 1989 } ]
[ { "heading": "1 INTRODUCTION", "text": "Backpropagation (Rumelhart et al., 1985) is by far the most common method used to train neural networks. Alternatives to backpropagation are typically used only when backpropagation is impractical due to a non-differentiable loss (Schulman et al., 2015), non-smooth loss landscape (Metz et al., 2019), or due to memory and/or compute requirements (Ororbia et al., 2020). However, progress in deep learning is producing ever larger models in terms of parameter count and depth, in vision (Hénaff et al., 2019; Chen et al., 2020), language (Radford et al., 2019; Brown et al., 2020), and many other domains (Silver et al., 2017; Vinyals et al., 2019; Berner et al., 2019). As model size increases, backpropagation incurs growing computational, memory, and synchronization overhead (Ben-Nun & Hoefler, 2018). This raises the question of whether there are more efficient training strategies, even for models and losses that are considered well matched to training by backpropagation.\nMuch of the work on training large scale models focuses on designing compute infrastructure which makes backpropagation more efficient, despite growing model size (Dean et al., 2012b; Chen et al., 2015; Sergeev & Balso, 2018). One of the most common ways to achieve efficient training of deep neural networks with backpropagation is to scale utilizing data parallelism (Zhang et al., 1989; Chen et al., 2016), training on bigger batch sizes spread across multiple devices. However, diminishing returns have been reported with this method for larger batch sizes, effectively wasting compute (Goyal et al., 2017; Masters & Luschi, 2018; Shallue et al., 2018; McCandlish et al., 2018). Training based on pipeline parallelism has also been introduced, but still requires large batches for efficient training (Petrowski et al., 1993; Ben-Nun & Hoefler, 2018; Huang et al., 2019). Moreover, in addition to the limitation that in the forward pass each layer can only process the input data in sequence (forward locking), the use of backpropagation implies that the network parameters of each layer can only be updated in turn after completing the full forward pass (backward locking). This backward locking results in increased memory overhead, and precludes efficient parallel processing across layers (Jaderberg et al., 2017). The challenges of scaling compute infrastructure to support deep networks trained with backpropagation motivate the need for alternative approaches to training deep neural networks.\nIn this work, we explore how layer-wise local updates (Belilovsky et al., 2019a; Löwe et al., 2019; Xiong et al., 2020) can help overcome these challenges and scale more efficiently with compute than backpropagation. With local updates, each layer is updated before even completing a full forward pass through the network. This remedies the forward and backward locking problems which harm memory efficiency and update latency in standard backprop. Layer-wise local updates are not proportional to gradients of the original loss, and are not even guaranteed to descend a loss function. Nevertheless, in practice they are effective at training neural networks. We refer to this approach of parallelizing compute, which is alternative and complementary to data and model parallelism, as local parallelism.\nOur investigation focuses on the trade-offs of using local update methods as opposed to global backpropagation. 
To summarize our contributions: (i) We provide the first large scale investigation into local update methods in both vision and language domains. We find training speedups (as measured by the reduction in required sequential compute steps) of up to 10× on simple MLPs, and 2× on Transformer architectures. These training speedups are the result of local training methods being able to leverage more parallel compute than backprop. (ii) We provide insight into how local parallelism methods work, and experimentally compare the similarity of their gradient and features to those from backprop. (iii) We demonstrate a prototype implementation of local parallelism for ResNets, and show up to a 40% increase in sample throughput (number of training points per second) relative to backprop, due to higher hardware utilization. We believe that local parallelism will provide benefits whenever there are diminishing returns from data parallelism, and avoid stale weights from pipelined model parallelism. Additionally, we have released code showing an example of local parallelism, available at hiddenurl." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 PARALLELIZATION IN DEEP LEARNING", "text": "Scaling large models has led to the development of a number of techniques to train deep models in a parallel fashion (Ben-Nun & Hoefler, 2018), summarized in Figure 1.\nData Parallelism: Data Parallelism (Zhang et al., 1989) is an attempt to speed up training of a model by splitting the data among multiple identical models and training each model on a shard of the data independently. Data parallelism is effectively training with larger minibatches (Kaplan et al., 2020). This creates issues around the consistency of a model which then needs to be synchronized (Deng et al., 2012; Dean et al., 2012a). There are two main ways to synchronize weights across model copies: (i) Synchronous optimization, where data parallel training synchronizes at the end of every minibatch (Das et al., 2016; Chen et al., 2016), with a communication overhead that increases with the number of devices; (ii) Asynchronous optimization that implements data parallel training with independent updates of local model parameters without global synchronization (Niu et al., 2011; Dean et al., 2012a) – this increases device utilization, but empirically gradients are computed on stale weights, which results in a poor sample efficiency and thus slower overall training time compared to synchronous optimization.\nModel Parallelism: Model Parallelism is used when a model is too large to fit in the memory of a single device and is instead spread over multiple processors (Krizhevsky et al., 2012; Shazeer et al., 2018; Harlap et al., 2018; Lepikhin et al., 2020). This is increasingly common as state of the art performance continues to improve with increasing model size (Brown et al., 2020). Model parallelism unfortunately has a few downsides: (i) High communication costs – the total training time for larger networks can become dominated by communication costs (Simonyan & Zisserman, 2015), which in the worst case can grow quadratically with the number of devices, and can reach up to 85% of the total training time of a large model such as VGG-16 (Harlap et al., 2018; Simonyan & Zisserman, 2015); (ii) Device under-utilization – forward propagation and backward propagation are both synchronous operations, which can result in processor under-utilization in model-parallel systems. 
This problem becomes worse as we increase the number of layers (Ben-Nun & Hoefler, 2018; Jia et al., 2014; Collobert et al., 2011; Abadi et al., 2016; Huang et al., 2018).\nPipeline Parallelism: Due to the forward and backward locking, using multiple devices to process consecutive blocks of the deep model would make inefficient use of the hardware resources. Pipelining (Harlap et al., 2018) concurrently passes multiple mini-batches to multiple layers on multiple devices. This increases device utilization but can introduce staleness and consistency issues which lead to unstable training. Harlap et al. (2018) alleviates the consistency issue by storing past versions of each layer. Huang et al. (2019) addresses the staleness issue by pipelining microbatches and synchronously updating at the end of each minibatch. Guan et al. (2019) builds on this work by introducing a weight prediction strategy, and Yang et al. (2020) investigates to what extent the tradeoff between staleness/consistency and device utilization is necessary. Local updates, on the other hand, can keep device utilization high with both small and large batches and avoid the weight staleness problem.\nLocal Learning Rules: Local learning describes a family of methods that perform parameter updates based only on local information, where locality is defined as dependence on neighboring neurons, layers, or groups of layers. The earliest local method we are aware of is Hebbian Learning (Hebb, 1949), which has further been explored in BCM theory (Izhikevich & Desai, 2003; Coesmans et al., 2004), Oja’s rule (Oja, 1982), Generalized Hebbian Learning (Sanger, 1989), and meta-learned local learning rules (Bengio et al., 1990; 1992; Metz et al., 2018; Gu et al., 2019). Architectures like Hopfield Networks (Hopfield, 1982) and Boltzmann Machines (Ackley et al., 1985) also employ a local update, and predate backpropagation in deep learning. Modern variants of local training methods have attempted to bridge the performance gap with backpropagation. These include projection methods such as Hebbian learning rules for deep networks (Krotov & Hopfield, 2019; Grinberg et al., 2019; Ryali et al., 2020), and local layer-wise learning with auxiliary losses (Belilovsky et al., 2019a;b). Most similar to our work is decoupled greedy layer-wise learning (Belilovsky et al., 2019b; Löwe et al., 2019), which trained auxiliary image classifiers greedily, and local contrastive learning (Xiong et al., 2020). These methods mainly focus on matching the performance of backpropagation with respect to training epochs, whereas our work focuses on tradeoffs. Finally, while not local in the sense that parallelized layers still optimize for the global objective, Huo et al. (2018b) parallelize layers by caching gradients and using delayed gradient signals to overcome the backward locking problem and update decoupled layers in parallel." }, { "heading": "3 LOCAL PARALLELISM", "text": "Given a deep neural network, we divide the layers into a sequence of J blocks, which may contain one or more layers. Each block is trained independently with an auxiliary objective, and receives the activations output by the previous block as input or, in the case of the first block, the data from the sampled minibatch. We consider five variants to train this sequence of J blocks: backpropagation, greedy local parallelism, overlapping local parallelism, and chunked local parallelism, as shown in Figure 2. We also include a baseline method of just training the last, or last two, layers.
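As an illustration of the mechanics shared by the local variants described below (an auxiliary classifier per block, a purely local loss, and gradient-detached activations passed forward), here is a minimal JAX sketch. It is our own illustration rather than the released code; the block structure, initialization, and plain-SGD update are simplifying assumptions.

```python
import jax
import jax.numpy as jnp

def init_block(key, d_in, d_hidden, n_classes):
    # Each block is one dense layer plus its own auxiliary linear classifier.
    k1, k2 = jax.random.split(key)
    return {
        "w": jax.random.normal(k1, (d_in, d_hidden)) * jnp.sqrt(2.0 / d_in),
        "aux": jax.random.normal(k2, (d_hidden, n_classes)) * 0.01,
    }

def local_loss(block, x, labels):
    h = jax.nn.relu(x @ block["w"])   # block forward pass
    logits = h @ block["aux"]         # auxiliary classifier
    logp = jax.nn.log_softmax(logits)
    nll = -jnp.mean(jnp.take_along_axis(logp, labels[:, None], axis=1))
    return nll, h

def greedy_step(blocks, x, labels, lr=1e-2):
    # Written as a loop for clarity: each block depends on the previous one
    # only through detached activations, so on parallel hardware every
    # block's update can run concurrently, pipelined across minibatches.
    new_blocks = []
    for block in blocks:
        (_, h), grads = jax.value_and_grad(local_loss, has_aux=True)(
            block, x, labels)
        new_blocks.append(jax.tree_util.tree_map(
            lambda p, g: p - lr * g, block, grads))
        x = jax.lax.stop_gradient(h)  # no gradient flows between blocks
    return new_blocks
```

In this picture, chunked parallelism corresponds to grouping several layers under one auxiliary head, and backpropagation to a single block containing the whole network.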
In all of the local methods, training occurs by attaching objective functions to the end of each block and backpropagating the signal locally into the corresponding block or blocks. In this work the auxiliary objective functions that we use take the same form as the global objective. For example, to train a classifier on CIFAR-10, we attach auxiliary linear classifiers to each local block. See Belilovsky et al. (2019b) for further discussion on the form of this objective.\nBackpropagation: In our notation, backpropagation groups all layers into one block and thus J = 1. The parameters are updated with one instance of global error correction. While backpropagation ensures that all weights are updated according to the final output loss, it also suffers from forward and backward locking (Jaderberg et al., 2017), an issue that local parallelized methods aim to resolve.\nGreedy local parallelism: A straightforward approach to enable local training is to attach an auxiliary network to each local layer, which generates predictions from the activations of hidden layers. After generating predictions, each local gradient is backpropagated to its respective local block, as shown in Figure 2(b). The activations are then passed as input to the next layer. We refer to this approach, introduced in Belilovsky et al. (2019b), as greedy. Greedy local parallelism is the most parallelizable of all the schemes we consider. However, a potential downside is that fully greedy updates force the layers to learn features that are only relevant to their local objective and preclude inter-layer communication, which may result in lower evaluation performance for the global objective, or worse generalization.\nOverlapping local parallelism: One issue with the purely greedy approach is that features learned for any individual block may not be useful for subsequent blocks, since there is no inter-block propagation of gradient. For this reason, we consider overlapping local architectures where the first layer of each block is also the last layer of the previous block, as shown in Figure 2(c), though overlapping of more layers is also possible. This redundancy enables inter-block propagation of gradient that is still local, since only neighboring blocks overlap. However, this comes at the cost of running additional backward passes. The overlapping architecture has appeared before in Xiong et al. (2020), but was used only for contrastive losses. Ours is the first work to investigate overlapping local architectures for standard prediction objectives in computer vision and language. Overlapping updates are parallelizable, but come with the additional complexity of keeping duplicates of the overlapping components and averaging updates for these layers.\nChunked local parallelism: The greedy architecture is maximally parallel in the sense that it distributes one layer per block. However, it is also possible to have fewer parallel blocks by combining multiple layers into one. We refer to this architecture, shown in Figure 2(d), as chunked local parallelism. This method trades off parallelizability and therefore throughput for an error signal that propagates through more consecutive layers. It differs from overlapping local parallelism by not needing to duplicate any layer. While previous work has investigated the asymptotic performance of chunked parallelism (Belilovsky et al., 2019b), ours is the first to consider the compute efficiency and parallelizability of local parallelism.
By stacking multiple layers per each parallelized block, chunked parallelism sits between fully parallelized methods, such as greedy and overlapping updates, and fully sequential methods like backpropagation." }, { "heading": "4 EFFICIENT TRAINING ON PARETO FRONTIERS", "text": "We explore the trade-off between total computational cost and the amount of wallclock time needed to train a particular machine learning model to a target performance, similar to the analysis in McCandlish et al. (2018). We use floating point operations (FLOPs) as our unit of both cost and time, as they do not couple us to a particular choice of hardware. Cost is proportional to the total FLOPs used. We report time as the number of sequential FLOPs needed assuming we can run each example, and in the case of the local methods, each layer, in parallel. We refer the reader to Appendix A for detailed information on how total and sequential FLOPs are computed for each experiment.\nWe compare how training scales with compute for backpropagation and a variety of local methods: (i) greedy (Figure 2(b)), (ii) overlapping (Figure 2(c)), (iii) two and three chunk greedy (Figure 2(d)), where we split the network into two or three pieces that are trained in a greedy fashion, (iv) last layer & last two layers, a simple baseline where we only backpropagate through the last one or two layers and keep the rest of the network parameters fixed. We apply these methods to a variety of architectures and data including a dense feed-forward network, a ResNet50 network (He et al., 2016) trained on ImageNet (Russakovsky et al., 2015), and a Transformer (Vaswani et al., 2017) model trained on LM1B (Chelba et al., 2013). In Appendix C, we provide results for additional feed-forward networks, a ResNet18 trained on ImageNet, and a larger Transformer, as well as further architecture details. For each model and training method, we perform a large sweep over batch size as well as other optimization hyperparameters, and only display the best-performing runs on the Pareto optimal frontier. See Appendix B for more detail.\nThe resulting figures all follow the same general structure. Models train with low total cost when the amount of available compute is large. By increasing batch size, the amount of compute utilized per parallel process can be reduced efficiently until a critical batch size is reached, at which point further increasing the batch size results in diminishing returns in terms of compute efficiency, which is similar to results reported for backpropagation by McCandlish et al. (2018). We find that, in most cases, local updates significantly increase training speed for deep networks in the high-compute regime, and therefore utilize less total compute than backpropagation. When applicable, we additionally show tables of the best achieved results across all parameters, ignoring the time to reach these values. In this setting, we find that backpropagation usually achieves the best performance. This is partially due to the fact that all of these models are trained for a fixed number of examples, and partially due to the fact that backpropagation makes fuller use of the capacity of a given model, which we further investigate in Section 5." }, { "heading": "4.1 SYNTHETIC: MLP’S OVER-FITTING TO CIFAR-10", "text": "As a proof of concept we first demonstrate optimization performance on an eight layer MLP with 4096 hidden units, performing classification on the CIFAR-10 dataset (Krizhevsky et al., 2009).
Hyperparameter and optimization details can be found in Appendix B.1. From the resulting Pareto frontiers shown in Figure 3, we find that in no circumstance is backpropagation the best method to use. In the high compute regime, we find that local methods enable training up to 10× faster (e.g. at the 0.001 cutoff)." }, { "heading": "4.2 LANGUAGE MODELING: TRANSFORMERS ON LM1B", "text": "Next we explore a small (6M parameter) Transformer (Vaswani et al., 2017) trained on LM1B (Chelba et al., 2013). We build off of an implementation from Flax Developers (2020). Hyperparameters and optimization details can be found in Appendix B.2. We find that, for the higher cutoffs, many of the local methods vastly outperform backpropagation. For the lower cutoffs (≤ 4.0), we\nfind that while backpropagation is more efficient in the high-time regime, local methods train significantly faster in the high-compute regime, and can train 2× faster than backpropagation. These local methods do not reach as low of a minimum in the given training time, however. See Figure 4." }, { "heading": "4.3 IMAGE CLASSIFICATION: RESNET50 ON IMAGENET", "text": "Next we explore the performance of local parallelism on a ResNet50 model trained on the ImageNet dataset (Russakovsky et al., 2015) (Figure 5). Hyperparameter and configuration details can be found in Appendix C.1. We find, as before, that for many cutoff values local parallelism shows gains over backpropagation in the high-compute regime. However, at the cutoff of 74% these gains shrink and the local methods are slightly less efficient. We hypothesize this is in part due to increased overfitting by the local methods. To see this, we can observe that local methods are much more competitive when evaluated on training accuracy. This suggests that given more data these local methods will be competitive." }, { "heading": "5 PROBING BEHAVIOR OF LOCAL UPDATES", "text": "In the previous section we showed that in some cases local parallelism can provide large speedups over backpropagation but suffers in terms of the best achievable performance. In this section we explore why and how these methods work, and discuss limitations.\nGradient Angles: Local parallelism does not follow the gradient of the underlying function. Instead it computes a local, greedy approximation. To check the quality of this approximation, we measure the angle between the true gradient and the gradient computed with our greedy method (Figure 6a). We find angles of less than 90° (positive cosine similarity), which implies that these directions are still descent directions. As one moves further away from the end of the network, these similarities shrink.\nLarger Block Sizes Improve Generalization: As noted in Huo et al. (2018a;b) and Belilovsky et al. (2019b), using chunked local parallelism with more parallel blocks can decrease performance. Here we show that, in practice, this reduction in performance seems to stem mainly from a worsening generalization gap, with train and test results shown for various chunk sizes in Figure 6. A chunk size of nine is simply backprop, and a chunk size of one is fully greedy.\nCapacity: Ability to Fit Random Labels: Throughout our work we find that models trained with local updates don’t make as efficient use of model capacity. This is not necessarily a problem, but represents a tradeoff. Researchers have found that increased model sizes can be used to train faster without leveraging the extra capacity to its fullest (Raffel et al., 2019; Kaplan et al., 2020).
Additionally, techniques like distillation can be used to reduce model size (Hinton et al., 2015). We demonstrate this capacity issue by fitting random labels with a ResNet on CIFAR-10, shown in Figure 6.\nLocal Methods Learn Different Features: One way to show differences between local and non-local methods is to look at the features learned. For each method we test, we take the best performing model and visualize the first layer features. The results are shown in Figure 7. Qualitatively, we see similar first layer features from Backprop and Two/Three Chunk local parallelism. The more greedy approaches (Overlap, Greedy) yield a different set of features with fewer edge detectors. Finally, when training with only the last layers, the input layer is not updated, and the features are random." }, { "heading": "6 REALIZED PERFORMANCE GAINS", "text": "Here we show that performance gains of local parallelism can be realized on real hardware, and that they are similar to or better than pipelined backpropagation despite the increased computation needed for auxiliary losses. We train ResNet34, ResNet50 and ResNet101 (He et al., 2016) on the ImageNet dataset (Deng et al., 2009), and compare throughput (images per second) between chunked local parallelism and synchronous pipelined backpropagation (Huang et al., 2019). We implement the models in TensorFlow (Abadi et al., 2016) and train them across 4 or 8 Intelligence Processing Units (IPUs – see details in Appendix E). Note that neither the local nor the pipeline configurations make use of data parallelism, which could be applied identically in both cases. We use activation recomputation in the case of pipelined backprop (see discussion in Appendix D.3). The results in Table 1 show that chunked local parallelism can achieve similar or greater throughput compared to pipelined backpropagation, for the same local batch size. This provides evidence that local parallelism can enable similar hardware efficiency without necessitating an increase in minibatch size. It is therefore amenable to a greater level of data parallelism before performance degradation due to a large global batch size. The difference in throughput between backpropagation and local parallelism with the same local batch size is primarily due to the poor utilization during the “ramp-up” and “ramp-down” phases of the pipelined backpropagation. This can be mitigated by running the pipeline in the steady state for more stages (compare rows 4 and 5 of Table 1). However, this results in the accumulation of gradients from a larger number of local batches, thus incurring a larger effective batch size. With greedy local parallelism, updates can be applied asynchronously and the pipeline can be run in steady state indefinitely, after an initial ramp-up phase." }, { "heading": "7 CONCLUSION", "text": "In this work we demonstrated that local parallelism is a competitive alternative to backpropagation in the high-compute training regime, and explored design decisions and trade-offs inherent in training with local parallelism. We summarize some main takeaways from our work:\n• Speed vs. Performance: Greedy local parallelism should be used if speed and compute efficiency are the primary objectives.
Chunked local parallelism should be used if performance is the primary objective.\n• Gains in High-Compute Regime: Local parallelism can be useful to prolong compute-efficient scaling, and therefore faster training, with larger batch sizes once data parallelism begins to saturate.\n• Comprehensive Analysis: Local parallelism can be applied across multiple modalities (vision, language) and architectures (MLPs, ResNets, Transformers).\nWe hope that local methods will enable new research into large models. By lowering communication requirements – particularly latency requirements surrounding synchronization – we believe that local parallelism can be used to scale up and train more massive models in a more distributed fashion." }, { "heading": "A CALCULATION OF TOTAL FLOPS AND SEQUENTIAL FLOPS", "text": "To construct the Pareto curves used in this work we need some estimate of compute time. Obtaining hardware-independent measurements of compute cost and compute time is desirable, but in general impossible, as different hardware makes different trade-offs for compute efficiency. In this work we choose to use a theoretical estimate of compute costs based on floating point operation (FLOP) counting. In all three models, we divide the costs up into three measurements: the FLOPs needed for a forward pass through a layer, the FLOPs needed for the auxiliary loss computation, and a multiplier to compute the number of FLOPs for a backward pass. For simplicity, we average the compute costs across layers. While this is strictly not feasible in reality with a batch size of one per device, we can come close to approximating it by using more or less parallel hardware per layer. This is relatively simple to implement given the minimal communication overhead. We additionally take into account optimizer FLOPs, which we approximate as ten times the number of parameters, but this contribution turns out to be negligible." }, { "heading": "A.1 CALCULATIONS PER MODEL", "text": "MLP: An MLP is parameterized by the hidden size, N, and the number of layers, L. The first layer’s total FLOPs come from a matrix-vector multiplication, a bias add of size N, and a ReLU (which we assume costs 1 FLOP per entry). This yields a total of (2 ∗ I ∗ N − I) + N + 2 ∗ N FLOPs, where I is the input size (Hunger, 2005). The auxiliary classifiers consist of a matrix-vector multiplication to size 10, a bias add, and a softmax cross entropy loss. We assume the softmax costs 5 FLOPs per estimate, leading to a FLOP estimate of (2 ∗ N ∗ 10 − N) + 10 + 5 ∗ 10. For this problem, we approximate the backward multiplier to be 1.5. For the MLP model used in the main text (with hidden size N = 4096 and L = 8 layers), the average forward cost per layer is 32514176.0 FLOPs, and the auxiliary loss costs 77884.0 FLOPs.\nFor the remaining models, we compute our estimates of these components by first using JAX to convert our models to TensorFlow functions, and then leveraging TensorFlow’s tf.compat.v1.profiler.\nResNet50: This model has L = 17 layers and contains 38711720 parameters. We find that the average forward flop count per example, per layer is 5479411.176470588, the auxiliary loss per layer is 3382457.3529411764, and the backward multiplier is 2.0280375672996596.\nResNet18: This model has L = 9 layers and 13170792 parameters.
We find that the average forward FLOP count per example, per layer is 1640544.352941176, the auxiliary loss FLOP count per example, per layer is 565900.6470588235, and the backward multiplier is 2.08565879129763.\nTransformer small: This model has L = 4 layers. We find that the average forward cost per example, per layer is 13837446.0, the auxiliary loss is 1163904.0, and the backward multiplier is 1.6581083035860107.\nTransformer large: This model has L = 6 layers. We find that the average forward cost per example, per layer is 51037318.0, the auxiliary cost is 4653696.0, and the backward multiplier is 1.7526391044859857." }, { "heading": "A.2 CALCULATIONS PER METHOD", "text": "In all cases, we first obtain the total computation cost in terms of FLOPs and then compute time (or sequential FLOPs) by dividing by the maximum amount of parallelism (assuming that each example and each layer are run concurrently). As stated before, this is not strictly possible to implement in hardware. In reality, however, we expect more than one example to be used per device in combination with data parallelism, and thus appropriate load balancing can be done.\nAll of these calculations are a function of the 4 numbers described above (forward cost, auxiliary cost, backward multiplier and the optimizer cost) in addition to the batch size and the number of gradient steps until the target loss is reached.\nBackprop: Per step, backprop involves running one forward pass and one backward pass of the entire network, plus one auxiliary head for the last layer loss computation. The cost per example is computed as follows:\ncost per example = (1 + backward multiplier) ∗ (forward cost ∗ layers + aux cost)\ncost = cost per example ∗ steps ∗ batch size + steps ∗ optimizer cost\ntime = cost / batch size\nGreedy: Per step, the greedy method requires running one forward and backward pass for L layers and L auxiliary loss computations.\ncost per example = (1 + backward multiplier) ∗ ((forward cost + aux cost) ∗ layers)\ncost = cost per example ∗ steps ∗ batch size + steps ∗ optimizer cost\ntime = cost / (batch size ∗ layers)\nOverlapping: Because we are using overlapping chunks of layers, additional compute must be performed. This method uses one full forward pass through the entire network plus two backward passes for each non-terminal layer; the terminal layer requires one fewer backward pass. We additionally need one forward and backward pass of each auxiliary loss. An additional average of gradients is required, which incurs extra compute per layer.\ncost per example = (forward cost + aux cost) ∗ layers + (layers − 1) ∗ backward multiplier ∗ (2 ∗ forward cost + aux cost) + backward multiplier ∗ (forward cost + aux cost)\ncost = cost per example ∗ steps ∗ batch size + steps ∗ (optimizer cost + 2 ∗ parameters)\ntime = cost / (batch size ∗ layers)\nTwo/Three chunk: Here, we perform a full forward + backward pass through every layer plus two or three auxiliary losses. Let us call the number of chunks K for the equations below.\ncost per example = (1 + backward multiplier) ∗ (forward cost ∗ layers + K ∗ aux cost)\ncost = cost per example ∗ steps ∗ batch size + steps ∗ optimizer cost\ntime = cost / (batch size ∗ K)\nLast One/Two Layers: These methods require a full forward pass, a single auxiliary loss computation and then a backward pass on the last K layers. 
To calculate time, we assume these last K layers are the smallest atomic chunk that can be run, and we divide up the remaining layers accordingly.\ncost per example = (layers ∗ forward cost + aux cost) + backward multiplier ∗ (K ∗ forward cost + aux cost)\ncost = cost per example ∗ steps ∗ batch size + steps ∗ optimizer cost\nnum parallel = (layers + K ∗ backward multiplier) / (K ∗ (1 + backward multiplier))\ntime = cost / (batch size ∗ num parallel)" }, { "heading": "B HYPERPARAMETER AND CONFIGURATION DETAILS FOR EXPERIMENTAL RESULTS", "text": "" }, { "heading": "B.1 MLP ON CIFAR-10", "text": "We sweep the batch size from 64 to 524,288 in powers of 2. At each batch size, we train models using learning-rate-tuned Adam (with six values log-spaced between 1e-4 and 3e-2) as well as the first 50 optimizers taken from opt list to provide a stronger baseline (Metz et al., 2020). All models are trained for three million examples on an eight-core TPU-V2 using gradient accumulation to control memory usage. We select a sequence of cutoff values (the loss value we attempt to reach in the shortest time) and plot the Pareto frontier of the different training methodologies in Figure 3." }, { "heading": "B.2 TRANSFORMERS ON LM1B", "text": "Our Transformer has 4 layers, 8 heads per attention layer, 128-dimensional query, key, and value vectors, 256-dimensional hidden layers, and 128-dimensional embeddings. We train on length-128 sequences formed from subword tokenization with a vocabulary size of 8k. Each Transformer layer is treated as a separate parallelizable component. Our auxiliary classifiers consist of layer norm and a linear projection back to the vocabulary, with a softmax cross entropy loss. We sweep batch sizes in powers of two from 32 to 524,288. At each batch size, we train Adam with six different learning rates spaced evenly on a log scale between 1e-4 and 3e-2, as well as the first 50 optimizers from opt list (Metz et al., 2020). All models are run until they have processed 50 million sequences, on an 8-core TPU-V2 with gradient accumulation to control memory. We chose four cutoff values computed on validation loss: two that show behavior early in training (5.0 and 4.0), the value chosen by Shallue et al. (2018) (3.9), and a loss value slightly lower (3.8). Results can be found in Figure 4." }, { "heading": "C ADDITIONAL PARETO CURVES EXPERIMENTS", "text": "We provide additional Pareto curves for different model architectures." }, { "heading": "C.1 RESNETS ON IMAGENET", "text": "We build our code off of the Haiku implementation (Hennigan et al., 2020). We break the network up by putting the first convolution, and each residual block, into a separate parallelizable component. For auxiliary losses we apply batch normalization (Ioffe & Szegedy, 2015), then ReLU, then compute the mean across the spatial dimensions, and finally perform a linear projection to the output classes. We sweep batch sizes from 8 to 524,288 in powers of 2. For each batch size we randomly sample optimizer hyperparameters for both the SGDM optimizer with a staircase schedule described in Goyal et al. (2017) and from the first 50 configurations in opt list. The resulting cost (wall-time) Pareto curves for both validation accuracy and training accuracy are shown in Figure 5." }, { "heading": "C.2 MLPS", "text": "We provide MLPs trained to match Section 4.1 but using a different number of hidden units. In addition to 4096 units, we show 1024, 256, and 64 units in Figure 8. 
We find that training only the last 2 layers performs well for larger networks, as there is enough capacity, but it is considerably less useful as model size shrinks." }, { "heading": "C.3 TRANSFORMER LARGE", "text": "In this section we explore a larger transformer than that in Section 4.2. This transformer matches the default settings of (Flax Developers, 2020). It has 6 layers, 8 heads per attention layer, 512-dimensional query, key, and value vectors, 512-dimensional hidden layers, and 512-dimensional embeddings. We train on length-128 sequences formed from subword tokenization with a vocabulary size of 32k. We show results in Figure 9. Unlike in the small transformer, and due to increased compute costs, we randomly sample configurations instead of running all of them." }, { "heading": "C.4 RESNET18", "text": "In addition to the ResNet50 we explored in the main text (Section 4.3), we also explore a ResNet18 trained with the same protocols. We find similar results in Figure 10." }, { "heading": "D HARDWARE UTILIZATION", "text": "" }, { "heading": "D.1 EXECUTION TRACES", "text": "The execution traces shown in Figure 11 illustrate hardware utilization. A single cycle of pipelined backpropagation over 8 microbatches is shown in Figure 11a. This comprises a ramp-up phase (blue dashed box), a steady state where all processors are used at each step (e.g. green dashed box), and a ramp-down phase (orange dashed box). Gradients for all steps are accumulated and applied at the end of the cycle. It is clear that there is significant device underutilization in the ramp-up and ramp-down phases. Moreover, in the steps which follow the ramp-up, the gradient signal has not yet reached the earlier layers of the network, preventing backward passes from being executed on these processors and causing poor load balancing. A similar effect is present in the steps which precede ramp-down, where no further forward passes are run in the shallower layers of the network. These effects contribute to the higher throughput of local parallelism relative to backpropagation.\nDuring the steps where all processors calculate a forward and backward pass (dashed green box in Figure 11a, enlarged in Figure 11b), the utilization of backpropagation is at its highest. More of these steps may be executed in each cycle, which reduces the fractional overhead of ramp-up and ramp-down. However, this results in gradients being accumulated over more microbatches, and therefore in the use of a larger overall minibatch size, thus reducing the potential for data-parallel replication of the pipeline.\nDuring the peak-utilization steps, the operations executed are largely the same as those for chunked local parallelism (Figure 11c), with the exception of the auxiliary classifiers necessary for local parallelism. Note that in the pipelined backpropagation case (Figure 11b) the backward pass is executed before the forward pass (with different microbatches), whereas in the chunked local update case the forward pass is executed before the backward pass (for the same minibatch). This difference in ordering can be seen in the tall, thin “spike-like” operations which occur at the start of the backward pass. 
In the pipelined backpropagation step they are run at the start of the step, while in the local update step they are run midway through.\nD.2 INTER-PROCESSOR COMMUNICATION\nOur profiling of the code found that the total data communicated between processors for local parallel training was half that of pipelined backprop, for all network/batch size configurations presented in Table 1. As an example, Table 2 reports the data communicated between IPUs for a single training step of ResNet34. Note this corresponds to a single microbatch in the case of pipelined backpropagation. There is a clear reduction in the total data communicated with local parallelism relative to backpropagation. The reduction is as expected: IPU 1, processing the first layers, receives no data from other IPUs as no gradient signal from later layers is communicated backward. IPU 4 does not transmit any data backward for the same reason. Overall, the total data communicated is 43.2MB for chunked local parallelism and 86.4MB for backpropagation. These results are consistent with the expectation that pipelined local parallelism should result in half as much inter-processor communication as pipelined backpropagation." }, { "heading": "D.3 MEMORY CONSUMPTION", "text": "Table 3 contains the memory consumption statistics for different network and training configurations. We can draw a number of conclusions. First, activation recomputation drastically reduces the memory consumption for pipelined backpropagation. Note that we do not observe a reduction in throughput with recomputation, as the operations to read and write stored activations also take significant numbers of cycles when not recomputing1. Thus, we are confident that the comparison presented in Table 1 remains valid. Even with recomputation, we see that local parallelism generally reduces the average memory consumption, and corresponds in all cases to lower or similar max memory.\nThe difference in memory consumption depends on the local batch size and the number of processors. While recomputation reduces the number of activations that must be stored for pipelined backprop, we must still store the input to each processor for each “live” microbatch. Thus, for larger microbatches this overhead increases. Further, a larger number of processors means that there are more live microbatches at any one time, also increasing the memory overhead. Conversely, local parallelism introduces extra parameters in the auxiliary classifiers, which are not needed for backpropagation. These observations explain why the memory consumption for pipelined backpropagation increases over that of local parallelism as the local batch size and/or the number of processors grow. Thus local parallelism can reduce memory consumption in a high-throughput regime." }, { "heading": "E HARDWARE BACKGROUND", "text": "The Intelligence Processing Unit (IPU) (Jia et al., 2019) is an accelerator designed for machine intelligence workloads. An IPU contains several parallel processing elements, called tiles, each of which has its own local high-speed memory (SRAM) and is able to run a number of parallel threads. For example, in the second-generation IPU (MK2), each IPU chip has 1472 compute tiles; each tile is equipped with six parallel threads and 600KB of SRAM, equivalent to a total of 8832 parallel threads and 900MB of on-chip memory with an aggregate 47.5 TB/s memory bandwidth per chip. 
This design benefits from efficient execution of fine-grained operations across the very large number of parallel threads, and allows these threads to access data efficiently. This is particularly advantageous when the data access patterns are irregular, sparse or incoherent.\n1 For example, for ResNet50 with batch size 4 × 8 over 4 IPUs, we observe that pipelined backprop with recomputation has 1.5% higher throughput than without recomputation." } ]
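As a concrete companion to the cost accounting in Appendices A.1 and A.2, the following is a minimal Python sketch of the backprop and greedy cost formulas. It is an illustration only: the class and method names are ours (not from the released code), the MLP constants are the per-layer averages reported in Appendix A.1, and the parameter count used for the optimizer cost is an assumed round figure rather than the exact value.

```python
# Minimal sketch of the Appendix A.2 cost model (illustrative names).
# "time" is sequential FLOPs under the paper's idealized assumption that
# every example and every layer can run concurrently.
from dataclasses import dataclass


@dataclass
class CostModel:
    forward: float    # average forward FLOPs per example, per layer
    aux: float        # auxiliary-loss FLOPs per example, per layer
    bwd_mult: float   # backward-pass multiplier
    layers: int
    opt_cost: float   # optimizer FLOPs per step (~10x parameter count)

    def backprop(self, steps: int, batch: int):
        # One full forward + backward pass plus a single auxiliary head.
        per_example = (1 + self.bwd_mult) * (self.forward * self.layers + self.aux)
        cost = per_example * steps * batch + steps * self.opt_cost
        return cost, cost / batch  # (total FLOPs, sequential FLOPs)

    def greedy(self, steps: int, batch: int):
        # Every layer trains its own forward/backward + auxiliary loss in parallel.
        per_example = (1 + self.bwd_mult) * (self.forward + self.aux) * self.layers
        cost = per_example * steps * batch + steps * self.opt_cost
        return cost, cost / (batch * self.layers)


# MLP constants from Appendix A.1; the 1.3e8 parameter count is an assumption.
mlp = CostModel(forward=32514176.0, aux=77884.0, bwd_mult=1.5,
                layers=8, opt_cost=10 * 1.3e8)
print(mlp.backprop(steps=1000, batch=256))
print(mlp.greedy(steps=1000, batch=256))
```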
2020
null
SP:a3e5acdd322677d019a4582db78dab2dc1102818
[ "This paper discusses a well-known problem of VAE training that decoder produces blurry reconstruction with constant variance. While much existing work addressed this problem by introducing independent variance training (as of the original VAE model) or additional hyper-parameters, those approaches usually come with additional training/tuning difficulty and even break the ELBO assumption. This paper proposed a simple $\\sigma$-VAE that addresses the above problem by optimizing a single variance variable. This also could be easily connected to the well known $\\beta$-VAE works. The experiment results in Tables 2 and 3 show the proposed model obtains a better FID score than the existing works on multiple datasets." ]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions. However, training VAEs often requires considerable hyperparameter tuning to determine the optimal amount of information retained by the latent variable. We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution and can determine this amount of information automatically, on the VAE performance. While many methods for learning calibrated decoders have been proposed, many of the recent papers that employ VAEs rely on heuristic hyperparameters and ad-hoc modifications instead. We perform the first comprehensive comparative analysis of calibrated decoders and provide recommendations for simple and effective VAE training. Our analysis covers a range of datasets and several single-image and sequential VAE models. We further propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically. We observe empirically that using heuristic modifications is not necessary with our method.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI}", "year": 2016 }, { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Information dropout: Learning optimal representations through noisy computation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Alexander A Alemi", "Ben Poole", "Ian Fischer", "Joshua V Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a broken elbo", "venue": "arXiv preprint arXiv:1711.00464,", "year": 2017 }, { "authors": [ "Shun-Ichi Amari" ], "title": "Natural gradient works efficiently in learning", "venue": "Neural computation,", "year": 1998 }, { "authors": [ "Georgios Arvanitidis", "Lars Kai Hansen", "Søren Hauberg" ], "title": "Latent space oddity: on the curvature of deep generative models", "venue": "arXiv preprint arXiv:1710.11379,", "year": 2017 }, { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H. Campbell", "Sergey Levine" ], "title": "Stochastic variational video", "venue": null, "year": 2018 }, { "authors": [ "Jonathan T Barron" ], "title": "A general and adaptive robust loss function", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Lluis Castrejon", "Nicolas Ballas", "Aaron Courville" ], "title": "Improved conditional vrnns for video prediction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xi Chen", "Diederik P Kingma", "Tim Salimans", "Yan Duan", "Prafulla Dhariwal", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Variational lossy autoencoder", "venue": "arXiv preprint arXiv:1611.02731,", "year": 2016 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": null, "year": 2015 }, { "authors": [ "Bin Dai", "David Wipf" ], "title": "Diagnosing and enhancing vae models", "venue": "arXiv preprint arXiv:1903.05789,", "year": 2019 }, { "authors": [ "A Philip Dawid" ], "title": "The well-calibrated bayesian", "venue": "Journal of the American Statistical Association,", "year": 1982 }, { "authors": [ "Morris H DeGroot", "Stephen E Fienberg" ], "title": "The comparison and evaluation of forecasters", "venue": "Journal of the Royal Statistical Society: Series D (The Statistician),", "year": 1983 }, { "authors": [ "E. Denton", "R. 
Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": null, "year": 2018 }, { "authors": [ "Prafulla Dhariwal", "Heewoo Jun", "Christine Payne", "Jong Wook Kim", "Alec Radford", "Ilya Sutskever" ], "title": "Jukebox: A generative model for music", "venue": "arXiv preprint arXiv:[TODO],", "year": 2020 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Towards a neural statistician", "venue": "arXiv preprint arXiv:1606.02185,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Deep visual foresight for planning robot", "venue": null, "year": 2017 }, { "authors": [ "Partha Ghosh", "Mehdi SM Sajjadi", "Antonio Vergari", "Michael Black", "Bernhard Schölkopf" ], "title": "From variational to deterministic autoencoders", "venue": "arXiv preprint arXiv:1903.12436,", "year": 2019 }, { "authors": [ "Karol Gregor", "Ivo Danihelka", "Alex Graves", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Draw: A recurrent neural network for image generation", "venue": "arXiv preprint arXiv:1502.04623,", "year": 2015 }, { "authors": [ "Karol Gregor", "Frederic Besse", "Danilo Jimenez Rezende", "Ivo Danihelka", "Daan Wierstra" ], "title": "Towards conceptual compression", "venue": null, "year": 2016 }, { "authors": [ "Ishaan Gulrajani", "Kundan Kumar", "Faruk Ahmed", "Adrien Ali Taiga", "Francesco Visin", "David Vazquez", "Aaron Courville" ], "title": "Pixelvae: A latent variable model for natural images", "venue": "arXiv preprint arXiv:1611.05013,", "year": 2016 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "arXiv preprint arXiv:1706.04599,", "year": 2017 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "arXiv preprint arXiv:1912.01603,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels. 
2019b", "venue": null, "year": 2019 }, { "authors": [ "Mikael Henaff", "Alfredo Canziani", "Yann LeCun" ], "title": "Model-predictive policy learning with uncertainty regularization for driving in dense traffic", "venue": null, "year": 1901 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2017 }, { "authors": [ "Jonathan Ho", "Ajay Jain", "Pieter Abbeel" ], "title": "Denoising diffusion probabilistic models", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Michael I Jordan", "Zoubin Ghahramani", "Tommi S Jaakkola", "Lawrence K Saul" ], "title": "An introduction to variational methods for graphical models", "venue": "Machine learning,", "year": 1999 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": null, "year": 2014 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Simon Kohl", "Bernardino Romera-Paredes", "Clemens Meyer", "Jeffrey De Fauw", "Joseph R Ledsam", "Klaus Maier-Hein", "SM Ali Eslami", "Danilo Jimenez Rezende", "Olaf Ronneberger" ], "title": "A probabilistic u-net for segmentation of ambiguous images", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "A.X. Lee", "R. Zhang", "F. Ebert", "P. Abbeel", "C. Finn", "S. Levine" ], "title": "Stochastic adversarial video", "venue": "prediction. arXiv:1804.01523,", "year": 2018 }, { "authors": [ "Alex X Lee", "Anusha Nagabandi", "Pieter Abbeel", "Sergey Levine" ], "title": "Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model", "venue": null, "year": 1907 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "James Lucas", "George Tucker", "Roger B Grosse", "Mohammad Norouzi" ], "title": "Don’t blame the elbo! 
a linear vae perspective on posterior collapse", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Lars Maaløe", "Marco Fraccaro", "Valentin Liévin", "Ole Winther" ], "title": "Biva: A very deep hierarchy of latent variables for generative modeling", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Pierre-Alexandre Mattei", "Jes Frellsen" ], "title": "Leveraging the exact likelihood of deep latent variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Radford M Neal", "Geoffrey E Hinton" ], "title": "A view of the em algorithm that justifies incremental, sparse, and other variants", "venue": "In Learning in graphical models,", "year": 1998 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Georgios Pavlakos", "Vasileios Choutas", "Nima Ghorbani", "Timo Bolkart", "Ahmed AA Osman", "Dimitrios Tzionas", "Michael J Black" ], "title": "Expressive body capture: 3d hands, face, and body from a single image", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xue Bin Peng", "Angjoo Kanazawa", "Sam Toyer", "Pieter Abbeel", "Sergey Levine" ], "title": "Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow", "venue": "arXiv preprint arXiv:1810.00821,", "year": 2018 }, { "authors": [ "Jan Peters", "Stefan Schaal" ], "title": "Reinforcement learning of motor skills with policy gradients", "venue": "Neural networks,", "year": 2008 }, { "authors": [ "Vitchyr H Pong", "Murtaza Dalal", "Steven Lin", "Ashvin Nair", "Shikhar Bahl", "Sergey Levine" ], "title": "Skew-fit: State-covering self-supervised reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": null, "year": 2014 }, { "authors": [ "Jason Tyler Rolfe" ], "title": "Discrete variational autoencoders", "venue": "arXiv preprint arXiv:1609.02200,", "year": 2016 }, { "authors": [ "Mihaela Rosca", "Balaji Lakshminarayanan", "Shakir Mohamed" ], "title": "Distribution matching in variational inference", "venue": "arXiv preprint arXiv:1802.06847,", "year": 2018 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": null, "year": 2015 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue":
"In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Andrew Stirn", "David A Knowles" ], "title": "Variational variance: Simple and reliable predictive variance parameterization", "venue": "arXiv preprint arXiv:2006.04910,", "year": 2020 }, { "authors": [ "Hiroshi Takahashi", "Tomoharu Iwata", "Yuki Yamanaka", "Masanori Yamada", "Satoshi Yagi" ], "title": "Student-t variational autoencoder for robust density estimation", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "Under review as a conference paper at ICLR 2021 Figure 5: Samples from the σ-VAE (left) and the Gaussian VAE (right) on the SVHN dataset. The Gaussian VAE produces blurry results with muted colors, while the σ-VAE is able to produce accurate images", "venue": null, "year": 2021 }, { "authors": [ "models. ICLR", "2016. Manuel Watter", "Jost Springenberg", "Joschka Boedecker", "Martin Riedmiller" ], "title": "Embed to control", "venue": null, "year": 2016 }, { "authors": [ "corresponding bin" ], "title": "Kingma et al., 2016) uses the logistic distribution discretized in this manner", "venue": null, "year": 2016 }, { "authors": [ "Salimans" ], "title": "2017) suggests to make all bins except the first and the last be of equal size, whereas the first and the last bin include, respectively, the intervals (−∞, 0] and [1,∞)", "venue": "Salimans et al", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep density models based on the variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) have found ubiquitous use in probabilistic modeling and representation learning as they are both conceptually simple and are able to scale to very complex distributions and large datasets. These VAE techniques are used for tasks such as future frame prediction (Castrejon et al., 2019), image segmentation (Kohl et al., 2018), generating speech (Chung et al., 2015) and music (Dhariwal et al., 2020), as well as model-based reinforcement learning (Hafner et al., 2019a). However, in practice, many of these approaches require careful manual tuning of the balance between two terms that correspond to distortion and rate from information theory (Alemi et al., 2017). This balance trades off fidelity of reconstruction and quality of samples from the model: a model with low rate would not contain enough information to reconstruct the data, while allowing the model to have high rate might lead to unrealistic samples from the prior as the KL-divergence constraint becomes weaker (Alemi et al., 2017; Higgins et al., 2017). While a proper variational lower bound does not expose any free parameters to control this tradeoff, many prior works heuristically introduce a weight on the prior KL-divergence term, often denoted β. Usually, β needs to be tuned for every dataset and model variant as a hyperparameter, which slows down development and can lead to poor performance as finding the optimal value is often prohibitively computationally expensive. Moreover, using β 6= 1 precludes the appealing interpretation of the VAE objective as a bound on the data likelihood, and is undesirable for applications like density modeling.\nWhile many architectures for calibrating decoders have been proposed in the literature (Kingma & Welling, 2014; Kingma et al., 2016; Dai & Wipf, 2019), more applied work typically employs VAEs with uncalibrated decoding distributions, such as Gaussian distributions without a learned variance, where the decoder only outputs the mean parameter (Castrejon et al., 2019; Denton & Fergus, 2018; Lee et al., 2019; Babaeizadeh et al., 2018; Lee et al., 2018; Hafner et al., 2019b; Pong et al., 2019; Zhu et al., 2017; Pavlakos et al., 2019), or uses other ad-hoc modifications to the objective (Sohn et al., 2015; Henaff et al., 2019). Indeed, it is well known that attempting to learn the variance in a Gaussian decoder may lead to numerical instability (Rezende & Viola, 2018; Dai & Wipf, 2019), and naı̈ve approaches often lead to poor results. As a result, it remains unclear whether practical empirical performance of VAEs actually benefits from calibrated decoders or not.\nTo rectify this, our first contribution is a comparative analysis of various calibrated decoder architectures and practical recommendations for simple and effective VAE training. We find that, while naı̈ve calibrated decoders often lead to worse results, a careful choice of the decoder distribution can work very well, and removes the need to tune the additional parameter β. Indeed, we note that the entropy of the decoding distribution controls the mutual information I(x; z). Calibrated decoders allow the model to control I(x; z) automatically, instead of relying on manual tuning. Our second contribution is a simple but novel technique for optimizing the decoder variance analytically, without requiring the decoder network to produce it as an additional output. 
We call the resulting approach to learning the Gaussian variance the σ-VAE. In our experiments, the σ-VAE outperforms the alternative of learning the variance through gradient descent, while being simpler to implement and extend. We validate our results on several VAE and sequence VAE models and a range of image and video datasets." }, { "heading": "2 RELATED WORK", "text": "Prior work on variational autoencoders has studied a number of different decoder parameterizations. Kingma & Welling (2014); Rezende et al. (2014) use the Bernoulli distribution for the binary MNIST data, and Kingma & Welling (2014) use Gaussian distributions with a learned variance parameter for grayscale images. However, modeling images with continuous distributions is prone to instability as the variance can converge to zero (Rezende & Viola, 2018; Mattei & Frellsen, 2018; Dai & Wipf, 2019). Some work has attempted to rectify this problem by using dequantization (Gregor et al., 2016), which is theoretically appealing as it is tightly related to the log-likelihood of the original discrete data (Theis et al., 2016), optimizing the variance in a two-stage procedure (Arvanitidis et al., 2017), or training a post-hoc prior (Ghosh et al., 2019). Takahashi et al. (2018); Barron (2019) proposed more expressive distributions. Additionally, different choices for representing this variance exist, including a diagonal covariance (Kingma & Welling, 2014; Sønderby et al., 2016; Rolfe, 2016), or a single shared parameter (Kingma et al., 2016; Dai & Wipf, 2019; Edwards & Storkey, 2016; Rezende & Viola, 2018). We analyze these and notice that learning a single variance parameter shared across images leads to stable training and good performance, without the use of dequantization or even clipping the variance, although these techniques can be used with our decoders. We further improve the estimation of this variance with an analytic solution.\nEarly work on discrete VAE decoders for color images modeled them with the Bernoulli distribution, treating the color intensities as probabilities (Gregor et al., 2015). Further work has explored various parameterizations based on discretized continuous distributions, such as the discretized logistic (Kingma et al., 2016). More recent work has improved the expressivity of the decoder with a mixture of discretized logistics (Chen et al., 2016; Maaløe et al., 2019). However, these models also employ powerful autoregressive decoders (Chen et al., 2016; Gulrajani et al., 2016; Maaløe et al., 2019), and the latent variables in these models may not represent all of the significant factors of variation in the data, as some factors can instead be modeled internally by the autoregressive decoder (Alemi et al., 2017).1\nWhile many calibrated decoders have been proposed, outside the core generative modeling community uncalibrated decoders are ubiquitous. They are used in work on video prediction (Denton & Fergus, 2018; Castrejon et al., 2019; Lee et al., 2018; Babaeizadeh et al., 2018), image segmentation (Kohl et al., 2018), image-to-image translation (Zhu et al., 2017), 3D human pose (Pavlakos et al., 2019), as well as model-based reinforcement learning (Henaff et al., 2019; Hafner et al., 2019b;a), and representation learning (Lee et al., 2019; Watter et al., 2015; Pong et al., 2019). Most of these works utilize the heuristic hyperparameter β instead, which is undesirable both because the resulting objective is no longer a bound on the likelihood, and because β usually requires extensive tuning. 
In this work, we analyze the common pitfalls of using calibrated decoders that may have prevented practitioners from using them, propose a simple and effective analytic way of learning such a calibrated distribution, and provide a comprehensive experimental evaluation of different decoding distributions.\nAlternative discussions of the hyperparameter β are presented by Zhao et al. (2017); Higgins et al. (2017); Alemi et al. (2017); Achille & Soatto (2018), who show that it controls the amount of information in the latent variable, I(x; z). Peng et al. (2018); Rezende & Viola (2018) further discuss constrained optimization objectives for VAEs, which also yield a similar hyperparameter. Here, we focus on β-VAEs with Gaussian decoders with constant variance, as commonly used in recent work, and show that the hyperparameter β can be incorporated in the decoding likelihood for these models.\n1 BIVA (Maaløe et al., 2019) uses the Mixture of Logistics decoder proposed in (Salimans et al., 2017) that produces the channels for each pixel autoregressively; see also App. D." }, { "heading": "3 ANALYSING DECODING DISTRIBUTIONS", "text": "The generative model of a VAE (Kingma & Welling, 2014; Rezende et al., 2014) with parameters θ is specified with a prior distribution over the latent variable pθ(z), commonly a unit Gaussian, and a decoding distribution pθ(x|z), which for color images is commonly a conditional Gaussian parameterized with a neural network. We would like to fit this generative model to a given dataset by maximizing the evidence lower bound (ELBO; Neal & Hinton, 1998; Jordan et al., 1999; Kingma & Welling, 2014; Rezende et al., 2014), which uses an approximate posterior distribution qφ(z|x), also commonly a conditional Gaussian specified with a neural network. In this work, we focus on the form of the decoding distribution pθ(x|z). To achieve the best results, we want a decoding distribution that represents the required probability p(x|z) accurately. In this section, we will review and analyze various choices of decoding distributions that enable better decoder calibration, including expressive decoding distributions that can represent both the prediction of the image and the uncertainty about that prediction, or even multimodal predictions." }, { "heading": "3.1 GAUSSIAN DECODERS", "text": "We first analyse the commonly used Gaussian decoders. We note that the commonly used MSE reconstruction loss between the reconstruction x̂ and the ground truth data x is equivalent to the negative log-likelihood objective with a Gaussian decoding distribution of constant variance:\n$-\ln p(x|z) = \frac{1}{2}\|\hat{x}-x\|^2 + D\ln\sqrt{2\pi} = \frac{1}{2}\|\hat{x}-x\|^2 + c = \frac{D}{2}\,\mathrm{MSE}(\hat{x},x) + c,$\nwhere $p(x|z) \sim \mathcal{N}(\hat{x}, I)$, the prediction x̂ is produced with a neural network x̂ = µθ(z), and D is the dimensionality of x.\nThis demonstrates a drawback of methods that rely simply on the MSE loss (Castrejon et al., 2019; Denton & Fergus, 2018; Lee et al., 2019; Hafner et al., 2019b; Pong et al., 2019; Zhu et al., 2017; Henaff et al., 2019), as it is equivalent to assuming a particular, constant variance of the Gaussian decoding distribution. By learning this variance, we can achieve much better performance due to better calibration of the decoder. There are several ways in which we can specify this variance. An expressive way is to use a full diagonal covariance matrix for the image, with one value per pixel (Kingma & Welling, 2014; Sønderby et al., 2016; Rolfe, 2016). This can be done, for example, by letting a neural network σθ output the diagonal entries of the covariance matrix given a latent sample z:
This can be done, for example, by letting a neural network σθ output the diagonal entries of the covariance matrix given a latent sample z:\npθ(x|z) ∼ N ( µθ(z), σθ(z) 2 ) . (1)\nThis parameterization of the decoding distribution outputs one variance value per each pixel and channel. While powerful, we observe in Section 5.3 that this approach attains suboptimal performance, and is moreover prone to numerical instability. Instead, we will find experimentally that a simpler parameterization, in which the covariance matrix is specified with a single shared (Kingma et al., 2016; Dai & Wipf, 2019; Edwards & Storkey, 2016; Rezende & Viola, 2018) parameter σ as Σ = σI often works better in practice:\npθ,σ(x|z) ∼ N ( µθ(z), σ 2I ) . (2)\nThe parameter σ can be optimized together with parameters of the neural network θ with gradient descent. Of particular interest is the interpretation of this parameter. Writing out the expression for the decoding likelihood, we obtain\n− ln p(x|z) = 1 2σ2 ||x̂−x||2+D lnσ\n√ 2π = 1\n2σ2 ||x̂−x||2+D lnσ+c = D lnσ+ D 2σ2 MSE(x̂, x)+c.\nThe full objective of the resulting Gaussian σ-VAE is:\nLθ,φ,σ = D lnσ + D\n2σ2 MSE(x̂, x) +DKL(q(z|x)||p(z)). (3)\nNote that σ may be viewed as a weighting parameter between the MSE reconstruction term and the KL-divergence term in the objective. Moreover, this objective explicitly specifies how to select the optimal variance: the variance should be selected to minimize the (weighted) MSE loss while also minimizing the logarithm of the variance.\nDecoder Calibration It is important that the decoder distribution be calibrated in the statistical sense, that is, the predicted probabilities should correspond to the frequencies of seeing a particular value of x given that prediction (DeGroot & Fienberg, 1983; Dawid, 1982). The calibration of a neural network can be usually improved by estimating the uncertainty of that prediction (Guo et al., 2017), such as the variance of a Gaussian (Kendall & Gal, 2017). Since the naive MSE loss assumes a constant variance, it does not effectively represent the uncertainty of the prediction, and is often poorly calibrated. Instead, learning the variance as in Eq. 3 leads to better uncertainty estimation and better calibration. In Sec 5.1, we show that learning a good estimate of this uncertainty is crucial for the quality of the VAE generations.\nConnection to β-VAE. The β-VAE objective (Higgins et al., 2017) for a Gaussian decoder with unit variance is:\nLβ = D 2 MSE(x̂, x) + βDKL(q(z|x)||p(z)). (4)\nWe see that it can be interpreted as a particular case of the objective (3), where the variance is constant and the term D lnσ can be ignored during optimization. The β-VAE objective is then equivalent to a σ-VAE with a constant variance σ = √ β/2 (for a particular learning rate setting). In recent work (Zhu et al., 2017; Denton & Fergus, 2018; Lee et al., 2019), β-VAE models are often used in this exact regime. By tuning the β term, practitioners are able to tune the variance of the decoder, manually producing a more calibrated decoder. However, by re-interpreting the β-VAE objective as a special case of the VAE and introducing the missing D lnσ term, we can both obtain a valid evidence lower bound, and remove the need to manually select β. Instead, the variance σ can instead simply be learned end-to-end, reducing the need for hyperparameter tuning.\nAn alternative discussion of this connection in the context of linear VAEs is also presented by Lucas et al. (2019). 
While the β term is not necessary for good performance if the decoder is calibrated, it can still be employed if desired, such as when the aim is to attain better disentanglement (Higgins et al., 2017) or a particular rate-distortion tradeoff (Alemi et al., 2017). However, we found that with calibrated decoders, the best sample quality is obtained when β = 1.\nLoss implementation details. For the correct evidence lower bound computation, it is necessary to sum the values of the MSE loss and the KL divergence across the dimensions. We observe that common implementations of these losses (Denton & Fergus, 2018; Abadi et al., 2016; Paszke et al., 2019) use averaging instead, which will lead to poor results if the number of image dimensions is significantly different from the number of latent dimensions. While this can be conveniently ignored in the β-VAE regime, where the balance term is tuned manually anyway, for the σ-VAE it is essential to compute the objective value correctly.\nVariance implementation details. Since the variance is non-negative, we parameterize it logarithmically as $\sigma^2 = e^{2\lambda}$, where λ is the logarithm of the standard deviation. For some models, such as per-pixel variance decoders, we observed that it is necessary to restrict the variance range for numerical stability. We do so by using the soft clipping operations proposed by Chua et al. (2018):
A smaller learning rate often produces better performance, but slows down the training, as the likelihood values p(x|z) will be very suboptimal in the beginning. Instead, here we propose an analytic solution for the value of σ, which computes it analytically and does not require gradient descent.\nThe maximum likelihood estimate of the variance given a known mean is the average squared distance from the mean:\nσ∗ = arg max σ N (x|µ, σ2I) = MSE(x, µ), (5)\nwhere MSE(x, µ) = 1D ∑ i(xi − µi)2. Eq. 5 can be easily shown using manual differentiation, and is a generalization of the fact that the MLE estimate of the variance is the sample variance.\nThe optimal variance for the decoder distribution under the maximum likelihood criterion is then simply the average MSE loss over the data and the encoder distribution. We leverage this to create an optimal analytic solution for the variance. In the batch setting, the optimal variance would be simply the MSE loss, and can be updated after every gradient update for the other parameters of the decoder. In the mini-batch setting, we use a batchwise estimate of the variance computed for the current minibatch. We analyze these approximations in Appendix C. At test time, a running average of the variance over the training data is used. This method, which we call optimal σ-VAE, allows us to learn very efficiently as we use the optimal variance estimate at every training step. It is also easier to implement, as no separate optimizer for the variance parameter is needed. If the variance is not needed at test time, it can also be simply discarded after training.\nPer-image optimal σ-VAE. Optimal σ-VAE uses a single variance value shared across all data points. However, the optimal σ-VAE also allows more powerful variance estimates, such as learning a variance value per each pixel, or even a variance value per each image, the difference in implementation simply being the dimensions across which the averaging in Equation 5 operates. This approach can be interpreted as variational variance prediction in the framework of Stirn & Knowles (2020).\n5 EXPERIMENTAL RESULTS\nWe now provide an empirical analysis of different decoding distributions, and validate the benefits of our σ-VAE approach. We use a small convolutional VAE model on SVHN (Netzer et al., 2011), a larger hierarchical HVAE model (Maaløe et al., 2019) on the CelebA (Liu et al., 2015) and CIFAR (Krizhevsky et al., 2009) datasets, and a sequence VAE model called SVG (Denton & Fergus, 2018) on the BAIR Pushing dataset (Finn & Levine, 2017). We evaluate the ELBO values as well as visual quality measured by the Fréchet Inception Distance (FID, Heusel et al. (2017)). Images are 28 × 28 for SVHN and 32×32 for CelebA and CIFAR, while video experiments were performed on 64× 64 frames\nfollowing Denton & Fergus (2018). We do not use KL annealing as it did not improve the results in our experiments. Further experimental details are in App. B.\n5.1 DO CALIBRATED DECODERS BALANCE THE VAE OBJECTIVE WITHOUT TUNING β?\nAs detailed in Section 3.1, a β-VAE with a unit variance Gaussian decoder commonly used in prior work is equivalent to a σ-VAE with constant, manually tuned variance. There is a simple relationship between beta and the variance: σ = √ β/2. To compare the variance that the σ-VAE learns to the manually tuned variance in the case of the β-VAE, we compare the ELBO values and the corresponding values of β in Table 1. 
We find that learning the variance produces similar values of β to the manually tuned values in the β-VAE case, indicating that the σ-VAE is able to learn the balance between the two objective terms in a single training run, without hyperparameter tuning. Moreover, the σ-VAE\noutperforms the best β-VAE run. This is because end-to-end learning produces better estimates of\nthe variance than is possible with manual search, improving the likelihood (as measured by the lower bound) and the visual quality. Figure 3 shows the qualitative results from this experiment.\nWe further validate our results on both single-image and sequential VAE models on a range of datasets in Table 2 and Figure 2. Single-sample ELBO values are reported, and ELBO values on discretized data are reported for discrete distributions. We see that learning a shared variance in a Gaussian decoders (shared σ-VAE) outperforms the naı̈ve unit variance decoder (Gaussian VAE) as well as tuning the β constant for the Gaussian VAE manually. We also see that calibrated discrete decoders, such as full categorical distribution or mixture of discretized logistics, perform better than the naı̈ve Gaussian VAE. Using Bernoulli distribution by treating the color intensities as probabilities (Gregor et al., 2015; Watter et al., 2015) performs poorly. Our results further improve upon the sequence VAE method of Denton & Fergus (2018), which uses a unit variance Gaussian with the β-VAE objective." }, { "heading": "5.2 HOW DOES LEARNING CALIBRATED DECODERS IMPACT THE LATENT VARIABLE INFORMATION CONTENT?", "text": "We saw above that calibrated decoders result in higher log-likelihood bounds. Are calibrated decoders also beneficial for representation learning? We evaluate the mutual information Ie(x; z) between the data pd(x) and encoder samples q(z|x), as well as the mismatch between the prior p(z) and the marginal encoder distribution m(z) = Epd(x)q(z|x), measured by the marginal KL DKL(m(z)||p(z)). These terms are related to the rate term of the VAE objective as follows (Alemi et al., 2017):\nEpd(x) [DKL(q(z|x)||p(z))] = Epd(x) [DKL(q(z|x)||m(z))] +DKL(m(z)||p(z)) = Ie(x; z) +DKL(m(z)||p(z)).\n(6)\nThat is, the rate term decomposes into the true mutual information and the marginal KL term. We want to learn expressive latent variables with high mutual information. However, doing so by tuning the β value relaxes the constraint that the encoder and the prior distributions match, and leads to degraded quality of samples from the prior, which creates a trade-off between expressive representations and ability to generate good samples. To compare the β-VAE and σVAE in terms of these quantities, we estimate the marginal KL term via Monte Carlo sampling, as proposed by Rosca et al. (2018), and plot the results in Figure 4. As expected, we see that lower β values lead to higher mutual information. However, after a certain point, lower values of β also cause a significant mismatch between the marginal and the prior distributions. By calculating the “effective” β for the σ-VAE, as per Section 4, we can see that the σ-VAE captures an inflection point in the DKL(m(z)||p(z)) term, learning a representation with the highest possible MI, but without degrading sample quality. This explains the high visual quality of the optimal σ-VAE samples: since the marginal and the prior distributions match, the samples from\nthe prior look similar to reconstructions, while for a β-VAE with low β, the samples from the prior are poor. 
We see that, in contrast to the β-VAE, where the mutual information is controlled by a hyperparameter, the σ-VAE can adjust the appropriate amount of information automatically and is able to find the setting that produces both informative latents and high quality samples.\nAn alternative discussion of tuning β is presented by Alemi et al. (2017), who show that β controls the rate-distortion trade-off. Here, we show that the crucial trade-off also controlled by β is the\ntrade-off between two components of the rate itself, which control expressivity of representations and the match between the variational and the prior distributions, respectively." }, { "heading": "5.3 WHAT ARE THE COMMON CHALLENGES IN LEARNING THE VARIANCE THAT PREVENT PRACTITIONERS FROM USING IT, AND HOW TO RECTIFY THEM?", "text": "If learning the decoder variance improves generation, why are learned variances not used more often? In this section, we discuss how the naı̈ve approach to learning variances, where the decoder outputs a variance for each pixel along with the mean, leads to poor results. First, we find that this method often diverges very quickly due to numerical instability, as the network is able to predict certain pixels with very high certainty, leading to degenerate variances. In contrast, learning a shared variance is always numerically stable in our experiments. We can rectify this numerical instability by bounding the output variance (Section 3.1). However, even with bounded variance, we observe that learning per-pixel variances leads to poor results in Table 2. While the per-pixel variance achieves a good ELBO value, it produces very poor samples, as measured by FID and visual inspection.\nWe see that the specific form of learned variance: a shared variance, a per-image variance, or a per-pixel variance, can lead to very different performance in practice. We hypothesize the per-pixel decoder performs poorly as it incentivizes the model to focus on particular pixels that can be predicted well, instead of focusing equally on all parts of the image. This is consistent with prior work on denoising diffusion models which noted that likelihood-based models place too much focus on imperceptible details, which leads to deteriorated results (Ho et al., 2020). The shared and per-image variance models mitigate this issue at the cost of introducing more bias, and work better in practice." }, { "heading": "5.4 CAN AN ANALYTIC SOLUTION FOR OPTIMAL VARIANCE FURTHER IMPROVE LEARNING?", "text": "We evaluate the optimal σ-VAE which uses an analytic solution for the variance (Section 4). Table 2 shows that it achieves superior results in terms of log-likelihood. We also note that the optimal σ-VAE converges to a good variance estimate instantaneously, which speeds up learning (highlighted in Figure 9 in the Appendix). In addition, we evaluate the per-image optimal σ-VAE, in which a single variance is computed per image. This model achieves significantly higher visual quality. While producing this per-image variance with a neural network would require additional architecture tuning, optimal σ-VAE is extremely simple to implement (it can be implemented simply as changing the axes of summation), not requiring any new tunable parameters." }, { "heading": "6 CONCLUSION", "text": "We presented a simple and effective method for learning calibrated decoders, as well as an evaluation of different decoding distributions with several VAE and sequential VAE models. 
The proposed method outperforms methods that use naïve unit-variance Gaussian decoders and tune a heuristic weight β on the KL-divergence loss, as is commonly done in prior work. Moreover, it does not use the heuristic weight β, making it easier to train than this prior work. We expect that these simple techniques for learning calibrated decoders will allow practitioners to speed up the development cycle, obtain better results, and reduce the need for manual hyperparameter tuning." }, { "heading": "A ADDITIONAL EXPERIMENTAL RESULTS", "text": "In this section, we provide more qualitative results in Figures 5, 6, 7, and 8, as well as a graph showing the convergence properties of the variance for different models in Fig. 9. To validate our method with a different architecture, we also report the performance of different decoders with a small 5-layer convolutional architecture on the CelebA and CIFAR datasets in Table 3. We see that the ordering of the methods is consistent with this smaller architecture." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "For the small convolutional network test on SVHN, the encoder has 3 convolutional layers followed by a fully connected layer, while the decoder has a fully connected layer followed by 3 convolutional layers. For the β-VAE, β was tuned from 100 to 0.0001. The number of channels in the convolutional layers starts at 32 and doubles in every layer. The dimension of the latent variable is 20. Adam (Kingma & Ba, 2015) with a learning rate of 1e-3 is used for optimization. A batch size of 128 was used, and all models were trained for 10 epochs. We additionally evaluate this small convolutional network on the CelebA, CIFAR, and Frey Face2 datasets in Table 3. A unit Gaussian prior and Gaussian posteriors with diagonal covariance were used. For the larger hierarchical VAE, we used the official PyTorch implementation of Maaløe et al. (2019). We use the baseline hierarchical VAE with 15 layers of latent variables, without the top-down and bottom-up connections. For the hierarchical VAE and the SVG-LP model, we use the default hyperparameters in the respective implementations. We use the standard train-val-test split for all datasets. All models were trained on a single high-end GPU. We use the official PyTorch implementation of the Inception network to compute FID. All methods are compared on the same hyperparameters.\n2Available at https://cs.nyu.edu/~roweis/data.html" }, { "heading": "C EMPIRICAL ANALYSIS OF APPROXIMATIONS FOR OPTIMAL σ-VAE", "text": "The optimal σ-VAE requires computing the following estimate of the variance:\nσ∗2 = arg max_{σ2} E_{x∼Data} E_{q(z|x)} [ln p(x | µθ(z), σ2 I)] = E_{x∼Data} E_{q(z|x)} [MSE(x, µθ(z))]. (7)\nThis requires computing two expectations: one with respect to the data in the dataset, and one with respect to the encoder distribution. Inspired by common practice in VAEs, we use MC sampling with one sample per data point to approximate the inner expectation. On SVHN, the standard error of this approximation is 0.26% of the value of sigma. We further approximate the outer expectation with a single batch instead of the entire dataset. On SVHN, the standard error of this approximation is 2% of the value of sigma. We see that both approximations are accurate in practice. The second approximation yields a biased estimate of the evidence lower bound because the same batch is used to approximate the variance and compute the lower bound estimate. 
However, this bias can be corrected by using a different batch, or with a running average of the variance with an appropriate decay. This running average can also be used to reduce the variance of the estimate and to achieve convergence guarantees, but we did not find it necessary in our experiments." }, { "heading": "D ALTERNATIVE DECODER CHOICES", "text": "We describe the alternative decoders evaluated in Table 2: using the bitwise-categorical, and the logistic mixture distributions.\nBitwise-categorical VAE While the 256-way categorical decoder described in Section 3.2 is very powerful due to the ability to specify any possible intensity distribution, it suffers from high computational and memory requirements. Because 256 values need to be kept for each pixel and channel, simply keeping this distribution in memory for one 3-channel 1024× 1024 image would require 3 GiB of memory, compared to 0.012 GiB for the Gaussian decoder. Therefore, training deep\nneural networks with this full categorical distribution is impractical for high-resolution images or videos. The bitwise-categorical VAE improves the memory complexity by defining the distribution over 256 values in a more compact way. Specifically, it defines a binary distribution over each bit in the pixel intensity value, requiring 8 values in total, one for each bit. This distribution can be thought of as a classifier that predicts the value of each bit in the image separately. In our implementation of the bitwise-categorical likelihood, we convert the image channels to binary format and use the standard binary cross-entropy loss (which reduces to binary log-likelihood since all bits in the image are deterministically either zero or one). While in our experiments the bitwise-categorical distribution did not outperform other choices, it often performs on par with our proposed method. We expect this distribution to be useful due to its generality as it is able to represent values stored in any digital format by converting them into binary.\nLogistic mixture VAE For this decoder, we adapt the discretized logistic mixture from Salimans et al. (2017). To define a discrete 256-way distribution, it divides the corresponding continuous distribution into 256 bins, where the probability mass is defined as the integral of the PDF over the corresponding bin. (Kingma et al., 2016) uses the logistic distribution discretized in this manner for the decoder. Salimans et al. (2017) suggests to make all bins except the first and the last be of equal size, whereas the first and the last bin include, respectively, the intervals (−∞, 0] and [1,∞). Salimans et al. (2017) further suggests using a mixture of discretized logistics for improved capacity. Our implementation largely follows the one in Salimans et al. (2017), however, we note that the original implementation is not suitable for learning latent variable models, as it generates the channels autoregressively. This will cause the latent variable to lose color information since it can be represented by the autoregressive decoder. We therefore adapt the mixture of discretized logistics to the pure latent variable setup by removing the mean-adjusting coefficients from (Salimans et al., 2017). In our experiments, the logistic mixture outperformed other discrete distributions." } ]
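To complement Appendix C, the following is a minimal PyTorch sketch of the optimal σ-VAE reconstruction term, where σ*² is the batch (or per-image) MSE as in equation 7. Treating the batch estimate as a constant (the `detach` below) is one simple design choice rather than the paper's prescribed implementation, and the tensor shapes are assumed to be (B, C, H, W).

```python
import math
import torch

def optimal_sigma_nll(x, x_hat, per_image=False, min_var=1e-6):
    """Gaussian reconstruction NLL with the analytic optimal variance:
    sigma*^2 is the mean squared error over the batch (shared optimal
    sigma-VAE) or over each image (per-image optimal sigma-VAE)."""
    mse = (x - x_hat) ** 2
    if per_image:
        var = mse.mean(dim=(1, 2, 3), keepdim=True)  # one variance per image
    else:
        var = mse.mean()                             # one shared scalar variance
    var = var.clamp(min=min_var).detach()            # treat sigma* as a constant batch estimate
    nll = 0.5 * (mse / var + var.log() + math.log(2 * math.pi))
    return nll.sum(dim=(1, 2, 3)).mean()             # per-image NLL, averaged over the batch
```

With the shared variance plugged in, the reconstruction term reduces, up to additive constants, to a multiple of log MSE, which is why no β needs to be tuned.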
2020
null
SP:3a1d7f7165762299ba2d9bab4144576660b9a784
[ "This paper proposes a sampling free technique based on variance propagation to model predictive distributions of deep learning models. Estimating uncertainty of deep learning models is an important line of research for understanding the reliability of predictions and ensuring robustness to out-of-distribution data. Results are shown using synthetic data, perplexity analysis for a language modeling task and out-of-distribution detection performance using a convolutional network." ]
Uncertainty evaluation is a core technique when deep neural networks (DNNs) are used in real-world problems. In practical applications, we often encounter unexpected samples that have not been seen in the training process. Not only achieving high prediction accuracy but also detecting uncertain data is significant for safety-critical systems. In statistics and machine learning, Bayesian inference has been exploited for uncertainty evaluation. Bayesian neural networks (BNNs) have recently attracted considerable attention in this context, as a DNN trained using dropout can be interpreted as a Bayesian method. Based on this interpretation, several methods to calculate the Bayes predictive distribution for DNNs have been developed. Though the Monte-Carlo method called MC dropout is a popular method for uncertainty evaluation, it requires a number of repeated feed-forward calculations of DNNs with randomly sampled weight parameters. To overcome this computational issue, we propose a sampling-free method to evaluate uncertainty. Our method converts a neural network trained using dropout into the corresponding Bayesian neural network with variance propagation, and it applies not only to feed-forward NNs but also to recurrent NNs, including LSTMs. We report the computational efficiency and statistical reliability of our method in numerical experiments on language modeling using RNNs and on out-of-distribution detection with DNNs.
[]
[ { "authors": [ "Roberto Cipolla" ], "title": "Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding", "venue": "Proceedings of the British Machine Vision Conference (BMVC),", "year": 2017 }, { "authors": [ "M. Christopher Bishop" ], "title": "Pattern Recognition and Machine Learning (Information Science and Statistics)", "venue": null, "year": 2006 }, { "authors": [ "Sungjoon Choi", "Kyungjae Lee", "Sungbin Lim", "Songhwai Oh" ], "title": "Uncertainty-aware learning from demonstration using mixture density networks with sampling-free variance modeling", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep Learning for Classical Japanese Literature", "venue": "URL https://arxiv.org/abs/1812.01718", "year": 2067 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "André van Schaik" ], "title": "EMNIST: an extension of MNIST to handwritten letters", "venue": null, "year": 2017 }, { "authors": [ "T.M. Cover", "J.A. Thomas" ], "title": "Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing)", "venue": null, "year": 2006 }, { "authors": [ "G. Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of Control, Signals, and Systems (MCSS),", "year": 1989 }, { "authors": [ "Jean Daunizeau" ], "title": "Semi-analytical approximations to statistical moments of sigmoid and softmax mappings of normal variables, 2017", "venue": null, "year": 2017 }, { "authors": [ "Cohn A. David", "Ghahramani Zoubin", "Jordan Michael" ], "title": "Active learning with statistical models", "venue": "In Journal of Artificial Intelligence Research,", "year": 1996 }, { "authors": [ "Brendan J. Frey", "Geoffrey E. Hinton" ], "title": "Variational learning in nonlinear gaussian belief networks", "venue": "Neural Computation,", "year": 1999 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep Bayesian Active Learning with Image Data", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Klaus Greff", "K. Srivastava", "Rupesh", "Jan Koutnik", "Bas R. 
Steunebrink", "Jügen Schmidhuber" ], "title": "LSTM: A search space odyssey", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2017 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On Calibration of Modern Neural Networks", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Maximilian Henne", "Adrian Schwaiger", "Gereon Weiss" ], "title": "Managing Uncertainty of AI-based Perception for Autonomous Systems", "venue": "In Proceedings of the Workshop on Artificial Intelligence Safety 2019 co-located with the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Alex Holub", "Pietro Perona", "C. Michael Burl" ], "title": "Entropy- based active learning for object recognition", "venue": "In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2008 }, { "authors": [ "Seong Jae Hwang", "Ronak Mehta", "Hyunwoo J. Kim", "Sterling C. Johnson", "Vikas Singh" ], "title": "Sampling-free uncertainty estimation in gated recurrent units with applications to normative modeling in neuroimaging", "venue": "In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Byeongmoon Ji", "Hyemin Jung", "Jihyeun Yoon", "Kyungyul Kim", "Younghak Shin" ], "title": "Bin-wise Temperature Scaling (BTS): Improvement in Confidence Calibration Performance through Simple Scaling Techniques. aug 2019", "venue": null, "year": 2019 }, { "authors": [ "Michael Kampffmeyer", "Arnt Borre Salberg", "Robert Jenssen" ], "title": "Semantic Segmentation of Small Objects and Modeling of Uncertainty in Urban Remote Sensing Images Using Deep Convolutional Neural Networks", "venue": "In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops,", "year": 1467 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "M.T. Le", "F. Diehl", "T. Brunner", "A. Knol" ], "title": "Uncertainty estimation for deep neural object detectors in safety-critical applications", "venue": "In 2018 21st International Conference on Intelligent Transportation Systems (ITSC),", "year": 2018 }, { "authors": [ "Yann Lecun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE, 86(11):2278–2324,", "year": 1998 }, { "authors": [ "Xin Li", "Yuhong Guo" ], "title": "Adaptive active learning for image classification", "venue": "In the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2013 }, { "authors": [ "Min Lin", "Qiang Chen", "Shuicheng Yan" ], "title": "Network in network, 2013. cite arxiv:1312.4400Comment: 10 pages, 4 figures, for iclr2014", "venue": null, "year": 2014 }, { "authors": [ "Mi Lu", "Wang Hao", "Tian Yonglong", "Shavit Nir" ], "title": "Training-free uncertainty estimation for neural networks, 2017", "venue": null, "year": 2017 }, { "authors": [ "David J.C. 
MacKay" ], "title": "The evidence framework applied to classification networks", "venue": "NEURAL COMPUTATION,", "year": 1992 }, { "authors": [ "John C. Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "venue": "In ADVANCES IN LARGE MARGIN CLASSIFIERS,", "year": 1999 }, { "authors": [ "Janis Postels", "Francesco Ferroni", "Huseyin Coskun", "Nassir Navab", "Federico Tombari" ], "title": "SamplingFree Epistemic Uncertainty Estimation Using Approximated Variance Propagation", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Alexander Shekhovtsov", "Boris Flach" ], "title": "Feed-forward propagation in probabilistic neural networks with categorical and max layers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Changjian Shui", "Fan Zhou", "Christian Gagné", "Boyu Wang" ], "title": "Deep active learning: Unified and principled method for query and training", "venue": "Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Mattias Teye", "Hossein Azizpour", "Kevin Smith" ], "title": "Bayesian Uncertainty Estimation for Batch Normalized Deep Networks", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Kush R. Varshney" ], "title": "Engineering safety in machine learning", "venue": "Institute of Electrical and Electronics Engineers Inc., mar 2016", "year": 2016 }, { "authors": [ "Kush R. Varshney", "Homa Alemzadeh" ], "title": "On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products", "venue": "Big Data,", "year": 2017 }, { "authors": [ "Hao Wang", "SHI Xingjian", "Dit-Yan Yeung" ], "title": "Natural-parameter networks: A class of probabilistic neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Sida Wang", "Christopher Manning" ], "title": "Fast dropout training", "venue": "Proceedings of the 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Jeannette M Wing" ], "title": "Cyber-Physical Systems", "venue": "Computing Research News,", "year": 2009 }, { "authors": [ "Anqi Wu", "Sebastian Nowozin", "Ted Meeds", "Richard E. Turner", "Jose Miguel Hernadez-Lobato", "Alexander L. Gaunt" ], "title": "Deterministic variational inference for robust bayesian neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking", "venue": "Machine Learning Algorithms", "year": 2017 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever", "Oriol Vinyals" ], "title": "Recurrent neural network regularization", "venue": "CoRR, abs/1409.2329,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Uncertainty evaluation is a core technique in practical applications of deep neural networks (DNNs). As an example, let us consider the Cyber-Physical Systems (CPS) such as the automated driving system. In the past decade, machine learning methods are widely utilized to realize the environment perception and path-planing components in the CPS. In particular, the automated driving system has drawn a huge attention as a safety-critical and real-time CPS (NITRD CPS Senior Steering Group, 2012; Wing, 2009). In the automated driving system, the environment perception component is built using DNN-based predictive models.\nIn real-world applications, the CPS is required to deal with unexpected samples that have not seen in the training process. Therefore, not only achieving the high-prediction accuracy under the ideal environment but providing uncertainty evaluation for real-world data is significant for safety-critical systems (Henne et al., 2019). The CPS should prepare some options such as the rejection of the recommended action to promote the user’s intervention when the uncertainty is high. Such an interactive system is necessary to build fail-safe systems (Varshney & Alemzadeh, 2017; Varshney, 2016).\nOn the other hand, the uncertainty evaluation is useful to enhance the efficiency of learning algorithms, i.e., samples with high uncertainty are thought to convey important information for training networks. Active data selection based on the uncertainty has been studied for long time under the name of active learning (David et al., 1996; Gal et al., 2017; Holub et al., 2008; Li & Guo, 2013; Shui et al., 2020).\nIn statistics and machine learning, Bayesian estimation has been commonly exploited for uncertainty evaluation (Bishop, 2006.). In the Bayesian framework, the prior knowledge is represented as the prior distribution of the statistical model. The prior distribution is updated to the posterior distribution based on observations. The epistemic model uncertainty is represented in the prior distribution,\nand upon observing data, those beliefs can be updated in the form of a posterior distribution, which yields model uncertainty conditioned on observed data. The entropy or the variance is representative of uncertainty measures (Cover & Thomas, 2006). For complicated models such as DNNs, however, a direct application of Bayesian methods is prohibited as the computation including the high-dimensional integration highly costs.\nIn deep learning, Bayesian methods are related to stochastic learning algorithms. This relation is utilized to approximate the posterior over complex models. The stochastic method called dropout is a powerful regularization method for DNNs (Srivastava et al., 2014). In each layer of the DNN, some units are randomly dropped in the learning using stochastic gradient descent methods. Gal & Ghahramani (2016a) revealed that the dropout is interpreted as the variational Bayes method. Based on this interpretation, they proposed a simple sampling method of DNN parameters from the approximate posterior distribution. Furthermore, the uncertainty of the DNN-based prediction is evaluated using the Monte-Carlo (MC) method called MC dropout.\nWhile the Bayesian DNN trained using dropout is realized by a simple procedure, the computational overhead is not ignorable. In the MC dropout, dropout is used also at the test time with a number of repeated feed-forward calculations to effectively sample from the approximate posterior. 
Hence, the naive MC dropout is not necessarily relevant to the system demanding the real-time response.\nIn this work, we propose a sampling-free method to evaluate the uncertainty of the DNN-based prediction. Our method is computationally inexpensive comparing to the MC dropout and provides reliable uncertainty evaluation. In the following, we will first outline related works. Section 3 is devoted to show the detailed formulae of calculating the uncertainty. In our method, an upper bound of the variance is propagated in each layer to evaluate the uncertainty of the output. We show that the our method alleviates the overconfident prediction. This property is shared with scaling methods for the calibration of the class-probability on test samples. In Section 4, we study the relation between our method and scaling methods. In Section 5, we demonstrate the computational efficiency and statistical reliability of our method through some numerical experiments using both DNNs and RNNs." }, { "heading": "2 RELATED WORKS", "text": "The framework of Bayesian inference is often utilized to evaluate the uncertainty of DNN-based predictions. In Bayesian methods, the uncertainty is represented by the predictive distribution defined from the posterior distribution of the weight parameters. MacKay (1992) proposed a simple approximation method of the posterior distribution for neural networks, and demonstrated that the Bayesian method improves the prediction performance on classification tasks. Graves (2011) showed that the variational method efficiently works to approximate the posterior distribution of complex neural network models.\nThere are many approaches to evaluate the uncertainty of modern DNNs (Alex Kendall & Cipolla, 2017; Choi et al., 2018; Lu et al., 2017; Le et al., 2018). We briefly review MC-based methods and sampling-free methods.\nMonte-Carlo methods based on Stochastic Learning: The randomness in the learning process can be interpreted as a prior distribution. In particular, the dropout is a landmark of stochastic regularization method to train DNNs (Srivastava et al., 2014). Gal & Ghahramani (2016a) proposed a simple method to generate weight parameters from the posterior distribution induced from the prior corresponding to the dropout regularization. The predictive distribution is approximated by the MC dropout, which compute the expected output over the Monte-Carlo sampling of the weight parameters. Gal & Ghahramani (2016b) reported that the MC dropout efficiently works not only for feed-forward DNNs but for recurrent neural networks (RNNs). Another sampling based method is the ensemble-based posteriors with different random seeds (Lakshminarayanan et al., 2017). However, the computation cost is high as the bootstrap method requires repeated training of parameters using resampling data.\nSampling-free methods: Though the MC dropout is a simple and practical method to evaluate the uncertainty, a number of feed-forward computations are necessary to approximate the predictive distribution. Recently, some sampling-free methods have been proposed for the uncertainty\nevaluation. Probabilistic network is a direct way to deal with uncertainty. The parameters of the probabilistic model, say the mean and the variance of the Gaussian distribution, are propagated in probabilistic neural networks. Then, the uncertainty evaluation is given by a single feed-forward calculation. Choi et al. (2018) used the mixture of Gaussian distributions as a probabilistic neural network and Wang et al. 
(2016) proposed natural-parameter networks as a class of probabilistic neural networks based on exponential families. For a given input vector, the network outputs the parameters of the distribution. For recurrent neural networks, Hwang et al. (2019) proposed a variant of the natural-parameter networks. Instead of the parameters of statistical models, Wu et al. (2019) developed a sampling-free method to propagate the first- and second-order moments of the posterior distribution.\nSampling-free methods can evaluate the uncertainty with a one-pass computation of the neural network. However, specialized learning algorithms are required to train such probabilistic networks. Our method is applicable to DNNs and RNNs trained by common learning methods with dropout. Postels et al. (2019) and Shekhovtsov & Flach (2019) proposed similar methods that propagate the uncertainty of the network to the output layer. Differently from these past works, our method takes an upper limit on the correlations among the inputs of the affine layer into account when the uncertainty is evaluated. In addition, we show that our method works efficiently even for RNNs." }, { "heading": "3 UNCERTAINTY EVALUATION WITH VARIANCE PROPAGATION", "text": "In this work, we assume that we can access the weight parameters of the DNN and the dropout probability used in the training process. As the variance is a common measure of uncertainty, we propose a variance propagation algorithm for the trained DNN.\nThe implementation of our method, called nn2vpbnn, is presented in Section A in the appendix. Our method requires only the DNN or RNN trained using dropout. Unlike various kinds of probabilistic NNs, we do not need any specialized training procedure to evaluate the uncertainty, which is a great practical advantage. Furthermore, the representative values of the predictive distribution, i.e., the mean and variance, are obtained by a one-pass feed-forward calculation. Hence, we can circumvent iterative Monte-Carlo calculations." }, { "heading": "3.1 UNCERTAINTY IN AFFINE LAYER", "text": "Let us consider the output of the affine layer y = Wx + b for a random input x, where W = (W_ij) ∈ R^{ℓ×m} and b = (b_i)_{i=1}^ℓ ∈ R^ℓ. Suppose that the random vector x has the mean vector E[x] and the variance-covariance matrix (Σ_x)_{i,j} = Cov(x_i, x_j) for i, j = 1, . . . , m. Then, the mean vector E[y] and the variance-covariance matrix Σ_y of y are given by E[y] = W E[x] + b and Σ_y = W Σ_x W^T.\nAs the estimation of the full variance-covariance matrix is not necessarily reliable, we use only the variance of each x_i and an upper bound on the absolute correlation coefficients to evaluate the uncertainty. For W = (W_ij), the variance Var[y_i] is Var[y_i] = Σ_j W_ij^2 Var[x_j] + Σ_{j≠j′} W_ij W_ij′ Cov(x_j, x_j′). Suppose the absolute correlation coefficients among x_1, . . . , x_m are bounded above by ρ, 0 ≤ ρ ≤ 1. Using the relation between the correlation and the variance, we have\nVar[y_i] ≤ Σ_j W_ij^2 Var[x_j] + ρ Σ_{j≠j′} |W_ij| |W_ij′| √Var[x_j] √Var[x_j′]\n= (1 − ρ) Σ_j W_ij^2 Var[x_j] + ρ (Σ_j |W_ij| √Var[x_j])^2, i = 1, . . . , ℓ. (1)\nUnder the independence assumption, i.e., ρ = 0, the minimum upper bound is obtained. A prediction with too small a variance leads to overconfident decision making. Hence, upper bounding the variance is important to build fail-safe systems. 
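As a concrete illustration, the bound in equation 1 (together with the mean update of equation 2 below) can be written in a few lines of NumPy; this is a sketch under the stated assumptions, not the nn2vpbnn implementation itself.

```python
import numpy as np

def affine_vp(W, b, mean_x, var_x, rho=0.0):
    """Variance propagation through y = W x + b: returns the exact mean of y
    and the upper bound of equation 1 on Var[y] for a correlation level rho."""
    mean_y = W @ mean_x + b
    std_x = np.sqrt(var_x)
    ind_term = (W ** 2) @ var_x            # independent part (rho = 0)
    cor_term = (np.abs(W) @ std_x) ** 2    # fully correlated part (rho = 1)
    var_y = (1.0 - rho) * ind_term + rho * cor_term
    return mean_y, var_y
```

The two extreme cases ρ = 0 and ρ = 1 recover, respectively, the sum of per-coordinate variances and the fully correlated bound.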
A simple method of estimating ρ is presented in Section 3.5.\nUsing the above formula, the mean and an upper bound of the variance of y are computed using the mean and an upper bound of the variance of x. In this paper, such a computation is referred to as\nthe Variance Propagation or VP for short. Let us define the variance vector of the m-dimensional random vector x = (x1, . . . , xm) ∈ Rm by Var[x] = (Var[x1], . . . ,Var[xm]) ∈ Rm. Furthermore, we denote the concatenated vector of the mean and variance of z or its approximation as U(z), i.e., U(z) = (E[z],Var[z]). The VP at the affine layer is expressed by the function Taff ,\nU(y) = (m,v) = Taff(U(x)), (2)\nwhere m = WE[x] + b ∈ Rm and each element of v ∈ Rm is defined by equation 1. The average pooling layer, global average pooling layer (Lin et al., 2013), and the batch normalization layer (Ioffe & Szegedy, 2015) are examples of the affine layer. Hence, the VP of the affine layer also works to evaluate the uncertainty of these layers.\nThe distribution of yi is well approximated by the univariate Gaussian distribution if the correlation among x is small (Wang & Manning, 2013; Wu et al., 2019). Based on this fact, the uncertainty of yi can be represented by the univariate Gaussian distribution N(E[yi],Var[yi]). In our method, the variance Var[yi] of the approximate Gaussian is given by the variance v in equation 2." }, { "heading": "3.2 OUTPUT OF DROPOUT LAYER", "text": "Let us consider the uncertainty induced from the dropout layer (Srivastava et al., 2014). The dropout probability is denoted by p. In the dropout layer, the m-dimensional random input vector x = (x1, . . . , xm) is transformed by the element-wise product z = xd, where d = (d1, . . . , dm) is the i.i.d. Bernoulli random variables, i.e., Bernoulli(p). As x and d are independent, the VP in the dropout layer is given by (E[z],Var[z]) = Tdrop(U(x)), where E[z] = pE[x] and Var[z] = pVar[x] + p(1− p)E[x]2. According to the Bayesian interpretation of the dropout revealed by Gal et al. (2017), the approximate posterior distribution of the output from the affine layer trained using dropout is given by the distribution of the random variable yi = ∑m j=1Wijxjdj + bi, d1, . . . , dm ∼ Bernoulli(p). The\nmean and the variance of yi satisfy E[y] = pWE[x]+b and Var[yi] ≤ (1−ρ) ∑ j |Wij |2Var[xjdj ]+\nρ( ∑ j |Wij | √ Var[xjdj ])2. Since the stochastic input and the weight parameter in the dropout layer are independent, one can exactly calculate the variance of the product using each expectation and variance. The VP at the affine layer with the dropout is given by the composite function,\n(m,v) = Taff ◦ Tdrop(U(x)).\nThe uncertainty of yi is then represented by the Gaussian distribution N(mi, vi). A similar formula is found in the uncertainty evaluation of the LSTM unit in Section 3.4 with the explicit expressions." }, { "heading": "3.3 UNCERTAINTY VIA ACTIVATION FUNCTIONS", "text": "The nonlinear activation function is an important component of neural network models in order to achieve high representation ability and accurate prediction (Cybenko, 1989). The ReLU, sigmoid function, and their variants are common activation functions. In several works, the expectation and the variance of the output from activation functions have been calculated (Frey & Hinton, 1999; MacKay, 1992; Daunizeau, 2017). Let us introduce the transformed distribution by the ReLU and sigmoid function.\nThe ReLU function is defined by y = max(x, 0). 
For xi ∼ N(E[xi],Var[xi]), the exact expectation and variance of y are expressed by the probability density φ and the cumulative function Φ of the standard Gaussian distribution (Frey & Hinton, 1999; Wu et al., 2019): E[y] = E[x]Φ(E[x]/ √ Var[x]) + √ Var[x]φ(E[x]/ √ Var[x]) and Var[y] = (E[x]2 +\nVar[x])Φ(E[x]/ √ Var[x]) + E[x] √ Var[x]φ(E[x]/ √\nVar[x])− E[y]2 using the element-wise operations for the two vectors E[x] and Var[x].\nThe sigmoid function is defined by yi = s(xi) = 1/(1 + e −xi). For xi ∼ N(E[xi],Var[xi]), MacKay (1992) and Daunizeau (2017) derived the approximate expectation and variance of y, E[y] ≈ s( E[x]√\n1+cVar[x] ), Var[y] ≈ s( E[x]√ 1+cVar[x] )(1 − s( E[x]√ 1+cVar[x] )(1 − 1√ 1+cVar[x] ), where the\nconstant c depends on the approximation method. The common choice is c = π/8 ≈ 0.393, while\nDaunizeau (2017) found c = 0.368 based on numerical optimization. In the same way, one can calculate approximate expectation and variance of tanh(y). The VP at the activation layer is expressed by U [y] = Tact(U [x]), where the operation Tact depends on the activation function. The output U [y] is defined by the above expectation and variance. In the multiclass classification problems, the softmax function is commonly used at the last layer in DNNs. However, the expectation of the softmax function does not have analytic expression under the multivariate Gaussian distribution. Daunizeau (2017) utilized the approximate expectation of the sigmoid function to approximate the expected softmax output. However, the variance of the softmax function was not provided. In this paper, we interpret the multiclass classification problem as the multi-label problem and at the last layer, we use the sigmoid functions as many as the number of labels. Given the transformations zk 7−→ s(zk), k = 1, . . . , G at the last layer for the classification with G labels, the prediction is given by the label that attains the maximum value of s(zk). The advantage of this replacement is that the reliable evaluation of the uncertainty is possible for the sigmoid function as shown above. In numerical experiments, we show that the multi-label formulation with several sigmoid functions provides a comparable prediction accuracy as the standard multi-class formulation using the softmax function, while it also gives a reliable uncertainty evaluation." }, { "heading": "3.4 LSTM UNIT WITH DROPOUT", "text": "The uncertainty evaluation of the Recurrent Neural Networks (RNNs) is an important task as the RNNs are widely used in real-world problems. This section is devoted to the uncertainty propagation in the LSTM unit when the dropout is used to train the weight parameters (Gal & Ghahramani, 2016b). According to Greff et al. (2017), the standard form of the LSTM unit is defined by\n(i f g o) = (s s tanh s) ◦ (ht−1 xt) ( Ũi Ũf Ũg Ũo W̃i W̃f W̃g W̃o ) ,\nht = o tanh(ct), ct = fct−1 + ig\nusing the sigmoid function s, where the multiplication of two vectors is the element-wise operation and ◦ is the composition of the linear layer and the activation function, i.e., i = s(ht−1Ũi + xtW̃i), g = tanh(ht−1Ũg + xtW̃g), etc. The matrices W̃ ’s and Ũ ’s are the input weights and recurrent weights, respectively. The vectors, i, f ,g, and o, denote the input gate, forget gate, new candidate vector, and output gate. The cell state ct and the hidden state ht retain the long and short term memory.\nHere, Ũ∗ and W̃∗ are regarded as random matrices distributed from the posterior distribution induced from the dropout using Bernoulli(p). 
Hence, each row of Ũ∗ and W̃∗ are set to the null row vector with probability 1 − p. When the tied dropout is used for LSTM, the same rows of all Ũ∗ are randomly dropped and the same rule is applied to W̃∗. On the other hand, in the untied dropout layer, the dropout is separately executed for each Ũ∗ and W̃∗. Detail of the tied and untied dropout is found in Gal & Ghahramani (2016b).\nLet us consider the map from U(ht−1, ct−1) to U(ht, ct). The map depends on the data xt. Since the computation in the LSTM with the dropout is expressed as the composite function of the dropout layer, affine layer and the activation function, we have\nU(i, f ,g,o) = Tact ◦ Taff ◦ Tdrop(U(ht−1,xt)).\nHence, the mean and variance vectors of ht and ct are obtained from those of i, f ,g,o and ct−1. This computation is shown below. We need an appropriate assumption to calculate E[ct] and Var[ct] as we do not use the correlations. The simplest assumption is the independence of random vectors. When f , ct−1, i and g are independent, we obtain\nE[ct] = E[fct−1] + E[ig] = E[f ]E[ct−1] + E[i]E[g], (3) Var[ct] = Var[f ]Var[ct−1] + Var[f ]E[ct−1]2 + E[f ]2Var[ct−1]\n+ Var[i]Var[g] + Var[i]E[g]2 + E[i]2Var[g]. (4)\nThis is the VP for the cell state vector ct−1 in the LSTM. Likewise, the VP for ht is obtained. The above update function to compute the uncertainty of ht and ct from i, f ,g,o and ct−1 is denoted by Tcell. As a result, we have\nU(ht, ct) = Tcell(U(i, f ,g,o),U(ct−1)) = Tcell(Tact ◦ Taff ◦ Tdrop(U(ht−1,xt)),U(ct−1)).\nThis is the VP formula from (ct−1,ht−1) to (ct,ht). Repeating the above computation with the observed sequence {xt}Tt=1, one can evaluate the uncertainty of the cell state vectors and the outputs {yt}Tt=1, where yt = ht, t = 1, . . . , T . Let us consider the validity of the above independence assumption. For given ht−1, the conditional independence of i, f ,g,o and ct−1 holds when the untied dropout is used to train the LSTM unit, i.e., the equality p(i, f ,g,o, ct−1|ht−1) = p(ct−1|ht−1) ∏ s∈{i,f ,g,o} p(s|ht−1) holds for the posterior distribution. The randomness comes from the Bayesian interpretation of the untied dropout. Here, the observation xt is regarded as a constant without uncertainty. Then, equation 3 and 4 exactly hold by replacing the mean and variance with conditional expectation and the conditional variance under the condition of ht−1. If the variance of ht−1 is small, the independence assumption is expected to be approximately valid. When the uncertainty of ht−1 is not ignorable, the sampling from the Gaussian distribution representing the uncertainty of ht−1 is available with the formulae E[ct] = Eht−1 [E[ct|ht−1]] and Var[ct] = Eht−1 [Var[ct|ht−1]] to compute E[ct] and Var[ct] approximately." }, { "heading": "3.5 ESTIMATION OF CORRELATION PARAMETER", "text": "When we evaluate the uncertainty in the affine layer, we need to determine the correlation parameter ρ in equation 1. If the correlation of input to the affine layer is not ignored, the upper bound of the variance with an appropriate ρ is used to avoid overconfidence. Hence, the estimation of the parameter ρ is important. A simple method of estimating an appropriate ρ is to use the validation set as follows.\n1. 
For each candidate of ρ, execute the following steps.\n(a) For each data (xi,yi) in the validation set, compute the mean vector mi and variance vector vi of the output of the network for given xi using VPBNN.\n(b) Compute the predictive log-likelihood on the validation set, LLρ = ∑\ni:validation set\nlog p(yi;mi,vi),\nwhere p(y;m,v) is the probability density of the uncorrelated normal distribution with the mean mj and variance vj for each element.\n2. Choose ρ that maximizes LLρ.\nThough we need to prepare several candidates of the correlation parameter, the computation cost is still lower than MC dropout. To evaluate the uncertainty on Ntest test samples, MC dropout with T samplings requires NtestT feed-forward calculations. The VPBNN with adaptive choice of ρ needs approximately 2NvalKcor + 2Ntest feed-forward calculates, where Nval is the number of the validation set and Kcor is the number of candidates of ρ. The factor 2 comes from the computation of both mean and variance. Usually,Kcor is much less than T andNval is not extremely large in comparison to Ntest. In practice, we do not need a large validation set to estimate ρ, as shown in numerical experiments. Hence, VPBNN with adaptive ρ is computationally efficient than MC dropout. If distinct correlation parameters are used in each affine layer, the computation cost becomes large. In numerical experiments, we find that the uncertainty evaluation using the same ρ in all the affine layers works well." }, { "heading": "4 SCALING METHODS FOR CALIBRATION AND VARIANCE PROPAGATION", "text": "The variance propagation is regarded as a calibration based on the uncertainty. There are some scaling methods for the calibration. Platt scaling (Platt, 1999) is a classical calibration method for multiclass classification problems, and the temperature scaling (Guo et al., 2017; Ji et al., 2019) is a simplified method of the Platt scaling.\nLet us consider the conditional probability function Pr(y|z) = ezy/ ∑m j=1 e\nzj for z = (z1, . . . , zm) ∈ Rm and y = 1, . . . ,m. The temperature scaling of Pr(y|z) is given by Pr(y|z/T ), where T > 0 is the temperature parameter. Usually T is greater than one, and it softens the class probability. The Platt scaling of Pr(y|z) is defined by the scaling of z such that Pr(y|Wz + b), where W is a diagonal matrix and b is a m-dimensional vector. The Platt scaling is the coordinatewise calibration, while the temperature scaling is the homogeneous scaling for the feature vector z. Another expansion of the temperature scaling is the bin-wise temperature scaling (Ji et al., 2019). In the bin-wise scaling, the sample space are divided into K bins. The label probability is calibrated by Pr(y|z/Tk(z)) in which k(z) is the index of the bin including the sample z, and Tk > 0 is the temperature for the calibration at the k-th bin. Each Tk is determined from validation data in the k-th bin. Intuitively, an extremely large label probability tends to yield an overconfident prediction. At the point z having the large maximum probability, the large scaling parameter Tk is introduced to soften the overconfidence at the region including z.\nThe calculation of the VP at the sigmoid activation layer for the multi-label classification is given by s(zj) 7−→ s(E[zj ]/ √ 1 + cVar[zj ]), j = 1, . . . ,m. When the uncertainty of the random vector z is not taken into account, the prediction using s(E[zj ]), j = 1, . . . ,m is apt to be overconfident. 
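The moment-matched sigmoid above admits a direct implementation. The sketch below uses the MacKay/Daunizeau approximation with the common choice c = π/8; the vectorized interface is an assumption for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_vp(mean_z, var_z, c=np.pi / 8):
    """Approximate mean and variance of s(z) for z ~ N(mean_z, var_z):
    E[s(z)] ≈ s(kappa * mean_z) with kappa = 1 / sqrt(1 + c * var_z)."""
    kappa = 1.0 / np.sqrt(1.0 + c * var_z)
    mean_y = sigmoid(kappa * mean_z)
    var_y = mean_y * (1.0 - mean_y * (1.0 - kappa))
    return mean_y, var_y
```

The factor kappa ≤ 1 shrinks the logit toward zero as Var[z] grows, which is exactly the calibration effect discussed next.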
Comparing to s(E[zj ]), the uncertainty defined by the variance Var[zj ] works as a coordinate-wise calibration like the Platt scaling. If the variance is isotropic, the scaling does not change the ranking of the label probability like the temperature scaling.\nIn the variance propagation using the Taylor approximation (Postels et al., 2019), the class probability is calculated as the standard manner without calibration, while the variance is propagated along the layers in DNNs in order to evaluate the uncertainty. Hence, the calibration effect is not incorporated into the naive Taylor approximation method." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 NUMERICAL EXPERIMENTS ON SYNTHETIC DATA", "text": "We assess the uncertainty computed by the VPBNN and the Taylor approximation using synthetic data. Let us consider the uncertainty evaluation for regression problems. The target function is the real-valued function on one-dimensional input space shown in Figure 1. We compared three methods; MC dropout with 1000 samples from the posterior distribution associated with the dropout, Taylor approximation, and VPBNN with adaptive ρ and several fixed ρ’s. The architecture of the NN model is shown in Figure 4 of the Appendix. The results are presented in Figure 1. The Taylor approximation and the VPBNN with ρ = 0 tends to be overconfident and the VPBNN with ρ = 0.15 gives a similar result to the MC dropout. We find that the adaptive choice of ρ can avoid the overconfidence, while providing a meaningful result.\nIn Section C in the appendix, we present the uncertainty evaluation for RNNs. Likewise, we find that the appropriate choice of ρ relaxes the overconfident prediction and that the adaptive ρ provides a meaningful result as well as MC dropout.\nOverall, the VPBNN with appropriate ρ provides similar results to the MC dropout. As the VPBNN needs only the one-path calculation as well as the VP using Taylor approximation, the computation cost is much less than the MC dropout. The adaptive choice of ρ using the validation set efficiently works to produce a similar result as MC dropout.\nFurther numerical results are presented in Section B of the appendix. The Taylor approximation for the uncertainty evaluation proposed by Postels et al. (2019) also leads to a computationally efficient single-shot method to compute the uncertainty. However, we find that Taylor approximation tends to lead overconfident result compared to our method in the present experiments." }, { "heading": "5.2 RNN FOR LANGUAGE MODELING", "text": "We report numerical experiments of language modeling problem. The problem setup is the same as the problem considered by Zaremba et al. (2014) and Gal & Ghahramani (2016b). We use Penn Treebank, which is a standard benchmark in this field. In the experiments, the LSTM consisting of two-layers with 650 units in each layer is used. The model architecture and most of hyper parameters\nare set to those used by Gal & Ghahramani (2016b). Figure 7 in the appendix shows the RNN and the converted VPBNN. The weight decay parameter is set to 10−7 according to the code in Github provided by the authors of Gal & Ghahramani (2016b), as the parameter was not explicitly written in their paper.\nThe results are shown in Table 1. The prediction performance is evaluated by the perplexity on the test set. In the table, the standard dropout approximation propagates the mean of each approximating distribution as input to the next layer (Gal & Ghahramani, 2016b). 
As the Taylor approximation computes the mean of the output without using the variance, it must provide the same result as the standard dropout approximation. In our experiment, both methods produced almost identical perplexity scores. This result means that we approximately reproduced the numerical results reported in the past papers. The MC dropout and the VPBNN with ρ = 0 achieved a lower perplexity than the others. Our method, using only a one-pass calculation, can provide almost the same accuracy as the MC dropout, which requires more than 1000 feed-forward calculations of the output values.\nNote that the VPBNN is not an approximation of MC dropout. Both MC dropout and VPBNN are approximations of the posterior distribution, though MC dropout with a sufficient number of feed-forward calculations tends to provide a satisfactory result. The numerical experiments indicate that the number of feed-forward calculations in MC dropout is not sufficient for this task." }, { "heading": "5.3 OUT-OF-DISTRIBUTION DETECTION", "text": "Let us consider the out-of-distribution detection problem. The task is to find samples whose distribution is different from that of the training samples. The uncertainty of samples is evaluated for this task. First, the neural network is trained using the Fashion-MNIST dataset (Xiao et al., 2017). Then, several methods for uncertainty evaluation are used to detect samples from non-training datasets. In these experiments, we use MNIST (Lecun et al., 1998), EMNIST-MNIST (Cohen et al., 2017), Kannada (Prabhu, 2019), and Kuzushiji (Clanuwat et al., 2018) as non-training datasets. The detection accuracy of each method is evaluated by the AUC measure on the test dataset.\nWe compared MC dropout with 100 samples (MC100) or 2000 samples (MC2000), the Taylor approximation, and VPBNN with adaptive ρ. The network architecture is the CNN shown in Figure 8 of the Appendix. At the output layer, the multi-label sigmoid function is used. Numerical results for the softmax function are reported in Section E of the appendix. In the training process, the Adam optimizer with early stopping on a validation dataset is used. The 60k training data was divided into 50k training data for weight parameter tuning and 10k validation data for hyper-parameter tuning.\nWe confirmed that all methods achieve almost the same prediction accuracy on the test data of Fashion-MNIST. The result is shown in the “Test accuracy” column in Table 2. The prediction is done by using the top-ranked labels. Though MC dropout and VPBNN tend to relax the overconfident prediction, the calibration does not significantly affect the label prediction accuracy in this problem.\nFor the uncertainty evaluation, we used two criteria. One is the entropy computed from the mean value of the output, H[y] = −(1/m) Σ_{i=1}^m {E[y_i] log E[y_i] + (1 − E[y_i]) log(1 − E[y_i])}, for the output y = (y_1, . . . , y_m) of the NN, and the other is the mean-standard deviation (mean-std) (Kampffmeyer et al., 2016; Gal et al., 2017), which is the averaged standard deviation, i.e., σ(y) = (1/m) Σ_{i=1}^m √Var[y_i]. In Section E of the appendix, we report the results of other uncertainty measures using not only the sigmoid function but also the softmax function. Overall, we find that the mean-standard deviation outperforms the entropy measure in out-of-distribution detection. This is because the uncertainty is related to the variance rather than the expectation of the output value. Table 2 shows the results of the mean-standard deviation. 
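Both criteria are straightforward to compute from VPBNN outputs. A minimal sketch is given below, where mean_y and var_y are assumed to be (N, m) arrays of output means and variances, and scikit-learn is used only for the AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def uncertainty_scores(mean_y, var_y, eps=1e-12):
    """Entropy H[y] and mean-std sigma(y) for the multi-label sigmoid head."""
    p = np.clip(mean_y, eps, 1.0 - eps)
    entropy = -np.mean(p * np.log(p) + (1.0 - p) * np.log(1.0 - p), axis=1)
    mean_std = np.mean(np.sqrt(var_y), axis=1)
    return entropy, mean_std

# AUC for detecting out-of-distribution samples (labeled 1) vs. in-distribution (0):
# labels = np.r_[np.zeros(len(score_in)), np.ones(len(score_out))]
# auc = roc_auc_score(labels, np.r_[score_in, score_out])
```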
In the adaptive VPBNN of this task, the estimated correlation parameter ρ approximately ranges from 0.0 to 0.0005. Hence, VPBNN with the independence assumption, i.e., ρ = 0, also works well. The Taylor approximation method fails to detect the samples from the non-training distributions. This is because its estimation accuracy of the variance is not necessarily high, as shown in Section B of the appendix.\nLet us consider the computation cost. In our experiments, the computation time of MC dropout with 100 samples is 15.6[sec] ± 28.3[ms] on average for the uncertainty evaluation of 10K test samples. For MC dropout with 2000 samples, the computation cost is approximately 20 times higher, since it is proportional to the number of samples. For the adaptive VPBNN, the computation time is 175[ms] ± 31.7[ms], including the adaptive choice of ρ from 10 candidates when 10K validation samples are used. VPBNN with adaptive ρ provides performance comparable to MC dropout with a sufficient number of samples, at a much lower computation cost. As discussed for language modeling, the number of feed-forward calculations in MC dropout is considered insufficient for this task." }, { "heading": "6 CONCLUSION", "text": "We developed a sampling-free method for uncertainty evaluation. Our method requires only a one-pass calculation of DNNs or RNNs, while the MC-based methods need thousands of feed-forward calculations to evaluate the uncertainty. In the numerical experiments, we showed that our method provides more reliable results than existing sampling-free methods such as the Taylor approximation." }, { "heading": "B SUPPLEMENTARY ON APPROXIMATION ACCURACY", "text": "Let us consider the numerical accuracy of VPBNN and the Taylor approximation. The target is to compute the mean and variance of the output value y = f(Wx + b) for a random input vector x. The ReLU or sigmoid function is used as the activation function f. These two methods are compared with the Monte-Carlo method with sufficiently many samples. The distribution of the input variable is the Gaussian distribution or the uniform distribution. The mean (resp. the standard deviation) of the input variable varies in the interval [−10, 10] (resp. [0.1, 10]) for both the Gaussian distribution and the uniform distribution.\nThe absolute error of VPBNN and the Taylor approximation from the MC method is shown in Figure 3. The horizontal axis and vertical axis denote the variance and mean of the input distribution, respectively. In the numerical experiments, VPBNN achieved higher accuracy than the Taylor approximation. We find that the Taylor approximation tends to yield extremely large variances even when the sigmoid function is used as the activation function. Overall, VPBNN has preferable properties compared to the Taylor approximation.\nThe architectures of the NN and RNN used in Section 5.1 are shown in Figures 4 and 5." }, { "heading": "C NUMERICAL EXPERIMENTS ON SYNTHETIC DATA: RNN", "text": "Numerical experiments on RNNs using synthetic data were also conducted. The RNN model is shown in Figure 5 of the Appendix. We evaluated the uncertainty of a Bayesian RNN trained with dropout,
}, { "heading": "D SUPPLEMENTARY OF RNN FOR LANGUAGE MODELING", "text": "The architectures of RNN used in Section 5.2 is shown in Figure 7." }, { "heading": "E SUPPLEMENTARY OF OUT-OF-DISTRIBUTION", "text": "Let us consider the out-of-distribution detection. First of all, the neural network is trained using Fashion-MNIST (Xiao et al., 2017). Then, several methods for uncertainty evaluation are used to\ndetect samples from non-training datasets, MNIST (Lecun et al., 1998), EMNIST-MNIST (Cohen et al., 2017), Kannada (Prabhu, 2019), and Kuzushiji (Clanuwat et al., 2018). The detection accuracy of each method is evaluated by the AUC measure on the test dataset.\nThe network architecture used in Section 5.3 is the CNN in Figure 8 of the Appendix. In addition to the CNN with the softmax function provided in Keras, we implemented an another CNN with multi-label sigmoid functions at the output layer.\nIn the training process of the NNs, Adam optimizer with early stopping on validation dataset is exploited. The 60k training data was divided into 50k training data for the weight parameter tuning and 10k validation data for hyper-parameter tuning. For the uncertainty evaluation, we used two criteria; one is the entropy computed from the mean value of the output, and the other is the meanstandard deviation (mean-std) (Kampffmeyer et al., 2016; Gal et al., 2017) that is computed from\nthe variance. More precisely, the entropy defined from the softmax output is\nH[y] = − m∑ i=1 E[yi] logE[yi],\nand the entropy defined from the sigmoid function for multi-label setting is given by\nH[y] = − 1 m m∑ i=1 {E[yi] logE[yi] + (1− E[yi]) log(1− E[yi])} .\nThe mean-std is defined by\nσ(y) = 1\nm m∑ i=1 √ Var[yi].\nThe results are presented in Table 3 in the appendix. For the out-of-distribution detection, we find that the mean-std based method outperforms the other methods using entropy criterion. Moreover, our method provides the uncertainty by only one-path feed-forward calculation, while the MC dropout needs more than hundreds of feed-forward calculations. The Taylor approximation fails to detect the sample from non-training distribution. This is because the approximation accuracy of the Taylor approximation is not necessarily high as shown in Section B.\nOn the other hand, all the methods considered here achieve almost the same prediction accuracy on the test data of Fashion MNIST as shown in Table 4. The prediction is done by using the top-ranked labels. In this experiment, the calibration of the label probability does not significantly affect the ranking of the label probability." } ]
2020
null
SP:72d1283f3602edc22896934271fcec5b03f25d9e
[ "This paper studies the differential private synthetic dataset generation. Unlike previous DP based GAN models, this paper aims to boost the sample quality of after the training stage. In particular, the final synthetic dataset is sampled from the sequence of generators obtained during GAN training. The distribution is obtained by a private two-player game between the privately selected discriminator and a sampler from the mixture of generators. The results are demonstrated on gaussian data and tabular data." ]
Differentially private GANs have proven to be a promising approach for generating realistic synthetic data without compromising the privacy of individuals. Due to the privacy-protective noise introduced in the training, the convergence of GANs becomes even more elusive, which often leads to poor utility in the output generator at the end of training. We propose Private post-GAN boosting (Private PGB), a differentially private method that combines samples produced by the sequence of generators obtained during GAN training to create a high-quality synthetic dataset. To that end, our method leverages the Private Multiplicative Weights method (Hardt and Rothblum, 2010) to reweight generated samples. We evaluate Private PGB on two-dimensional toy data, MNIST images, US Census data, and a standard machine learning prediction task. Our experiments show that Private PGB improves upon a standard private GAN approach across a collection of quality measures. We also provide a non-private variant of PGB that improves the data quality of standard GAN training.
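As a rough, non-private illustration of the reweighting idea described above, the sketch below applies multiplicative-weights updates to a pool of generated samples using the saved discriminators. The interfaces, the best-response rule, and all parameters are simplifying assumptions; the actual Private PGB additionally selects the discriminator via the exponential mechanism and accounts for the privacy budget.

```python
import numpy as np

def pgb_reweight(samples, discriminators, rounds=50, eta=0.1):
    """Non-private post-GAN boosting sketch: multiplicative-weights updates
    over a pool of generated samples, driven by saved GAN discriminators.
    Each discriminators[j](samples) is assumed to return per-sample scores
    in [0, 1], higher meaning 'more realistic'. Returns a distribution phi
    over the sample pool from which a synthetic dataset can be resampled."""
    n = len(samples)
    log_w = np.zeros(n)
    scores = np.stack([d(samples) for d in discriminators])  # shape (J, n)
    for _ in range(rounds):
        phi = np.exp(log_w - np.logaddexp.reduce(log_w))     # current mixture weights
        j = int(np.argmin(scores @ phi))  # discriminator that rejects phi the most
        log_w += eta * scores[j]          # upweight samples it finds realistic
    return np.exp(log_w - np.logaddexp.reduce(log_w))
```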
[ { "affiliations": [], "name": "POST-GAN BOOSTING" }, { "affiliations": [], "name": "Marcel Neunhoeffer" }, { "affiliations": [], "name": "Zhiwei Steven Wu" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H. Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS", "year": 2016 }, { "authors": [ "John M. Abowd" ], "title": "The U.S. census bureau adopts differential privacy", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018,", "year": 2018 }, { "authors": [ "Christian Arnold", "Marcel Neunhoeffer" ], "title": "Really useful synthetic data–a framework to evaluate the quality of differentially private synthetic data", "venue": "arXiv preprint arXiv:2004.07740,", "year": 2020 }, { "authors": [ "Sanjeev Arora", "Elad Hazan", "Satyen Kale" ], "title": "The multiplicative weights update method: a metaalgorithm and applications", "venue": "Theory of Computing,", "year": 2012 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Yingyu Liang", "Tengyu Ma", "Yi Zhang" ], "title": "Generalization and equilibrium in generative adversarial nets (GANs)", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Samaneh Azadi", "Catherine Olsson", "Trevor Darrell", "Ian J. Goodfellow", "Augustus Odena" ], "title": "Discriminator rejection sampling", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Brett K. Beaulieu-Jones", "Zhiwei Steven Wu", "Chris Williams", "Ran Lee", "Sanjeev P. Bhavnani", "James Brian Byrd", "Casey S. Greene" ], "title": "Privacy-preserving generative deep neural networks support clinical data sharing", "venue": "Circulation: Cardiovascular Quality and Outcomes,", "year": 2019 }, { "authors": [ "Kamalika Chaudhuri", "Staal A Vinterbo" ], "title": "A stability-based validation procedure for differentially private machine learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": "In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Bolin Ding", "Janardhan Kulkarni", "Sergey Yekhanin" ], "title": "Collecting telemetry data privately", "venue": "In Advances in Neural Information Processing Systems 30,", "year": 2017 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In Proceedings of the 3rd Theory of Cryptography Conference,", "year": 2006 }, { "authors": [ "Úlfar Erlingsson", "Vasyl Pihur", "Aleksandra Korolova" ], "title": "RAPPOR: Randomized aggregatable privacy-preserving ordinal response", "venue": "In Proceedings of the 2014 ACM Conference on Computer and Communications Security,", "year": 2014 }, { "authors": [ "Yoav Freund", "Robert E Schapire" ], "title": "A decision-theoretic generalization of on-line learning and an application to boosting", "venue": "Journal of Computer and System Sciences,", "year": 1997 }, { "authors": [ "Lorenzo Frigerio", "Anderson Santana de Oliveira", "Laurent Gomez", "Patrick Duverger" ], "title": "Differentially private generative adversarial networks for time series, continuous, and discrete open data", "venue": "In ICT Systems Security and Privacy Protection - 34th IFIP TC 11 International Conference,", "year": 
2019 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2,", "year": 2014 }, { "authors": [ "Moritz Hardt", "Guy N. Rothblum" ], "title": "A multiplicative weights mechanism for privacy-preserving data analysis", "venue": "In 51th Annual IEEE Symposium on Foundations of Computer Science,", "year": 2010 }, { "authors": [ "Moritz Hardt", "Katrina Ligett", "Frank McSherry" ], "title": "A simple and practical algorithm for differentially private data release", "venue": "In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Quan Hoang", "Tu Dinh Nguyen", "Trung Le", "Dinh Phung" ], "title": "MGAN: Training generative adversarial nets with multiple generators", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Jyrki Kivinen", "Manfred K. Warmuth" ], "title": "Exponentiated gradient versus gradient descent for linear predictors", "venue": "Information and Computation,", "year": 1997 }, { "authors": [ "Jingcheng Liu", "Kunal Talwar" ], "title": "Private selection from private candidates", "venue": "In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2019 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "arXiv preprint arXiv:1611.00712,", "year": 2016 }, { "authors": [ "Frank McSherry", "Kunal Talwar" ], "title": "Mechanism design via differential privacy", "venue": "In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science,", "year": 2007 }, { "authors": [ "Ilya Mironov" ], "title": "Rényi differential privacy", "venue": "IEEE Computer Security Foundations Symposium,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Shuang Song", "Ilya Mironov", "Ananth Raghunathan", "Kunal Talwar", "Úlfar Erlingsson" ], "title": "Scalable private learning with PATE", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Steven Ruggles", "Sarah Flood", "Ronald Goeken", "Josiah Grover", "Erin Meyer", "Jose Pacas", "Matthew Sobek" ], "title": "Ipums usa: Version 9.0 [dataset", "venue": "Minneapolis, MN: IPUMS,", "year": 2019 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Joshua Snoke", "Gillian M. 
Raab", "Beata Nowok", "Chris Dibben", "Aleksandra Slavkovic" ], "title": "General and specific utility measures for synthetic data", "venue": "Journal of the Royal Statistical Society: Series A (Statistics in Society),", "year": 2018 }, { "authors": [ "Jiaming Song", "Stefano Ermon" ], "title": "Bridging the gap between f -gans and wasserstein gans, 2020", "venue": null, "year": 2020 }, { "authors": [ "Akash Srivastava", "Lazar Valkov", "Chris Russell", "Michael U Gutmann", "Charles Sutton" ], "title": "Veegan: Reducing mode collapse in gans using implicit variational learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Daniel J Stekhoven", "Peter Bühlmann" ], "title": "Missforest—non-parametric missing value imputation for mixed-type data", "venue": null, "year": 2012 }, { "authors": [ "Ilya Tolstikhin", "Sylvain Gelly", "Olivier Bousquet", "Carl-Johann Simon-Gabriel", "Bernhard Schölkopf" ], "title": "Adagan: Boosting generative models", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Reihaneh Torkzadehmahani", "Peter Kairouz", "Benedict Paten" ], "title": "DP-CGAN: differentially private synthetic data and label generation", "venue": "CoRR, abs/2001.09700,", "year": 2020 }, { "authors": [ "Ryan Turner", "Jane Hung", "Eric Frank", "Yunus Saatchi", "Jason Yosinski" ], "title": "Metropolis-Hastings generative adversarial networks", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Liyang Xie", "Kaixiang Lin", "Shu Wang", "Fei Wang", "Jiayu Zhou" ], "title": "Differentially private generative adversarial network. CoRR, abs/1802.06739, 2018", "venue": "URL http://arxiv.org/abs/1802", "year": 2018 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "PATE-GAN: Generating synthetic data with differential privacy guarantees", "venue": "In International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The vast collection of detailed personal data, including everything from medical history to voting records, to GPS traces, to online behavior, promises to enable researchers from many disciplines to conduct insightful data analyses. However, many of these datasets contain sensitive personal information, and there is a growing tension between data analyses and data privacy. To protect the privacy of individual citizens, many organizations, including Google (Erlingsson et al., 2014), Microsoft (Ding et al., 2017), Apple (Differential Privacy Team, Apple, 2017), and more recently the 2020 US Census (Abowd, 2018), have adopted differential privacy (Dwork et al., 2006) as a mathematically rigorous privacy measure. However, working with noisy statistics released under differential privacy requires training.\nA natural and promising approach to tackle this challenge is to release differentially private synthetic data—a privatized version of the dataset that consists of fake data records and that approximates the real dataset on important statistical properties of interest. Since they already satisfy differential privacy, synthetic data enable researchers to interact with the data freely and to perform the same analyses even without expertise in differential privacy. A recent line of work (Beaulieu-Jones et al., 2019; Xie et al., 2018; Yoon et al., 2019) studies how one can generate synthetic data by incorporating differential privacy into generative adversarial networks (GANs) (Goodfellow et al., 2014). Although GANs provide a powerful framework for synthetic data, they are also notoriously hard to train and privacy constraint imposes even more difficulty. Due to the added noise in the private gradient updates, it is often difficult to reach convergence with private training.\nIn this paper, we study how to improve the quality of the synthetic data produced by private GANs. Unlike much of the prior work that focuses on fine-tuning of network architectures and training techniques, we propose Private post-GAN boosting (Private PGB)—a differentially private method that boosts the quality of the generated samples after the training of a GAN. Our method can be viewed as a simple and practical amplification scheme that improves the distribution from any ex-\nisting black-box GAN training method – private or not. We take inspiration from an empirical observation in Beaulieu-Jones et al. (2019) that even though the generator distribution at the end of the private training may be a poor approximation to the data distribution (due to e.g. mode collapse), there may exist a high-quality mixture distribution that is given by several generators over different training epochs. PGB is a principled method for finding such a mixture at a moderate privacy cost and without any modification of the GAN training procedure.\nTo derive PGB, we first formulate a two-player zero-sum game, called post-GAN zero-sum game, between a synthetic data player, who chooses a distribution over generated samples over training epochs to emulate the real dataset, and a distinguisher player, who tries to distinguish generated samples from real samples with the set of discriminators over training epochs. 
We show that under a “support coverage” assumption the synthetic data player’s mixed strategy (given by a distribution over the generated samples) at an equilibrium can successfully “fool” the distinguisher: that is, no mixture of discriminators can distinguish the real versus fake examples better than random guessing. While the strict assumption does not always hold in practice, we demonstrate empirically that the synthetic data player’s equilibrium mixture consistently improves the GAN distribution.

The Private PGB method then privately computes an approximate equilibrium in the game. The algorithm can be viewed as a computationally efficient variant of MWEM (Hardt & Rothblum, 2010; Hardt et al., 2012), which is an inefficient query release algorithm with near-optimal sample complexity. Since MWEM maintains a distribution over exponentially many “experts” (the set of all possible records in the data domain), it runs in time exponential in the dimension of the data. In contrast, we rely on the private GAN to reduce the support to only contain the set of privately generated samples, which makes PGB tractable even for high-dimensional data.

We also provide an extension of the PGB method by incorporating the technique of discriminator rejection sampling (Azadi et al., 2019; Turner et al., 2019). We leverage the fact that the distinguisher’s equilibrium strategy, which is a mixture of discriminators, can often accurately predict which samples are unlikely and thus can be used as a rejection sampler. This allows us to further improve the PGB distribution with rejection sampling without any additional privacy cost, since differential privacy is preserved under post-processing. Our Private PGB method also has a natural non-private variant, which we show improves GAN training without privacy constraints.

We empirically evaluate both the Private and Non-Private PGB methods on several tasks. To visualize the effects of our methods, we first evaluate them on a two-dimensional toy dataset with samples drawn from a mixture of 25 Gaussian distributions. We define a relevant quality score function and show that both the Private and Non-Private PGB methods improve the score of the samples generated from a GAN. We then show that the Non-Private PGB method can also be used to improve the quality of images generated by GANs using the MNIST dataset. Finally, we focus on applications with high relevance for privacy protection. First, we synthesize US Census datasets and demonstrate that the PGB method can improve the generator distribution on several statistical measures, including 3-way marginal distributions and pMSE. Second, we evaluate the PGB methods on a dataset with a natural classification task. We train predictive models on samples from Private PGB and samples from a private GAN (without PGB), and show that PGB consistently improves the model accuracy on real out-of-sample test data.

Related work. Our PGB method can be viewed as a modular boosting method that can improve on a growing line of work on differentially private GANs (Beaulieu-Jones et al., 2019; Xie et al., 2018; Frigerio et al., 2019; Torkzadehmahani et al., 2020). To obtain formal privacy guarantees, these algorithms optimize the discriminators in the GAN under differential privacy, by using private SGD, RMSprop, or Adam methods, and track the privacy cost using the moments accountant (Abadi et al., 2016; Mironov, 2017). Yoon et al.
(2019) give a private GAN training method by adapting ideas from the PATE framework (Papernot et al., 2018).

Our PGB method is inspired by the Private Multiplicative Weights method (Hardt & Rothblum, 2010) and its more practical variant MWEM (Hardt et al., 2012), which answer a large collection of statistical queries by releasing a synthetic dataset. Our work also draws upon two recent techniques (Turner et al., 2019; Azadi et al., 2019) that use the discriminator as a rejection sampler to improve the generator distribution. We apply their technique by using the mixture discriminator computed in PGB as the rejection sampler. There has also been work that applies the idea of boosting to (non-private) GANs. For example, Arora et al. (2017) and Hoang et al. (2018) propose methods that directly train a mixture of generators and discriminators, and Tolstikhin et al. (2017) proposes AdaGAN, which reweights the real examples during training similarly to what is done in AdaBoost (Freund & Schapire, 1997). Both of these methods may be hard to make differentially private: they either require substantially more privacy budget to train a collection of discriminators or increase the weights on a subset of examples, which requires adding more noise when computing private gradients. In contrast, our PGB method boosts the generated samples post-training and does not make modifications to the GAN training procedure." }, { "heading": "2 PRELIMINARIES", "text": "Let X denote the data domain of all possible observations in a given context. Let p_d be a distribution over X. We say that two datasets X, X′ ∈ X^n are adjacent, denoted by X ∼ X′, if they differ by at most one observation. We will write p_X to denote the empirical distribution over X. Definition 1 (Differential Privacy (DP) (Dwork et al., 2006)). A randomized algorithm A : X^n → R with output domain R (e.g., all generative models) is (ε, δ)-differentially private (DP) if for all adjacent datasets X, X′ ∈ X^n and for all S ⊆ R: P(A(X) ∈ S) ≤ e^ε P(A(X′) ∈ S) + δ.

A very nice property of differential privacy is that it is preserved under post-processing. Lemma 1 (Post-processing). Let M be an (ε, δ)-differentially private algorithm with output range R and let f : R → R′ be any mapping; the composition f ◦ M is (ε, δ)-differentially private.

As a result, any subsequent analyses conducted on DP synthetic data also satisfy DP.

The exponential mechanism (McSherry & Talwar, 2007) is a private mechanism for selecting among the best of a discrete set of alternatives R, where “best” is defined by a quality function q : X^n × R → ℝ that measures the quality of the result r for the dataset X. The sensitivity of the quality score q is defined as ∆(q) = max_{r∈R} max_{X∼X′} |q(X, r) − q(X′, r)|. Then, given a quality score q and privacy parameter ε, the exponential mechanism M_E(q, ε, X) simply samples a random alternative from the range R such that the probability of selecting each r is proportional to exp(εq(X, r)/(2∆(q)))." }, { "heading": "2.1 DIFFERENTIALLY PRIVATE GAN", "text": "The framework of generative adversarial networks (GANs) (Goodfellow et al., 2014) consists of two types of neural networks: generators and discriminators. A generator G is a function that maps random vectors z ∈ Z drawn from a prior distribution p_z to a sample G(z) ∈ X. A discriminator D takes an observation x ∈ X as input and computes a probability D(x) that the observation is real.
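As a concrete illustration of this setup, here is a minimal PyTorch sketch of a generator/discriminator pair (the layer sizes are illustrative assumptions, not the architectures used in the experiments):

```python
import torch.nn as nn

class Generator(nn.Module):
    # maps latent z ~ p_z to a synthetic sample G(z) in the data domain
    def __init__(self, z_dim=2, data_dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, data_dim))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    # outputs a probability D(x) that the observation x is real
    def __init__(self, data_dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)
```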
Each observation is either drawn from the underlying distribution p_d or from the induced distribution p_g of a generator. The training of a GAN involves solving the following joint optimization over the discriminator and generator:

min_G max_D  E_{x∼p_X}[f(D(x))] + E_{z∼p_z}[f(1 − D(G(z)))]

where f : [0, 1] → ℝ is a monotone function. For example, in the standard GAN, f(a) = log a, and in the Wasserstein GAN (Arjovsky et al., 2017), f(a) = a. The standard (non-private) algorithm iterates between optimizing the parameters of the discriminator and the generator based on the loss functions:

L_D = −E_{x∼p_X}[f(D(x))] − E_{z∼p_z}[f(1 − D(G(z)))],   L_G = E_{z∼p_z}[f(1 − D(G(z)))]

The private algorithm for training a GAN also performs the same alternating optimization, but it optimizes the discriminator under differential privacy while keeping the generator optimization the same. In general, the training proceeds over epochs τ = 1, . . . , N, and at the end of each epoch τ the algorithm obtains a discriminator D_τ and a generator G_τ by optimizing the respective loss functions. In Beaulieu-Jones et al. (2019) and Xie et al. (2018), the private optimization on the discriminators is done by running the private SGD method of Abadi et al. (2016) or its variants. Yoon et al. (2019) performs the private optimization by incorporating the PATE framework (Papernot et al., 2018). For all of these private GAN methods, the entire sequence of discriminators {D_1, . . . , D_N} satisfies differential privacy, and thus the sequence of generators {G_1, . . . , G_N} is also private since they can be viewed as post-processing of the discriminators. Our PGB method is agnostic to the exact private GAN training method." }, { "heading": "3 PRIVATE POST-GAN BOOSTING", "text": "The noisy gradient updates impede convergence of the differentially private GAN training algorithm, and the generator obtained in the final epoch of the training procedure may not yield a good approximation to the data distribution. Nonetheless, empirical evidence has shown that a mixture over the set of generators can be a realistic distribution (Beaulieu-Jones et al., 2019). We now provide a principled and practical scheme for computing such a mixture subject to a moderate privacy budget. Recall that the private GAN training method produces a sequence of generators G = {G_1, . . . , G_N} and discriminators D = {D_1, . . . , D_N}. Our boosting method computes a weighted mixture of the G_j’s and a weighted mixture of the D_j’s that improve upon any individual generator and discriminator. We do that by computing an equilibrium of the following post-GAN (training) zero-sum game." }, { "heading": "3.1 POST-GAN ZERO-SUM GAME.", "text": "We will first draw r independent samples from each generator G_j, and let B be the collection of the rN examples drawn from the set of generators. Consider the following post-GAN zero-sum game between a synthetic data player, who maintains a distribution φ over the data in B to imitate the true data distribution p_X, and a distinguisher player, who uses a mixture of discriminators to tell the two distributions φ and p_X apart. This zero-sum game is aligned with the minimax game in the original GAN formulation, but is much more tractable since each player has a finite set of strategies. To define the payoff in the game, we will adapt the Wasserstein GAN objective, since it is less sensitive than the standard GAN objective to the change of any single observation (changing any single real example changes the payoff by at most 1/n), rendering it more compatible with privacy tools.
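For reference, the loss functions L_D and L_G above can be sketched as follows, parameterized by f (f(a) = log a for the standard GAN, f(a) = a for the Wasserstein variant that motivates the payoff below); this is a minimal illustration, not the authors' code:

```python
import torch

def gan_losses(D, G, x_real, z, f=torch.log):
    # L_D = -E_x[f(D(x))] - E_z[f(1 - D(G(z)))];  L_G = E_z[f(1 - D(G(z)))]
    eps = 1e-8  # avoid log(0) when f = log
    d_real = D(x_real).clamp(eps, 1 - eps)
    d_fake = D(G(z)).clamp(eps, 1 - eps)
    loss_d = -(f(d_real).mean() + f(1 - d_fake).mean())
    loss_g = f(1 - d_fake).mean()
    return loss_d, loss_g
```

For the Wasserstein choice one would pass f=lambda a: a; in the private variant, loss_d would be minimized with a DP optimizer (e.g., DP-SGD) while the generator update stays unchanged.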
Formally, for any x ∈ B and any discriminator D_j, define the payoff as

U(x, D_j) = E_{x′∼p_X}[D_j(x′)] + (1 − D_j(x)).

For any distribution φ over B, let U(φ, ·) = E_{x∼φ}[U(x, ·)], and similarly, for any distribution ψ over {D_1, . . . , D_N}, we will write U(·, ψ) = E_{D∼ψ}[U(·, D)]. Intuitively, the payoff function U measures the predictive accuracy of the distinguisher in classifying whether the examples are drawn from the synthetic data player’s distribution φ or the private dataset X. Thus, the synthetic data player aims to minimize U while the distinguisher player aims to maximize U.

Definition 2. The pair (D̄, φ̄) is an α-approximate equilibrium of the post-GAN game if

max_{D_j∈D} U(φ̄, D_j) ≤ U(φ̄, D̄) + α,  and  min_{φ∈∆(B)} U(φ, D̄) ≥ U(φ̄, D̄) − α    (1)

By von Neumann’s minimax theorem, there exists a value V, called the game value, such that

V = min_{φ∈∆(B)} max_{j∈[N]} U(φ, D_j) = max_{ψ∈∆(D)} min_{x∈B} U(x, ψ)

The game value corresponds to the payoff value at an exact equilibrium of the game (that is, α = 0). When the set of discriminators cannot predict the real versus fake examples better than random guessing, the game value is V = 1. We now show that under the assumption that the generated samples in B approximately cover the support of the dataset X, the distinguisher player cannot distinguish the real and fake distributions much better than by random guessing. Theorem 1. Fix a private dataset X ∈ (ℝ^d)^n. Suppose that for every x ∈ X, there exists x_b ∈ B such that ‖x − x_b‖_2 ≤ γ. Suppose D includes a discriminator network D_{1/2} that outputs 1/2 for all inputs, and assume that all networks in D are L-Lipschitz. Then there exists a distribution φ ∈ ∆(B) such that (φ, D_{1/2}) is an Lγ-approximate equilibrium, and so 1 ≤ V ≤ 1 + Lγ.

We defer the proof to the appendix. While the support coverage assumption is strong, we show empirically that the synthetic data player’s mixture distribution in an approximate equilibrium improves on the distribution given by the last generator G_N even when the assumption does not hold. We now provide a method for computing an approximate equilibrium of the game." }, { "heading": "3.2 BOOSTING VIA EQUILIBRIUM COMPUTATION.", "text": "Our post-GAN boosting (PGB) method computes an approximate equilibrium of the post-GAN zero-sum game by simulating the so-called no-regret dynamics. Over T rounds, the synthetic data
We will show these form an approximate equilibrium of the post-GAN zero-sum game (Freund & Schapire, 1997).\nAlgorithm 1 Differentially Private Post-GAN Boosting Require: a private dataset X ∈ Xn, a synthetic dataset B generated by the set of generators G, a\ncollection of discriminators {D1, . . . , DN}, number of iterations T , per-round privacy budget 0, learning rate parameter η. Initialize φ1 to be the uniform distribution over B for t = 1, . . . , T do\nDistinguisher player: Run exponential mechanism ME to select a discriminator Dt using quality score q(X,Dj) = U(φt, Dj) and privacy parameter 0. Synthetic data player: Multiplicative weights update on the distribution over B: for each example b ∈ B:\nφt+1(b) ∝ φt(b) exp(ηDt(b)) Let D be the discriminator defined by the uniform average over the set {D1, . . . , DT }, and φ be the distribution defined by the average over the set {φ1, . . . , φT }\nNote that the synthetic data player’s MW update rule does not involve the private dataset, and hence is just a post-processing step of the selected discriminator Dt. Thus, the privacy guarantee follows from applying the advacned composition of T runs of the exponential mechanism.1\nTheorem 2 (Privacy Guarantee). For any δ ∈ (0, 1), the private MW post-amplification algorithm satisfies ( , δ)-DP with = √ 2 log(1/δ)T 0 + T 0(exp( 0)− 1).\nNote that if the private GAN training algorithm satisfies ( 1, δ1)-DP and the Private PGB method satisfies ( 2, δ2)-DP, then the entire procedure is ( 1 + 2, δ1 + δ2)-DP.\nWe now show that the pair of average plays form an approximate equilibrium of the game.\nTheorem 3 (Approximate Equilibrium). With probability 1−β, the pair (D,φ) is an α-approximate equilibrium of the post-GAN zero-sum game with α = 4η + log |B|ηT + 2 log(NT/β) n 0\n. If T ≥ n2 and η = 12 √ log(|B|)/T , then\nα = O\n( log(nN |B|/β)\nn 0 ) We provide a proof sketch here and defer the full proof to the appendix. By the result of Freund & Schapire (1997), if the two players have low regret in the dynamics, then their average plays form an approximate equilibrium, where the regret of the two players is defined as Rsyn =∑T t=1 U(φ t, Dt)−minb∈B ∑T t=1 U(b,D t) andRdis = maxDj ∑T t=1 U(φ t, Dj)− ∑T t=1 U(φ\nt, Dt). Then the approximate equilibrium guarantee directly follows from bounding Rsyn with the regret bound of MW and Rdis with the approximate optimality of the exponential mechanism.\n1Note that since the quality scores from the GAN Discriminators are assumed to be probabilities and the score function takes an average over n probabilities (one for each private example), the sensitivity is ∆(q) = 1\nn .\nNon-Private PGB. The Private PGB method has a natural non-private variant: in each round, instead of drawing from the exponential mechanism, the distinguisher player will simply compute the exact best response: Dt = arg maxDj U(φ t, Dj). Then if we set learning rate η = 12 √\nlog(|B|)/T and run for T = log(|B|)/α2 rounds, the pair (D,φ) returned is an α-approximate equilibrium.\nExtension with Discriminator Rejection Sampling. The mixture discriminator D at the equilibrium provides an accurate predictor on which samples are unlikely. As a result, we can use D to further improve the data distribution φ by the discriminator rejection sampling (DRS) technique of Azadi et al. (2019). The DRS scheme in our setting generates a single example as follows: first draw an example x from φ (the proposal distribution), and then accept x with probability proportional to D(x)/(1 − D(x)). 
Note that the optimal discriminator D* that distinguishes the distribution φ̄ from the true data distribution p_d would accept x with probability proportional to p_d(x)/p_φ̄(x) = D*(x)/(1 − D*(x)). Our scheme aims to approximate this ideal rejection sampling by approximating D* with the equilibrium strategy D̄, whereas prior work uses the last discriminator D_N as the approximation." }, { "heading": "4 EMPIRICAL EVALUATION", "text": "We empirically evaluate how both the Private and Non-Private PGB methods affect the utility of the synthetic data generated from GANs. We show two appealing advantages of our approach: 1) Non-Private PGB outperforms the last Generator of a GAN, and 2) our approach can significantly improve the synthetic examples generated by a GAN under differential privacy.

Datasets. We assess our method on a toy dataset drawn from a mixture of 25 Gaussians, which is commonly used to evaluate the quality of GANs (Srivastava et al., 2017; Azadi et al., 2019; Turner et al., 2019), and we synthesize MNIST images. We then turn to real datasets from the American Census, and a standard machine learning dataset (Titanic).

Privacy budget. For the tasks with privacy, we set the privacy budget to be the same across all algorithms. Since Private PGB requires additional privacy budget, this means that the differentially private GAN training has to be stopped earlier, compared to running only a GAN, to achieve the same privacy guarantee. Our principle is to allocate the majority of the privacy budget to the GAN training, and a much smaller budget to our Private PGB method. Throughout, we used 80% to 90% of the final privacy budget on DP GAN training.2

2Our observation is that the DP GAN training is doing the “heavy lifting”. Providing a good “basis” for PGB requires a substantial privacy expenditure in training the DP GAN. The privacy budget allocation is a hyperparameter for PGB that could be tuned. In general, the problem of differentially private hyperparameter selection is extremely important and the literature is thin (Liu & Talwar, 2019; Chaudhuri & Vinterbo, 2013).

Utility measures. Utility of synthetic data can be assessed along two dimensions: general utility and specific utility (Snoke et al., 2018; Arnold & Neunhoeffer, 2020). General utility describes the overall distributional similarity between the real data and synthetic datasets, but does not capture specific use cases of synthetic data. To assess general utility, we use the propensity score mean squared error (pMSE) measure (Snoke et al., 2018) (detailed in the Appendix). Specific utility of a synthetic dataset depends on the specific use an analyst has in mind. In general, specific utility can be defined as the similarity of results for analyses using synthetic data instead of real data. For each of the experiments we define specific utility measures that are sensible for the respective example. For the toy dataset of 25 Gaussians we look at the number of high-quality samples. For the American Census data we compare marginal distributions of the synthetic data to marginal distributions of the true data and look at the similarity of regression results." }, { "heading": "4.1 EVALUATION OF NON-PRIVATE PGB", "text": "Mixture of 25 Gaussians. We first examine the performance of our approach on a two-dimensional dataset with a mixture of 25 multivariate Gaussian distributions, each with a covariance matrix of 0.0025I. The left column of Figure 1 displays the training data. Each of the 25 clusters consists
The architecture of the GAN is the same across all results.3 To compare the utility of the synthetic datasets with the real data, we inspect the visual quality of the resultsand calculate the proportion of high quality synthetic examples similar to Azadi et al. (2019),Turner et al. (2019) and Srivastava et al. (2017).4\nVisual inspection of the results without privacy (in the top row of Figure 1) shows that our proposed method outperforms the synthetic examples generated by the last Generator of the GAN, as well as the last Generator enhanced with DRS. PGB over the last 100 stored Generators and Discriminators trained for T = 1, 000 update steps, and the combination of PGB and DRS, visibly improves the results. The visual impression is confirmed by the proportion of high quality samples. The data from the last GAN generator have a proportion of 0.904 high quality samples. The synthetic data after PGB achieves a higher score of 0.918. The DRS samples have a proportion of 0.826 high quality samples, and the combination of PGB and DRS a higher proportion of 0.874 high quality samples.5\nMNIST Data. We further evaluate the performance of our method on an image generation task with the MNIST dataset. Our results are based on the DCGAN GAN architecture (Radford et al., 2015) with the KL-WGAN loss (Song & Ermon, 2020). To evaluate the quality of the generated images we use a metric that is based on the Inception score (IS) (Salimans et al., 2016), where instead of the Inception Net we use a MNIST Classifier that achieves 99.65% test accuracy. The theoretical best score of the MNIST IS is 10, and the real test images achieve a score of 9.93. Without privacy the last GAN Generator achieves a score of 8.41, using DRS on the last Generator slightly decreases the score to 8.21, samples with PGB achieve a score of 8.76, samples with the combination of PGB and DRS achieve a similar score of 8.77 (all inception scores are calculated on 5,000 samples). Uncurated samples for all methods are included in the Appendix." }, { "heading": "4.2 EVALUATION OF PRIVATE PGB", "text": "Mixture of 25 Gaussians. To show how the differentially private version of PGB improves the samples generated from GANs that were trained under differential privacy, we first re-run the experiment with the two-dimensional toy data.6 Our final value of is 1 and δ is 12N . For the results with PGB, the GAN training contributes 1 = 0.9 to the overall and the Private PGB algorithm 2 = 0.1. Again a first visual inspection of the results in Figure 1 (in the bottom row) shows that post-processing the results of the last GAN Generator with Private PGB is worthwhile. Private PGB over the last 100 stored Generators and Discriminators trained for T = 1, 000 update steps, again, visibly improves the results. Again, our visual impression is confirmed by the proportion of high quality samples. The last Generator of the differentially private GAN achieves a proportion of 0.031 high quality samples. With DRS on top of the last Generator, the samples achieve a quality score of 0.035. The GAN enhanced with Private PGB achieves a proportion of 0.044 high quality samples, the combination of Private PGB and DRS achieves a quality score of 0.053.\nMNIST Data. On the MNIST data, with differential privacy ( = 10, δ = 12N ) the last DP GAN Generator achieves an inception score of 8.07, using DRS on the last Generator the IS improves to 8.18. 
With Private PGB the samples achieve an IS of 8.58, and samples with the combination of Private PGB and DRS achieve the highest IS of 8.66.7 Uncurated samples for all methods are included in the Appendix.

3A description of the architecture is in the Appendix. The code for the GANs and the PGB algorithm will be made available on GitHub.
4Note that the scores in Azadi et al. (2019) and Turner et al. (2019) do not account for the synthetic data distribution across the 25 modes. We detail our evaluation of high-quality examples in the Appendix.
5The lower scores for the DRS samples are due to the capping penalty in the quality metric. Without the capping penalty the scores are 0.906 for the last Generator, 0.951 for PGB, 0.946 for DRS and 0.972 for the combination of PGB and DRS.
6To achieve DP, we trained the Discriminator with a DP optimizer as implemented in the tensorflow privacy or opacus libraries. We keep track of the values of ε and δ by using the moments accountant (Abadi et al., 2016; Mironov, 2017).
7All inception scores are calculated on 5,000 samples.

Private Synthetic 1940 American Census Samples. While the results on the toy dataset are encouraging, the ultimate goal of private synthetic data is to protect the privacy of actual persons in data collections, and to provide useful data to interested analysts. In this section we report the results of synthesizing data from the 1940 American Census. We rely on the public use micro data samples (PUMS) as provided in Ruggles et al. (2019).8 For 1940 we synthesize an excerpt of the 1% sample of all Californians who were at least 18 years old.9 Our training sample consists of 39,660 observations and 8 attributes (sex, age, educational attainment, income, race, Hispanic origin, marital status and county). The test set contains another 9,915 observations. Our final value of ε is 1 and δ is 1/(2N) ≈ 6.3 × 10^{−6} (after DP GAN training with ε_1 = 0.8 and PGB with ε_2 = 0.2, δ_1 = 1/(2N), δ_2 = 0). The general utility scores as measured by the pMSE ratio score are 2.357 (DP GAN), 2.313 (DP DRS), 2.253 (DP PGB), and 2.445 (DP PGB+DRS). This indicates that PGB achieves the best general utility. To assess the specific utility of our synthetic census samples we compare one-way marginal distributions to the same marginal distributions in the original data. In panel (A) of Figure 2 we show the distribution of race membership. Comparing the synthetic data distributions to the true distribution (in gray), we conclude that PGB improves upon the last Generator. To underpin the visual impression we calculate the total variation distance between each of the synthetic distributions and the real distribution: the data from the last GAN Generator have a total variation distance of 0.58, DP DRS of 0.44, DP PGB of 0.22 and DP PGB+DRS of 0.13. Furthermore, we evaluate whether more complex analysis models, such as regression models, trained on synthetic samples could be used to make sensible out-of-sample predictions. Panel (B) of Figure 2 shows a parallel coordinate plot to compare the out-of-sample root mean squared error of regression models trained on real data and trained on synthetic data. The lines show the RMSE for predicted income for all linear regression models trained with three independent variables from the attribute set, on the synthetic data generated with Private PGB as compared to the last GAN Generator and other post-processing methods like DRS.

8Further experiments using data from the 2010 American Census can be found in the appendix.
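The total variation comparison of the one-way marginals above can be sketched as follows (illustrative names, not the authors' code):

```python
import numpy as np

def tv_distance(real_col, synth_col, categories):
    """Total variation distance between two empirical categorical marginals."""
    p = np.array([np.mean(real_col == c) for c in categories])
    q = np.array([np.mean(synth_col == c) for c in categories])
    return 0.5 * np.abs(p - q).sum()
```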
9A 1% sample means that the micro data contain 1% of the total American (here Californian) population.

Machine Learning Prediction with Synthetic Data. In a final set of experiments we evaluate the performance of machine learning models trained on synthetic data (with and without privacy) and tested on real out-of-sample data. We synthesize the Kaggle Titanic10 training set (891 observations of Titanic passengers on 8 attributes) and train three machine learning models (Logistic Regression, Random Forests (RF) (Breiman, 2001) and XGBoost (Chen & Guestrin, 2016)) on the synthetic datasets to predict whether someone survived the Titanic catastrophe. We then evaluate the performance on the test set with 418 observations. To address missing values in both the training set and the test set, we independently impute values using the MissForest algorithm (Stekhoven & Bühlmann, 2012). For the private synthetic data our final value of ε is 2 and δ is 1/(2N) (for PGB this implies DP GAN training with ε_1 = 1.6 and PGB with ε_2 = 0.4). The models trained on synthetic data generated with our approaches (PGB and PGB+DRS) consistently perform better than models trained on synthetic data from the last Generator or DRS, with or without privacy.11" }, { "heading": "ACKNOWLEDGMENTS", "text": "This work began when the authors were at the Simons Institute participating in the “Data Privacy: Foundations and Applications” program. We thank Thomas Steinke, Adam Smith, Salil Vadhan, and the participants of the DP Tools meeting at Harvard for helpful comments. Marcel Neunhoeffer is supported by the University of Mannheim’s Graduate School of Economic and Social Sciences, funded by the German Research Foundation. Zhiwei Steven Wu is supported in part by an NSF S&CC grant 1952085, a Google Faculty Research Award, and a Mozilla research grant. Cynthia Dwork is supported by the Alfred P. Sloan Foundation, “Towards Practicing Privacy”, and NSF CCF-1763665.

10https://www.kaggle.com/c/titanic/data
11Table 1 in the appendix summarizes the results in more detail. We present the accuracy, ROC AUC and PR AUC to evaluate the performance." }, { "heading": "A PROOFS", "text": "A.1 PROOF OF THEOREM 1

Proof of Theorem 1. Note that if the synthetic data player plays the distribution p_X over X, then U(p_X, D) = E_{x′∼p_X}[D(x′)] + E_{x∼p_X}[1 − D(x)] = 1 for any discriminator D ∈ D. Now let us replace each element in X with its γ-approximation in B and obtain a new dataset X_B, and let p_{X_B} denote the empirical distribution over X_B. By the Lipschitz conditions, we then have |U(p_X, D) − U(p_{X_B}, D)| ≤ Lγ. This means U(p_{X_B}, D) ∈ [1 − Lγ, 1 + Lγ] for all D. Also, for all φ ∈ ∆(B), we have U(φ, D_{1/2}) = 1. Thus, (p_{X_B}, D_{1/2}) satisfies equation 1 with α = Lγ.

A.2 PROOF OF THE APPROXIMATE EQUILIBRIUM

Proof. We will use the seminal result of Freund & Schapire (1997), which shows that if the two players have low regret in the dynamics, then their average plays form an approximate equilibrium. First, we will bound the regret of the data player. The regret guarantee of the multiplicative weights algorithm (see, e.g., Theorem 2.3 of Arora et al. (2012)) gives

∑_{t=1}^T U(φ^t, D^t) − min_{b∈B} ∑_{t=1}^T U(b, D^t) ≤ 4ηT + log|B|/η    (3)

Next, we bound the regret of the distinguisher using the accuracy guarantee of the exponential mechanism (McSherry & Talwar, 2007).
For each t, we know that with probability 1 − β/T,

max_{D_j} U(φ^t, D_j) − U(φ^t, D^t) ≤ 2 log(NT/β)/(nε_0).

Taking a union bound, this accuracy guarantee holds for all t, and so

max_{D_j} ∑_{t=1}^T U(φ^t, D_j) − ∑_{t=1}^T U(φ^t, D^t) ≤ 2T log(NT/β)/(nε_0)    (4)

Then, following the result of Freund & Schapire (1997), the average plays (D̄, φ̄) form an α-approximate equilibrium with

α = 4η + log|B|/(ηT) + 2 log(NT/β)/(nε_0).

Plugging in the choices of T and η gives the stated bound." }, { "heading": "B ADDITIONAL DETAILS ON THE QUALITY EVALUATION", "text": "B.1 ON THE CALCULATION OF THE PMSE.

To calculate the pMSE one trains a discriminator to distinguish between real and synthetic examples. The predicted probability of being classified as real or synthetic is the propensity score. Taking all propensity scores into account, the mean squared error between the propensity scores and the proportion of real data examples is calculated. A synthetic dataset has high general utility if the model can at best predict probabilities of 0.5 for both real and synthetic examples; in that case the pMSE is 0.

B.2 SPECIFIC UTILITY MEASURE FOR THE 25 GAUSSIANS.

In the real data, given the data generating process outlined in Section 4.1, at each of the 25 modes 99% of the observations lie within a circle with radius r = √(0.0025 · 9.21034) around the mode centroid, where 9.21034 is the critical value at p = 0.99 of a χ² distribution with 2 degrees of freedom, and 0.0025 is the variance of the spherical Gaussian.

To calculate the quality score we count the number of observations within each of these 25 circles. If one of the modes contains more points than we would expect given the true distribution, the count is capped accordingly. Our quality score for the toy dataset of 25 Gaussians can be expressed as Q = ∑_{i=1}^{25} min(p^i_real · N_syn, N^i_syn)/N_syn, where i indexes the clusters, p^i_real is the true proportion of points per cluster, N^i_syn is the number of synthetic observations at a cluster within radius r, and N_syn is the total number of synthetic examples." }, { "heading": "C GAN ARCHITECTURES", "text": "C.1 DETAILS ON THE EXPERIMENTS WITH THE 25 GAUSSIANS.

The generator and discriminator are neural nets with two fully connected hidden layers (Discriminator: 128, 256; Generator: 512, 256) with Leaky ReLU activations. The latent noise vector Z is of dimension 2 and independently sampled from a Gaussian distribution with mean 0 and standard deviation 1. For GAN training we use the KL-WGAN loss (Song & Ermon, 2020). Before passing the Discriminator scores to PGB we transform them to probabilities using a sigmoid activation.

C.2 GAN ARCHITECTURE FOR THE 1940 AMERICAN CENSUS DATA.

The GAN networks consist of two fully connected hidden layers (256, 128) with Leaky ReLU activation functions. To sample from categorical attributes we apply the Gumbel-Softmax trick (Maddison et al., 2016; Jang et al., 2016) to the output layer of the Generator. We run our PGB algorithm over the last 150 stored Generators and Discriminators and train it for T = 400 update steps." }, { "heading": "D PRIVATE SYNTHETIC 2010 AMERICAN DECENNIAL CENSUS SAMPLES.", "text": "We conducted further experiments on more recent Census files. The 2010 data is similar to the data that the American Census is collecting for the 2020 decennial Census. For this experiment, we synthesize a 10% sample for California with 3,723,669 observations of 5 attributes (gender, age, Hispanic origin, race and PUMA district membership).
Our final value of ε is 0.795 and δ is 1/(2N) ≈ 1.34 × 10^{−7} (for PGB, the GAN training contributes ε = 0.786 and PGB ε = 0.09). The pMSE ratio scores are 1.934 (DP GAN), 1.889 (DP DRS), 1.609 (DP PGB) and 1.485 (DP PGB+DRS); here PGB achieves the best general utility. For specific utility, we compare the accuracy of three-way marginals on the synthetic data to the proportions in the true data.12 We tabulate race (11 answer categories in the 2010 Census) by Hispanic origin (25 answer categories in the 2010 Census) by gender (2 answer categories in the 2010 Census), giving us a total of 550 cells. To assess the specific utility for these three-way marginals we calculate the average accuracy across all 550 cells. Compared to the true data, DP GAN achieves 99.82%, DP DRS 99.89%, DP PGB 99.89% and the combination of DP PGB and DRS 99.93%. Besides the average accuracy across all 550 cells, another interesting metric of specific utility is the number of cells in which each synthesizer achieves the highest accuracy compared to the other methods; this is the case 43 times for DP GAN, 30 times for DP DRS, 90 times for DP PGB and 387 times for DP PGB+DRS. Again, this shows that our proposed approach can improve the utility of private synthetic data." }, { "heading": "E DETAILED RESULTS OF MACHINE LEARNING PREDICTION WITH SYNTHETIC DATA", "text": "Table 1 summarizes the results for the machine learning prediction experiment with the Titanic data. We present the accuracy, ROC AUC and PR AUC to evaluate the performance. It can be seen that the models trained on synthetic data generated with our approach consistently perform better than models trained on synthetic data from the last Generator or DRS, with or without privacy. To put these values into perspective, the models trained on the real training data and tested on the same out-of-sample data achieve the scores in Table 2.

12A task that is similar to the tables released by the Census." }, { "heading": "F SYNTHETIC MNIST SAMPLES", "text": "Figure 3 shows uncurated samples from the last Generator after 30 epochs of training, without differential privacy in Panel 3a and with differential privacy (ε =, δ =) in Panel 3b. Figure 4 shows uncurated samples with DRS on the last Generator. Figure 5 shows uncurated samples after PGB, and Figure 6 shows uncurated samples after the combination of PGB and DRS. In Figure 7 we show the 100 samples with the highest PGB probabilities." } ]
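As a companion to the quality score Q defined in Appendix B.2, here is a minimal NumPy sketch (illustrative variable names; not the authors' released code):

```python
import numpy as np

def quality_score(synth, centroids, p_real, var=0.0025, chi2_99=9.21034):
    """Capped proportion of high-quality samples for the 25-Gaussians data.

    synth:     (N_syn, 2) synthetic points; centroids: (25, 2) mode centers;
    p_real:    (25,) true cluster proportions (uniform, 1/25 each).
    """
    r = np.sqrt(var * chi2_99)                       # 99% radius per mode
    n_syn = len(synth)
    dist = np.linalg.norm(synth[:, None, :] - centroids[None, :, :], axis=2)
    counts = (dist <= r).sum(axis=0)                 # N_syn^i per cluster
    capped = np.minimum(p_real * n_syn, counts)      # cap over-represented modes
    return capped.sum() / n_syn
```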
2,021
null
SP:a6280b6605e621403de6ac4c3fc80fa71184ab6d
[ "In this paper, the authors propose a post-processing method for removing bias from a trained model. The bias is defined as conditional statistical parity — for a given partitioning of the data, the predicted label should be conditionally uncorrelated with the sensitive (bias inducing) attribute for each partition. The authors relax this strong requirement to an epsilon-constraint on the conditional covariance for each partition. As an example, race (sensitive attribute) should be conditionally uncorrelated to whether an individual will default on their loan (predicted target) for each city (data partition). The authors propose a constrained optimization problem that takes the input data, sensitive attribute, partitioning and a trained model to yield a probabilistic decision rule. Subsequently, they propose an iterative solution to the problem, proving some theoretical properties as well as showing how the method compares to different baselines." ]
We present an efficient and scalable algorithm for debiasing trained models, including deep neural networks (DNNs), which we prove to be near-optimal by bounding its excess Bayes risk. Unlike previous black-box reduction methods to cost-sensitive classification rules, the proposed algorithm operates directly on trained models without requiring retraining. Furthermore, as the algorithm is based on projected stochastic gradient descent (SGD), it is particularly attractive for deep learning applications. We empirically validate the proposed algorithm on standard benchmark datasets across both classical algorithms and modern DNN architectures and demonstrate that it outperforms previous post-processing approaches for unbiased classification.
[]
[ { "authors": [ "M.A. Bruckner" ], "title": "The promise and perils of algorithmic lenders’ use of big data,", "venue": "Chi.-Kent L. Rev.,", "year": 2018 }, { "authors": [ "R.C. Deo" ], "title": "Machine learning in medicine,", "venue": "Circulation,", "year": 2015 }, { "authors": [ "T. Brennan", "W. Dieterich", "B. Ehret" ], "title": "Evaluating the predictive validity of the COMPAS risk and needs assessment system,", "venue": "Criminal Justice and Behavior,", "year": 2009 }, { "authors": [ "E. Awad", "S. Dsouza", "R. Kim", "J. Schulz", "J. Henrich", "A. Shariff", "J.-F. Bonnefon", "I. Rahwan" ], "title": "The moral machine experiment,", "venue": null, "year": 2018 }, { "authors": [ "J. Kleinberg", "S. Mullainathan", "M. Raghavan" ], "title": "Inherent trade-offs in the fair determination of risk scores,", "venue": null, "year": 2017 }, { "authors": [ "D. Ingold", "S. Soper" ], "title": "Amazon doesnt consider the race of its customers. should it,", "venue": "Bloomberg News,", "year": 2016 }, { "authors": [ "C. Dwork", "M. Hardt", "T. Pitassi", "O. Reingold", "R. Zemel" ], "title": "Fairness through awareness,", "venue": "in Innovations in Theoretical Computer Science,", "year": 2012 }, { "authors": [ "M.B. Zafar", "I. Valera", "M. Gomez Rodriguez", "K.P. Gummadi" ], "title": "Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment,", "venue": "in International Conference on World Wide Web,", "year": 2017 }, { "authors": [ "M. Hardt", "E. Price", "N. Srebro" ], "title": "Equality of opportunity in supervised learning,", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "A. Chouldechova" ], "title": "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments,", "venue": "Big data,", "year": 2017 }, { "authors": [ "S. Corbett-Davies", "E. Pierson", "A. Feller", "S. Goel", "A. Huq" ], "title": "Algorithmic decision making and the cost of fairness,", "venue": "in International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "N. Mehrabi", "F. Morstatter", "N. Saxena", "K. Lerman", "A. Galstyan" ], "title": "A survey on bias and fairness in machine learning,", "venue": null, "year": 1908 }, { "authors": [ "C.L. Blake", "C.J. Merz" ], "title": "UCI repository of machine learning databases,", "venue": null, "year": 1998 }, { "authors": [ "M.B. Zafar", "I. Valera", "M. Gomez-Rodriguez", "K.P. Gummadi" ], "title": "Fairness Constraints: A Flexible Approach for Fair Classification,", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "J.K. Lum" ], "title": "Johndrow, “A statistical framework for fair predictive algorithms,", "venue": "arXiv preprint arXiv:1610.08077,", "year": 2016 }, { "authors": [ "T. Bolukbasi", "K.-W. Chang", "J.Y. Zou", "V. Saligrama", "A.T. Kalai" ], "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings,", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "F. Calmon", "D. Wei", "B. Vinzamuri", "K.N. Ramamurthy", "K.R. Varshney" ], "title": "Optimized preprocessing for discrimination prevention,", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "D. Madras", "E. Creager", "T. Pitassi", "R. 
Zemel" ], "title": "Learning adversarially fair and transferable representations,", "venue": "in International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "F. Kamiran", "T. Calders" ], "title": "Data preprocessing techniques for classification without discrimination,", "venue": "Knowledge and Information Systems,", "year": 2012 }, { "authors": [ "F. Locatello", "G. Abbati", "T. Rainforth", "S. Bauer", "B. Schölkopf", "O. Bachem" ], "title": "On the fairness of disentangled representations,", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "B.H. Zhang", "B. Lemoine", "M. Mitchell" ], "title": "Mitigating unwanted biases with adversarial learning,", "venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2018 }, { "authors": [ "N. Grgić-Hlača", "M.B. Zafar", "K.P. Gummadi", "A. Weller" ], "title": "Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning,", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "A. Agarwal", "A. Beygelzimer", "M. Dudik", "J. Langford", "H. Wallach" ], "title": "A reductions approach to fair classification,", "venue": "in International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "A.K. Menon", "R.C. Williamson" ], "title": "The cost of fairness in binary classification,", "venue": "in Conference on Fairness, Accountability and Transparency,", "year": 2018 }, { "authors": [ "L.E. Celis", "L. Huang", "V. Keswani", "N.K. Vishnoi" ], "title": "Classification with fairness constraints: A meta-algorithm with provable guarantees,", "venue": "in Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "B. Fish", "J. Kun", "Á.D. Lelkes" ], "title": "A confidence-based approach for balancing fairness and accuracy,", "venue": "in International Conference on Data Mining,", "year": 2016 }, { "authors": [ "F. Kamiran", "A. Karim", "X. Zhang" ], "title": "Decision theory for discrimination-aware classification,", "venue": "IEEE 12th International Conference on Data Mining. IEEE,", "year": 2012 }, { "authors": [ "L. Bottou" ], "title": "Large-scale machine learning with stochastic gradient descent,", "venue": "in International Conference on Computational Statistics,", "year": 2010 }, { "authors": [ "M.B. Zafar", "I. Valera", "M.G. Rogriguez", "K.P. Gummadi" ], "title": "Fairness Constraints: Mechanisms for Fair Classification,", "venue": "Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "J. Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,", "venue": "Advances in Large Margin Classifiers,", "year": 1999 }, { "authors": [ "V. Nair", "G.E. Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines,", "venue": "in International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "M. Saerens", "P. Latinne", "C. Decaestecker" ], "title": "Adjusting the outputs of a classifier to new a priori probabilities: a simple procedure,", "venue": "Neural Computation,", "year": 2002 }, { "authors": [ "Z. Wang", "K. Qinami", "I.C. Karakozis", "K. Genova", "P. Nair", "K. Hata", "O. 
Russakovsky" ], "title": "Towards fairness in visual recognition: Effective strategies for bias mitigation,", "venue": "in Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "A. Blum", "K. Stangl" ], "title": "Recovering from biased data: Can fairness constraints improve accuracy?", "venue": "FROC,", "year": 2020 }, { "authors": [ "Z. Liu", "P. Luo", "X. Wang", "X. Tang" ], "title": "Deep learning face attributes in the wild,", "venue": "in International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition,", "venue": "in Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "A.G. Howard", "M. Zhu", "B. Chen", "D. Kalenichenko", "W. Wang", "T. Weyand", "M. Andreetto", "H. Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications,", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning is increasingly applied to critical decisions which can have a lasting impact on individual lives, such as for credit lending (Bruckner, 2018), medical applications (Deo, 2015), and criminal justice (Brennan et al., 2009). Consequently, it is imperative to understand and improve the degree of bias of such automated decision-making.\nUnfortunately, despite the fact that bias (or “fairness”) is a central concept in our society today, it is difficult to define it in precise terms. In fact, as people perceive ethical matters differently depending on a plethora of factors including geographical location or culture (Awad et al., 2018), no universally-agreed upon definition for bias exists. Moreover, the definition of bias may depend on the application and might even be ignored in favor of accuracy when the stakes are high, such as in medical diagnosis (Kleinberg et al., 2017; Ingold and Soper, 2016). As such, it is not surprising that several definitions of “unbiased classification” have been introduced. These include statistical parity (Dwork et al., 2012; Zafar et al., 2017a), equality of opportunity (Hardt et al., 2016), and equalized odds (Hardt et al., 2016; Kleinberg et al., 2017). Unfortunately, such definitions are not generally compatible (Chouldechova, 2017) and some might even be in conflict with calibration (Kleinberg et al., 2017). In addition, because fairness is a societal concept, it does not necessarily translate into a statistical criteria (Chouldechova, 2017; Dixon et al., 2018).\nStatistical parity Let X be an instance space and let Y = {0, 1} be the target set in a standard binary classification problem. In the fair classification setting, we may further assume the existence of a (possibly randomized) sensitive attribute s : X → {0, 1, . . . ,K}, where s(x) = k if and only if x ∈ Xk for some total partition X = ∪kXk. For example, X might correspond to the set of job applicants while s indicates their gender. Here, the sensitive attribute can be randomized if, for instance, the gender of an applicant is not a deterministic function of the full instance x ∈ X (e.g. number of publications, years of experience, ...etc). Then, a commonly used criterion for fairness is to require similar mean outcomes across the sensitive attribute. This property is well-captured through the notion of statistical parity (a.k.a. demographic parity) (Corbett-Davies et al., 2017; Dwork et al., 2012; Zafar et al., 2017a; Mehrabi et al., 2019):\nDefinition 1 (Statistical Parity). Let X be an instance space and X = ∪kXk be a total partition of X . A classifier f : X → {0, 1} satisfies statistical parity across all groups X1, . . . , XK if:\nmax k∈{1,2,...,K} Ex[f(x) | x ∈ Xk] − min k∈{1,2,...,K} Ex[f(x) | x ∈ Xk] ≤\nTo motivate and further clarify the definition, we showcase the empirical results on the Adult benchmark dataset (Blake and Merz, 1998) in Figure 1. When tasked with predicting whether the income of individuals is above $50K per year, all considered classifiers exhibit gender-related bias. One way of removing such bias is to enforce statistical parity across genders. Crucially, however, without taking ethnicity into account, different demographic groups may experience different outcomes. In fact, gender bias can actually increase in some minority groups after enforcing statistical parity. 
This can be fixed by redefining the sensitive attribute to be the cross product of both gender and ethnicity (green bars).\nOur main contribution is to present a near-optimal recipe for debiasing models, including deep neural networks, according to Definition 1. Specifically, we formulate the task of debiasing learned models as a regularized optimization problem that is solved efficiently using the projected SGD method. We show how the algorithm produces thresholding rules with randomization near the thresholds, where the width of randomization is controlled by the regularization parameter. We also show that randomization near the threshold is necessary for Bayes risk consistency. While we focus on binary sensitive attributes in our experiments in Section 5, our algorithm and its theoretical guarantees continue to hold for non-binary sensitive attributes as well." }, { "heading": "Statement of Contribution", "text": "1. We derive a near-optimal post-processing algorithm for debiasing learned models (Section 3). 2. We prove theoretical guarantees for the proposed algorithm, including a proof of correctness\nand an explicit bound on the Bayes excess risk (Section 4). 3. We empirically validate the proposed algorithm on benchmark datasets across both classical\nalgorithms and modern DNN architectures. Our experiments demonstrate that the proposed algorithm significantly outperforms previous post-processing methods (Section 5).\nIn Appendix E, we also show how the proposed algorithm can be modified to handle other criteria of bias as well." }, { "heading": "2 RELATED WORK", "text": "Algorithms for fair machine learning can be broadly classified into three groups: (1) pre-processing methods, (2) in-processing methods, and (3) post-processing methods (Zafar et al., 2019).\nPreprocessing algorithms transform the data into a different representation such that any classifier trained on it will not exhibit bias. This includes methods for learning a fair representation (Zemel et al., 2013; Lum and Johndrow, 2016; Bolukbasi et al., 2016; Calmon et al., 2017; Madras et al., 2018; Kamiran and Calders, 2012), label manipulation (Kamiran and Calders, 2009), data augmentation (Dixon et al., 2018), or disentanglement (Locatello et al., 2019).\nOn the other hand, in-processing methods constrain the behavior of learning algorithms in order to control bias. This includes methods based on adversarial learning (Zhang et al., 2018) and constraint-based classification, such as by incorporating constrains on the decision margin (Zafar et al., 2019) or features (Grgić-Hlača et al., 2018). Agarwal et al. (2018) showed that the task of learning an unbiased classifier could be reduced to a sequence of cost-sensitive classification problems, which could be applied to any black-box classifier. One caveat of the latter approach is that it requires solving a linear program (LP) and retraining classifiers, such as neural networks, many times before convergence.\nThe algorithm we propose in this paper is a post-processing method, which can be justified theoretically (Corbett-Davies et al., 2017; Hardt et al., 2016; Menon and Williamson, 2018; Celis et al., 2019). Fish et al. (2016) and Woodworth et al. (2017) fall under this category. However, the former only provides generalization guarantees without consistency results while the latter proposes a twostage approach that requires changes to the original training algorithm. Kamiran et al. 
(2012) also proposes a post-processing algorithm, called Reject Option Classifier (ROC), without providing any theoretical guarantees. In contrast, our algorithm is Bayes consistent and does not alter the original classification method. In Celis et al. (2019) and Menon and Williamson (2018), instance-dependent thresholding rules are also learned. However, our algorithm also learns to randomize around the threshold (Figure 2(a)) and this randomization is key to our algorithm both theoretically as well as experimentally (Appendix C and Section 5). Hardt et al. (2016) learns a randomized post-processing rule but our proposed algorithm outperforms it in all of our experiments (Section 5).\nWoodworth et al. (2017) showed that the post-processing approach can, sometimes, be highly suboptimal. Nevertheless, the latter result does not contradict the statement that our post-processing rule is near-optimal because we assume that the original classifier outputs a monotone transformation of some approximation to the posterior probability p(y = 1 | x) (e.g. margin or softmax output) whereas Woodworth et al. (2017) assumed in their construction that the post-processing rule had access to the binary predictions only.\nWe argue that the proposed algorithm has distinct advantages, particularly for deep neural networks (DNNs). First, stochastic convex optimization methods are well-understood and can scale well to massive amounts of data (Bottou, 2010), which is often the case in deep learning today. Second, the guarantees provided by our algorithm hold w.r.t. the binary predictions instead of using a proxy, such as the margin as in some previous works (Zafar et al., 2017b; 2019). Third, unlike previous reduction methods that would require retraining a deep neural network several times until convergence (Agarwal et al., 2018), which can be prohibitively expensive, our algorithm operates on learned models that are trained once and does not require retraining.\nBesides developing algorithms for fair classification, several recent works focused on other related aspects, such as proposing new definitions for fairness; e.g. demographic parity (Dwork et al., 2012; Mehrabi et al., 2019), equalized odds (Hardt et al., 2016), equality of opportunity/disparate mistreatment (Zafar et al., 2017a; Hardt et al., 2016), and individual fairness (Dwork et al., 2012). Recent works have also established several impossibility results related to fair classification, such as Kleinberg et al. (2017); Chouldechova (2017). In our case, we derive a new impossibility result that holds for any deterministic binary classifier and relate it to the task of controlling the covariance between the classifier’s predictions and the sensitive attribute (Appendix E)." }, { "heading": "3 NEAR-OPTIMAL ALGORITHM FOR STATISTICAL PARITY", "text": "Notation We reserve boldface letters for random variables (e.g. x), small letters for instances (e.g. x), capital letters for sets (e.g. X), and calligraphic typeface for universal sets (e.g. the instance space X ). Given a set S, 1S(x) ∈ {0, 1} is the characteristic function indicating whether x ∈ S. We denote by [n] the set of integers {1, . . . , n} and [x]+ = max{0, x}.\nAlgorithm Given a classifier f : X → [−1, +1] our goal is to post-process the predictions made by f 1 in order to control the bias with respect to a sensitive attribute s : X → [K] as in Definition 1. To this end, instead of learning a deterministic classifier, we consider randomized prediction rules of the form h̃ : X × {1, 2, . . . 
,K} × [−1, 1] → [0, 1], where h̃(x) represents the probability of predicting the positive class given (i) the instance x ∈ X, (ii) the sensitive attribute s(x), and (iii) the classifier’s output f(x).
As discussed in Appendix B, for a post-processing rule h̃ and for each group Xk ⊆ X, the fairness constraint in Definition 1 can be written as |Ex[h̃(x) | x ∈ Xk] − ρ| ≤ ε/2, where ρ ∈ [0, 1] is a hyper-parameter tuned via a validation dataset. On the other hand, minimizing the probability of altering the predictions of the original classifier can be achieved by maximizing the inner product Ex[h̃(x) · f(x)]. Instead of optimizing this quantity directly, which would lead to a pure thresholding rule, we minimize the regularized objective (γ/2) Ex[h̃(x)^2] − Ex[h̃(x) · f(x)] for some regularization parameter γ > 0. This regularization leads to randomization around the threshold, which we show to be critical, both theoretically (Section 4 and Appendix C) and experimentally (Section 5). Using Lagrange duality, we show that the solution reduces to the update rules in Equation 2 with optimization variables {λk, µk}k∈[K], and the corresponding predictor, which outputs +1 for group Xk with probability h̃γ(x), is given by
h̃γ(x) =
  0,                           f(x) ≤ λk − µk
  (f(x) − (λk − µk))/γ,        λk − µk ≤ f(x) ≤ λk − µk + γ      (1)
  1,                           f(x) ≥ λk − µk + γ
Update rules To learn these parameters, one can apply the following update rules (Appendix B):
λs(x) ← max{0, λs(x) − η(ε/2 + ρ + ∂/∂λs(x) ξγ(f(x) − (λs(x) − µs(x))))}
µs(x) ← max{0, µs(x) − η(ε/2 − ρ + ∂/∂µs(x) ξγ(f(x) − (λs(x) − µs(x))))}      (2)
where, again, ρ ∈ [0, 1] is a hyperparameter tuned via a validation dataset, s : X → [K] is the sensitive attribute, and γ > 0 is a regularization parameter that controls the level of randomization. In addition, the function ξγ : R → R+ is given by:
ξγ(w) = w^2/(2γ) · I{0 ≤ w ≤ γ} + (w − γ/2) · I{w > γ}      (3)
Note that ξγ is convex and its derivative ξ′γ is (1/γ)-Lipschitz continuous; it can be interpreted as a differentiable approximation to the ReLU unit (Nair and Hinton, 2010). A full pseudocode of the proposed algorithm is presented in Appendix A." }, { "heading": "4 THEORETICAL ANALYSIS", "text": "Next, we analyze the algorithm. Our first theoretical result shows that the prediction rule in Equation 1, learned through the update rules presented in Section 3, satisfies the desired fairness guarantees on the training sample.
Theorem 1 (Correctness). Let h̃γ : X → [0, 1] be the randomized predictor in Equation 1 learned by applying the update rules in Equation 2, starting with µk = 0, λk = 0 for all k ∈ [K], until convergence. Then, h̃γ satisfies statistical parity w.r.t. {Xk}k∈[K] in the training sample.
The proof of Theorem 1 is presented in Appendix B. The following guarantee, which holds w.r.t. the underlying data distribution, shows that the randomized prediction rule converges to the Bayes optimal unbiased classifier if the original classifier f is Bayes consistent. The proof of the following theorem (Appendix C) is based on the Lipschitz continuity of the decision rule when γ > 0 and the robustness framework of Xu and Mannor (2010).
Footnote 1: Ideally an estimate of some monotone transformation of 2η(x) − 1, where η(x) = p(y = 1|x = x) is the Bayes regressor. This is not a strong assumption because many algorithms can be calibrated to provide probability scores (Platt et al., 1999; Guo et al., 2017).
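Before stating the distributional guarantee, the pieces defined in Section 3 can be summarized in a short sketch: the ramp rule of Eq. (1), the ReLU surrogate ξγ of Eq. (3), and one projected-SGD step of Eq. (2). The function names are ours and the snippet illustrates the update rules rather than reproducing the authors' released implementation.

```python
import numpy as np

def xi(w, gamma):
    # xi_gamma from Eq. (3): 0 for w <= 0, w^2/(2*gamma) on [0, gamma],
    # and w - gamma/2 for w > gamma (a smoothed ReLU).
    return np.where(w <= 0, 0.0,
                    np.where(w <= gamma, w**2 / (2 * gamma), w - gamma / 2))

def xi_prime(w, gamma):
    # Derivative of xi_gamma; (1/gamma)-Lipschitz, with values in [0, 1].
    return np.clip(w / gamma, 0.0, 1.0)

def h_gamma(f_x, lam, mu, gamma):
    # Ramp rule of Eq. (1): probability of predicting +1 for the group of x.
    return float(np.clip((f_x - (lam - mu)) / gamma, 0.0, 1.0))

def sgd_step(f_x, lam, mu, eps, rho, gamma, eta):
    # One update of Eq. (2) for the sampled instance's group. Note that
    # d/d(lam) xi(f - (lam - mu)) = -xi'(.) and d/d(mu) ... = +xi'(.),
    # followed by projection onto the nonnegative orthant.
    g = xi_prime(f_x - (lam - mu), gamma)
    lam = max(0.0, lam - eta * (eps / 2 + rho - g))
    mu = max(0.0, mu - eta * (eps / 2 - rho + g))
    return lam, mu
```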
Theorem 2. Let h* = arg min_{h∈H} E[I{h(x) ≠ y}], where H is the set of binary predictors on X that satisfy fairness according to Definition 1 for ε > 0. Let h̃γ : X → [0, 1] be the randomized learning rule in Equation 1. If h̃γ is trained on a freshly sampled dataset of size N, then there exists a value of ρ ∈ [0, 1] such that the following holds with a probability of at least 1 − δ:
E[I{h̃γ(x) ≠ y}] ≤ E[I{h*(x) ≠ y}] + E|2η(x) − 1 − f(x)| + 2γ + 8(2 + 1/γ)/N^{1/3} + 4 √((2K + 2 log(2/δ))/N),
where η(x) = p(y = 1|x = x) is the Bayes regressor and K is the number of groups Xk.
Consequently, if the original classifier is Bayes consistent and we have N → ∞, γ → 0+ and γN^{1/3} → ∞, then E[I{h̃γ(x) ≠ y}] → E[I{h*(x) ≠ y}] in probability. Hence, the updates converge to the optimal prediction rule subject to the chosen fairness constraint.
Running time As shown in Appendix B, the update rules in Equation 2 perform projected stochastic gradient descent on the following optimization problem:
min_{(µ1,λ1),...,(µK,λK) ≥ 0} F = Ex[(ε/2)(λs(x) + µs(x)) + ρ(λs(x) − µs(x)) + ξγ(f(x) − (λs(x) − µs(x)))]   (4)
We assume with no loss of generality that f(x) ∈ [−1, 1] since f(x) is assumed to be an estimator of 2η(x) − 1 (see Section 3 and Appendix B) and any thresholding rule over f(x) can be transformed into an equivalent rule over a monotone increasing function of f (e.g. using the hyperbolic tangent).
Proposition 1. Let µ(0) = λ(0) = 0 and write µ(t), λ(t) ∈ R^K for the values of the optimization variables after t updates of Equation 2 with a fixed learning rate αt = α. Let µ̄ = (1/T) Σ_{t=1}^T µ(t) and λ̄ = (1/T) Σ_{t=1}^T λ(t). Then,
E[F̄] − F* ≤ (1 + ρ + ε)^2 α/2 + (||µ*||_2^2 + ||λ*||_2^2)/(2Tα),   (5)
where F̄ is the objective function in (4) evaluated at the averaged solution (µ̄, λ̄) and F* is its optimal value. In particular, E[F̄] − F* = O(√(K/T)) when α = O(√(K/T)).
The proof is in Appendix D. Hence, the post-processing rule can be computed efficiently. In practice, we observe fast convergence, as shown in Figure 2(b).
As shown in Figure 2(a), the hyperparameter γ controls the width of randomization around the thresholds. A large value of γ may reduce the accuracy of the classifier. On the other hand, γ cannot be zero because randomization around the threshold is, in general, necessary for Bayes risk consistency, as illustrated in the following example:
Example 1 (Randomization is necessary). Suppose that X = {−1, 0, 1} where p(x = −1) = 1/2, p(x = 0) = 1/3 and p(x = 1) = 1/6. Let η(−1) = 0, η(0) = 1/2 and η(1) = 1. In addition, let s ∈ {0, 1} be a sensitive attribute, where p(s = 1|x = −1) = 1/2, p(s = 1|x = 0) = 1, and p(s = 1|x = 1) = 0. Then, the Bayes optimal prediction rule f*(x) subject to statistical parity (ε = 0) satisfies: p(f*(x) = 1|x = −1) = 0, p(f*(x) = 1|x = 0) = 7/10 and p(f*(x) = 1|x = 1) = 1.
Note that the Bayes excess risk bound in Theorem 2 is vacuous when γ = 0. Therefore, γ controls a trade-off depending on how crucial randomization is around the thresholds (e.g. in k-NN, where the classifier’s scores come from a finite set, or in deep neural networks that tend to produce scores concentrated around {−1, +1}). In our experiments, γ is always chosen using a validation set.
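As a quick sanity check on Example 1, the following sketch (ours) verifies that the stated randomized rule attains exact statistical parity with risk 1/6, while the only deterministic rules that satisfy exact parity here are the two constant classifiers, both with strictly larger risk.

```python
from itertools import product

px  = {-1: 1/2, 0: 1/3, 1: 1/6}   # p(x)
eta = {-1: 0.0, 0: 0.5, 1: 1.0}   # p(y = 1 | x)
ps1 = {-1: 0.5, 0: 1.0, 1: 0.0}   # p(s = 1 | x)

def group_rate(q, s):
    # E[f(x) | s], where q[x] = p(f(x) = 1 | x).
    w = {x: px[x] * (ps1[x] if s == 1 else 1 - ps1[x]) for x in px}
    return sum(w[x] * q[x] for x in px) / sum(w.values())

def risk(q):
    # Misclassification probability of the randomized rule q.
    return sum(px[x] * (eta[x] * (1 - q[x]) + (1 - eta[x]) * q[x]) for x in px)

q_star = {-1: 0.0, 0: 0.7, 1: 1.0}  # the rule claimed in Example 1
assert abs(group_rate(q_star, 1) - group_rate(q_star, 0)) < 1e-12  # exact parity
print(risk(q_star))  # 1/6

# Every deterministic rule with exact parity does strictly worse.
for bits in product([0.0, 1.0], repeat=3):
    q = dict(zip(px, bits))
    if abs(group_rate(q, 1) - group_rate(q, 0)) < 1e-12:
        assert risk(q) > risk(q_star)
```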
" }, { "heading": "5 EMPIRICAL EVALUATION", "text": "Experiment Setup We compare against three post-processing methods: (1) the post-processing algorithm of Hardt et al. (2016), (2) the shift inference method, first introduced in (Saerens et al., 2002) and used more recently in (Wang et al., 2020), and (3) the Reject Option Classifier (ROC) (Kamiran et al., 2012). We use the implementation of the algorithm of Hardt et al. (2016) in the FairLearn software package (Dudik et al., 2020). The training data used for the post-processing methods is always a fresh sample, i.e. different from the data used to train the original classifiers. The value of the hyper-parameter θ of the ROC algorithm is chosen in the grid {0.01, 0.02, . . . , 1.0}. When ROC fails, its solution with the minimum bias is reported. In the proposed algorithm, the parameter γ is chosen in the grid {0.01, 0.02, 0.05, 0.1, 0.2, . . . , 1.0} while ρ is chosen in the grid E[y] ± {0, 0.05, 0.1}. All hyper-parameters are selected based on a separate validation dataset.
Tabular Data We empirically evaluate the performance of the proposed algorithm and the baselines on two real-world datasets, namely the Adult income dataset and the Default of Credit Card Clients (DCCC) dataset, both taken from the UCI Machine Learning Repository (Blake and Merz, 1998). The Adult dataset contains 48,842 records with 14 attributes each and the goal is to predict if the income of an individual exceeds $50K per year. The DCCC dataset contains 30,000 records with 24 attributes, and the goal is to predict if a client will default on their credit card payment. Both datasets include sensitive attributes, such as sex and age. In Figure 1 we showcased why, in some cases, the sensitive attribute can be the cross product of multiple features (e.g. religion, gender, and race). In our experiments in this section, we define the sensitive class to be the class of females. In the DCCC dataset, we additionally introduce bias in the training set for the purpose of the experiment: if s(x) = y(x) we keep the instance and otherwise drop it with probability 0.5.
We train four classifiers on each dataset: (1) random forests with maximum depth 10, (2) k-NN with k = 10, (3) a two-layer fully connected neural network with 128 hidden nodes, and (4) logistic regression. For the latter, we fine-tune the parameter C in a grid of values chosen on a logarithmic scale between 10^{-4} and 10^{4} using 10-fold cross validation. The learning rate in our algorithm is fixed to 10^{-1} (K/T)^{1/2}, where T is the number of steps, and ε = 0.
Table 1 shows the bias and accuracy on test data after applying each post-processing method. The column marked as “original” corresponds to the original classifier without any alteration. As shown in the table, both our proposed algorithm and the algorithm of Hardt et al. (2016) eliminate bias in all classifiers. By contrast, the shift-inference method does not succeed at controlling statistical parity while the ROC method can fail when the output of the original classifier is discrete, such as in k-NN, because it does not learn to randomize. Moreover, the proposed algorithm has a much lower impact on the test accuracy compared to Hardt et al. (2016) and can even improve it in certain cases. The fact that fairness can sometimes improve accuracy was recently noted by Blum and Stangl (2020). The full tradeoff curves between bias and performance are provided in Figure 3.
CelebA Dataset Our second set of experiments builds on the task of predicting the “attractiveness” attribute in the CelebA dataset (Liu et al., 2015). CelebA contains 202,599 images of celebrities annotated with 40 binary attributes, including gender. 
We use two standard deep neural network architectures: ResNet50 (He et al., 2016) and MobileNet (Howard et al., 2017), trained from scratch or pretrained on ImageNet. We present the results in Table 2. We observe that the proposed algorithm significantly outperforms the post-processing algorithm of Hardt et al. (2016) and performs, at least, as well as the ROC algorithm whenever the latter algorithm succeeds. Often, however, ROC fails at debiasing the deep neural networks because it does not learn to randomize when most scores produced by neural networks are concentrated around the set {−1, +1}.
We investigated the strong performance compared to that of Hardt et al. (2016) and found that it is due to the specific form of randomization used by the proposed algorithm. As shown in Figure 4, the post-processing algorithm of Hardt et al. (2016) uses a fixed probability when randomizing between two thresholds. For CelebA trained from scratch, for example, the post-processing rule of Hardt et al. (2016) predicts nearly uniformly at random when ResNet50 predicts the negative class for males. In contrast, our algorithm uses a ramp function that takes the confidence of the scores into account. In Figure 4, in particular, the male instances with scores close to −1 are flipped with probability ≈ 0.15, as opposed to ≈ 0.5 in Hardt et al. (2016), and this difference is compensated for by flipping all examples with scores larger than ≈ −0.9 and all female instances with scores less than ≈ 0.9. Hence, less randomization is applied when the original classifier is more confident. Lastly, one important observation we note in Table 2 is the impact of pre-training: pretraining in our experiments helps in achieving a lower test error rate even after eliminating bias. In other words, pretraining seems to reduce the cost of debiasing trained models." }, { "heading": "6 CONCLUDING REMARKS", "text": "In this paper, we propose a near-optimal post-processing algorithm for debiasing trained machine learning models. The proposed algorithm is scalable, does not require retraining the classifiers, and has a limited impact on the test accuracy. In addition to providing strong theoretical guarantees, we show that it outperforms previous post-processing methods for unbiased classification on standard benchmarks across classical and modern machine learning models." }, { "heading": "A FULL ALGORITHM", "text": "Algorithm 1: A Pseudocode of the Proposed Algorithm for Conditional Statistical Parity.
Data: γ > 0; ρ ∈ [0, 1]; ε ≥ 0; f : X → [−1, +1]; s : X → [K]
Result: Optimal values of thresholds: (λ1, µ1), . . . , (λK, µK).
Training: Initialize (λ1, µ1), . . . , (λK, µK) to zeros. Then, repeat until convergence:
1. Sample an instance x ∼ p(x)
2. Perform the updates:
λs(x) ← max{0, λs(x) − η(ε/2 + ρ + ∂/∂λs(x) ξγ(f(x) − (λs(x) − µs(x))))}
µs(x) ← max{0, µs(x) − η(ε/2 − ρ + ∂/∂µs(x) ξγ(f(x) − (λs(x) − µs(x))))}
where ξγ is given by Eq. (3).
Prediction: Given an instance x in the group Xk, predict the label +1 with probability:
h̃(x) =
  0,                           f(x) ≤ λk − µk
  (f(x) − (λk − µk))/γ,        λk − µk ≤ f(x) ≤ λk − µk + γ
  1,                           f(x) ≥ λk − µk + γ" }, { "heading": "B PROOF OF THEOREM 1", "text": "" }, { "heading": "B.1 CONSTRAINED CONVEX FORMULATION", "text": "Suppose we have a binary classifier on the instance space X. 
We would like to construct an algorithm for post-processing the predictions made by that classifier such that we control the bias with respect to a set of pairwise disjoint groups X1, . . . , XK ⊆ X according to Definition 1. We assume that the output of the classifier f : X → [−1, +1] is an estimate to 2η(x)−1, where η(x) = p(y = 1|x = x) is the Bayes regressor. This is not a strong assumption because many algorithms can be calibrated to provide probability scores (Platt et al., 1999; Guo et al., 2017) so the assumption is valid. We consider randomized rules of the form:\nh̃ : X × {1, 2, . . . ,K} × [−1, 1]→ [0, 1], whose arguments are: (1) the instance x ∈ X , (2) the sensitive attribute s(x) ∈ [K], and (3) the original classifier’s score f(x). Because randomization is sometimes necessary as proved in Section 4, h̃(x) is the probability of predicting the positive class when the instance is x ∈ X .\nSuppose we have a training sample of size N , which we will denote by S. Let qi = h̃(xi) ∈ [0, 1] for the i-th instance in S. For each group Xk ⊆ S, the fairness constraint in Definition 1 over the training sample can be written as:\n1\n|Xk| ∣∣∣ ∑ i∈Xk qi − ρ ∣∣∣ ≤ 2 ,\nfor some hyper-parameter ρ > 0. This holds by the triangle inequality.\nTo learn f̃ , we propose solving the following regularized optimization problem:\nmin 0≤qi≤1\nN∑ i=1 (γ/2) q2i − f(xi) qi s.t. ∀Xk ∈ G : ∣∣ ∑ i∈Xk qi − ρ ∣∣ ≤ k (6)\nwhere γ > 0 is a regularization parameter and k = |Xk| /2." }, { "heading": "B.2 REDUCTION TO UNCONSTRAINED OPTIMIZATION", "text": "Because the groups Xk are pairwise disjoint, the optimization problems in (6) decomposes into K separate suboptimization problems, one for each group Xk. Each sub-optimization problem can be\nwritten in the following general form:\nmin 0≤qi≤1\nM∑ i=1 γ 2 q2i − f(xi)qi\ns.t. M∑ i=1 (ziqi − b) ≤ , − M∑ i=1 (ziqi − b) ≤ ′\nTo recall, ′ = M /2. The Lagrangian is:\nL(q, α, β, λ, µ) =∑ i (γ 2 q2i − f(xi)qi ) + λ( ∑ i (ziqi − b)− ′)− µ( ∑ i (ziqi − b) + ′) + ∑ i αi(qi − 1)− ∑ i βiqi\nTaking the derivative w.r.t. qi gives us:\nqi = 1\nγ\n( f(xi)− (λ− µ)zi − αi + βi ) Plugging this back, the dual problem becomes:\nmin q,λ,µ,α,β\n∑ i (γ 2 q2i + b(λ− µ) ) + (λ+ µ) ′ + ∑ i αi\ns.t. qi = 1\nγ\n( f(xi)− (λ− µ)zi − αi + βi ) λ, µ, αi, βi ≥ 0\nNext, we eliminate variables. By eliminating βi, we have:\nmin q,λ,µ,α,β\n∑ i (γ 2 q2i + b(λ− µ) ) + (λ+ µ) ′ + ∑ i αi\ns.t. qi − 1\nγ\n( f(xi)− (λ− µ)zi − αi ) ≥ 0\nλ, µ, αi ≥ 0 Equivalently:\nmin q,λ,µ,α,β\n∑ i (γ 2 q2i + b(λ− µ) ) + (λ+ µ) ′ + ∑ i αi\ns.t. αi ≥ f(xi)− γqi − (λ− µ)zi λ, µ, αi ≥ 0\nNext, we eliminate αi to obtain:\nmin q,λ,µ\n∑ i (γ 2 q2i + b(λ− µ) ) + (λ+ µ) ′ + ∑ i [ f(xi)− γqi − (λ− µ)zi ]+ λ, µ ≥ 0\nFinally, let’s eliminate the qi variables. For a given optimal µ and λ, it is straightforward to observe that the minimizer q? to γ/2q2+[w−γq]+ must lie in the set {0, w/γ, 1}. In particular, if w/γ ≤ 0, then q? = 0. If w/γ ≥ 1, then q? = 1. Note here that we make use of the fact that γ > 0. So, the optimal value of q? to γ/2q2 + [w − γq]+ is:\nξγ(w) = 0 wγ ≤ 0 w2 2γ 0 ≤ w γ ≤ 1\nw − γ2 w γ ≥ 1\nFrom this, the optimization problem reduces to:\nmin λ,µ≥0 N∑ i=1 ( b(λ− µ) + ′(λ+ µ) + ξγ(f(xi)− (λ− µ)zi) ) (7)\nThis is a differentiable objective function and can be solved quickly using the projected gradient descent method (Boyd and Mutapcic, 2008). The projection step here is taking the positive parts of λ and µ. This leads to the update rules in Algorithm 1.\nWhat about the solution? 
Given λ and µ, the solution of qi is a minimizer to; γ\n2 q2i +\n[ f(xi)− γqi − (λ− µ)zi ]+ As stated earlier, the solution is:\nqi = 0, f(xi) ≤ (λ− µ)zi (1/γ)(f(xi)− (λ− µ)zi), γ(λ− µ)zi ≤ f(xi) ≤ (λ− µ)zi + γ 1, f(xi) ≥ (λ− µ)zi + γ\n(8)\nSo, we have a ramp function. In the proposed algorithm, we have zi = 1 and b = ρ for all examples. This proves Theorem 1." }, { "heading": "C PROOF OF THEOREM 2", "text": "" }, { "heading": "C.1 OPTIMAL UNBIASED PREDICTORS", "text": "We begin by proving the following result, which can be of independent interest. Theorem 3. Let f? = arg minf :X→{0,1} E[I{f(x) 6= y}] be the Bayes optimal decision rule subject to group-wise affine constraints of the form E[wk(x) · f(x) | x ∈ Xk] = bk for some fixed partition X = ∪kXk. If wk : X → R and bk ∈ R are such that there exists a constant c ∈ (0, 1) in which p(f(x) = 1) = c will satisfy all the affine constraints, then f? satisfies p(f?(x) = 1) = I{η(x) > tk} + τk I{η(x) = tk}, where η(x) = p(y = 1|x = x) is the Bayes regressor, tk ∈ [0, 1] is a threshold specific to the group Xk ⊆ X , and τk ∈ [0, 1].\nProof. Minimizing the expected misclassification error rate of a classifier f is equivalent to maximizing:\nE[f(x) · y + (1− f(x)) · (1− y)] = E [ E[f(x) · y + (1− f(x)) · (1− y)] ∣∣ x] = E [ E[f(x) · (2η(x)− 1)]\n∣∣ x]+ E[1− η(x)] Hence, selecting f that minimizes the misclassification error rate is equivalent to maximizing:\nE[f(x) · (2η(x)− 1)] (9) Instead of maximizing this directly, we consider the regularized form first. Writing g(x) = 2η(x)− 1, the optimization problem is:\nmin 0≤f(x)≤1\n(γ/2)E[f(x)2]− E[f(x) · g(x)] s.t. E[w(x) · f(x)] = b\nHere, we focused on one subset Xk because the optimization problem decomposes into K separate optimization problems, one for each Xk. If there exists a constant c ∈ (0, 1) such that f(x) = c satisfies all the equality constraints, then Slater’s condition holds so strong duality holds (Boyd and Vandenberghe, 2004).\nThe Lagrangian is:\n(γ/2)E[f(x)2]− E[f(x) · g(x)] + µ(E[w(x) · f(x)]− b) + E[α(x)(f(x)− 1)]− E[β(x)f(x)], where α(x), β(x) ≥ 0 and µ ∈ R are the dual variables. Taking the derivative w.r.t. the optimization variable f(x) yields:\nγf(x) = g(x)− µw(x)− α(x) + β(x) (10) Therefore, the dual problem becomes:\nmax α(x),β(x)≥0\n−(2γ)−1 E[(g(x)− µw(x)− α(x) + β(x))2]− bµ− E[α(x)]\nWe use the substitution in Eq. (10) to rewrite it as:\nmin α(x),β(x)≥0\n(γ/2)E[f(x)2] + bµ+ E[α(x)]\ns.t.∀x ∈ X : γf(x) = g(x)− µw(x)− α(x) + β(x) Next, we eliminate the multiplier β(x) by replacing the equality constraint with an inequality:\nmin α(x)≥0\n(γ/2)E[f(x)2] + bµ+ E[α(x)]\ns.t.∀x ∈ X : g(x)− γf(x)− µw(x)− α(x) ≤ 0 Finally, since α(x) ≥ 0 and α(x) ≥ g(x)− γf(x)− µw(x), the optimal solution is the minimizer to:\nmin f :X→R\n(γ/2)E[f(x)2] + bµ+ E[max{0, g(x)− γf(x)− µw(x)}]\nNext, let µ? be the optimal solution of the dual variable µ. Then, the optimization problem over f decomposes into separate problems, one for each x ∈ X . We have:\nf(x) = arg min τ∈R\n{ (γ/2)τ2 + [g(x)− γτ − µ? w(x)]+ } Using the same argument in Appendix B, we deduce that f(x) is of the form:\nf(x) = 0, g(x)− µ? w(x) ≤ 0 1 g(x)− µ? w(x) ≥ γ (1/γ) (g(x)− µ? w(x)) otherwise\nFinally, the statement of the theorem holds by taking the limit as γ → 0+." }, { "heading": "C.2 EXCESS RISK BOUND", "text": "In this section, we write D to denote the underlying probability distribution and write S to denote the uniform distribution over the training sample (a.k.a. 
empirical distribution).\nThe parameter ρ stated in the theorem is given by: ρ = (1/2) (\nmax k∈{1,2,...,K} Ex[h?(x) | x ∈ Xk] + min k∈{1,2,...,K}\nEx[h?(x) | x ∈ Xk] )\nNote that, by definition, the optimal classifier h? that satisfies statistical parity also satisfies the constraint in (6) with this choice of ρ. Hence, with this choice of ρ, h? remains optimal among all possible classifiers.\nObserve that the decision rule depends on x only via f(x) ∈ [−1,+1]. Hence, we write z = f(x). Since the thresholds are learned based on a fresh sample of data, the random variables zi are i.i.d. In light of Eq. 9, we would like to minimize the expectation of the loss l(h̃γ , x) = −f(x) · h̃γ(x) = −z · q(z) .= ζ(z) for some function q : [−1,+1] → [0, 1] of the form shown in 2(a). Note that ζ is 2(1 + 1/γ)-Lipschitz continuous within the same group and sensitive class. This is because the thresholds are always in the interval [−1− γ, 1 + γ]; otherwise moving beyond this interval would not change the decision rule.\nLet h̃γ be the decision rule learned by the algorithm. Using Corollary 5 in (Xu and Mannor, 2010), we conclude that with a probability of at least 1− δ:\n∣∣ED[l(h̃γ , x)]− ES [l(h̃γ , x)]∣∣ ≤ inf R≥1 {( 4 R (1 + 1 γ ) + 2\n√ 2(R+K) log 2 + 2 log 1δ\nN\n} (11)\nHere, we used the fact that the observations f(x) are bounded in the domain [−1, 1] and that we can first partition the domain into groups Xk (K subsets) in addition to partitioning the interval [−1, 1] into R smaller sub-intervals and using the Lipschitz constant. Choosing R = N 1 3 and simplifying gives us with a probability of at least 1− δ:\n∣∣ED[l(h̃γ , x)]− ES [l(h̃γ , x)]∣∣ ≤ 4(2 + 1γ ) N 1 3 + 2\n√ 2K + 2 log 1δ\nN\nThe same bound also applies to the decision rule h?γ that results from applying optimal threshold with width γ > 0 (here, “optimal” is with respect to the underlying distribution) because the -cover (Definition 1 in (Xu and Mannor, 2010)) is independent of the choice of the thresholds. By the union bound, we have with a probability of at least 1− δ, both of the following inequalities hold:\n∣∣ED[l(h̃γ , x)]− ES [l(h̃γ , x)]∣∣ ≤ 4(2 + 1γ ) N 1 3 + 2\n√ 2K + 2 log 2δ\nN (12)\n∣∣ED[l(h?γ , x)]− ES [l(h?γ , x)]∣∣ ≤ 4(2 + 1γ ) N 1 3 + 2\n√ 2K + 2 log 2δ\nN (13)\nIn particular:\nED[l(h̃γ , x)] ≤ ES [l(h̃γ , x)] + 4(2 + 1γ )\nN 1 3\n+ 2\n√ 2K + 2 log 2δ\nN\n≤ ES [l(h?γ , x)] + γ + 4(2 + 1γ )\nN 1 3\n+ 2\n√ 2K + 2 log 2δ\nN\n≤ ED[l(h?γ , x)] + γ + 8(2 + 1γ )\nN 1 3\n+ 4\n√ 2K + 2 log 2δ\nN\nThe first inequality follows from Eq. (12). The second inequality follows from the fact that h̃γ is an empirical risk minimizer to the regularized loss, where E[f̃(x)2] ≤ 1 since f̃(x) ∈ [0, 1]. The last inequality follows from Eq. (13).\nFinally, we know that the thresholding rule h?γ with width γ > 0 is, by definition, a minimizer to:\n(γ/2)E[h(x)2]− E[h(x) · f(x)] among all possible bounded functions h : X → [0, 1] subject to the desired fairness constraints. Therefore, we have:\n(γ/2)E[h?γ(x)2]− E[h?γ(x) · f(x)] ≤ (γ/2)E[h?(x)2]− E[h?(x) · f(x)] Hence: E[l(h?γ , x)] = −E[h?γ(x) · f(x)] ≤ γ + E[l(h?, x)] This implies the desired bound:\nED[l(h̃γ , x)] ≤ ED[l(h?, x)] + 2γ + 8(2 + 1γ )\nN 1 3\n+ 4\n√ 2K + 2 log 2δ\nN\nTherefore, we have consistency if N → ∞, γ → 0+ and γN 13 → ∞. For example, this holds if γ = O(N− 1 6 ).\nSo far, we have assumed that the output of the original classifier coincides with the Bayes regressor. If the original classifier is Bayes consistent, i.e. 
E[|2η(x) − 1 − f(x)|] → 0 as N → ∞, then we have Bayes consistency of the post-processing rule by the triangle inequality." }, { "heading": "D PROOF OF PROPOSITION 1", "text": "Proof. Since |ξ′γ(w)| ≤ 1, the gradient at a point x during SGD has a square `2-norm bounded by ||(1 + ρ+ )2 at all rounds. Following the proof steps of (Boyd and Mutapcic, 2008) and using the fact that projections are contraction mappings, one obtains:\n1\nT T∑ t=1 ( E[F (t)]− F (µ?) ) ≤ ||µ ?||22 + ||γ?||22 + (1 + ρ+ )2Tα2 2Tα\n= (1 + ρ+ )2α 2 + ||µ?||22 + ||γ?||22 2Tα\nBy Jensen’s inequality, we have 1T ∑T t=1 E[F (t)] ≤ E[F (µ̄)]. Plugging this into the earlier results yields:\nE[F̄ ]− F (µ?) ) ≤ (1 + ρ+ ) 2α\n2 + ||µ?||22 + ||γ?||22 2Tα" }, { "heading": "E EXTENSION TO OTHER CRITERIA", "text": "" }, { "heading": "E.1 CONTROLLING THE COVARIANCE", "text": "The proposed algorithm can, sometimes, be adjusted to control bias according to other criteria as well besides statistical parity. For example, we demonstrate in this section how the proposed postprocessing algorithm can be adjusted to control the covariance between the classifier’s prediction and the sensitive attribute when both are binary random variables.\nLet a,b, c ∈ {0, 1} be random variables. Let C(a,b) .= E[a · b] − E[a] · E[b] be their covariance, and C(a,b | c) their covariance conditioned on c:\nC(a,b | c = c) = E[a · b | c = c]− E[a | c = c] · E[b | c = c]. (14) Then, one possible criterion for measuring bias is to measure the conditional/unconditional covariance between the classifier’s predictions and the sensitive attribute when both are binary random variables. Because the random variables are binary, it is straightforward to show that achieving zero covariance implies independence.\nSuppose we have a binary classifier on the instance space X . We would like to construct an algorithm for post-processing the predictions made by that classifier such that we guarantee |C ( f(x), 1S(x) | x ∈ Xk ) | ≤ , where X = ∪kXk is a total partition of the instance space. Informally, this states that the fairness guarantee with respect to the senstiive attribute 1S : X → {0, 1} holds within each subgroup Xk.\nWe assume, again, that the output of the classifier f : X → [−1, +1] is an estimate to 2η(x) − 1, where η(x) = p(y = 1|x = x) is the Bayes regressor and consider randomized rules of the form:\nh̃ : X × {0, 1} × {1, 2, . . . ,K} × [−1, 1]→ [0, 1], whose arguments are: (i) the instance x ∈ X , (ii) the sensitive attribute 1S : X → {0, 1} , (iii) the sub-group membership k : X → [K], and (iv) the original classifier’s score f(x). Because randomization is sometimes necessary as proved in Section 4, h̃(x) is the probability of predicting the positive class when the instance is x ∈ X .\nSuppose we have a training sample of size N , which we will denote by S. Let qi = h̃(xi) ∈ [0, 1] for the i-th instance in S. For each group Xk ⊆ S, the desired fairness constraint on the covariance can be written as:\n1\n|Xk| ∣∣∣ ∑ i∈Xk (1S(i)− ρk) qi ∣∣∣ ≤ ,\nwhere ρk = Ex[1S(x) | x ∈ Xk]. This is because: 1 |Xk| ∑ i∈Xk (1S(i)− ρk) qi = 1 |Xk| ∑ i∈Xk 1S(i) f̃(i)− ρk |Xk| ∑ i∈Xk f̃(i)\n= E[1S(x) · f̃(x) | x ∈ Xk]− E[1S(x)| x ∈ Xk] · E[f̃(x) | x ∈ Xk] = C(f̃(x), 1S(x) | x ∈ Xk),\nwhere the expectation is over the training sample. Therefore, in order to learn h̃, we solve the regularized optimization problem:\nmin 0≤qi≤1\nN∑ i=1 (γ/2) q2i − f(xi) qi s.t. ∀Xk ∈ G : ∣∣ ∑ i∈Xk (1S(i)− ρk) qi ∣∣ ≤ k (15)\nwhere γ > 0 is a regularization parameter and k = |Xk| . 
This is of the same general form analyzed in Section B.2. Hence, the same algorithm can be applied with b = 0 and zi = 1S(i)− ρk.\nE.2 IMPOSSIBILITY RESULT\nThe previous algorithm for controlling covariance requires that the subgroups Xk be known in advance. Indeed, our next impossibility result shows that this is, in general, necessary. In other words, a deterministic classifier f : X → {0, 1} cannot be universally unbiased with respect to a sensitive class S across all possible known and unknown groups unless the representation x has zero mutual information with the sensitive attribute or if f is constant almost everywhere. As a corollary, the groups Xk have to be known in advance. Proposition 2 (Impossibility result). Let X be the instance space and Y = {0, 1} be a target set. Let 1S : X → {0, 1} be an arbitrary (possibly randomized) binary-valued function on X and define γ : X → [0, 1] by γ(x) = p(1S(x) = 1 | x = x), where the probability is evaluated over the randomness of 1S : X → {0, 1}. Write γ̄ = Ex[γ(x)]. Then, for any binary predictor f : X → {0, 1} it holds that\nsup π:X→{0,1}\n{ Eπ(x) ∣∣C(f(x), γ(x)| π(x))∣∣} ≥ 1 2 Ex|γ(x)− γ̄| ·min{Ef, 1− Ef}, (16)\nwhere C ( f(x), γ(x)| π(x) ) is defined in Equation 14.\nProof. Fix 0 < β < 1 and consider the subset: W = {x ∈ X : (γ(x)− γ̄) · (f(x)− β) > 0},\nand its complement W̄ = X \\W . Since f(x) ∈ {0, 1}, the sets W and W̄ are independent of β as long as it remains in the open interval (0, 1). More precisely:\nW = { γ(x)− γ̄ > 0 ∧ f(x) = 1 γ(x)− γ̄ ≤ 0 ∧ f(x) = 0\nNow, for any set X ⊆ X , let pX be the projection of the probability measure p(x) on the set X (i.e. pX(x) = p(x)/p(X)). Then, with a simple algebraic manipulation, one has the identity:\nEx∼pX [(γ(x)− γ̄) (f(x)− β)] = C(γ(x), f(x); x ∈ X) + (Ex∼pX [γ]− γ̄) · (Ex∼pX [f ]− β) (17)\nBy definition of W , we have: Ex∼pW [(γ(x)− γ̄)(f(x)− β)] = Ex∼pW [|γ(x)− γ̄||f(x)− β|] ≥ min{β, 1− β}Ex∼pW |γ(x)− γ̄|\nCombining this with Eq. (17), we have: C(γ(x), f(x); x ∈W ) ≥ min{β, 1−β}Ex∼pW |γ(x)− γ̄|+(Ex∼pW [γ]− γ̄)(β−Ex∼pW [f ]) (18) Since the set W does not change when β is varied in the open interval (0, 1), the lower bound holds for any value of β ∈ (0, 1). W set:\nβ = f̄ . =\n1\n2\n( Ex∼pW f(x) + Ex∼pW̄ f(x) ) (19)\nSubstituting the last equation into Eq. (18) gives the lower bound: C(γ(x), f(x); x ∈W ) ≥min{f̄ , 1− f̄} · Ex∼pW |γ(x)− γ̄|\n+ 1\n2 (Ex∼pW [γ]− γ̄)\n( Ex∼pW f(x)− Ex∼pW̄ f(x) ) (20)\nRepeating the same analysis for the subset W̄ , we arrive at the inequality: C(γ(x), f(x); x ∈ W̄ ) ≤−min{f̄ , 1− f̄}Ex∼pW̄ |γ(x)− γ̄|\n+ 1\n2 (Ex∼pW̄ [γ]− γ̄)\n( Ex∼pW f(x)− Ex∼pW̄ f(x) ) (21)\nWriting π(x) = 1W (x), we have by the reverse triangle inequality: Eπ(x) ∣∣C(f(x), γ(x); π(x))∣∣ ≥ min{f̄ , 1− f̄} · Ex|γ(x)− γ̄| (22) Finally:\n2f̄ ≥ p(x ∈W ) · Ex∼pW f(x) + p(x ∈ W̄ ) · Ex∼pW̄ f(x) = E[f ] Similarly, we have 2(1− f̄) ≥ 1− E[f ]. Therefore:\nmin{f̄ , 1− f̄} ≥ 1 2\nmin{Ef, 1− Ef} Combining this with Eq. (22) establishes the statement of the proposition." } ]
2020
A NEAR-OPTIMAL ALGORITHM FOR DEBIASING TRAINED MACHINE LEARNING MODELS
SP:90ffef024018f59b3bde23aa2e2a4677602d41e8
[ "This paper presents a variant of Transformer where low-dimension matrix multiplications and single-head attention are used. Stacked group-linear-transformation (GLT) are applied on input of each layer to perform dimension growth and then reduction. The paper is well-written and easy to follow. Experiments demonstrate the propose architecture matches or improves the performance of baseline Transformers with fewer parameters." ]
We introduce a deep and light-weight transformer, DeLighT, that delivers similar or better performance than standard transformer-based models with significantly fewer parameters. DeLighT more efficiently allocates parameters both (1) within each Transformer block, using the DeLighT transformation, a deep and light-weight transformation, and (2) across blocks, using block-wise scaling, which allows for shallower and narrower DeLighT blocks near the input and wider and deeper DeLighT blocks near the output. Overall, DeLighT networks are 2.5 to 4 times deeper than standard transformer models and yet have fewer parameters and operations. Experiments on benchmark machine translation and language modeling tasks show that DeLighT matches or improves the performance of baseline Transformers with 2 to 3 times fewer parameters on average.
[ { "affiliations": [], "name": "LIGHT-WEIGHT TRANSFORMER" }, { "affiliations": [], "name": "Sachin Mehta" }, { "affiliations": [], "name": "Marjan Ghazvininejad" }, { "affiliations": [], "name": "Srinivasan Iyer" }, { "affiliations": [], "name": "Luke Zettlemoyer" }, { "affiliations": [], "name": "Hannaneh Hajishirzi" } ]
[ { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Li Wan", "Matthew Zeiler", "Sixin Zhang", "Yann Le Cun", "Rob Fergus" ], "title": "Regularization of neural networks using dropconnect", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Stephen Merity", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Regularizing and optimizing LSTM language models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sachin Mehta", "Rik Koncel-Kedziorski", "Mohammad Rastegari", "Hannaneh Hajishirzi" ], "title": "Pyramidal recurrent unit for language modeling", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": "In Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Iz Beltagy", "Matthew E. 
Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": null, "year": 2004 }, { "authors": [ "Alessandro Raganato", "Jörg Tiedemann" ], "title": "An analysis of encoder representations in transformer-based machine translation", "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Gino Brunner", "Yang Liu", "Damian Pascual", "Oliver Richter", "Massimiliano Ciaramita", "Roger Wattenhofer" ], "title": "On identifiability in transformers", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Elena Voita", "Rico Sennrich", "Ivan Titov" ], "title": "The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Paul Michel", "Omer Levy", "Graham Neubig" ], "title": "Are sixteen heads really better than one", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alessandro Raganato", "Yves Scherrer", "Jörg Tiedemann" ], "title": "Fixed encoder self-attention patterns in transformerbased machine translation", "venue": "arXiv preprint arXiv:2002.10260,", "year": 2020 }, { "authors": [ "Yi Tay", "Dara Bahri", "Donald Metzler", "Da-Cheng Juan", "Zhe Zhao", "Che Zheng" ], "title": "Synthesizer: Rethinking self-attention in transformer models", "venue": "arXiv preprint arXiv:2005.00743,", "year": 2020 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhanghao Wu", "Zhijian Liu", "Ji Lin", "Yujun Lin", "Song Han" ], "title": "Lite transformer with long-short range attention", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "David So", "Quoc Le", "Chen Liang" ], "title": "The evolved transformer", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yann N Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mohammad Shoeybi", "Mostofa Patwary", "Raul Puri", "Patrick LeGresley", "Jared Casper", "Bryan Catanzaro" ], "title": "Megatron-lm: Training multi-billion parameter language models using gpu model parallelism", "venue": null, "year": 1909 }, { "authors": [ "Mingxing Tan", "Quoc V. 
Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Qiang Wang", "Bei Li", "Tong Xiao", "Jingbo Zhu", "Changliang Li", "Derek F. Wong", "Lidia S. Chao" ], "title": "Learning deep transformer models for machine translation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume", "year": 2016 }, { "authors": [ "Alexei Baevski", "Michael Auli" ], "title": "Adaptive input representations for neural language modeling", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Édouard Grave", "Armand Joulin", "Moustapha Cissé", "David Grangier", "Hervé Jégou" ], "title": "Efficient softmax approximation for GPUs", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sachin Mehta", "Rik Koncel-Kedziorski", "Mohammad Rastegari", "Hannaneh Hajishirzi" ], "title": "DeFINE: Deep Factorized Input Token Embeddings for Neural Sequence Modeling", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Patrick Chen", "Si Si", "Yang Li", "Ciprian Chelba", "Cho-Jui Hsieh" ], "title": "Groupreduce: Block-wise low-rank approximation for neural language model shrinking", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhiqing Sun", "Hongkun Yu", "Xiaodan Song", "Renjie Liu", "Yiming Yang", "Denny Zhou" ], "title": "Mobilebert: a compact task-agnostic bert for resource-limited devices", "venue": "In Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In International Conference for Representation Learning,", "year": 2016 }, { "authors": [ "Elena Voita", "David Talbot", "Fedor Moiseev", "Rico Sennrich", "Ivan Titov" ], "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NIPS Deep Learning and Representation Learning Workshop,", "year": 2015 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": "In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing NeurIPS,", "year": 2019 }, { "authors": [ "Matus Telgarsky" ], "title": "Benefits of depth in neural networks", "venue": null, "year": 2016 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier", 
"Marc’Aurelio Ranzato" ], "title": "Classical structured prediction losses for sequence to sequence learning", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Samy Bengio", "Eugene Brevdo", "Francois Chollet", "Aidan N. Gomez", "Stephan Gouws", "Llion Jones", "Łukasz Kaiser", "Nal Kalchbrenner", "Niki Parmar", "Ryan Sepassi", "Noam Shazeer", "Jakob Uszkoreit" ], "title": "Tensor2tensor for neural machine", "venue": "translation. CoRR,", "year": 2018 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318", "venue": "Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "Fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Marjan Ghazvininejad", "Omer Levy", "Yinhan Liu", "Luke Zettlemoyer" ], "title": "Mask-predict: Parallel decoding of conditional masked language models", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Yuntian Deng", "Yoon Kim", "Justin Chiu", "Demi Guo", "Alexander Rush" ], "title": "Latent alignment and variational attention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Edouard Grave", "Armand Joulin", "Nicolas Usunier" ], "title": "Improving neural language models with a continuous cache", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Stephen Merity", "Nitish Shirish Keskar", "Richard Socher" ], "title": "An analysis of neural language modeling at multiple scales", "venue": "arXiv preprint arXiv:1803.08240,", "year": 2018 }, { "authors": [ "Vaswani" ], "title": "architectures for language modeling and machine translation are shown in Figure 6. For language modeling, we follow the architecture in Baevski and Auli (2019) while for machine translation, we follow the architecture", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Attention-based transformer networks (Vaswani et al., 2017) are widely used for sequence modeling tasks, including language modeling and machine translation. To improve performance, models are often scaled to be either wider, by increasing the dimension of hidden layers, or deeper, by stacking more transformer blocks. For example, T5 (Raffel et al., 2019) uses a dimension of 65K and GPT-3 (Brown et al., 2020) uses 96 transformer blocks. However, such scaling increases the number of network parameters significantly (e.g., T5 and GPT-3 have 11 billion and 175 billion parameters, respectively), and complicates learning, i.e., these models either require very large training corpora (Raffel et al., 2019; Devlin et al., 2019; Brown et al., 2020) or careful regularization (Hinton et al., 2012; Wan et al., 2013; Merity et al., 2018a). In this paper, we introduce a new parameter-efficient attention-based architecture that can be easily scaled to be both wide and deep.\nOur Deep and Light-weight Transformer architecture, DeLighT, extends the transformer architecture of Vaswani et al. (2017) and delivers similar or better performance with significantly fewer parameters and operations. At the heart of DeLighT is the DeLighT transformation that uses the group linear transformations (GLTs) of Mehta et al. (2018) with an expand-reduce strategy for varying the width and depth of the DeLighT block efficiently. Since GLTs are local by nature, the DeLighT transformation uses feature shuffling, which is analogous to channel shuffling in convolutional networks (Zhang et al., 2018), to share information between different groups. Such wide and deep representations facilitate replacing the multi-head attention and feed-forward layers in transformers with single headed attention and light-weight feed-forward layers, reducing total network parameters and operations. Importantly, unlike transformers, the DeLighT transformation decouples the depth and width from the input size, allowing us to allocate parameters more efficiently across blocks by using shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output.\nWe demonstrate that DeLighT models achieve similar or better performance than transformer models with significantly fewer parameters and operations, on two common sequence modeling tasks, (i) machine translation and (ii) language modeling. On the low resource WMT’16 En-Ro machine translation dataset, DeLighT attains transformer performance using 2.8× fewer parameters. On the high resource WMT’14 En-Fr dataset, DeLighT delivers better performance (+0.4 BLEU score) with 1.8× fewer parameters than baseline transformers. Similarly, on language modeling, DeLighTmatches the performance of Transformer-XL (Dai et al., 2019) with 1.5× fewer parameters\non the WikiText-103 dataset. Our source code is open-source and is available at: https://github.com/ sacmehta/delight" }, { "heading": "2 RELATED WORK", "text": "Improving transformers: Several methods have been introduced to improve the transformer architecture. The first line of research addresses the challenge of computing self attention on long input sequences (Child et al., 2019; Kitaev et al., 2020; Beltagy et al., 2020). These methods can be combined with our architecture. The second line of research focuses on explaining multi-head attention (Raganato and Tiedemann, 2018; Brunner et al., 2020). 
They show that increasing the number of transformer heads can lead to redundant representations (Voita et al., 2019a; Michel et al., 2019) and using fixed attention heads with predefined patterns (Raganato et al., 2020) or synthetic attention matrices (Tay et al., 2020) improves performance. The third line of research focuses on improving transformers by learning better representations (Wu et al., 2019; 2020; So et al., 2019). These works aim to improve the expressiveness of transformers using different transformations – for example, using convolutions (Wu et al., 2019; Gehring et al., 2017), gated linear units (Dauphin et al., 2017), or multi-branch feature extractors (So et al., 2019; Wu et al., 2020). Our work falls into this category. Unlike previous works, we show that it is possible to efficiently allocate parameters both at the block-level using the DeLighT transformation and across blocks using block-wise scaling.\nModel scaling: Model scaling is a standard method to improve the performance of sequence models (Vaswani et al., 2017; Raffel et al., 2019; Lan et al., 2020; Devlin et al., 2019; Shoeybi et al., 2019; Tan and Le, 2019; Brown et al., 2020). Model dimensions are increased in width-wise scaling (Vaswani et al., 2017; Devlin et al., 2019) while more blocks (e.g., Transformer blocks) are stacked in depth-wise scaling (Shoeybi et al., 2019; Brown et al., 2020; Wang et al., 2019). In both cases (and their combination), parameters inside each block of the network are the same, which may lead to a sub-optimal solution. To further improve the performance of sequence models, this paper introduces block-wise scaling that allows for variably-sized blocks and efficient allocation of parameters in the network. Our results show that (1) shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output deliver the best performance, and (2) models with block-wise scaling coupled with model scaling achieve better performance compared to model scaling alone. We note that convolutional neural networks (CNNs) also learn shallower and narrower representations near the input and deeper and wider representations near the output. Unlike CNNs (e.g., ResNet of He et al. 2016) that perform a fixed number of operations at each convolutional layer, the proposed block-wise scaling uses a variable number of operations in each layer and block.\nImproving sequence models: There is also significant recent work on other related methods for improving sequence models, including (1) improving accuracy using better token-level representations – for example, using BPE (Sennrich et al., 2016), adaptive inputs (Baevski and Auli, 2019) and outputs (Grave et al., 2017a), and DeFINE (Mehta et al., 2020), and (2) improving efficiency – for example, using compression (Chen et al., 2018; Sun et al., 2020), pruning (Han et al., 2016; Voita et al., 2019b), and distillation (Hinton et al., 2015; Sanh et al., 2019). The closest to our work is the DeFINE transformation, which also learns representations using an expand-reduce strategy. The key difference between the DeFINE transformation (Figure 1c) and the DeLighT transformation (Figure 1d) is that the DeLighT transformation more efficiently allocates parameters within expansion and reduction layers. Unlike DeFINE, which uses fewer groups in group linear transformations to learn wider representations, DeLighT transformation uses more groups to learn wider representations with fewer parameters. 
The DeLighT transformation achieves comparable performance to the DeFINE transformation but with significantly fewer parameters." }, { "heading": "3 DELIGHT: DEEP AND LIGHT-WEIGHT TRANSFORMER", "text": "A standard transformer block (Figure 1a) comprises multi-head attention, which uses a query-key-value decomposition to model relationships between sequence tokens, and a feed-forward network (FFN) to learn wider representations. Multi-head attention obtains query Q, key K, and value V by applying three projections to the input, each consisting of h linear layers (or heads) that map the d_m-dimensional input into a d_h-dimensional space, where d_h = d_m/h is the head dimension. The FFN consists of two linear layers, where the first expands the dimensions from d_m to d_f and the second reduces the dimensions from d_f to d_m.
[Figure 1 caption: learnable parameters (Linear and DeLighT) are shown in color. The shape of linear transformations indicates their operation (expansion, reduction, etc.). (c, d) compare the DeFINE transformation (Mehta et al., 2020) with the DeLighT transformation. Compared to the DeFINE transformation, the DeLighT transformation uses group linear transformations (GLTs) with more groups to learn wider representations with fewer parameters. Different colors are used to show groups in GLTs. For simplicity, feature shuffling is not shown in (d).]
The depth of a transformer block is 4, consisting of (1) three parallel branches for queries, keys, and values, (2) a fusion layer that combines the output of multiple heads, and (3) two sequential linear layers in the FFN. In general, transformer-based networks sequentially stack transformer blocks to increase network capacity and depth.
This paper extends the transformer architecture and introduces a deep and light-weight transformer, DeLighT. Our model uses a deep and light-weight expand-reduce transformation, the DeLighT transformation (Section 3.1), that enables learning wider representations efficiently. It also enables replacing multi-head attention and feed-forward network (FFN) layers with single-head attention and a light-weight FFN (Section 3.2). The DeLighT transformation decouples the attention dimensions from the depth and width, allowing us to learn representations efficiently using block-wise scaling instead of uniform stacking of transformer blocks (Section 3.3)." }, { "heading": "3.1 DELIGHT TRANSFORMATION", "text": "The DeLighT transformation maps a d_m-dimensional input vector into a high-dimensional space (expansion) and then reduces it down to a d_o-dimensional output vector (reduction) using N layers of the group transformations of Mehta et al. (2018), as shown in Figure 1d. During these expansion and reduction phases, the DeLighT transformation uses group linear transformations (GLTs) because they learn local representations by deriving the output from a specific part of the input and are more efficient than linear transformations. To learn global representations, the DeLighT transformation shares information between different groups in the group linear transformation using feature shuffling, analogous to channel shuffling in convolutional networks (Zhang et al., 2018).
A standard approach to increase the expressivity and capacity of transformers is to increase the input dimension, d_m. However, increasing d_m also linearly increases the number of operations in multi-head attention (O(n^2 d_m), where n is the sequence length) in a standard transformer block (Figure 1a).
In contrast, to increase the expressivity and capacity of the DeLighT block, we increase the depth and width of its intermediate DeLighT transformations using expansion and reduction phases. This enables us to use smaller dimensions for computing attention, requiring fewer operations.
Formally, the DeLighT transformation is controlled by five configuration parameters: (1) the number of GLT layers N, (2) the width multiplier w_m, (3) the input dimension d_m, (4) the output dimension d_o, and (5) the maximum number of groups g_max in a GLT. In the expansion phase, the DeLighT transformation projects the d_m-dimensional input to a high-dimensional space, d_max = w_m d_m, linearly using \lceil N/2 \rceil layers. In the reduction phase, the DeLighT transformation projects the d_max-dimensional vector to a d_o-dimensional space using the remaining N − \lceil N/2 \rceil GLT layers. Mathematically, we define the output Y at each GLT layer l as:

Y^l = \begin{cases} \mathcal{F}(X, W^l, b^l, g^l), & l = 1 \\ \mathcal{F}(\mathcal{H}(X, Y^{l-1}), W^l, b^l, g^l), & \text{otherwise} \end{cases} \quad (1)

where W^l = \{W^l_1, \cdots, W^l_{g^l}\} and b^l = \{b^l_1, \cdots, b^l_{g^l}\} are the learnable weights and biases of the group linear transformation \mathcal{F} with g^l groups at the l-th layer. Briefly, the function \mathcal{F} takes the input X (or \mathcal{H}(X, Y^{l-1})) and splits it into g^l non-overlapping groups such that X = \{X_1, \cdots, X_{g^l}\}. The function \mathcal{F} then linearly transforms each X_i with weights W^l_i and bias b^l_i to produce the output Y^l_i = X_i W^l_i + b^l_i. The outputs of each group, Y^l_i, are then concatenated to produce the output Y^l. The function \mathcal{H} first shuffles the output of each group in Y^{l-1} and then combines it with the input X using the input mixer connection of Mehta et al. (2020) to avoid vanishing gradient problems. Figure 2 visualizes the expansion phase in the DeLighT transformation with group linear transformation, feature shuffling, and the input mixer connection.
The number of groups at the l-th GLT in the DeLighT transformation is computed as:

g^l = \begin{cases} \min(2^{l-1}, g_{max}), & 1 \le l \le \lceil N/2 \rceil \\ g^{N-l}, & \text{otherwise} \end{cases} \quad (2)

In our experiments, we use g_max = \lceil d_m / 32 \rceil so that each group has at least 32 input elements." }, { "heading": "3.2 DELIGHT BLOCK", "text": "Figure 1b shows how we integrate the DeLighT transformation into the transformer block to improve its efficiency. The d_m-dimensional inputs are first fed to the DeLighT transformation to produce d_o-dimensional outputs, where d_o < d_m. These d_o-dimensional outputs are then fed into a single-head attention, followed by a light-weight FFN, to model their relationships.
DeLighT layer and single-head attention: Let us assume we have a sequence of n input tokens, each of dimensionality d_m. These n d_m-dimensional inputs are first fed to the DeLighT transformation to produce n d_o-dimensional outputs, where d_o < d_m. These n d_o-dimensional outputs are then projected simultaneously using three linear layers to produce d_o-dimensional queries Q, keys K, and values V. We then model contextual relationships between these n tokens using scaled dot-product attention (Eq. 3). To enable the use of residual connections (He et al., 2016), the d_o-dimensional outputs of this attention operation are linearly projected into a d_m-dimensional space.

\text{Attention}(K, Q, V) = \text{softmax}\left(\frac{Q K^T}{\sqrt{d_o}}\right) V \quad (3)

We hypothesize that the ability of DeLighT to learn wider representations allows us to replace multi-head attention with single-head attention. The computational costs for computing attention in the standard transformer and the DeLighT block are O(d_m n^2) and O(d_o n^2) respectively, where d_o < d_m.
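For concreteness, the group schedule in Eq. (2) can be sketched in a few lines of Python. This is an illustration only: Eq. (2) as printed leaves the boundary of the mirrored reduction half ambiguous, so the palindromic mirroring below is our reading of it, not a detail the paper pins down.

import math

def delight_group_schedule(N, d_m):
    """Per-layer group counts of an N-layer DeLighT transformation (Eq. 2).

    The expansion half doubles the number of groups at each GLT layer,
    capped at g_max = ceil(d_m / 32); the reduction half mirrors the
    expansion half (the exact mirroring convention is our assumption).
    """
    g_max = math.ceil(d_m / 32)
    half = math.ceil(N / 2)
    groups = [min(2 ** l, g_max) for l in range(half)]      # layers 1..ceil(N/2): 2^{l-1}, capped
    groups += [groups[N - 1 - l] for l in range(half, N)]   # mirrored reduction half
    return groups

# Example: a 6-layer transformation on a d_m = 128 dimensional input
print(delight_group_schedule(6, 128))  # -> [1, 2, 4, 4, 2, 1]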
Since d_o < d_m, the DeLighT block reduces the cost of computing attention by a factor of d_m/d_o. In our experiments, we used d_o = d_m/2, thus requiring 2× fewer multiplication-addition operations as compared to the transformer architecture.
Light-weight FFN: Similar to FFNs in transformers, this block also consists of two linear layers. Since the DeLighT block has already incorporated wider representations using the DeLighT transformation, it allows us to invert the functionality of the FFN layers in the transformer. The first layer reduces the dimensionality of the input from d_m to d_m/r while the second layer expands the dimensionality from d_m/r to d_m, where r is the reduction factor (see Figure 1b). Our light-weight FFN reduces the number of parameters and operations in the FFN by a factor of r d_f / d_m. In the standard transformer, the FFN dimensions are expanded by a factor of 4.¹ In our experiments, we used r = 4. Thus, the light-weight FFN reduces the number of parameters in the FFN by 16×.
Block depth: The DeLighT block stacks (1) a DeLighT transformation with N GLTs, (2) three parallel linear layers for key, query, and value, (3) a projection layer, and (4) two linear layers of a light-weight FFN. Thus, the depth of the DeLighT block is N + 4. Compared to the standard transformer block (depth 4), the DeLighT block is deeper." }, { "heading": "3.3 BLOCK-WISE SCALING", "text": "Standard methods for improving the performance of sequence models include increasing the model dimensions (width scaling), stacking more blocks (depth scaling), or both. However, such scaling is not very effective on small datasets. For example, when a Transformer-Base (d_m = 512) network is replaced with Transformer-Large (d_m = 1024) on the WMT’16 En-Ro corpus, the number of parameters increases by approximately 4× while the performance does not change appreciably (BLEU: 34.28 vs. 34.35). We hypothesize that this happens because scaling model width and depth allocates parameters uniformly across blocks, which may lead to learning redundant parameters. To create deep and wide networks, we extend model scaling to the block level (see Figure 3).
Scaling the DeLighT block: The DeLighT block learns deep and wide representations using the DeLighT transformation, whose depth and width are controlled by two configuration parameters: the number of GLT layers N and the width multiplier w_m, respectively (Figure 3a). These configuration parameters allow us to increase the number of learnable parameters inside the DeLighT block independently of the input d_m and output d_o dimensions. Such calibration is not possible with the standard transformer block, because its expressiveness and capacity are a function of the input (input dimension = number of heads × head dimension). Here, we introduce block-wise scaling, which creates a network with variably-sized DeLighT blocks, allocating shallower and narrower DeLighT blocks near the input and deeper and wider DeLighT blocks near the output.
To do so, we introduce two network-wide configuration parameters: the minimum N_min and maximum N_max number of GLTs in a DeLighT transformation. For the b-th DeLighT block, we compute the number of GLTs N^b and the width multiplier w^b_m in a DeLighT transformation using linear scaling (Eq. 4).
¹Transformer-Base uses d_m = 512 and d_f = 2048, while Transformer-Large uses d_m = 1024 and d_f = 4096.
With this scaling, each DeLighT block has a different depth and width (Figure 3a).

N^b = N_{min} + \frac{(N_{max} - N_{min})\, b}{B - 1}, \qquad w^b_m = w_m + \frac{(N_{max} - N_{min})\, b}{N_{min}(B - 1)}, \qquad 0 \le b \le B - 1 \quad (4)

Here, B denotes the number of DeLighT blocks in the network. We add the superscript b to the number of GLT layers N and the width multiplier w_m to indicate that these parameters are for the b-th block.
Network depth: The depth of a transformer block is fixed, i.e., 4. Therefore, previous works (Raffel et al., 2019; Brown et al., 2020; Wang et al., 2019) have associated the depth of transformer-based networks with the number of transformer blocks. In DeLighT, we present a different perspective to learn deeper representations, wherein each block is variably-sized. To compute the network depth, we use the standard definition across different domains, including computer vision (e.g., the ResNet of He et al. 2016) and theoretical machine learning (Telgarsky, 2016). These works measure network depth as the number of sequential learnable layers (e.g., convolution, linear, or group linear). Similarly, the depth of DeLighT and transformer networks with B blocks is \sum_{b=0}^{B-1} (N^b + 4) and 4B, respectively." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We evaluate the performance of DeLighT on two standard sequence modeling tasks: (1) machine translation (Section 4.1) and (2) language modeling (Section 4.2)." }, { "heading": "4.1 MACHINE TRANSLATION", "text": "Datasets and evaluation: We benchmark DeLighT models on four datasets: (1) IWSLT’14 German-English (De-En), (2) WMT’16 English-Romanian (En-Ro), (3) WMT’14 English-German (WMT’14 En-De), and (4) WMT’14 English-French (WMT’14 En-Fr). For the IWSLT’14 De-En dataset, we replicate the setup of Wu et al. (2019) and Edunov et al. (2018), which uses 160K/7K/7K sentence pairs for training, validation, and testing, respectively, with a joint BPE vocabulary of about 10K tokens. For the WMT’14 English-German (En-De) dataset, we follow the setup of Vaswani et al. (2017). The dataset has 3.9M/39K/3K sentence pairs for training, validation, and testing, respectively, with a joint BPE vocabulary size of 44K.² For the WMT’14 English-French (En-Fr) dataset, we replicate the setup of Gehring et al. (2017), which uses 36M/27K/3K sentence pairs for training, validation, and testing, respectively, with a joint BPE vocabulary size of 44K. The performance is evaluated in terms of BLEU (Papineni et al., 2002) (higher is better) on the test set. We follow Wu et al. (2019) for beam-search-related hyper-parameters.
Architecture: We follow the symmetric encoder-decoder architecture of Vaswani et al. (2017) with sinusoidal positional encodings. Both the encoder and the decoder have B DeLighT blocks. Decoder blocks are identical to the encoder blocks (Figure 1b), except that they have an additional source-target single-head attention unit before the light-weight FFN. In the source-target single-head attention unit, keys and values are projections over the encoder output (full details in Appendix A). In our experiments, we use w_m = 2, N_min = 4, and N_max = 8 for WMT’16 En-Ro, WMT’14 En-De, and WMT’14 En-Fr, resulting in 222-layer-deep DeLighT networks. For IWSLT’14 De-En, we used w_m = 1, N_min = 3, and N_max = 9, resulting in a 289-layer-deep network. For simplicity, we set B = N_max. We use a learnable look-up table that maps every token in the vocabulary to a 128-dimensional vector.
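As a sanity check, the block-wise scaling rule (Eq. 4) can be evaluated directly for the WMT configuration stated above (w_m = 2, N_min = 4, N_max = 8, B = N_max). The Python sketch below is ours; in particular, rounding the non-integer depths to integers is an assumption, since Eq. (4) does not specify how fractional values are handled.

def blockwise_config(B, n_min, n_max, w_m):
    """Per-block depth N^b and width multiplier w^b_m via linear scaling (Eq. 4)."""
    configs = []
    for b in range(B):
        n_b = n_min + (n_max - n_min) * b / (B - 1)
        w_b = w_m + (n_max - n_min) * b / (n_min * (B - 1))
        # Rounding the fractional depths is our assumption.
        configs.append((round(n_b), w_b))
    return configs

for b, (n_b, w_b) in enumerate(blockwise_config(B=8, n_min=4, n_max=8, w_m=2)):
    print(f"block {b}: N^b = {n_b}, w^b_m = {w_b:.2f}")

As expected, the printed configurations are shallower and narrower near the input (block 0) and deeper and wider near the output (block B − 1).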
We implement our models using Fairseq (Ott et al., 2019) and use their provided scripts for data pre-processing, training, and evaluation.
Training: For IWSLT’14 De-En models, we follow the setup of Wu et al. (2019) and train all our models for 50K iterations with a batch size of 4K tokens on a single NVIDIA GTX 1080 GPU. For WMT’16 En-Ro, we follow the training setup of Ghazvininejad et al. (2019) and train models for 100K iterations on 16 NVIDIA Tesla V100 GPUs with an effective batch size of 64K tokens. For WMT’14 En-De and WMT’14 En-Fr, we follow the training setup of Wu et al. (2019) and train our models on 16 V100 GPUs for 30K and 50K iterations, respectively. We use Adam (Kingma and Ba, 2015) to minimize the cross-entropy loss with a label smoothing value of 0.1 during training. For a fair comparison, we trained baseline transformer models using the same training setup.
²We use training and validation data that is compatible with the Tensor2Tensor library (Vaswani et al., 2018) in order to have fair comparisons with recent works (e.g., Evolved Transformer)." }, { "heading": "Tables 1 and 2 (columns: Model, # Params, Ratio, BLEU, ∆ BLEU)", "text": "" }, { "heading": "4.1.1 RESULTS", "text": "Comparison with baseline transformers: Table 1 compares the performance of DeLighT with the baseline transformers of Vaswani et al. (2017) on different corpora. DeLighT delivers better performance with fewer parameters than transformers across different corpora. Specifically, on low-resource (WMT’16 En-Ro) and high-resource (WMT’14 En-De & WMT’14 En-Fr) corpora, DeLighT delivers similar or better performance with 2.8× and 1.8× fewer parameters, respectively. When the number of parameters is increased, DeLighT outperforms transformers. For example, on the WMT’14 En-Fr dataset, DeLighT is 3.7× deeper than transformers and improves the BLEU score by 1.3 points, yet with 13 million fewer parameters and 3 billion fewer operations (see Table 2).
Particularly interesting are the performance comparisons of DeLighT with the baseline transformers of Vaswani et al. (2017) and their neural-search variant, the Evolved Transformer of So et al. (2019), at two different parametric settings on the WMT’14 En-De corpus in Figure 4. For small models (< 10M parameters), DeLighT models deliver better performance, and to attain the same performance as these models, DeLighT models require fewer parameters.
Comparison with state-of-the-art methods: Most state-of-the-art methods have evaluated performance on WMT’14 En-De, while some have also evaluated on IWSLT’14 De-En. Table 3 compares the performance of DeLighT with state-of-the-art methods on these two corpora. DeLighT delivers similar or better performance than existing methods. It is important to note that existing methods have improved baseline transformers with different design choices – for example, the asymmetric encoder-decoder structure (Wang et al., 2019) and neural architecture search (So et al., 2019). We believe that DeLighT would, in the future, also benefit from such design choices.
Scaling up DeLighT models: Figure 5 shows that the performance of DeLighT models improves as the number of network parameters increases, suggesting their ability to learn representations across different corpora, including low-resource ones."
}, { "heading": "4.2 LANGUAGE MODELING", "text": "Datasets and evaluation: We evaluate on the WikiText-103 dataset (Merity et al., 2017) that has 103M/217K/245K tokens for training, validation, and testing. It has a word-level vocabulary of about 260K tokens. Following recent works (Baevski and Auli, 2019; Dai et al., 2019), we report performance in terms of perplexity (lower is better) on the test set.\nArchitecture: We use the transformer-based decoder architecture of Baevski and Auli (2019) with B DeLighT blocks. We use wm=2, Nmin=4, and Nmax=12. We scale dm using values {384, 512, 784, 1024} for increasing network parameters. For simplicity, we set B = Nmax. Following standard practice, we use adaptive input (Baevski and Auli, 2019) as a look-up table and adaptive output (Grave et al., 2017a) as the classification layer with one head (head dimension is 128) and two tails (tail dimensions are 64 and 32). We also share weights between the input and the output layers.\nTraining: We follow the training setup of Baevski and Auli (2019), except that we train our models on 8 NVIDIA Tesla V100 GPUs for 100K iterations with a context length of 512 and an effective batch size of 64K tokens. We use Adam during training and use a context length of 480 during test.\nResults: Table 4b compares the performance of DeLighT with previous methods on WikiText-103. Table 4a plots the variation of perplexity with number of parameters for DeLighT and TransformerXL (Dai et al., 2019) – which outperforms other transformer-based implementations (e.g., Baevski and Auli 2019). Both tables show that DeLighT delivers better performance than state-of-the-art methods (including Transformer-XL) and it does this using a smaller context length and significantly fewer parameters, suggesting that the DeLighT transformation helps learn strong contextual relationships." }, { "heading": "5 ANALYSIS AND DISCUSSIONS ON COMPUTATIONAL EFFICIENCY", "text": "Training time and memory consumption: Table 5 compares the training time and memory consumption of DeLighT with baseline transformers. For an apples-to-apples comparisons, we implemented the Transformer unit without NVIDIA’s dedicated CUDA kernel, and trained both transformer and DeLighT full-precision networks for 30K iterations on 16 NVIDIA V100 GPUs. The transformer and DeLighT models took about 37 and 23 hours for training and consumed about 12.5 GB and 14.5 GB of GPU memory, respectively (R1 vs. R2). When we enabled the dedicated CUDA kernel provided by APEX library3 for multi-head attention in Transformers, the training time of the\n3https://github.com/NVIDIA/apex" }, { "heading": "Model Dropout BLEU", "text": "transformer model reduced from 37 to 16 hours while we did not observe any significant change in memory consumption. Motivated by this observation, we implemented dedicated CUDA kernels for grouping and ungrouping functions in GLTs (see Appendix E). With these changes, training time and GPU memory consumption of DeLighT reduced by about 4 hours and 3 GB, respectively. We emphasize that grouping, linear transformation, feature shuffling, and ungrouping, can be implemented efficiently using a single CUDA kernel. In future, we expect a dedicated CUDA kernel for these operations would further reduce the memory consumption as well as training/inference time.\nRegularization: Table 6 shows that DeLighT delivers similar performance to baseline transformers, but with fewer parameters and less regularization. 
This suggests that learning representations with better transformation functions alleviates the need for dropout." }, { "heading": "6 CONCLUSION", "text": "This paper introduces a deep and light-weight transformer architecture, DeLighT, that efficiently allocates parameters both within the DeLighT block and across DeLighT blocks. Compared to state-of-the-art transformer models, DeLighT models are (1) deep and light-weight and (2) deliver similar or better performance. In the future, we plan to apply DeLighT to other tasks, including language model pre-training, question answering, and language generation.
Acknowledgements: This research was supported by ONR N00014-18-1-2826, DARPA N66001-19-2-403, NSF (IIS-1616112, IIS-1252835), and an Allen Distinguished Investigator Award. The authors would also like to thank members of the UW-NLP and the H2Lab at the University of Washington for their valuable feedback and comments." }, { "heading": "B GROUP LINEAR TRANSFORMATION WITH INPUT-MIXER CONNECTION", "text": "A group linear transformation (GLT) \mathcal{F} splits a d_m-dimensional input X into g non-overlapping groups such that X = Concat(X_1, \cdots, X_g), where X_i is a (d_m/g)-dimensional vector. The X_i’s are then simultaneously transformed using g linear transforms W_i \in R^{(d_m/g) \times (d_o/g)} to produce g outputs Y_i = X_i W_i. The Y_i’s are then concatenated to produce the final d_o-dimensional output Y = Concat(Y_1, \cdots, Y_g).
Figure 7a shows an example of a GLT in the expansion phase of the DeLighT transformation. For illustrative purposes, we have used the same dimensions in this example. Recall that as we go deeper in the expansion phase, the number of groups increases. In this example, the first layer has one group, the second layer has two groups, and the third layer has four groups. GLTs learn group-specific representations and are local. To allow a GLT to learn global representations, we use feature shuffle. An example of a GLT with feature shuffle is shown in Figure 7b. Furthermore, training deep neural networks by merely stacking linear or group linear layers (with or without feature shuffle) is challenging because of the vanishing gradient problem. The residual connections introduced by He et al. (2016) mitigate this problem and help train deep neural networks. However, such connections cannot be employed when the input and output dimensions are not the same (e.g., during the expansion and reduction phases in the DeLighT transformation). To stabilize the training and learn deeper representations, we use the input-mixer connection of Mehta et al. (2020). Figure 7c shows an example of a GLT with feature shuffle and the input mixer connection." }, { "heading": "C MULTIPLICATION-ADDITION OPERATIONS IN DELIGHT", "text": "The DeLighT block is built using linear transformations, GLTs, and scaled dot-product attention. The total number of multiplication-addition operations (MACs) in a network is an accumulation of these individual operations.
Let n denote the number of source tokens, m the number of target tokens, d_m the input dimension, d_o the output dimension, and g the number of groups in a GLT. The procedure for counting MACs for each of these operations is described below.
Group linear transformation (GLT): A GLT \mathcal{F} has g learnable matrices W_i \in R^{(d_m/g) \times (d_o/g)}. Therefore, a GLT learns d_m d_o / g parameters and performs d_m d_o / g MACs to transform a d_m-dimensional input to a d_o-dimensional output. Following standard practice (e.g., the ResNet of He et al. (2016)), we count an addition and a multiplication as one operation instead of two, because these operations can be fused in recent hardware. Importantly, when g = 1, the GLT is the same as a linear transformation.
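The counting rules in this appendix translate directly into code; a minimal Python sketch (ours) covering the GLT count above and the attention counts derived below is:

def glt_macs(d_m, d_o, g):
    # One GLT: g weight matrices of size (d_m/g) x (d_o/g) -> d_m * d_o / g MACs
    # (assumes d_m and d_o are divisible by g, as in the DeLighT schedules).
    return d_m * d_o // g

def self_attention_macs(n, d_o):
    # Scaled dot-product self-attention over n tokens (Eq. 5):
    # two dot products of d_o * n^2 MACs each -> 2 * d_o * n^2.
    return 2 * d_o * n ** 2

def source_target_attention_macs(n, m, d_o):
    # Incremental decoding of m target tokens given n source tokens,
    # counted as in this appendix: sum_{k=1}^{m} 2 * k * n * d_o.
    return sum(2 * k * n * d_o for k in range(1, m + 1))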
Self-attention in DeLighT: The scaled dot-product self-attention in DeLighT is defined as:

\text{Attention}(K, Q, V) = \text{softmax}\left(\frac{Q K^T}{\sqrt{d_o}}\right) V \quad (5)

where Q \in R^{n \times d_o}, K \in R^{n \times d_o}, and V \in R^{n \times d_o} denote the query, key, and value, respectively.
The attention operation involves two dot products: the first dot product is between Q and K, while the second is between the output of the first dot product and V. Both dot products require d_o n^2 MACs. Therefore, the total number of MACs in computing scaled dot-product self-attention is 2 d_o n^2.
In the case of source-target attention (as in machine translation), the K’s and V’s are from the source (encoder) and the Q’s are incrementally decoded (one token at a time). Therefore, the number of MACs required to decode m target tokens given n source tokens is \sum_{k=1}^{m} 2 k n d_o." }, { "heading": "D ABLATIONS ON THE WIKITEXT-103 DATASET", "text": "Table 7 studies the impact of the DeLighT block parameters on the WikiText-103 dataset, namely (1) the minimum number of GLTs N_min, (2) the maximum number of GLTs N_max, (3) the width multiplier w_m, and (4) the model dimension d_m (see Figure 1b). Figure 8, Figure 9, and Figure 10 show the impact of the DeLighT transformation, feature shuffling, and the light-weight FFN. Table 8 shows the effect of the position of the DeLighT transformation in the DeLighT block, while Figure 12 shows the effect of scaling DeLighT networks. We choose the WikiText-103 dataset for ablations because it has a very large vocabulary compared to other datasets (267K vs. 30-40K), allowing us to test the ability under large vocabulary sizes. The performance is reported in terms of perplexity (lower is better) on the validation set. In our ablation studies, we used the same settings for training as in Section 4.2, except that we train only for 50K iterations.
DeLighT block: Overall, Table 7 shows that scaling depth and width using the DeLighT transformation and block-wise scaling improves performance. We make the following observations:
a) Block-wise scaling (R4, R5) delivers better performance compared to uniform scaling (R1-R3). For instance, DeLighT with N_min = 4 and N_max = 8 (R4) is 1.25× shallower than DeLighT with N_min = 8 and N_max = 8 (R2), but delivers better performance with a similar number of parameters and operations. Scaling w_m improves performance (R2 vs. R3); however, the improvement is significantly lower than for the model with block-wise scaling (R3 vs. R5). This suggests that a non-uniform distribution of parameters across blocks allows the network to learn better representations.
b) Different ratios between N_max and N_min yield different results. We observe significant performance improvements when the ratio is greater than or equal to two. For example, when we scale N_max/N_min from 2 to 3 (R6 vs. R8), the perplexity improves by ∼5 points with only a moderate increase in network parameters. On the other hand, when N_max/N_min is close to 1 (R6 vs. R7), performance does not change appreciably. This is likely because the allocation of parameters across blocks is close to uniform (Eq. 4). This is consistent with our previous observation.
c) Learning shallower and narrower representations near the input and deeper and wider representations near the output achieves better performance.
For example, when we scaled N_max from 8 to 12 for N_min = 4 (R6, R8), DeLighT delivered better performance with a similar number of parameters compared to a model with N_min = 6 (R7, R9). This is likely because the ratio of N_max to N_min is higher when N_min = 4, which helps allocate parameters per block more effectively.
d) Deeper and wider representations near the input and shallower and narrower representations near the output hurt performance (R13 vs. R16).
e) Scaling width using w_m and d_m improves performance (R10-R15); however, their impact is different. For example, when we scale w_m and d_m by two, the rate of increase in the number of parameters and operations is more rapid with d_m compared to w_m. DeLighT’s ability to learn wider representations in different ways may be useful in selecting application-specific models.
Impact of the DeLighT transformation: We replace the DeLighT transformation in the DeLighT block (Figure 1b) with (1) the DeFINE transformation and (2) a stack of linear layers. Figure 8 shows that the DeLighT transformation delivers similar performance with significantly fewer parameters compared to the DeFINE unit and linear layers. In these experiments, the settings are the same as R13-R15 (Table 7), except N_max = 8, because models with a stack of linear layers learn too many parameters.
[Figure 11 panel titles: Uniform vs. block-wise scaling; Varying model width d_m]
Feature shuffling: Figure 9 shows that feature shuffling improves the performance of DeLighT by 1-2 perplexity points. Here, we use the same settings as in R13-R15 (Table 7).
Light-weight FFN: Figure 10 shows the impact of varying the reduction factor r in the light-weight FFN. We use the same settings as in R13 (Table 7). We did not observe any significant drop in performance until r = 4. Beyond r = 4, we see a drop in performance (perplexity increases by ∼2 points). In such cases, the inner dimensions of the light-weight FFN are very small and hurt performance. Notably, the light-weight FFN with r = 2² delivered the same performance as r = 2⁻², but with 1.28× fewer network parameters. At r = 2⁻², the light-weight FFN is the same as the FFN in Vaswani et al. (2017). This suggests that the ability of the DeLighT transformation to learn representations in high-dimensional spaces efficiently allows us to reduce the computational burden on the FFN.
We also tested removing the light-weight FFN: while it reduced parameters by ∼0.5-1M, performance dropped by about 2-3 perplexity points across different parametric settings.
Uniform vs. block-wise scaling: Figure 11 compares the performance of DeLighT with uniform and block-wise scaling. For a given model dimension d_m, DeLighT models with block-wise scaling deliver better performance.
Position of the DeLighT transformation: We studied three configurations for the DeLighT transformation on the WikiText-103 validation set (Table 8): (1) DeLighT transformation followed by single-headed attention and light-weight FFN, (2) single-headed attention followed by DeLighT transformation, and (3) single-headed attention followed by DeLighT transformation and light-weight FFN. For a similar number of parameters, we found that (2) and (3) perform significantly worse than (1) across different parametric settings.
This suggests that deeper and wider representations help learn better contextual representations, allowing us to replace multi-headed attention with single-headed attention.
Scaling up DeLighT: Figure 12 shows the results of DeLighT models obtained after varying the configuration parameters of the DeLighT transformations (N_min = {4, 6}, N_max = {8, 12}, w_m = {2, 3, 4}, and d_m = {256, 384, 512}). We can see that scaling one configuration parameter (e.g., d_m) while keeping the other configuration parameters constant (e.g., N_min, N_max, and w_m) consistently improves performance.
[Table 9 columns: Configuration Parameters, Perplexity]
This work investigates the relationships between N_min, N_max, w_m, and d_m manually. We believe that a more principled approach, such as the compound scaling of Tan and Le (2019), which establishes relationships between these parameters, would produce more efficient and accurate models." }, { "heading": "E SOURCE CODE FOR GROUP LINEAR TRANSFORMATION", "text": "The source code for implementing a group linear transformation (GLT) in PyTorch is shown in Listing 1. The source code for efficiently implementing the grouping function in a GLT is shown in Listing 2. Since the ungrouping kernel is similar to the grouping kernel, we have not shown it here.
The reshape and transpose operations in the naive PyTorch implementation of grouping and ungrouping are replaced with dedicated CUDA kernels, resulting in a reduced memory footprint and faster training.

Listing 1: "Naive implementation of GLT in PyTorch"

import torch

def glt_function(x, n_groups, weights, bias=None):
    '''
    :param x: input tensor of size [B x N], where B is the batch size and N is the input dimension
    :param n_groups: number of groups in the GLT
    :param weights: GLT weights of size [g x N/g x M/g]
    :param bias: GLT bias (optional) of size [g x 1 x M/g]
    :return: output tensor of size [B x M]
    '''
    bsz = x.size(0)

    ## GROUPING FUNCTION: converts a [B x N] tensor to [g x B x N/g] ##
    # [B x N] --> [B x g x N/g]
    x = x.contiguous().view(bsz, n_groups, -1)
    # [B x g x N/g] --> [g x B x N/g]
    x = x.transpose(0, 1)  # transpose so that group is first

    ## TRANSFORMATION FUNCTION: transforms from N/g-dimensional space to M/g-dimensional space ##
    # [g x B x N/g] x [g x N/g x M/g] --> [g x B x M/g]
    x = torch.bmm(x, weights)  # multiply with weights
    # add bias
    if bias is not None:
        x = torch.add(x, bias)

    ## REGROUPING FUNCTION: converts a [g x B x M/g] tensor to [B x M] ##
    # [g x B x M/g] --> [B x g x M/g]
    x = x.transpose(0, 1)  # transpose so that batch is first
    # [B x g x M/g] --> [B x M]
    x = x.contiguous().view(bsz, -1)
    return x

Listing 2: "Grouping kernel in CUDA"

/* Grouping Kernel: Transforms input from [B x N] to [g x B x N/g] */
template<typename scalar_t>
__global__ void grouping_kernel_forward(const scalar_t* input,
        const int groups, const int total_elements,
        const int input_features, const int group_features,
        const int batch_size, scalar_t* output){
    // IMUL is an integer-multiply macro defined elsewhere in the source.
    const int index = IMUL(blockIdx.x, blockDim.x) + threadIdx.x;
    if (index >= total_elements){
        return;
    }
    const int b_idx = index / group_features;
    const int g_f_idx = (index % group_features);
    int in_offset, out_offset;
    #pragma unroll
    for(int g = 0; g < groups; g++){
        in_offset = (b_idx * input_features) + (g * group_features) + g_f_idx;
        out_offset = ((g * batch_size + b_idx) * group_features) + g_f_idx;
        output[out_offset] = input[in_offset];
    }
}" }
]
2,021
null
SP:c83ecc74eb885df5f29e5a7080a8c60d1ee0a3b0
[ "This paper shows a relationship between the project rule weights of a Hopfield network (HN) and the interaction weights in a corresponding restricted Boltzmann machine (RBM). The mapping from HN to RBM is facilitated by realising that the partition function of BN can be seen as the partition function of a binary-continuous (Bernoulli-Gaussian) RBM. The authors comments on the mapping from RBM to BN. The experiments show the advantages of training RBM with weights initialised from BN projection weights in generation and classification." ]
Hopfield networks (HNs) and Restricted Boltzmann Machines (RBMs) are two important models at the interface of statistical physics, machine learning, and neuroscience. Recently, there has been interest in the relationship between HNs and RBMs, due to their similarity under the statistical mechanics formalism. An exact mapping between HNs and RBMs has been previously noted for the special case of orthogonal (“uncorrelated”) encoded patterns. We present here an exact mapping in the case of correlated pattern HNs, which are more broadly applicable to existing datasets. Specifically, we show that any HN with N binary variables and p < N arbitrary binary patterns can be transformed into an RBM with N binary visible variables and p gaussian hidden variables. We outline the conditions under which the reverse mapping exists, and conduct experiments on the MNIST dataset which suggest the mapping provides a useful initialization to the RBM weights. We discuss extensions, the potential importance of this correspondence for the training of RBMs, and for understanding the performance of deep architectures which utilize RBMs.
[ { "affiliations": [], "name": "BOLTZMANN MACHINES" }, { "affiliations": [], "name": "Matthew Smart" }, { "affiliations": [], "name": "Anton Zilman" } ]
[ { "authors": [ "David H Ackley", "Geoffrey E Hinton", "Terrence J. Sejnowski" ], "title": "A learning algorithm for boltzmann machines", "venue": "Cognitive Science,", "year": 1985 }, { "authors": [ "Elena Agliari", "Adriano Barra", "Andrea De Antoni", "Andrea Galluzzi" ], "title": "Parallel retrieval of correlated patterns: From Hopfield networks to Boltzmann machines", "venue": "Neural Networks,", "year": 2013 }, { "authors": [ "Elena Agliari", "Adriano Barra", "Chiara Longo", "Daniele Tantari" ], "title": "Neural Networks Retrieving Boolean Patterns in a Sea of Gaussian Ones", "venue": "Journal of Statistical Physics,", "year": 2017 }, { "authors": [ "Daniel J. Amit" ], "title": "Modeling Brain Function: The World of Attractor Neural Networks", "venue": null, "year": 1989 }, { "authors": [ "Daniel J. Amit", "Hanoch Gutfreund", "H. Sompolinsky" ], "title": "Spin-glass models of neural networks", "venue": "Physical Review A,", "year": 1985 }, { "authors": [ "Adriano Barra", "Francesco Guerra" ], "title": "About the ergodic regime in the analogical Hopfield neural networks: Moments of the partition function", "venue": "Journal of Mathematical Physics,", "year": 2008 }, { "authors": [ "Adriano Barra", "Alberto Bernacchia", "Enrica Santucci", "Pierluigi Contucci" ], "title": "On the equivalence of Hopfield networks and Boltzmann Machines", "venue": "Neural Networks,", "year": 2012 }, { "authors": [ "Miguel A Carreira-Perpinan", "Geoffrey E Hinton" ], "title": "On contrastive divergence learning", "venue": "In Aistats,", "year": 2005 }, { "authors": [ "G.E. Hinton", "R.R. Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": "Science, 313(5786):504–507,", "year": 2006 }, { "authors": [ "Geoffrey E. Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural Computation,", "year": 2002 }, { "authors": [ "Geoffrey E. Hinton" ], "title": "A practical guide to training restricted boltzmann machines. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)", "venue": "LECTU:599–619,", "year": 2012 }, { "authors": [ "Geoffrey E. Hinton", "Simon Osindero", "Yee Whye Teh" ], "title": "A fast learning algorithm for deep belief nets", "venue": "Neural Computation,", "year": 2006 }, { "authors": [ "J J Hopfield" ], "title": "Neural networks and physical systems with emergent collective computational abilities", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 1982 }, { "authors": [ "I. Kanter", "H. Sompolinsky" ], "title": "Associative recall of memory without errors", "venue": "Physical Review A,", "year": 1987 }, { "authors": [ "Scott Kirkpatrick", "David Sherrington" ], "title": "Infinite-ranged models of spin-glasses", "venue": "Physical Review B,", "year": 1978 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Per Olov" ], "title": "Löwdin. On the Nonorthogonality Problem", "venue": "Advances in Quantum Chemistry,", "year": 1970 }, { "authors": [ "Chiara Marullo", "Elena Agliari" ], "title": "Boltzmann machines as generalized hopfield networks: A review of recent results and outlooks", "venue": "Entropy, 23(1):1–16,", "year": 2021 }, { "authors": [ "Pankaj Mehta", "Marin Bukov", "Ching Hao Wang", "Alexandre G.R. 
Day", "Clint Richardson", "Charles K Fisher", "David J Schwab" ], "title": "A high-bias, low-variance introduction to Machine Learning for physicists, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jan Melchior", "Nan Wang", "Laurenz Wiskott" ], "title": "Gaussian-binary restricted Boltzmann machines for modeling natural image statistics", "venue": "PLoS ONE,", "year": 2017 }, { "authors": [ "Marc Mézard" ], "title": "Mean-field message-passing equations in the Hopfield model and its generalizations", "venue": "Physical Review E,", "year": 2017 }, { "authors": [ "Fionn Murtagh", "Pedro Contreras" ], "title": "Algorithms for hierarchical clustering: An overview", "venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery,", "year": 2012 }, { "authors": [ "Radford M. Neal" ], "title": "Annealed importance sampling", "venue": "Statistics and Computing,", "year": 2001 }, { "authors": [ "Fabian Pedregosa", "Gael Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg", "Jake Vanderplas", "Alexandre Passos", "David Cournapeau", "Matthieu Brucher", "Matthieu Perrot", "Édouard Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "L. Personnaz", "I. Guyon", "G. Dreyfus" ], "title": "Collective computational properties of neural networks: New learning mechanisms", "venue": "Physical Review A,", "year": 1986 }, { "authors": [ "Ruslan Salakhutdinov", "Geoffrey Hinton" ], "title": "Deep Boltzmann machines", "venue": "In Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Ruslan Salakhutdinov", "Andriy Mnih", "Geoffrey Hinton" ], "title": "Restricted boltzmann machines for collaborative filtering", "venue": "In Proceedings of the 24th International Conference on Machine Learning,", "year": 2007 }, { "authors": [ "James R. Schott", "G.W. Stewart" ], "title": "Matrix Algorithms, Volume 1: Basic Decompositions", "venue": "Journal of the American Statistical Association,", "year": 1999 }, { "authors": [ "D RBM TO HN DETAILS D" ], "title": "INTEGRATING OUT THE HIDDEN VARIABLES The explanation from Mehta et al. (2019) for integrating out the hidden variables of an RBM is presented here for completeness. For a given binary-gaussian RBM defined by HRBM(s,λ", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Hopfield networks (HNs) (Hopfield, 1982; Amit, 1989) are a classical neural network architecture that can store prescribed patterns as fixed-point attractors of a dynamical system. In their standard formulation with binary valued units, HNs can be regarded as spin glasses with pairwise interactions Jij that are fully determined by the patterns to be encoded. HNs have been extensively studied in the statistical mechanics literature (e.g. (Kanter & Sompolinsky, 1987; Amit et al., 1985)), where they can be seen as an interpolation between the ferromagnetic Ising model (p = 1 pattern) and the Sherrington-Kirkpatrick spin glass model (many random patterns) (Kirkpatrick & Sherrington, 1978; Barra & Guerra, 2008). By encoding patterns as dynamical attractors which are robust to perturbations, HNs provide an elegant solution to pattern recognition and classification tasks. They are considered the prototypical attractor neural network, and are the historical precursor to modern recurrent neural networks.\nConcurrently, spin glasses have been used extensively in the historical machine learning literature where they comprise a sub-class of “Boltzmann machines” (BMs) (Ackley et al., 1985). Given a collection of data samples drawn from a data distribution, one is generally interested in “training” a BM by tuning its weights Jij such that its equilibrium distribution can reproduce the data distribution as closely as possible (Hinton, 2012). The resulting optimization problem is dramatically simplified when the network has a two-layer structure where each layer has no self-interactions, so that there are only inter-layer connections (Hinton, 2012) (see Fig. 1). This architecture is known as a Restricted Boltzmann Machine (RBM), and the two layers are sometimes called the visible layer and the hidden layer. The visible layer characteristics (dimension, type of units) are determined by the training data, whereas the hidden layer can have binary or continuous units and the dimension is chosen somewhat arbitrarily. In addition to generative modelling, RBMs and their multi-layer extensions have been used for a variety of learning tasks, such as classification, feature extraction, and dimension reduction (e.g. Salakhutdinov et al. (2007); Hinton & Salakhutdinov (2006)).\nThere has been extensive interest in the relationship between HNs and RBMs, as both are built on the Ising model formalism and fulfill similar roles, with the aim of better understanding RBM behaviour and potentially improving performance. Various results in this area have been recently reviewed (Marullo & Agliari, 2021). In particular, an exact mapping between HNs and RBMs has been previously noted for the special case of uncorrelated (orthogonal) patterns (Barra et al., 2012). Several related models have since been studied (Agliari et al., 2013; Mézard, 2017), which partially relax the uncorrelated pattern constraint. However, the patterns observed in most real datasets exhibit significant correlations, precluding the use of these approaches.\nIn this paper, we demonstrate exact correspondence between HNs and RBMs in the case of correlated pattern HNs. Specifically, we show that any HN with N binary units and p < N arbitrary (i.e. non-orthogonal) binary patterns encoded via the projection rule (Kanter & Sompolinsky, 1987; Personnaz et al., 1986), can be transformed into an RBM with N binary and p gaussian variables. We then characterize when the reverse map from RBMs to HNs can be made. 
We consider a practical example using the mapping, and discuss the potential importance of this correspondence for the training and interpretability of RBMs." }, { "heading": "2 RESULTS", "text": "We first introduce the classical solution to the problem of encoding N-dimensional binary {−1,+1} vectors {\xi^\mu}_{\mu=1}^p, termed “patterns”, as global minima of a pairwise spin glass H(s) = -\frac{1}{2} s^T J s. This is often framed as a pattern retrieval problem, where the goal is to specify or learn J_{ij} such that an energy-decreasing update rule for H(s) converges to the patterns (i.e. they are stable fixed points). Consider the N × p matrix \xi with the p patterns as its columns. Then the classical prescription known as the projection rule (or pseudo-inverse rule) (Kanter & Sompolinsky, 1987; Personnaz et al., 1986), J = \xi (\xi^T \xi)^{-1} \xi^T, guarantees that the p patterns will be global minima of H(s). The resulting spin model is commonly called a (projection) Hopfield network, and has the Hamiltonian

H(s) = -\frac{1}{2} s^T \xi (\xi^T \xi)^{-1} \xi^T s. \quad (1)

Note that the invertibility of \xi^T \xi is guaranteed as long as the patterns are linearly independent (we therefore require p ≤ N). Also note that in the special (rare) case of orthogonal patterns, \xi^\mu \cdot \xi^\nu = N \delta_{\mu\nu} (also called “uncorrelated”), studied in the previous work (Barra et al., 2012), one has \xi^T \xi = N I, and so the pseudo-inverse interactions reduce to the well-known Hebbian form J = \frac{1}{N} \xi \xi^T (the properties of which are studied extensively in Amit et al. (1985)). Additional details on the projection HN, Eq. (1), are provided in Appendix A. To make progress in analyzing Eq. (1), we first consider a transformation of \xi which eliminates the inverse factor." }, { "heading": "2.1 MAPPING A HOPFIELD NETWORK TO A RESTRICTED BOLTZMANN MACHINE", "text": "In order to obtain a more useful representation of the quadratic form Eq. (1) (for our purposes), we utilize the QR decomposition (Schott & Stewart, 1999) of \xi to “orthogonalize” the patterns,

\xi = QR, \quad (2)

with Q \in R^{N \times p}, R \in R^{p \times p}. The columns of Q are the orthogonalized patterns, and form an orthonormal basis (of non-binary vectors) for the p-dimensional subspace spanned by the binary patterns. R is upper triangular, and if its diagonals are held positive then Q and R are both unique (Schott & Stewart, 1999). Note that both the order and sign of the columns of \xi are irrelevant for HN pattern recall, so there are n = 2^p \cdot p! possible Q, R pairs. Fixing a pattern ordering, we can use the orthogonality of Q to re-write the interaction matrix as

J = \xi (\xi^T \xi)^{-1} \xi^T = QR (R^T R)^{-1} R^T Q^T = Q Q^T \quad (3)

(the last equality follows from (R^T R)^{-1} = R^{-1} (R^T)^{-1}). Eq. (3) resembles the simple Hebbian rule but with non-binary orthogonal patterns. Defining q \equiv Q^T s in analogy to the classical pattern overlap parameter m \equiv \frac{1}{N} \xi^T s (Amit et al., 1985), we have

H(s) = -\frac{1}{2} s^T Q Q^T s = -\frac{1}{2} q(s) \cdot q(s). \quad (4)

Using a Gaussian integral as in Amit et al. (1985); Barra et al. (2012); Mézard (2017) to transform (exactly) the partition function Z \equiv \sum_{\{s\}} e^{-\beta H(s)} of Eq. (1), we get

Z = \sum_{\{s\}} e^{\frac{1}{2} (\beta q)^T (\beta^{-1} I) (\beta q)} = \sum_{\{s\}} \int e^{-\frac{\beta}{2} \sum_\mu \lambda_\mu^2 + \beta \sum_\mu \lambda_\mu \sum_i Q_{i\mu} s_i} \prod_\mu \frac{d\lambda_\mu}{\sqrt{2\pi/\beta}}. \quad (5)

The second line can be seen as the partition function of an expanded Hamiltonian for the N (binary) original variables {s_i} and the p (continuous) auxiliary variables {\lambda_\mu}, i.e.

H_{RBM}(\{s_i\}, \{\lambda_\mu\}) = \frac{1}{2} \sum_\mu \lambda_\mu^2 - \sum_\mu \sum_i Q_{i\mu} s_i \lambda_\mu. \quad (6)

Note that this is the Hamiltonian of a binary-continuous RBM with inter-layer weights Q_{i\mu}. The original HN is therefore equivalent to an RBM described by Eq. (6) (depicted in Fig. 1).
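The mapping is easy to exercise numerically; the following is a minimal numpy sketch (ours) of Eqs. (2), (3), and the weight identification above. The sign-fixing step and the random test patterns are our illustrative choices.

import numpy as np

def hopfield_to_rbm_weights(xi):
    """HN -> RBM map: QR-decompose the N x p pattern matrix xi (entries in
    {-1, +1}), Eq. (2); the orthogonal factor Q serves as the inter-layer
    weights of the binary-gaussian RBM in Eq. (6)."""
    Q, R = np.linalg.qr(xi.astype(float))
    signs = np.sign(np.diag(R))
    return Q * signs, R * signs[:, None]  # enforce diag(R) > 0, making Q and R unique

# Example with p = 3 random patterns (linearly independent with high probability)
rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(100, 3))
Q, R = hopfield_to_rbm_weights(xi)
J = Q @ Q.T  # projection-rule couplings, Eq. (3)
assert np.allclose(J, xi @ np.linalg.inv(xi.T @ xi) @ xi.T)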
As mentioned above, there are many RBMs which correspond to the same HN due to the combinatorics of choosing Q. In fact, instead of the QR factorization one can use any decomposition which satisfies J = U U^T, with orthogonal U \in R^{N \times p} (see Appendix B), in which case U acts as the RBM weights. Also note that the inclusion of an applied field term -\sum_i b_i s_i in Eq. (1) trivially carries through the procedure, i.e.

\tilde{H}_{RBM}(\{s_i\}, \{\lambda_\mu\}) = \frac{1}{2} \sum_\mu \lambda_\mu^2 - \sum_i b_i s_i - \sum_\mu \sum_i Q_{i\mu} s_i \lambda_\mu.

Instead of working with the joint form Eq. (6), one could take a different direction from Eq. (5) and sum out the original variables {s_i}, i.e.

Z = \int e^{-\frac{\beta}{2} \sum_\mu \lambda_\mu^2} \, 2^N \prod_i \cosh\left(\beta \sum_\mu Q_{i\mu} \lambda_\mu\right) \prod_\mu \frac{d\lambda_\mu}{\sqrt{2\pi/\beta}}. \quad (7)

This continuous, p-dimensional representation is useful for numerical estimation of Z (Section 3.1). We may write Eq. (7) as Z = \int e^{-F_0(\lambda)} \prod_\mu d\lambda_\mu, where

F_0(\{\lambda_\mu\}) = \frac{1}{2} \sum_\mu \lambda_\mu^2 - \frac{1}{\beta} \sum_i \ln \cosh\left(\beta \sum_\mu Q_{i\mu} \lambda_\mu\right). \quad (8)

Eq. (8) is an approximate Lyapunov function for the mean dynamics of {\lambda_\mu}; \nabla_\lambda F_0 describes the effective behaviour of the stochastic dynamics of the N binary variables {s_i} at temperature \beta^{-1}." }, { "heading": "2.2 COMMENTS ON THE REVERSE MAPPING", "text": "With the mapping from HNs (with correlated patterns) to RBMs established, we now consider the reverse direction. Consider a binary-continuous RBM with inter-layer weights W_{i\mu} which couple a visible layer of N binary variables {s_i} to a hidden layer of p continuous variables {\lambda_\mu},

H(s, \lambda) = \frac{1}{2} \sum_\mu \lambda_\mu^2 - \sum_i b_i s_i - \sum_\mu \sum_i W_{i\mu} s_i \lambda_\mu. \quad (9)

Here we use W instead of Q for the RBM weights to emphasize that the RBM is not necessarily an HN. First, following Mehta et al. (2019), we transform the RBM to a BM with binary states by integrating out the hidden variables. The corresponding Hamiltonian for the visible units alone is (see Appendix D.1 for details)

\tilde{H}(s) = -\sum_i b_i s_i - \frac{1}{2} \sum_i \sum_j \sum_\mu W_{i\mu} W_{j\mu} s_i s_j, \quad (10)

a pairwise Ising model with a particular coupling structure J_{ij} = \sum_\mu W_{i\mu} W_{j\mu}, which in vector form is

J = \sum_\mu w_\mu w_\mu^T = W W^T, \quad (11)

where {w_\mu} are the p columns of W. In general, this Ising model Eq. (10) produced by integrating out the hidden variables need not have Hopfield structure (discussed below). However, it automatically does (as noted in Barra et al. (2012)) in the very special case where W_{i\mu} \in {−1,+1}. In that case, the binary patterns are simply {w_\mu}, so that Eq. (11) represents a Hopfield network with the Hebbian prescription. This situation is likely rare and may only arise as a by-product of constrained training; for a generically trained RBM the weights will not be binary. It is therefore interesting to clarify when and how real-valued RBM interactions W can be associated with HNs.
Approximate binary representation of W: In Section 2.1, we orthogonalized the binary matrix \xi via the QR decomposition \xi = QR, where Q is an orthogonal (but non-binary) matrix, which allowed us to map a projection HN (defined by its patterns \xi, Eq. (1)) to an RBM (defined by its inter-layer weights Q, Eq. (6)).
Here we consider the reverse map. Given a trained RBM with weights W \in R^{N \times p}, we look for an invertible transformation X \in R^{p \times p} which binarizes W. We make the mild assumption that W is of rank p. If we find such an X, then B = WX will be the Hopfield pattern matrix (analogous to \xi), with B_{i\mu} \in {−1,+1}. This is a non-trivial problem, and an exact solution is not guaranteed. As a first step to study the problem, we relax it to that of finding a matrix X \in GL_p(R) (i.e.
invertible, p × p, real) which minimizes the binarization error

\underset{X \in GL_p(\mathbb{R})}{\operatorname{argmin}} \; \| WX - \operatorname{sgn}(WX) \|_F. \quad (12)

We denote the approximately binary transformation of W via a particular solution X by

B_p = WX. \quad (13)

We also define the associated error matrix E \equiv B_p - \operatorname{sgn}(B_p). We stress that B_p is non-binary and approximates B \equiv \operatorname{sgn}(B_p), the columns of which will be HN patterns under certain conditions on E. We provide an initial characterization and example in Appendix D." }, { "heading": "3 EXPERIMENTS ON MNIST DATASET", "text": "Next we investigate whether the Hopfield-RBM correspondence can provide an advantage for training binary-gaussian RBMs. We consider the popular MNIST dataset of handwritten digits (LeCun et al., 1998), which consists of 28 × 28 images with greyscale pixel values from 0 to 255. We treat the sample images as N \equiv 784 dimensional binary vectors of {−1,+1} by setting all non-zero values to +1. The dataset includes M \equiv 60,000 training images and 10,000 testing images, as well as their class labels \mu \in {0, ..., 9}." }, { "heading": "3.1 GENERATIVE OBJECTIVE", "text": "The primary task for generative models such as RBMs is to reproduce a data distribution. Given a data distribution p_{data}, the generative objective is to train a model (here an RBM defined by its parameters \theta) such that the model distribution p_\theta is as close to p_{data} as possible. This is often quantified by the Kullback-Leibler (KL) divergence D_{KL}(p_{data} \| p_\theta) = \sum_s p_{data}(s) \ln\left(\frac{p_{data}(s)}{p_\theta(s)}\right). One generally does not have access to the actual data distribution; instead there is usually a representative training set S = \{s_a\}_{a=1}^M sampled from it. As the data distribution is constant with respect to \theta, the generative objective is equivalent to maximizing L(\theta) = \frac{1}{M} \sum_a \ln p_\theta(s_a)." }, { "heading": "3.1.1 HOPFIELD RBM SPECIFICATION", "text": "With labelled classes of training data, the specification of an RBM via a one-shot Hopfield rule (a “Hopfield RBM”) is straightforward. In the simplest approach, we define p = 10 representative patterns via the (binarized) class means

\xi^\mu \equiv \operatorname{sgn}\left(\frac{1}{|S_\mu|} \sum_{s \in S_\mu} s\right), \quad (14)

where \mu \in {0, ..., 9} and S_\mu is the set of sample images for class \mu. These patterns comprise the columns of the N × p pattern matrix \xi, which is then orthogonalized as in Eq. (2) to obtain the RBM weights W, which couple N binary visible units to p gaussian hidden units.
We also consider refining this approach by considering sub-classes within each class, representing, for example, the different ways one might draw a “7”. As a proof of principle, we split each digit class into k sub-patterns using hierarchical clustering. We found good results with Agglomerative clustering using Ward linkage and Euclidean distance (see Murtagh & Contreras (2012) for an overview of this and related methods). In this way, we can define a hierarchy of Hopfield RBMs. At one end, k = 1, we have our simplest RBM, which has p = 10 hidden units and encodes 10 patterns (using Eq. (14)), one for each digit class. At the other end, 10k/N → 1, we can specify increasingly refined RBMs that encode k sub-patterns for each of the 10 digit classes, for a total of p = 10k patterns and hidden units. This approach has an additional cost of identifying the sub-classes, but is still typically faster than training the RBM weights directly (discussed below).
The generative performance as a function of k and β is shown in Fig. 2, and increases monotonically with k in the range plotted.
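The one-shot specification (Eqs. (2) and (14)) amounts to a few lines of numpy. The sketch below is ours; in particular, the tie-breaking at sgn(0) is an assumption that Eq. (14) leaves open.

import numpy as np

def hopfield_rbm_weights(samples, labels, n_classes=10):
    """One-shot Hopfield RBM weights (k = 1): binarized class means (Eq. 14)
    orthogonalized by QR (Eq. 2). samples is an M x N array with entries in
    {-1, +1}; labels is a length-M integer array of class labels."""
    N = samples.shape[1]
    xi = np.zeros((N, n_classes))
    for mu in range(n_classes):
        xi[:, mu] = np.sign(samples[labels == mu].mean(axis=0))
    xi[xi == 0] = 1  # tie-break sgn(0); our assumption
    Q, _ = np.linalg.qr(xi)
    return Q  # N x p RBM weights coupling binary visible to gaussian hidden units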
If β is too high (very low temperature) the free energy basins will be very deep directly at the patterns, and so the model distribution will not capture the diversity of images from the data. If β is too low (high temperature), there is a “melting transition” where the original pattern basins disappear entirely, and the data will therefore be poorly modelled. Taking α = p/N ∼ 0.1 (roughly k = 8), Fig. 1 of Kanter & Sompolinsky (1987) predicts βm ≈ 1.5 for the theoretical melting transition for the pattern basins. Interestingly, this is quite close to our observed peak near β = 2. Note also as k is increased, the generative performance is sustained at lower temperatures.\nIn situations where one already has access to the class labels, this approach to obtain RBM weights is very fast. The class averaging has negligible computational cost O(MN) for the whole training\nset (M samples), and the QR decomposition has a modest complexity ofO(Np2) (Schott & Stewart, 1999). Conventional RBM training, discussed below, requires significantly more computation." }, { "heading": "3.1.2 CONVENTIONAL RBM TRAINING", "text": "RBM training is performed through gradient ascent on the log-likelihood of the data, L(θ) = 1 M ∑ a ln pθ(sa) (equivalent here to minimizing KL divergence, as mentioned above). We are focused here on the weights W in order to compare to the Hopfield RBM weights, and so we neglect the biases on both layers. As is common (Hinton, 2012), we approximate the total gradient by splitting the training dataset into “mini-batches”, denoted B. The resulting gradient ascent rule for the weights is (see Appendix E)\nW t+1iµ =W t iµ + η 〈∑ j Wjµs a i s a j 〉 a∈B − 〈siλµ〉model , (15) where 〈siλµ〉model ≡ Z−1 ∑ {s} ∫ siλµe −βH(s,λ)∏ µ dλµ is an average over the model distribution.\nThe first bracketed term of Eq. (15) is simple to calculate at each iteration of the weights. The second term, however, is intractable as it requires one to calculate the partition function Z. We instead approximate it using contrastive divergence (CD-K) (Carreira-Perpinan & Hinton, 2005; Hinton, 2012). See Appendix E for details. Each full step of RBM weight updates involves O(KBNp) operations (Melchior et al., 2017). Training generally involves many mini-batch iterations, such that the entire dataset is iterated over (one epoch) many times. In our experiments we train for 50 epochs with mini-batches of size 100 (3 · 105 weight updates), so the overall training time can be extensive compared to the one-shot Hopfield approach presented above. For further details on RBM training see e.g. Hinton (2012); Melchior et al. (2017).\nIn Fig. 3, we give an example of the Hopfield RBM weights (for k = 1), as well as how they evolve during conventional RBM training. Note Fig. 3(a), (b) appear qualitatively similar, suggesting that the proposed initializationQ from Eqs. (2), (14) may be near a local optimum of the objective.\nIn Fig. 4(a), (b), we compare conventional RBM training on four different weight initializations: (i) randomWiµ ∼ N (0, 0.01) (purple), commonly used in the literature; (ii) our proposed weights from the projection rule Hopfield mapping for correlated patterns (blue); (iii) the “Hebbian” Hopfield mapping described in previous work for uncorrelated patterns, W = N−1/2ξ (Barra et al., 2012) (green); and (iv) the top p PCA components of the training data (pink). In Fig. 
4(c), (d) we compare generated sample images from two RBMs, each with p = 50 hidden units but different initial weights (random in (c) and the HN mapping in (d)). The quality of samples in Fig. 4(d) reflect the efficient training of the Hopfield initialized RBM.\nFig. 4(a), (b) show that the Hopfield initialized weights provide an advantage over other approaches during the early stages of training. The PCA and Hebbian initializations start at much lower values of the objective and require one or more epochs of training to perform similarly (Fig. 4(a) inset), while the randomly initialized RBM takes > 25 epochs to reach a similar level. All initializations ultimately reach the same value. This is noteworthy because the proposed weight initialization is fast compared to conventional RBM training. PCA performs best for intermediate training times.\nDespite being a common choice, the random initialization trains surprisingly slowly, taking roughly 40 epochs in Fig. 4(a), and in Fig. 4(b) we had to increase the basal learning rate η0 = 10−4 by a factor of 5 for the first 25 epochs due to slow training. The non-random initializations, by comparison, arrive at the same maximum value much sooner. The relatively small change over training for the Hopfield initialized weights supports the idea that they may be near a local optimum of the objective, and that conventional training may simply be mildly tuning them (Fig. 3).\nThat the HN initialization performs well at 0 epochs suggests that the p Hopfield patterns concisely summarize the dataset. This is intuitive, as the projection rule encodes the patterns (and nearby states) as high probability basins in the free energy landscape of Eq. (1). As the data itself is clustered near the patterns, these basins should model the true data distribution well. Overall, our results suggest that the HN correspondence provides a useful initialization for generative modelling with binary-gaussian RBMs, displaying excellent performance with minimal training." }, { "heading": "3.2 CLASSIFICATION OBJECTIVE", "text": "As with the generative objective, we find that the Hopfield initialization provides an advantage for classification tasks. Here we consider the closely related MNIST classification problem. The goal is to train a model on the MNIST Training dataset which accurately predicts the class of presented images. The key statistic is the number of misclassified images on the MNIST Testing dataset.\nWe found relatively poor classification results with the single (large) RBM architecture from the preceding Section 3.1. Instead, we use a minimal product-of-experts (PoE) architecture as described in Hinton (2002): the input data is first passed to 10 RBMs, one for each class µ. This “layer of RBMs” functions as a pre-processing layer which maps the high dimensional sample s to a feature vector f(s) ∈ R10. This feature vector is then passed to a logistic regression layer in order to predict the class of s. The RBM layer and the classification layer are trained separately.\nThe first step is to train the RBM layer to produce useful features for classification. As in Hinton (2002), each small RBM is trained to model the distribution of samples from a specific digit class µ. We use CD-20 generative training as in Section 3.1, with the caveat that each expert is trained solely on examples from their respective class. Each RBM connects N binary visible units to k gaussian hidden units, and becomes an “expert” at generating samples from one class. 
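For concreteness, the following is a minimal sketch of the CD-k estimate of the intractable model statistics in Eq. (15) for one such binary-gaussian expert, using the Gibbs-block conditionals stated in Appendix E. Biases are omitted for brevity; β, the shapes and the function name are illustrative assumptions.

import torch

def cd_k_statistics(s0, W, beta=1.0, k=20):
    # Approximate the model term <s_i lam_mu> in Eq. (15) with a CD-k
    # Gibbs chain for a binary-gaussian RBM with zero biases.
    # s0: (batch, N) data batch in {-1, +1}; W: (N, p) inter-layer weights.
    s = s0.float()
    for _ in range(k):
        # Hidden given visible: lam_mu ~ N(sum_i W_imu s_i, 1/beta).
        h = s @ W
        lam = h + torch.randn_like(h) / beta ** 0.5
        # Visible given hidden: p(s_i = +1 | lam) = 1 / (1 + exp(-2 beta x_i)).
        p = torch.sigmoid(2 * beta * (lam @ W.t()))
        s = torch.where(torch.rand_like(p) < p,
                        torch.ones_like(p), -torch.ones_like(p))
    lam = s @ W                        # conditional mean of hidden given final s
    return (s.t() @ lam) / s.shape[0]  # (N, p) estimate of <s_i lam_mu>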
To focus on the effect of interlayer weight initialization, we set the layer biases to 0.\nAfter generative training, each expert should have relatively high probability p(µ)θ (sa) for sample digits sa of the corresponding class µ, and lower probability for digits from other classes. This idea is used to define 10 features, one from each expert, based on the log-probability of a given sample\nunder each expert, ln p(µ)θ (sa) = −βH(µ)(sa)− lnZ(µ). Note that β and lnZ(µ) are constants with respect to the data and thus irrelevant for classification. For a binary-gaussian RBM, H(µ)(sa) has the simple form Eq. (10), so the features we use are\nf (µ)(s) = ∑ i ∑ j ∑ ν W (µ) iν W (µ) jν sisj = ||s TW (µ)||2. (16)\nWith the feature map defined, we then train a standard logistic regression classifier (using scikitlearn (Pedregosa et al., 2011)) on these features. In Fig. 5, we report the classification error on the MNIST Testing set of 10,000 images (held out during both generative and classification training). Note the size p = 10 of the feature vector is independent of the hidden dimension k of each RBM, so the classifier is very efficient.\nDespite this relatively simple approach, the PoE initialized using the orthogonalized Hopfield patterns (“Hopfield PoE”) performs fairly well (Fig. 5, blue curves), especially as the number of subpatterns is increased. We found that generative training beyond 50 epochs did not significantly improve performance for the projection HN or PCA. (in Fig. E.1, we train to 100 epochs and also display the aforementioned “Hebbian” initial condition, which performs much worse for classification). Intuitively, increasing the number of hidden units k increases classification performance independent of weight initialization (with sufficient training).\nFor k fixed, the Hopfield initialization provides a significant benefit to classification performance compared to the randomly initialized weights (purple curves). For few sub-patterns (circles k = 10 and squares k = 20), the Hopfield initialized models perform best without additional training and until 1 epoch, after which PCA (pink) performs better. When each RBM has k = 100 hidden features (triangles), the Hopfield and PCA PoE reach 3.0% training error, whereas the randomly initialized PoE reaches 4.5%. However, the Hopfield PoE performs much better than PCA with minimal training, and maintains its advantage until 10 epochs, after which they are similar. Interestingly, both the Hopfield and PCA initialized PoE with just k = 10 encoded patterns performs better than or equal to the k = 100 randomly initialized PoE at each epoch despite having an order of magnitude fewer trainable parameters. Finally, without any generative training (0 epochs), the k = 100 Hopfield PoE performs slightly better (4.4%) than the k = 100 randomly initialized PoE with 50 epochs of training." }, { "heading": "4 DISCUSSION", "text": "We have presented an explicit, exact mapping from projection rule Hopfield networks to Restricted Boltzmann Machines with binary visible units and gaussian hidden units. This provides a generalization of previous results which considered uncorrelated patterns (Barra et al., 2012), or special cases of correlated patterns (Agliari et al., 2013; Mézard, 2017). We provide an initial characterization of the reverse map from RBMs to HNs, along with a matrix-factorization approach to construct approximate associated HNs when the exact reverse map is not possible. 
Importantly, our HN to\nRBM mapping can be applied to correlated patterns such as those found in real world datasets. As a result, we are able to conduct experiments (Section 3) on the MNIST dataset which suggest the mapping provides several advantages.\nThe conversion of an HN to an equivalent RBM has practical utility: it trades simplicity of presentation for faster processing. The weight matrix of the RBM is potentially much smaller than the HN (Np elements instead ofN(N −1)/2). More importantly, proper sampling of stochastic trajectories in HNs requires asynchronous updates of the units, whereas RBM dynamics can be simulated in a parallelizable, layer-wise fashion. We also utilized the mapping to efficiently estimate the partition function of the Hopfield network (Fig. 2) by summing out the spins after representing it as an RBM.\nThis mapping also has another practical utility. When used as an RBM weight initialization, the HN correspondence enables efficient training of generative models (Section 3.1, Fig. 4). RBMs initialized with random weights and trained for a moderate amount of time perform worse than RBMs initialized to orthogonalized Hopfield patterns and not trained at all. Further, with mild training of just a few epochs, Hopfield RBMs outperform conventionally initialized RBMs trained several times longer. The revealed initialization also shows advantages over alternative non-random initializations (PCA and the “Hebbian” Hopfield mapping) during early training. By leveraging this advantage for generative tasks, we show that the correspondence can also be used to improve classification performance (Section 3.2, Fig. 5, Appendix E.3).\nOverall, the RBM initialization revealed by the mapping allows for smaller models which perform better despite shorter training time (for instance, using fewer hidden units to achieve similar classification performance). Reducing the size and training time of models is critical, as more realistic datasets (e.g. gene expression data from single-cell RNA sequencing) may require orders of magnitude more visible units. For generative modelling of such high dimensional data, our proposed weight initialization based on orthogonalized Hopfield patterns could be of practical use. Our theory and experiments are a proof-of-principle; if they can be extended to the large family of deep architectures which are built upon RBMs, such as deep belief networks (Hinton et al., 2006) and deep Boltzmann machines (Salakhutdinov & Hinton, 2009), it would be of great benefit. This will be explored in future work.\nMore broadly, exposing the relationship between RBMs and their representative HNs helps to address the infamous interpretability problem of machine learning which criticizes trained models as “black boxes”. HNs are relatively transparent models, where the role of the patterns as robust dynamical attractors is theoretically well-understood. We believe this correspondence, along with future work to further characterize the reverse map, will be especially fruitful for explaining the performance of deep architectures constructed from RBMs." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors thank Duncan Kirby and Jeremy Rothschild for helpful comments and discussions. This work is supported by the National Science and Engineering Research Council of Canada (NSERC) through Discovery Grant RGPIN 402591 to A.Z. and CGS-D Graduate Fellowship to M.S." 
}, { "heading": "A HOPFIELD NETWORK DETAILS", "text": "Consider p < N N -dimensional binary patterns {ξµ}pµ=1 that are to be “stored”. From them, construct the N ×p matrix ξ whose columns are the p patterns. If they are mutually orthogonal (e.g. randomly sampled patterns in the large N → ∞ limit), then choosing interactions according to the Hebbian rule, JHebb = 1N ξξ T , guarantees that they will all be stable minima of H(s) = − 12s TJs, provided α ≡ p/N < αc, where αc ≈ 0.14 (Amit et al., 1985). If they are not mutually orthogonal (referred to as correlated), then using the “projection rule” JProj = ξ(ξT ξ)−1ξT guarantees that they will all be stable minima of H(s), provided p < N (Kanter & Sompolinsky, 1987; Personnaz et al., 1986). Note JProj → JHebb in the limit of orthogonal patterns. In the main text, we use J as shorthand for JProj.\nWe provide some relevant notation from Kanter & Sompolinsky (1987). Define the overlap of a state s with the p patterns as m(s) ≡ 1N ξ\nTs, and define the projection of a state s onto the p patterns as a(s) ≡ (ξT ξ)−1ξTs ≡ A−1m(s). NoteA ≡ ξT ξ is the overlap matrix, andmµ,aµ ∈ [−1, 1]. We can re-write the projection rule Hamiltonian Eq. (1) as\nH(s) = −N2m(s) · a(s). (A.1)\nFor simplicity, we include the self-interactions rather than keeping track of their omission; the results are the same as N → ∞. From Eq. (A.1), several quadratic forms can be written depending on which variables one wants to work with:\nI. H(s) = −N 2 2 m T (ξT ξ)−1m\nII. H(s) = − 12a T (ξT ξ)a\nThese are the starting points for the alternative Boltzmann Machines (i.e. not RBMs) presented in Appendix C." }, { "heading": "B ADDITIONAL HN TO RBM MAPPINGS", "text": "We used QR factorization in the main text to establish the HN to RBM mapping. However, one can use any decomposition which satisfies\nJProj = UU T (B.1)\nsuch thatU ∈ RN×p is orthogonal (orthogonal for tall matrices meansUTU = I). In that case, U becomes the RBM weights. We provide two simple alternatives below, and show they are all part of the same family of orthogonal decompositions.\n“Square root” decomposition: Define the matrixK ≡ ξ(ξT ξ)−1/2. Note thatK is orthogonal, and that JProj =KKT .\nSingular value decomposition: More generally, consider the SVD of the pattern matrix ξ:\nξ = UΣV T (B.2)\nwhere U ∈ RN×p, V ∈ Rp×p store the left and right singular vectors (respectively) of ξ as orthogonal columns, and Σ ∈ Rp×p is diagonal and contains the singular values of ξ. Note in the limit of orthogonal patterns, we have Σ = √ NI . This decomposition gives several relations for quantities of interest: A ≡ ξT ξ = V Σ2V T\nJHebb ≡ 1\nN ξξT =\n1\nN UΣ2UT\nJProj ≡ ξ(ξT ξ)−1ξT = UUT .\n(B.3)\nThe last line is simply the diagonalization of JProj, and shows that our RBM mapping is preserved if we swap the Q from QR with U from SVD. However, since there are p degenerate eigenvalues σ2 = 1,U is not unique - any orthogonal basis for the 1-eigenspace can be chosen. ThusU ′ = UO where O is orthogonal is also valid, and the QR decomposition and “square root” decomposition correspond to particular choices ofO." }, { "heading": "C HN TO BM MAPPINGS USING ALTERNATIVE REPRESENTATIONS OF THE HOPFIELD HAMILTONIAN", "text": "In addition to the orthogonalized representation in the main text, there are two natural representations to consider based on the pattern overlaps and projections introduced in Appendix A. These lead to generalized Boltzmann Machines (BMs) consisting of N original binary spins and p continuous variables. 
These representations are not RBMs as the continuous variables interact with each other. We present them for completeness.\n“Overlap” Boltzmann Machine: Writing H(s) = −N 2 2 m T (ξT ξ)−1m, we have\nZ = ∑ {s} exp\n( 1\n2 (β √ Nm)T\n( β\nN ξT ξ\n)−1 (β √ Nm) ) . (C.1)\nApplying the gaussian integral identity, Z = √ det (ξT ξ) ∑ {s} ∫ e − β2N ∑ µ,ν(ξ T ξ)µνλµλν+ β√ N ∑ µ λµ ∑ i ξiµsi ∏ µ dλµ√ 2πN/β , (C.2)\nfrom which we identify the BM Hamiltonian\nH(s,λ) = 1\n2N ∑ µ,ν (ξT ξ)µνλµλν − 1√ N ∑ µ ∑ i ξiµsiλµ. (C.3)\nThis is the analog of Eq. (6) in the main text for the “overlap” representation. Note we can also sum out the binary variables in Eq. (C.2), which allows for an analogous expression to Eq. (8),\nF0({λµ}) = 1\n2N ∑ µ,ν (ξT ξ)µνλµλν − 1 β ∑ i ln cosh ( β√ N ∑ µ ξiµλµ ) . (C.4)\nCuriously, we note that we may perform a second Gaussian integral on Eq. (C.2), introducing new auxiliary variables τν to remove the interactions between the λµ variables:\nZ = √ det ( β2\nN ξT ξ )∑ {s} ∫ ∫ e − β2 τ T τ+ β√ N λT ξT s+i β√ N λT (ξT ξ)1/2τ ∏ ν dτν√ 2π ∏ µ dλµ√ 2π . (C.5)\nEq. (C.5) describes a three-layer RBM with complex interactions between the λµ and τν variables, a representation which could be useful in some contexts.\n“Projection” Boltzmann Machine: Proceeding as above but for H(s) = − 12a T (ξT ξ)a, one finds\nZ = det (ξT ξ) −1/2∑\n{s}\n∫ e− β 2 λ T (ξT ξ)−1λ+βλT (ξT ξ)−1ξT s ∏ µ dλµ√ 2π/β , (C.6)\nwhich corresponds to the BM Hamiltonian\nH(s,λ) = 1\n2 λT (ξT ξ)−1λ− λT (ξT ξ)−1ξTs. (C.7)\nThe analogous expression to Eq. (8) in this case is\nF0(λ) = 1 2 λT (ξT ξ)−1λ− 1 β ∑ i ln cosh ( β[ξ(ξT ξ)−1λ]i ) . (C.8)" }, { "heading": "D RBM TO HN DETAILS", "text": "D.1 INTEGRATING OUT THE HIDDEN VARIABLES\nThe explanation from Mehta et al. (2019) for integrating out the hidden variables of an RBM is presented here for completeness. For a given binary-gaussian RBM defined by HRBM(s,λ) (as in Eq. (9)), we have p(s) = Z−1 ∫ e−βHRBM(s,λ)dλ. Consider also that p(s) = Z−1e−βH(s) for some unknown H(s). Equating these expressions gives,\nH(s) = − ∑ i bisi − 1 β ∑ µ ln (∫ e−β 1 2λ 2 µ+β ∑ iWiµsiλµdλµ ) . (D.1)\nDecompose the argument of ln(·) in Eq. (D.1) by defining qµ(λµ), a gaussian with zero mean and variance β−1. Writing tµ = β ∑ iWiµsi, one observes that the second term (up to a constant) is a sum of cumulant generating functions, i.e.\nKµ(tµ) ≡ ln 〈etµλµ〉qµ = ln (∫ qµe tµλµdλµ ) . (D.2)\nThese can be written as a cumulant expansion, Kµ(tµ) = ∑ n=1 κ (n) µ tnµ n! , where κ (n) µ = ∂Kµ ∂tµ |tµ=0 is the nth cumulant of qµ. However, since qµ(λµ) is gaussian, only the second term remains, leaving\nKµ(tµ) = 1 β\n(β ∑ iWiµsi) 2\n2 . Putting this all together, we have\nH(s) = − ∑ i bisi − 1 2 ∑ i ∑ j ∑ µ WiµWjµsisj , (D.3)\nNote that in general, qµ(λµ) need not be gaussian, in which case the resultant Hamiltonian H(s) can have higher order interactions (expressed via the cumulant expansion).\nD.2 APPROXIMATE REVERSE MAPPING\nSuppose one has a trial solutionBp =WX to the approximate binarization problem Eq. (12), with error matrix E ≡ Bp − sgn(Bp). We consider two cases depending on ifW is orthogonal.\nCase 1: If W is orthogonal, then starting from Eq. (11) and applying Eq. (13), we have J = WW T = Bp(X TX)−1BTp . Using I =W TW , we get\nJ = Bp(B T p Bp) −1BTp . (D.4)\nThus, the interactions between the visible units is the familiar projection rule used to store the approximately binary patternsBp. “Storage” means the patterns are stable fixed points of the deterministic update rule st+1 ≡ sgn(Jst). 
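As a quick, self-contained numerical illustration of this storage criterion (synthetic random patterns; sizes and names are ours):

import torch
torch.manual_seed(0)
N, p = 64, 8
xi = torch.sign(torch.rand(N, p) - 0.5)       # random binary patterns
J = xi @ torch.inverse(xi.t() @ xi) @ xi.t()  # projection rule
# J xi = xi exactly, so each pattern is a fixed point of s -> sgn(J s).
assert torch.allclose(torch.sign(J @ xi), xi)

For exactly binary patterns the check passes trivially, since Jξ = ξ; the question here is whether the same holds for the candidate patterns B = sgn(Bp).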
We cannot initialize the network to a non-binary state. Therefore, the columns ofB = sgn(Bp) are our candidate patterns. To test if they are fixed points, consider\nsgn(JB) = sgn(JBp − JE) = sgn(Bp − JE).\nWe need the errorE to be such that JE will not alter the sign ofBp. Two sufficient conditions are:\n(a) small error: |(JE)iµ| < |(Bp)iµ|, or (b) error with compatible sign: (JE)iµ(Bp)iµ < 0.\nWhen either of these conditions hold for each element, we have sgn(JB) = sgn(Bp) = B, so that the candidate patternsB are fixed points. It remains to show that they are also stable (i.e. minima).\nCase 2: If W is not orthogonal but its singular values remain close to one, then Löwdin Orthogonalization (also known as Symmetric Orthogonalization) (Löwdin, 1970) provides a way to preserve the HN mapping from Case 1 above.\nConsider the SVD of W : W = UΣV T . The closest matrix to W (in terms of Frobenius norm) with orthogonal columns is L = UV T , and the approximation W ≈ L is called the Löwdin Orthogonalization of W . Note the approximation becomes exact when all the singular values are one. We then writeWW T ≈ UUT , and the orthogonalW approach of Case 1 can then be applied. On the other hand, W may be strongly not orthogonal (singular values far from one). If it is still full rank, then its pseudo-inverse W † = (W TW )−1W T = V Σ−1UT is well-defined. Repeating the steps from the orthogonal case, we note here XTX = BTp (W\n†)TW †Bp. Defining C ≡ (W †)TW † = UΣ−2UT , we arrive at the corresponding result,\nJ = Bp(B T p CBp) −1BTp . (D.5)\nThis is analogous to the projection rule but with a “correction factor” C. However, it is not immediately clear how C affects pattern storage. Given the resemblance between JProj and Eq. (D.4) (relative to Eq. (D.5)), we expect that RBMs trained with an orthogonality constraint on the weights may be more readily mapped to HNs.\nD.3 EXAMPLE OF THE APPROXIMATE REVERSE MAPPING\nIn the main text we introduced the approximate binarization problem Eq. (12), the solutions of which provide approximately binary patterns through Eq. (13). To numerically solve Eq. (12) and obtain a candidate solutionX∗, we perform gradient descent on a differentiable variant.\nSpecifically, we replace sgn(u) with tanh(αu) for large α. Define E = WX − tanh(αWX) as in the main text. Then the derivative of the “softened” Eq. (12) with respect toX is\nG(X) = 2W T (E − αE sech2(WX))). (D.6)\nGiven an initial conditionX0, we apply the update rule\nXt+1 =Xt − γG(Xt) (D.7)\nuntil convergence to a local minimumX∗.\nIn the absence of prior information, we consider randomly initialized X0. Our preliminary experiments using Eq. (D.7) to binarize arbitrary RBM weightsW have generally led to high binarization error. This is due in part to the difficulty in choosing a good initial condition X0, which will be explored in future work.\nTo avoid this issue, we consider the case of a Hopfield-initialized RBM following CD-k training. At epoch 0, we in fact have the exact binarization solution X∗ = R (from the QR decomposition, Eq. (2)), which recovers the encoded binary patterns ξ. We may use X0 = R, as an informed initial condition to Eq. (D.7), to approximately binarize the weight at later epochs and monitor how the learned patterns change.\nIn Fig. D.1, we give an example of this approximate reverse mapping for a Hopfield-initialized RBM following generative training (Fig. 4). Fig. D.1(a) shows the p = 10 encoded binary patterns, denoted below by ξ0, and Fig. 
D.1(b) shows the approximate reverse mapping applied to the RBM weights at epoch 10. We denote these nearly binary patterns by ξ10. Interestingly, some of the non-binary regions in Fig. D.1(b) coincide with features that distinguish the respective pattern. For example, the strongly “off” area in the top-right of the “six” pattern.\nWe also considered the associative memory performance of the projection HNs built from the the patterns ξ0, ξ10 from Fig. D.1. Specifically, we test the extent to which images from the MNIST Test dataset are attracted to the correct patterns. Ideally, each pattern should attract all test images of the corresponding class. The results are displayed in Fig. D.2 and elaborated in the caption.\nThe results in Fig. D.2 suggest that p = 10 patterns may be too crude for the associative memory network to be used as an accurate MNIST classifier (as compared to e.g. Fig. 5). Notably, the HN constructed from ξ10 performs about 3% worse than the HN constructed from ξ0, although this performance might improve with a more sophisticated method for the associative memory task. There may therefore be a cost, in terms of associative memory performance, to increasing the generative functionality of such models (Fig. 4). Our results from Appendix D.2 indicate that incorporating an orthogonality constraint on the weights during CD-k generative training may provide a way to preserve or increase the associative memory functionality. This will be explored in future work." }, { "heading": "E RBM TRAINING", "text": "Consider a general binary-gaussian RBM with N visible and p hidden units. The energy function is\nH(s,λ) = 1\n2 ∑ µ (λµ − cµ)2 − ∑ i bisi − ∑ µ ∑ i Wiµsiλµ. (E.1)\nFirst, we note the Gibbs-block update distributions for sampling one layer of a binary-gaussian RBM given the other (see e.g. Melchior et al. (2017)),\nVisible units: p(si = 1|λ) = 11+e−2βxi , where xi ≡ ∑ µWiµλµ + bi defines input to si,\nHidden units: p(λµ = λ|s) ∼ N (hµ, β−1), where hµ ≡ ∑ iWiµsi + cµ defines the input to λµ.\nE.1 GENERATIVE TRAINING\nFor completeness, we re-derive the binary-gaussian RBM weight updates, along the lines of Melchior et al. (2017). We want to maximize L ≡ 1M ∑ a ln pθ(sa). The contribution for a single\ndatapoint sa has the form ln pθ(sa) = ln(C−1 ∫ e−βH(sa,λ)dλ)− lnZ with C ≡ (2π/β)p/2. The gradient with respect to the model is\n∂ ln pθ(sa)\n∂θ =\n∫ (−β ∂H(sa,λ)∂θ )e −βH(sa,λ) ∏ µ dλµ∫\ne−βH(sa,λ) ∏ µ dλµ\n− β ∑ {s} ∫ siλµe −βH(sa,λ) ∏ µ dλµ\nZ (E.2)\nWe are focused on the interlayer weights, with ∂H(s,λ)∂Wiµ = −siλµ, so\n∂ ln pθ(sa)\n∂θ = β(sa)i\n∫ (λµe −βH(sa,λ) ∏ µ dλµ∫\ne−βH(sa,λ) ∏ µ dλµ\n− β ∑ {s} ∫ siλµe −βH(sa,λ) ∏ µ dλµ\nZ = β(sa)i〈λµ|s = sa〉model − β〈siλµ〉model. (E.3)\nThe first term is straightforward to compute: 〈λµ|s = sa〉model = ∑ iWiµ(sa)i + cµ. The second term is intractable and needs to be approximated. We use contrastive divergence (Carreira-Perpinan & Hinton, 2005; Hinton, 2012): 〈siλµ〉model ≈ s(k)i λ (k) µ . Here k denotes CD-k – k steps of Gibbsblock updates (introduced above) – from which s(k)i , λ (k) µ comprise the final state. We evaluate both terms over mini-batches of the training data to arrive at the weight update rule Eq. (15).\nE.2 GENERATIVE PERFORMANCE We are interested in estimating the objective function L ≡ 1M ∑ a ln pθ(sa) during training. As above, we split L into two terms,\nL = ln ( C−1 ∫ e−βH(sa,λ)\n∏ µ dλµ\n) − lnZ, (E.4)\nwith C ≡ (2π/β)p/2. 
The first term evaluates to
$$\frac{\beta}{M}\sum_{a=1}^{M}\Big(b^T s_a + \frac{1}{2}\,\|c + W^T s_a\|^2\Big) - \frac{\beta}{2}\|c\|^2, \qquad \text{(E.5)}$$
which can be computed deterministically. lnZ, on the other hand, needs to be estimated. For this we perform Annealed Importance Sampling (AIS) (Neal, 2001) on its continuous representation
$$Z = C^{-1}\,2^N \int e^{-\frac{\beta}{2}\sum_\mu(\lambda_\mu - c_\mu)^2 + \sum_i \ln\cosh\big(\beta(\sum_\mu W_{i\mu}\lambda_\mu + b_i)\big)} \prod_\mu d\lambda_\mu. \qquad \text{(E.6)}$$
For AIS we need to specify the target distribution's un-normalized log-probabilities
$$\ln\big(Z\,p(\lambda)\big) = N\ln 2 - \frac{p}{2}\ln\Big(\frac{2\pi}{\beta}\Big) - \frac{\beta}{2}\sum_\mu(\lambda_\mu - c_\mu)^2 + \sum_i \ln\cosh\Big(\beta\Big(\sum_\mu W_{i\mu}\lambda_\mu + b_i\Big)\Big), \qquad \text{(E.7)}$$
as well as an initial proposal distribution to anneal from, which we fix as a p-dimensional unit normal distribution N(0, I).

E.3 CLASSIFICATION PERFORMANCE

Here we provide extended data (Fig. E.1) on the classification performance shown in Fig. 5. As in the main text, color denotes initial condition (introduced in Section 3.1) and shape denotes the number of sub-patterns. Here we train each model for 100 epochs, which allows convergence of most of the curves. We also include the “Hebbian” initialization (green curves).

Figure E.1: Product-of-experts classification performance (extended data).

Notably, the Hebbian initialization performs quite poorly in the classification task (as compared to the direct generative objective, Fig. 4). In particular, for the 100 sub-patterns case, where the projection HN performs best, the Hebbian curve trains very slowly (still not converged after 100 epochs) and lags behind even the 10 sub-pattern Hebbian curve for most of the training. This emphasizes the benefits of the projection rule HN over the Hebbian HN when the data is composed of correlated patterns, which applies to most real-world datasets." } ]
2021
null
SP:3d705a1b70254d2b9d05277efff8ac08b0539086
[ "The authors present a way to learn the action of an arbitrary orthogonal matrix on a vector via a map from $\\mathbb{R}^{n\\times n}$ onto $\\operatorname{O}(n)$. They show that the map is surjective, and give conditions under which they can invert this action. They then compare against previous proposed schemes in one task and show the performance of their models in other two." ]
Orthogonal weight matrices are used in many areas of deep learning. Much previous work attempts to alleviate the additional computational resources required to constrain weight matrices to be orthogonal. One popular approach utilizes many Householder reflections. The only practical drawback is that many reflections cause low GPU utilization. We mitigate this drawback by proving that one reflection is sufficient, if the reflection is computed by an auxiliary neural network.
[]
[ { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary Evolution Recurrent Neural Networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Nitin Bansal", "Xiaohan Chen", "Zhangyang Wang" ], "title": "Can We Gain More From Orthogonality Regularizations in Training Deep Networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Rianne van den Berg", "Leonard Hasenclever", "Jakub M Tomczak", "Max Welling" ], "title": "Sylvester Normalizing Flows for Variational Inference", "venue": "Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2018 }, { "authors": [ "Mario Lezcano Casado" ], "title": "Trivializations for Gradient-Based Optimization on Manifolds", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: non-linear independent components estimation", "venue": "In ICLR, Workshop Proceedings,", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density Estimation using Real NVP", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Nitish Srivastava", "Kevin Swersky" ], "title": "RMSProp) Neural Networks for Machine Learning Lecture 6a: Overview of Mini-Batch Gradient Descent", "venue": null, "year": 2012 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": null, "year": 2019 }, { "authors": [ "Emiel Hoogeboom", "Rianne Van Den Berg", "Max Welling" ], "title": "Emerging Convolutions for Generative Normalizing Flows", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative Flow with Invertible 1x1 Convolutions", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Steven G Krantz", "Harold R Parks" ], "title": "The Implicit Function Theorem: History, Theory, and Applications", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": null, "year": 2009 }, { "authors": [ "John M Lee" ], "title": "Smooth Manifolds. 
In Introduction to Smooth Manifolds", "venue": null, "year": 2013 }, { "authors": [ "Mario Lezcano-Casado", "David Martı́nez-Rubio" ], "title": "Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group", "venue": null, "year": 2019 }, { "authors": [ "Valerii Likhosherstov", "Jared Davis", "Krzysztof Choromanski", "Adrian Weller" ], "title": "CWY Parametrization for Scalable Learning of Orthogonal and Stiefel Matrices", "venue": "arXiv preprint arXiv:2004.08675,", "year": 2020 }, { "authors": [ "Alexander Mathiasen", "Frederik Hvilshøj", "Jakob Rødsgaard Jørgensen", "Anshul Nasery", "Davide Mottin" ], "title": "Faster Orthogonal Parameterization with Householder Matrices", "venue": "In ICML, Workshop Proceedings,", "year": 2020 }, { "authors": [ "Zakaria Mhammedi", "Andrew Hellicar", "Ashfaqur Rahman", "James Bailey" ], "title": "Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections", "venue": null, "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral Normalization for Generative Adversarial Networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic Differentiation in PyTorch", "venue": null, "year": 2017 }, { "authors": [ "Kaare Brandt Petersen", "Michael Syskind Pedersen" ], "title": "Technical University of Denmark, Version 20121115", "venue": "The Matrix Cookbook,", "year": 2012 }, { "authors": [ "Jakub M Tomczak", "Max Welling" ], "title": "Improving Variational Auto-Encoders using Householder Flow", "venue": "arXiv preprint arXiv:1611.09630,", "year": 2016 }, { "authors": [ "Jiong Zhang", "Qi Lei", "Inderjit Dhillon" ], "title": "Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization", "venue": "In ICML,", "year": 2018 } ]
[ { "heading": null, "text": "Orthogonal weight matrices are used in many areas of deep learning. Much previous work attempt to alleviate the additional computational resources it requires to constrain weight matrices to be orthogonal. One popular approach utilizes many Householder reflections. The only practical drawback is that many reflections cause low GPU utilization. We mitigate this final drawback by proving that one reflection is sufficient, if the reflection is computed by an auxiliary neural network." }, { "heading": "1 INTRODUCTION", "text": "Orthogonal matrices have shown several benefits in deep learning, with successful applications in Recurrent Neural Networks, Convolutional Neural Networks and Normalizing Flows. One popular approach can represent any d × d orthogonal matrix using d Householder reflections (Mhammedi et al., 2017). The only practical drawback is low GPU utilization, which happens because the d reflections needs to be evaluated sequentially (Mathiasen et al., 2020). Previous work often increases GPU utilization by using k d reflections (Tomczak & Welling, 2016; Mhammedi et al., 2017; Zhang et al., 2018; Berg et al., 2018). Using fewer reflections limits the orthogonal transformations the reflections can represent, yielding a trade-off between representational power and computation time. This raises an intriguing question: can we circumvent the trade-off and attain full representational power without sacrificing computation time?\nWe answer this question with a surprising “yes.” The key idea is to use an auxiliary neural network to compute a different reflection for each input. In theory, we prove that one such “auxiliary reflection” can represent any number of normal reflections. In practice, we demonstrate that one auxiliary reflection attains similar validation error to models with d normal reflections, when training Fully Connected Neural Networks (Figure 1 left), Recurrent Neural Networks (Figure 1 center) and convolutions in Normalizing Flows (Figure 1 right). Notably, auxiliary reflections train between 2 and 6 times faster for Fully Connected Neural Networks with orthogonal weight matrices (see Section 3)." }, { "heading": "1.1 OUR RESULTS", "text": "The Householder reflection of x ∈ Rd around v ∈ Rd can be represented by a matrixH(v) ∈ Rd×d.\nH(v)x = ( I − 2 vv T\n||v||2\n) x.\nAn auxiliary reflection uses a Householder matrix H(v) with v = n(x) for a neural network n.\nf(x) = H(n(x))x = ( I − 2n(x)n(x) T\n||n(x)||2\n) x.\nOne auxiliary reflection can represent any composition of Householder reflections. We prove this claim even when we restrict the neural network n(x) to have a single linear layer n(x) = Wx for W ∈ Rd×d such that f(x) = H(Wx)x. Theorem 1. For any k Householder reflections U = H(v1) · · ·H(vk) there exists a neural network n(x) = Wx with W ∈ Rd×d such that f(x) = H(Wx)x = Ux for all x ∈ Rd\\{0}.\nPrevious work (Mhammedi et al., 2017; Zhang et al., 2018) often employ k d reflections and compute Ux as k sequential Householder reflectionsH(v1) · · ·H(vk)·xwith weights V = (v1 · · · vk). It is the evaluation of these sequential Householder reflection that cause low GPU utilization (Mathiasen et al., 2020), so lower values of k increase GPU utilization but decrease representational power. 
Theorem 1 states that it is sufficient to evaluate a single auxiliary reflection H(Wx)x instead of k reflections H(v1) · · ·H(vk) · x, thereby gaining high GPU utilization while retaining the full representational power of any number of reflections.\nIn practice, we demonstrate that d reflections can be substituted with a single auxiliary reflection without decreasing validation error, when training Fully Connected Neural Networks (Section 3.1), Recurrent Neural Networks (Section 3.2) and Normalizing Flows (Section 3.3). While the use of auxiliary reflections is straightforward for Fully Connected Neural Networks and Recurrent Neural Networks, we needed additional ideas to support auxiliary reflections in Normalizing Flows. In particular, we developed further theory concerning the inverse and Jacobian of f(x) = H(Wx)x. Note that f is invertible if there exists a unique x given y = H(Wx)x and W .\nTheorem 2. Let f(x) = H(Wx)x with f(0) := 0, then f is invertible on Rd with d ≥ 2 if W = WT and has eigenvalues which satisfy 3/2 · λmin(W ) > λmax(W ).\nFinally, we present a matrix formula for the Jacobian of the auxiliary reflection f(x) = H(Wx)x. This matrix formula is used in our proof of Theorem 2, but it also allows us simplify the Jacobian determinant (Lemma 1) which is needed when training Normalizing Flows.\nTheorem 3. The Jacobian of f(x) = H(Wx)x is:\nJ = H(Wx)A− 2Wxx TW ||Wx||2 where A = I − 2x TWTx ||Wx||2 W.\nWe prove Theorem 1 in Appendix A.1.1 while Theorems 2 and 3 are proved in Section 2." }, { "heading": "2 NORMALIZING FLOWS", "text": "" }, { "heading": "2.1 BACKGROUND", "text": "Let z ∼ N(0, 1)d and f be an invertible neural network. Then f−1(z) ∼ Pmodel defines a model distribution for which we can compute likelihood of x ∼ Pdata (Dinh et al., 2015).\nlog pmodel(x) = log pz(f(x)) + log ∣∣∣∣det(∂f(x)∂x )∣∣∣∣ (1)\nThis allows us to train invertible neural network as generative models by maximum likelihood. Previous work demonstrate how to construct invertible neural networks and efficiently compute the log jacobian determinant (Dinh et al., 2017; Kingma & Dhariwal, 2018; Ho et al., 2019)." }, { "heading": "2.2 INVERTIBILITY AND JACOBIAN DETERMINANT (PROOF SKETCH)", "text": "To use auxiliary reflections in Normalizing Flows we need invertibility. That is, for every y ∈ Rd there must exist a unique x ∈ Rd so f(x) = H(Wx)x = y.1 We find that f is invertible if its Jacobian determinant is non-zero for all x in Sd−1 = {x ∈ Rd | ‖x‖ = 1}. Theorem 4. Let f(x) = H(Wx)x with f(0) := 0, then f is invertible on Rd with d ≥ 2 if the Jacobian determinant of f is non-zero for all x ∈ Sd−1 and W is invertible.\nThe Jacobian determinant of H(Wx)x takes the following form. Lemma 1. The Jacobian determinant of f(x) = H(Wx)x is:\n−det(A) ( 1 + 2 vTA−1u\n||u||2\n) where vT = xTW,u = Wx and A = I − 2x TWTx\n||Wx||2 W.\nIt is then sufficient that det(A) 6= 0 and 1 + 2vTA−1u/||u||2 6= 0. We prove that this happens if W = WT with eigenvalues 3/2 ·λmin(W ) > λmax(W ). This can be achieved with W = I+V V T if we guarantee σmax(V V T ) < 1/2 by spectral normalization (Miyato et al., 2018). Combining these results yields Theorem 2. Theorem 2. Let f(x) = H(Wx)x with f(0) := 0, then f is invertible on Rd with d ≥ 2 if W = WT and has eigenvalues which satisfy 3/2 · λmin(W ) > λmax(W ).\nComputing the Inverse. In practice, we use Newtons method to compute x so H(Wx)x = y. 
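A compact sketch of this inverse computation is given below; for brevity it uses autograd for the Jacobian (Appendix A.2.1 gives a version with the closed-form Jacobian of Theorem 3), and it assumes W satisfies the conditions of Theorem 2 so that the solution exists and is unique. The function name is ours.

import torch

def invert_reflection(y, W, steps=10):
    # Solve H(W x) x = y for x with Newton's method, initialized at x = y.
    def f(x):
        v = W @ x
        return x - 2 * v * (v.t() @ x) / (v.t() @ v)  # H(Wx) x
    x = y.clone()
    for _ in range(steps):
        J = torch.autograd.functional.jacobian(f, x)[:, 0, :, 0]
        x = x - torch.inverse(J) @ (f(x) - y)
    return x

Initializing at x = y and running ten steps mirrors the loop in Appendix A.2.1.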
Figure 2 show reconstructions n−1(n(x)) = x for an invertible neural network n with auxiliary reflections using Newtons method, see Appendix A.2.1 for details." }, { "heading": "2.3 PROOFS", "text": "The goal of this section is to prove that f(x) = H(Wx)x is invertible. Our proof strategy has two parts. Section 2.3.1 first shows f is invertible if it has non-zero Jacobian determinant. Section 2.3.2 then present an expression for the Jacobian determinant, Lemma 1, and prove the expression is non-zero if W = WT and 3/2 · λmin(W ) > λmin(W )." }, { "heading": "2.3.1 NON-ZERO JACOBIAN DETERMINANT IMPLIES INVERTIBILITY", "text": "In this section, we prove that f(x) = H(Wx)x is invertible on Rd if f has non-zero Jacobian determinant. To simplify matters, we first prove that invertibility on Sd−1 implies invertibility on Rd. Informally, invertibility on Sd−1 is sufficient because H(Wx) is scale invariant, i.e., H(c ·Wx) = H(Wx) for all c 6= 0. This is formalized by Lemma 2. Lemma 2. If f(x) = H(Wx)x is invertible on Sd−1 it is also invertible on Rd\\{0}.\nProof. Assume that f(x) is invertible on Sd−1. Pick any y′ ∈ Rd such that ||y′|| = c for any c > 0. Our goal is to compute x′ such that H(Wx′)x′ = y′. By normalizing, we see y′/‖y′‖ ∈ Sd−1. We can then use the inverse f−1 on y′/‖y′‖ to find x such that H(Wx)x = y′/‖y‖. The result is then x′ = x‖y‖ since H(Wx′)x′ = H(Wx)x||y|| = y due to scale invariance of H(Wx).\n1Note that we do not know H(Wx) so we cannot trivially compute x = H(Wx)−1y = H(Wx)y.\nThe main theorem we use to prove invertibiliy on Sd−1 is a variant of Hadamards global function inverse theorem from (Krantz & Parks, 2012). On a high-level, Hadamard’s theorem says that a function is invertible if it has non-zero Jacobian determinant and satisfies a few additional conditions. It turns out that these additional conditions are meet by any continuously differentiable function f(x) when (in the notation of Theorem 5) M1 = M2 = Sd−1. Theorem 5. (Krantz & Parks, 2012, 6.2.8) Let M1 and M2 be smooth, connected N -dimensional manifolds and let f : M1 → M2 be continuously differentiable. If (1) f is proper, (2) the Jacobian of f is non-zero, and (3) M2 is simple connected, then f is invertible.\nFor M1 = M2 = Sd−1 the additional conditions are met if f is continuously differentiable. Corollary 1. Let f : Sd−1 → Sd−1 with d ≥ 2 be continuously differentiable with non-zero Jacobian determinant, then f is invertible.\nProof. Note that Sd−1 is smooth and simply connected if d ≥ 2 (Lee, 2013). Continuously functions on Sd−1 are proper. We conclude f is invertible on Sd−1 by Theorem 5.\nWe now show that f(x) = H(Wx)x is continuously differentiable on Sd−1. Lemma 3. The function f(x) = H(Wx)x is continuously differentiable on Sd−1 ifW is invertible.\nProof. Compositions of continuously differentiable functions are continuously differentiable by the chain rule. All the functions used to construct H(Wx)x are continuously differentiable, except the division. However, the only case where division is not continously differentiable is when ||Wx|| = 0. Since W is invertible, ||Wx|| = 0 iff x = 0. But 0 /∈ Sd−1 and we conclude f is continuously differentiable on Sd−1.\nTheorem 4. Let f(x) = H(Wx)x with f(0) := 0, then f is invertible on Rd with d ≥ 2 if the Jacobian determinant of f is non-zero for all x ∈ Sd−1 and W is invertible.\nProof. 
By Lemma 3, we see f is continuously differentiable since W is invertible, which by Corollary 1 means f is invertible on Sd−1 if f has non-zero Jacobian determinant on Sd−1. By Lemma 2, we get that f is invertible on Rd if it has non-zero Jacobian on Sd−1." }, { "heading": "2.3.2 ENFORCING NON-ZERO JACOBIAN DETERMINANT", "text": "The goal of this section is to present conditions on W that ensures the Jacobian determinant of f(x) is non-zero for all x ∈ Sd−1. We first present a matrix formula for the Jacobian of f in Theorem 3. By using the matrix determinant lemma, we get a formula for the Jacobian determinant in Lemma 1. By investigating when this expression can be zero, we finally arive at Lemma 4 which states that the Jacobian determinant is non-zero (and f thus invertible) if W = WT and 3/2 · λmin > λmax. Theorem 3. The Jacobian of f(x) = H(Wx)x is:\nJ = H(Wx)A− 2Wxx TW ||Wx||2 where A = I − 2x TWTx ||Wx||2 W.\nSee Appendix A.2.2 for PyTorch implementation of J and a test case against PyTorch autograd.\nProof. The (i, j)’th entry of the Jacobian determinant is, by definition,\n∂(x− 2 · Wxx TWT x\n||Wx||2 )i\n∂xj = 1i=j − 2 ·\n∂(Wx)i · x TWT x ||Wx||2\n∂xj .\nThen, by the product rule, we get\n∂(Wx)i · x TWT x ||Wx||2\n∂xj = ∂(Wx)i ∂xj · x TWTx ||Wx||2 + (Wx)i ·\n∂ x TWT x ||Wx||2\n∂xj\n= Wij · xTWTx\n||Wx||2 + (Wx)i · ∂xTWTx · 1||Wx||2 ∂xj .\nThe remaining derivative can be found using the product rule.\n∂xTWTx · 1||Wx||2 ∂xj = ∂xTWTx ∂xj · 1 ||Wx||2 + xTWTx · ∂ 1||Wx||2 ∂xj .\nFirst, (Petersen & Pedersen, 2012) equation (81) gives ∂x TWT x ∂xj = ((WT + W )x)j . Second ||Wx||−2 can be found using the chain rule: ∂(||Wx||2)−1\n∂xj = ∂(||Wx||2)−1 ∂||Wx||2 ∂||Wx||2 ∂xj\n= − 1 ||Wx||4\n( ∂xTWTWx\n∂x\n) j\n= − 1 ||Wx||4 ((WTW + (WTW )T )x)j (Petersen & Pedersen, 2012, equ. 81)\n= − 1 ||Wx||4 2(WTWx)j .\nCombining everything we get Jij = 1i=j−2 [ xTWTx\n||Wx||2 ·Wij + (Wx)i\n( 1\n||Wx||2 · ((WT +W )x)j −\n2xTWTx\n||Wx||4 · (WTWx)j\n)] .\nIn matrix notation, this translates into the following, if we let A = I − 2 · x TWT x ||Wx||2 W .\nJ = I − 2 [ xTWTx\n||Wx||2 ·W +Wx\n( 1\n||Wx||2 · xT (W +WT )− 2x\nTWTx\n||Wx||4 · xTWTW )] = I − 2 · x TWTx\n||Wx||2 ·W − 2 · Wxx\nTW\n||Wx||2 − 2 · Wxx\nTWT\n||Wx||2\n( I − 2 · x TWTx\n||Wx||2 W ) = A− 2 · Wxx TW\n||Wx||2 − 2 · Wxx\nTWT\n||Wx||2 A\n= ( I − 2 · Wxx TWT\n||Wx||2\n) A− 2 · Wxx TW\n||Wx||2 = H(Wx)A− 2 · Wxx\nTW\n||Wx||2 .\nThis concludes the proof.\nTheorem 3 allows us to write J as a rank one update M + abT for a, b ∈ Rd, which can be used to simplify det(J) as stated in the following lemma. Lemma 1. The Jacobian determinant of f(x) = H(Wx)x is:\n− det(A) ( 1 + 2 vTA−1u\n||u||2\n) where vT = xTW,u = Wx and A = I − 2x TWTx\n||Wx||2 W.\nProof. The matrix determinant lemma allows us to write det(M + abT ) = det(M)(1 + bTM−1a). Let M = H(Wx)A and bT = −2 · xTW/||Wx||2 and a = Wx. The Jacobian J from Theorem 3 is then J = M + abT . The determinant of J is then:\ndet(J) = det(M)(1 + bTM−1a) = det(H(Wx) ·A) ( 1− 2x TW (H(Wx) ·A)−1Wx\n||Wx||2 ) = −det(A) ( 1 + 2 xTWA−1Wx\n||Wx||2\n) .\nThis is true because H(Wx)−1 = H(Wx), H(Wx) ·Wx = −Wx and det(H(Wx)) = −1.\nWe can now use Lemma 1 to investigate when the Jacobian determinant is non-zero. In particular, the Jacobian determinant must be non-zero if both det(A) 6= 0 and 1 + 2vTA−1u/||u||2 6= 0. In the following lemma, we prove that both are non-zero if W = WT and 3/2 · λmin > λmax.\nLemma 4. Let W = WT and 3/2 · λmin > λmax then λi(A−1) < −1/2 for A from Lemma 1. 
These conditions imply that det(A) 6= 0 and 1 + 2vTA−1u/||u||2 6= 0 with vT , u from Lemma 1\nProof. We first show that the inequality 3/2 · λmin(W ) > λmax(W ) implies λi(A−1) < −1/2.\nλi(A −1) =\n1\nλi(A) =\n1\n1− 2xTWT x||Wx||2 λi(W )\nIf γi := x TWT x ||Wx||2 ·λi(W ) ∈ (1/2, 3/2) we get that 1/(1−2γi) ∈ (−∞,−1/2) so λi(A −1) < −1/2. If we let y := Wx we get x TWT x ||Wx||2 = yTW−1y ||y||2 . This is the Rayleigh quotient of W\n−1 at y, which for W = WT is within [λmin(W−1), λmax(W−1)]. Therefore γi ∈ [ 1λmax(W ) , 1 λmin(W )\n] · λi(W ). Note first that γmin ≤ 1 and γmax ≥ 1. It is left to show that γmin ≥ λmin/λmax > 1/2 and γmax ≤ λmax/λmin < 3/2. Both conditions on eigenvalues are met if 3/2 · λmin > λmax. We now want to show that det(A) 6= 0 and 1 + 2vTA−1u/||u||2 6= 0. First, notice that det(A) =∏d i=1 λi(A) 6= 0 since λi(A) < −1/2. Second, note that W = WT implies that the vT from Lemma 1 can be written as vT = xTW = xTWT = uT . This means we only need to ensure uTA−1u/||u||2, the Rayleigh quotient of A−1 at u, is different to −1/2. But W = WT implies A = AT because A = I − 2xTWTx/||Wx||2 ·W . The Rayleigh quotient is therefore bounded by [λmin(A −1), λmax(A −1)], which means it is less than −1/2 since λi(A−1) < −1/2. We can then conclude that also 1 + 2vTA−1u/||u||2 = 1 + 2uTA−1u/||u||2 < 1 + 2 · −1/2 = 0.\nSo det(J) 6= 0 by Lemma 4 and Lemma 1, which by Theorem 4 implies invertibility (Theorem 2).\nRemark. Note that the constraints W = WT and 3/2 · λmin > λmax were introduced only to guarantee det(A) 6= 0 and 1 + 2vTA−1u/||u||2 6= 0. Any argument or constraints on W that ensures det(A) · (1 + vTA−1u/||u||2) 6= 0 are thus sufficient to conclude f(x) is invertible." }, { "heading": "3 EXPERIMENTS", "text": "We compare a single auxiliary reflections against d normal reflections when training Fully Connected Neural Networks (d = 784), Recurrent Neural Networks (d = 170) and Normalizing Flows (d = 48). The experiments demonstrate that neural networks with a single auxiliary reflections attain similar performance to neural networks with many normal reflections. All plots show means and standard deviations over 3 runs. See Appendix B for experimental details." }, { "heading": "3.1 FULLY CONNECTED NEURAL NETWORKS", "text": "We trained four different Fully Connected Neural Networks (FCNNs) for classification on MNIST. We compared a FCNN with 6 auxiliary reflections against a FCNN with 6 orthogonal matrices each represented by 784 normal reflections. For completeness, we also trained two FCNNs where the 6 orthogonal matrices where attained by the matrix exponential and Cayley map, respectively, as done in (Casado, 2019; Lezcano-Casado & Martı́nez-Rubio, 2019). The FCNN with auxiliary reflections attained slightly better validation error, see (Figure 3 left). Furthermore, we found the auxiliary reflections were 2 to 6 times faster than competing methods, see (Figure 3 right). This was true even though we used (Mathiasen et al., 2020) to speed up the sequential Householder reflections. See Appendix B.1 for further details." }, { "heading": "3.2 RECURRENT NEURAL NETWORKS", "text": "We trained three Recurrent Neural Networks (RNNs) for classification on MNIST as done in (Mhammedi et al., 2017). The RNNs had a transition matrix represented by one auxiliary reflection, one normal reflection and 170 auxiliary reflections. See (Figure 4 left) for a validation error during training, including the model from (Mhammedi et al., 2017). 
As indicated by the red curve, using only one normal reflection severely limits the transition matrix. In the right plot, we magnify the first 20 epochs to improve readability. The RNNs with 1 auxiliary reflection attain similar mean validation accuracy to the RNNs with 170 normal reflections. See Appendix B.2 for further details.

[Figure: Fully Connected Neural Network]" }, { "heading": "3.3 NORMALIZING FLOWS AND CONVOLUTIONS", "text": "We initially trained two Normalizing Flows (NFs) on CIFAR10. Inspired by (Hoogeboom et al., 2019), we used reflections to parameterize the 1x1 convolutions of an NF called Glow (Kingma & Dhariwal, 2018), see Appendix B.3 for details. We trained an NF with many reflections and an NF with a single auxiliary reflection constrained to ensure invertibility (see Section 2.2). The single auxiliary reflection attained worse validation NLL compared to the model with 48 normal reflections.

We suspected the decrease in performance was caused by the restrictions put on the weight matrices $W_i$ of the auxiliary reflections to enforce invertibility, i.e., $W_i = W_i^T$ and $3/2 \cdot \lambda_{\min}(W_i) > \lambda_{\max}(W_i)$. To investigate this suspicion, we trained a model with no constraints on W. This improved performance to the point where one auxiliary reflection tied with many normal reflections (see Figure 5 left).

Even though the resulting auxiliary reflections are not provably invertible, we found that Newton's method consistently computed the correct inverse. Based on this observation, we conjecture that the training dynamics caused the auxiliary reflections to remain invertible. By this we mean that the auxiliary reflections were initialized with non-zero Jacobian determinants (see Appendix B.3) and the loss function (Equation (1)) encourages the auxiliary reflections to increase their Jacobian determinants during training. Since Newton's method consistently computed the correct inverse, we were able to generate samples from all models, see (Figure 5 right).

[Figure: Normalizing Flow with Orthogonal Convolutions]" }, { "heading": "4 RELATED WORK", "text": "Orthogonal Weight Matrices. Orthogonal weight matrices have seen widespread use in deep learning. For example, they have been used in Normalizing Flows (Hoogeboom et al., 2019), Variational Auto Encoders (Berg et al., 2018), Recurrent Neural Networks (Mhammedi et al., 2017) and Convolutional Neural Networks (Bansal et al., 2018).

Different Approaches. There are several ways of constraining weight matrices to remain orthogonal. For example, previous work has used Householder reflections (Mhammedi et al., 2017), the Cayley map (Lezcano-Casado & Martínez-Rubio, 2019) and the matrix exponential (Casado, 2019). These approaches are sometimes referred to as hard orthogonality constraints, as opposed to soft orthogonality constraints, which instead provide approximate orthogonality by using, e.g., regularizers like $\|WW^T - I\|_F$ (see (Bansal et al., 2018) for a comprehensive review).

Reflection Based Approaches. The reflection-based approaches introduce sequential computations, which is, perhaps, their main limitation. Authors often address this by reducing the number of reflections, as done in, e.g., (Tomczak & Welling, 2016; Mhammedi et al., 2017; Berg et al., 2018). This is sometimes undesirable, as it limits the expressiveness of the orthogonal matrix. This motivated previous work to construct algorithms that increase parallelization of Householder products, see, e.g., (Mathiasen et al., 2020; Likhosherstov et al., 2020).

Similar Ideas. 
Normalizing Flows have been used for variational inference, see, e.g., (Tomczak & Welling, 2016; Berg et al., 2018). Their use of reflections is very similar to auxiliary reflections, however, there is a very subtle difference which has fundamental consequences. For a full appreciation of this difference, the reader might want to consult the schematic in (Tomczak & Welling, 2016, Figure 1), however, we hope that the text below clarifies the high-level difference.\nRecall that auxiliary reflections compute H(Wx)x so H(Wx) can depend on x. In contrast, the previous work on variational inference instead compute H(v)z where v and z both depend on x. This limits H(v) in that it can not explicitly depend on z. While this difference is subtle, it means our proof of Theorem 1 does not hold for reflections as used in (Tomczak & Welling, 2016)." }, { "heading": "5 CONCLUSION", "text": "In theory, we proved that a single auxiliary reflection is as expressive as any number of normal reflections. In practice, we demonstrated that a single auxiliary reflection can attain similar performance to many normal reflections when training Fully Connected Neural Networks, Recurrent Neural Networks and Normalizing Flows. For Fully Connected Neural Networks, we reduced training time by a factor between 2 and 6 by using auxiliary reflections instead of previous approaches to orthogonal matrices (Mhammedi et al., 2017; Lezcano-Casado & Martı́nez-Rubio, 2019; Casado, 2019)." }, { "heading": "A APPENDIX", "text": "A.1 PROOFS\nA.1.1 THEOREM 1\nOur proof of Theorem 1 is an follows Lemma 5 which we state below. Theorem 1. For any k Householder reflections U = H(v1) · · ·H(vk) there exists a neural network n(x) = Wx with W ∈ Rd×d such that f(x) = H(Wx)x = Ux for all x ∈ Rd\\{0}.\nProof. LetW = I−U thenH(Wx)x = H(x−Ux)x = Ux for all x ∈ Rd since ||Ux|| = ||x||.\nLemma 5. Let ||x|| = ||y|| then H(x− y)x = y.\nProof. The result is elementary, see, e.g., (Wang, 2015). For completeness, we derive it below.\nH(x− y)x = x− 2(x− y)(x− y) T\n||x− y||2 x\n= x− 2xx T + yyT − xyT − yxT\nxTx+ yT y − 2xT y x\n= x− 2xx Tx+ yyTx− xyTx− yxTx xTx+ yT y − 2xT y\n= x− 2x||x||+ yy Tx− xyTx− y||x||\n2||x||2 − 2xT y\n= x− (x− y)||x|| 2 + (y − x)(yTx)\n||x||2 − xT y\n= x− (x− y)||x|| 2 + (y − x)(yTx)\n||x||2 − xT y\n= x− (x− y)(||x|| 2 − xT y) ||x||2 − xT y = x− (x− y) = y\nA.2 PYTORCH EXAMPLES AND TEST CASES\nTo ease the workload on reviewers, we opted to use small code snippets that can be copied into www.colab.research.google.com and run in a few seconds without installing any dependencies. Some PDF viewers do not copy line breaks, we found viewing the PDF in Google Chrome works.\nA.2.1 TEST CASE: INVERSE USING NEWTONS METHOD\nGiven y we compute x such that H(Wx)x = y using Newtons method. To be concrete, the code below contains a toy example where x ∈ R4 andW = I+V V T /(2·σmax(V V T )) ∈ R4×4. The particular choice ofW makesH(Wx)x invertible, because λi(W ) = 1+λi(V V T ) = 1+σi(V V T ) ∈ [1, 3/2) because V V T is positive definite. Any possible way of choosing the eigenvalues in the range [1, 3/2) guarantees that 3/2 · λmin > λmax which implies invertibility by Theorem 2.\nimport torch print(\"torch version: \", torch.__version__) torch.manual_seed(42) d = 4 # Create random test-case." }, { "heading": "I = torch.eye(d)", "text": "" }, { "heading": "V = torch.zeros((d, d)).uniform_() x = torch.zeros((d, 1)).uniform_()", "text": "" }, { "heading": "W = I + V @ V.T / torch.svd(V @ V.T)[1].max()", "text": "# Define the function f(x)=H(Wx)x. 
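# H(v) below builds the d x d Householder matrix I - 2 v v^T / (v^T v);
# f then applies the input-dependent reflection H(Wx) to x itself.
# (Requires Wx != 0, which holds here since W is invertible and x != 0.)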
\nA.2 PYTORCH EXAMPLES AND TEST CASES\nTo ease the workload on reviewers, we opted to use small code snippets that can be copied into www.colab.research.google.com and run in a few seconds without installing any dependencies. Some PDF viewers do not copy line breaks; we found that viewing the PDF in Google Chrome works.\nA.2.1 TEST CASE: INVERSE USING NEWTON'S METHOD\nGiven y we compute x such that H(Wx)x = y using Newton's method. To be concrete, the code below contains a toy example where x ∈ R^4 and W = I + VV^T/(2 · σmax(VV^T)) ∈ R^{4×4}. The particular choice of W makes H(Wx)x invertible, because λi(W) = 1 + σi(VV^T)/(2 · σmax(VV^T)) ∈ (1, 3/2] since VV^T is positive definite. Since λmin(W) > 1 and λmax(W) ≤ 3/2, this guarantees 3/2 · λmin > λmax, which implies invertibility by Theorem 2.\nimport torch\nprint(\"torch version: \", torch.__version__)\ntorch.manual_seed(42)\n# Create random test-case.\nd = 4\nI = torch.eye(d)\nV = torch.zeros((d, d)).uniform_()\nx = torch.zeros((d, 1)).uniform_()\nW = I + V @ V.T / (2 * torch.svd(V @ V.T)[1].max())\n# Define the function f(x) = H(Wx)x.\ndef H(v): return torch.eye(d) - 2 * v @ v.T / (v.T @ v)\ndef f(x): return H(W @ x) @ x\n# Print input and output.\nprint(\"x\\t\\t\", x.data.view(-1).numpy())\nprint(\"f(x)\\t\", f(x).data.view(-1).numpy())\nprint(\"\")\n# Use Newton's method to compute the inverse.\ny = f(x)\nxi = y\nfor i in range(10):\n    print(\"[%.2i/%.2i]\" % (i + 1, 10), xi.data.view(-1).numpy())\n    # Compute the Jacobian using Theorem 3.\n    A = torch.eye(d) - 2 * (xi.T @ W.T @ xi) / torch.norm(W @ xi)**2 * W\n    J = -2 * W @ xi @ xi.T @ W / torch.norm(W @ xi)**2 + H(W @ xi) @ A\n    xi = xi - torch.inverse(J) @ (f(xi) - y)\nassert torch.allclose(xi, x, atol=10**(-7))\nprint(\"The two vectors are torch.allclose\")\nExample output:\ntorch version: 1.6.0+cu101\nx [0.8854429 0.57390445 0.26658005 0.62744915]\nf(x) [-0.77197534 -0.49936318 -0.5985155 -0.6120473]\n[01/10] [-0.77197534 -0.49936318 -0.5985155 -0.6120473]\n[02/10] [ 0.72816867 0.78074205 -0.02241153 1.0435152]\n[03/10] [0.7348436 0.6478982 0.14960966 0.8003925]\n[04/10] [0.8262452 0.6155189 0.2279686 0.6997254]\n[05/10] [0.8765415 0.5831212 0.2592551 0.640691]\n[06/10] [0.8852093 0.5742159 0.26631045 0.6278922]\n[07/10] [0.88543946 0.5739097 0.26658094 0.62744874]\n[08/10] [0.88544315 0.57390547 0.2665805 0.6274475]\n[09/10] [0.885443 0.57390594 0.26658088 0.6274466]\n[10/10] [0.8854408 0.57390743 0.2665809 0.6274484]\nThe two vectors are torch.allclose\nFigure 2. Figure 2 contains reconstructions n^{-1}(n(x)) of the variant of Glow (Kingma & Dhariwal, 2018) used in Section 3.3. The Glow variant has 1x1 convolutions with auxiliary reflections, i.e., for an input x ∈ R^{c×h×w}, where (c, h, w) are (channels, height, width), it computes z_{:,i,j} = H(Wx_{:,i,j})x_{:,i,j} ∈ R^c where i = 1, ..., h and j = 1, ..., w. Computing the inverse of the model requires inverting the auxiliary 1x1 convolutions, i.e., computing x_{:,i,j} given W and z_{:,i,j} for all i, j. The weights were initialized as done in the above toy example.\nA.2.2 TEST CASE: JACOBIAN AND AUTOGRAD\nimport torch\nprint(\"torch version: \", torch.__version__)\ntorch.manual_seed(42)\n# Create random test-case.\nd = 4\nW = torch.zeros((d, d)).uniform_(-1, 1)\nx = torch.zeros((d, 1)).uniform_(-1, 1)\nI = torch.eye(d)\n# Compute the Jacobian using autograd.\ndef H(v): return I - 2 * v @ v.T / (v.T @ v)\ndef f(x): return H(W @ x) @ x\nJ = torch.autograd.functional.jacobian(f, x)[:, 0, :, 0]\nprint(J)\n# Compute the Jacobian using Lemma 4.\nA = I - 2 * (x.T @ W.T @ x) / torch.norm(W @ x)**2 * W\nJ_ = H(W @ x) @ A - 2 * W @ x @ x.T @ W / torch.norm(W @ x)**2\nprint(J_)\n# Test that the two matrices are close.\nassert torch.allclose(J, J_, atol=10**(-5))\nprint(\"The two matrices are torch.allclose\")\nExample output:\ntorch version: 1.6.0+cu101\ntensor([[ 0.2011, -1.4628, 0.7696, -0.5376],\n[ 0.3125, 0.6518, 0.7197, -0.5997],\n[-1.0764, 0.8388, 0.0020, -0.1107],\n[-0.8789, -0.3006, -0.4591, 1.3701]])\ntensor([[ 0.2011, -1.4628, 0.7696, -0.5376],\n[ 0.3125, 0.6518, 0.7197, -0.5997],\n[-1.0764, 0.8388, 0.0020, -0.1107],\n[-0.8789, -0.3006, -0.4591, 1.3701]])\nThe two matrices are torch.allclose" },
{ "heading": "B EXPERIMENTAL DETAILS", "text": "In this section, we specify the details of the three experiments presented in Section 3. The experiments were run on a single NVIDIA RTX 2080 Ti GPU and an Intel Xeon Silver 4214 CPU @ 2.20GHz.\nB.1 FULLY CONNECTED NEURAL NETWORKS\nFor the experiment in Section 3.1 we trained four Fully Connected Neural Networks (FCNNs) as MNIST classifiers. All FCNNs had the same structure, which we now explain. Inspired by (Zhang et al., 2018), the layers of the FCNNs were parametrized in their Singular Value Decomposition (SVD). This just means each layer consisted of two orthogonal matrices U, V ∈ R^{784×784} and a diagonal matrix Σ ∈ R^{784×784}, so the forward pass computes y = UΣV^Tx. The FCNNs had three such fully connected layers with a ReLU non-linearity in between, and a final linear layer of shape W ∈ R^{784×10}. We used the Adam optimizer (Kingma & Ba, 2015) with default parameters (those of the Adam implementation in PyTorch 1.6 (Paszke et al., 2017)) to minimize cross entropy. To speed up the network with 784 normal reflections, we used the FastH algorithm from (Mathiasen et al., 2020). For the network with auxiliary reflections, we had U, V be auxiliary reflections instead of orthogonal matrices. In all experiments, we initialized the singular values Σii ∼ U(0.99, 1.01). We used orthogonal matrices with reflections, the Cayley transform and the matrix exponential as done in (Mhammedi et al., 2017; Casado, 2019; Lezcano-Casado & Martínez-Rubio, 2019), respectively. The orthogonal matrices are constructed using a weight matrix W. In all cases, we initialized Wij ∼ U(−1/√d, 1/√d). It is possible one could initialize Wij in a smarter way, which could change the validation error reported in Figure 3. That said, we did try to initialize W using the Cayley initialization suggested by (Casado, 2019); however, we did not find it improved performance.
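\nAs an illustration (this sketch is ours, not part of the original; the dimension and the initializations follow the description above, while the batched reflection implementation and the handling of the transpose of V are our own simplifications), a single SVD-parameterized layer with auxiliary reflections in place of U and V might look as follows:\nimport torch\nimport torch.nn as nn\nclass AuxiliaryReflection(nn.Module):\n    # Computes H(Wx)x, an input-dependent Householder reflection.\n    def __init__(self, d):\n        super().__init__()\n        self.W = nn.Parameter(torch.zeros(d, d).uniform_(-d**-0.5, d**-0.5))\n    def forward(self, x):  # x: (batch, d)\n        v = x @ self.W.T  # v = Wx, one reflection vector per example\n        vTx = (v * x).sum(-1, keepdim=True)\n        return x - 2 * v * vTx / (v * v).sum(-1, keepdim=True)\nclass SVDLinear(nn.Module):\n    # y = U diag(s) V x, with U and V replaced by auxiliary reflections.\n    def __init__(self, d):\n        super().__init__()\n        self.U, self.V = AuxiliaryReflection(d), AuxiliaryReflection(d)\n        self.s = nn.Parameter(torch.zeros(d).uniform_(0.99, 1.01))\n    def forward(self, x):\n        return self.U(self.s * self.V(x))\nlayer = SVDLinear(784)\ny = layer(torch.randn(32, 784))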
\nB.2 RECURRENT NEURAL NETWORKS\nFor the experiment in Section 3.2, we trained three Recurrent Neural Networks as MNIST classifiers as done in (Arjovsky et al., 2016; Mhammedi et al., 2017; Zhang et al., 2018; Casado, 2019). We used the open-source implementation from (Casado, 2019) (https://github.com/Lezcano/expRNN/). They use a clever type of “Cayley initialization” to initialize the transition matrix U. We found it worked very well, so we chose to initialize both the normal and auxiliary reflections so they initially represented the same transition matrix U. For normal reflections, this can be done by computing v1, ..., vd so that H(v1) · · · H(vd) = U by using the QR decomposition. For the auxiliary reflection, this can be done using W = I − U so that H(Wx)x = Ux (see Theorem 1).\nIn (Casado, 2019), they use h0 = 0 as the initial state and report “We choose as the initial vector h0 = 0 for simplicity, as we did not observe any empirical improvement when using the initialization given in (Arjovsky et al., 2016).” We sometimes encountered division by zero with auxiliary reflections when h0 = 0, so we used the initialization suggested by (Arjovsky et al., 2016) in all experiments.\nThe open-source implementation (Casado, 2019) uses RMSProp (Hinton et al., 2012) with different learning rates for the transition matrix and the remaining weights. This was implemented in PyTorch by using two RMSProp optimizers. We found training auxiliary reflections to be more stable with Adam (Kingma & Ba, 2015). We believe this happens because the “averaged gradients” v become very small due to the normalization term ||Wx||^2 in H(Wx)x = x − 2Wx(x^TW^Tx)/||Wx||^2. When v becomes small, the scaling 1/(√v + ε) of RMSProp becomes very large. We suspect the 1/(√(v/(1 − β₂^T)) + ε) scaling used by Adam fixed the issue, which caused more stable training with Adam. This caused us to use the Adam optimizer for the transition matrix instead of RMSProp for all the RNNs we trained.\nB.3 NORMALIZING FLOW\nFor the experiment in Section 3.3, we trained three Normalizing Flows as generative models on CIFAR10 as done in (Dinh et al., 2015; 2017; Kingma & Dhariwal, 2018; Ho et al., 2019). We used an open-source PyTorch implementation of Glow (Kingma & Dhariwal, 2018) (https://github.com/chrischute/glow) with default parameters, except for the number of channels “-C” and the number of steps “-K.” In particular, to decrease training time, we reduced “-C” from 512 to 64 and “-K” from 32 to 8. This caused an increase in validation NLL (worse performance) from 3.49 to 3.66 after 80 epochs.\nAuxiliary Reflections for 1x1 Convolutions. (Kingma & Dhariwal, 2018) suggested using invertible 1 × 1 convolutions for Normalizing Flows. That is, given an input x ∈ R^{c×h×w} and kernel W ∈ R^{c×c}, they compute z_{:,i,j} = Wx_{:,i,j} for all i, j. The resulting function is invertible if W is, and it has Jacobian determinant hw · det(W). It was suggested by (Hoogeboom et al., 2019) to represent W in its QR decomposition so det(W) = det(QR) = det(Q) det(R) = det(R) = ∏_i R_ii. To this end, they represent the orthogonal matrix Q as a product of reflections; in particular, they use c = 12, 24, 48 reflections at different places in the network. The main goal of this experiment was to compare c = 12, 24, 48 normal reflections against a single auxiliary reflection, which computes z_{:,i,j} = H(Wx_{:,i,j})x_{:,i,j} instead of z_{:,i,j} = Wx_{:,i,j}. To isolate the difference in performance due to reflections, we further removed the triangular matrix R.\nProvably Invertible. One of the Normalizing Flows with auxiliary reflections had the weights of its auxiliary reflections constrained to ensure invertibility. In particular, we let each weight matrix be W = I + VV^T and used spectral normalization, VV^T/(2σmax(VV^T)), to ensure that the singular values of the normalized term are at most 1/2. The largest singular value can be computed efficiently using power iteration (Miyato et al., 2018). For ease of implementation, we circumvented using power iteration due to a known open issue in the official PyTorch implementation. We instead used TORCH.SYMEIG to compute the largest singular value by computing the largest eigenvalue λmax(VV^T) = σmax(VV^T), which holds because VV^T is symmetric positive semi-definite for V ∈ R^{c×c}. This was only possible because the matrices were at most 48 × 48; for larger problems one would be forced to use the power iteration.\nInitialization. The open-source implementation of Glow initializes W = Q with Q from TORCH.QR(TORCH.RANDN((C,C)))[0]. For the case with normal reflections, we computed v1, ..., vc such that H(v1) · · · H(vc) = Q (Wang, 2015). For the auxiliary reflection without constraints we let W = I − Q such that H(Wx)x = H(x − Qx)x = Qx by Lemma 5. However, for the experiment with constraints on W, we could not initialize W = I − Q and instead used W = I + VV^T where (initially) Vij ∼ U(−1/√c, 1/√c). This increased the error at initialization from 6.5 to 8. We suspect this happens because the previous initialization of W has complex eigenvalues, which W = I + VV^T does not (because it is symmetric). In practice, we mitigate the poor initialization by using an additional fixed matrix Q = Q^T which does not change during training. This is essentially the same as using a fixed permutation as done in (Dinh et al., 2017), but, instead of using a fixed permutation, we use a fixed orthogonal matrix. While using the fixed Q, we found that simply initializing V = I worked sufficiently well." } ]
2,020
null
SP:0cb862cf3806c4f04d2d30f200c25841a1cb52a8
[ "This paper proposes to learn patient-specific representation using patient physiological signals. The authors design a PCP representation for each patient, which is learned to agree with signals from the same patients and disagrees with the remaining patients. In the supervised part, the classifier is generated from patient-specific parameters by meta-learning. The model was evaluated on three large ECG datasets: PhysioNet 2020 ECG, Chapman ECG, PTB-XL ECG." ]
Many clinical deep learning algorithms are population-based and difficult to interpret. Such properties limit their clinical utility as population-based findings may not generalize to individual patients and physicians are reluctant to incorporate opaque models into their clinical workflow. To overcome these obstacles, we propose to learn patient-specific embeddings, entitled patient cardiac prototypes (PCPs), that efficiently summarize the cardiac state of the patient. To do so, we attract representations of multiple cardiac signals from the same patient to the corresponding PCP via supervised contrastive learning. We show that the utility of PCPs is multifold. First, they allow for the discovery of similar patients both within and across datasets. Second, such similarity can be leveraged in conjunction with a hypernetwork to generate patient-specific parameters, and in turn, patient-specific diagnoses. Third, we find that PCPs act as a compact substitute for the original dataset, allowing for dataset distillation.
[]
[ { "authors": [ "A K Akobeng" ], "title": "Understanding randomised controlled trials", "venue": "Archives of Disease in Childhood,", "year": 2005 }, { "authors": [ "Erick A Perez Alday", "Annie Gu", "Amit Shah", "Chad Robichaux", "An-Kwok Ian Wong", "Chengyu Liu", "Feifei Liu", "Ali Bahrami Rad", "Andoni Elola", "Salman Seyedi" ], "title": "Classification of 12-lead ecgs: the physionet/computing in cardiology", "venue": "medRxiv,", "year": 2020 }, { "authors": [ "Zachi I Attia", "Suraj Kapa", "Francisco Lopez-Jimenez", "Paul M McKie", "Dorothy J Ladewig", "Gaurav Satam", "Patricia A Pellikka", "Maurice Enriquez-Sarano", "Peter A Noseworthy", "Thomas M Munger" ], "title": "Screening for cardiac contractile dysfunction using an artificial intelligence–enabled electrocardiogram", "venue": "Nature Medicine,", "year": 2019 }, { "authors": [ "Zachi I Attia", "Peter A Noseworthy", "Francisco Lopez-Jimenez", "Samuel J Asirvatham", "Abhishek J Deshmukh", "Bernard J Gersh", "Rickey E Carter", "Xiaoxi Yao", "Alejandro A Rabinstein", "Brad J Erickson" ], "title": "An artificial intelligence-enabled ecg algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction", "venue": null, "year": 2019 }, { "authors": [ "Olivier Bachem", "Mario Lucic", "Andreas Krause" ], "title": "Scalable k -means clustering via lightweight coresets", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery Data Mining, KDD", "year": 2018 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "J. Bailey Carter" ], "title": "Value of the electrocardiogram in clinical practice", "venue": "Journal of the American Medical Association, 143(7):644–652,", "year": 1950 }, { "authors": [ "Nancy Cartwright" ], "title": "Are RCTs the gold standard? 
BioSocieties", "venue": null, "year": 2007 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Joseph Y Cheng", "Hanlin Goh", "Kaan Dogrusoz", "Oncel Tuzel", "Erdrin Azemi" ], "title": "Subject-aware contrastive learning for biosignals", "venue": "arXiv preprint arXiv:2007.04871,", "year": 2020 }, { "authors": [ "Dan Feldman", "Melanie Schmidt", "Christian Sohler" ], "title": "Turning big data into tiny data: Constantsize coresets for k-means, PCA and projective clustering", "venue": "CoRR, abs/1807.04518,", "year": 2018 }, { "authors": [ "Conner D Galloway", "Alexander V Valys", "Jacqueline B Shreibati", "Daniel L Treiman", "Frank L Petterson", "Vivek P Gundotra", "David E Albert", "Zachi I Attia", "Rickey E Carter", "Samuel J Asirvatham" ], "title": "Development and validation of a deep-learning model to screen for hyperkalemia from the electrocardiogram", "venue": "JAMA Cardiology,", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent-a new approach to self-supervised learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Margaret A Hamburg", "Francis S Collins" ], "title": "The path to personalized medicine", "venue": "New England Journal of Medicine,", "year": 2010 }, { "authors": [ "Dani Kiyasseh", "Tingting Zhu", "David A Clifton" ], "title": "CLOCS: Contrastive learning of cardiac signals", "venue": "arXiv preprint arXiv:2005.13249,", "year": 2020 }, { "authors": [ "Wei-Yin Ko", "Konstantinos C Siontis", "Zachi I Attia", "Rickey E Carter", "Suraj Kapa", "Steve R Ommen", "Steven J Demuth", "Michael J Ackerman", "Bernard J Gersh", "Adelaide M Arruda-Olson" ], "title": "Detection of hypertrophic cardiomyopathy using a convolutional neural network-enabled electrocardiogram", "venue": "Journal of the American College of Cardiology,", "year": 2020 }, { "authors": [ "Mario Lucic", "Olivier Bachem", "Andreas Krause" ], "title": "Strong coresets for hard and soft bregman clustering with applications to exponential family mixtures", "venue": "Proceedings of Machine Learning Research,", "year": 2016 }, { "authors": [ "Sebastian Mair", "Ulf Brefeld" ], "title": "Coresets for archetypal analysis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Shraddha Pai", "Gary D Bader" ], "title": "Patient similarity networks for precision medicine", "venue": "Journal of Molecular Biology,", "year": 2018 }, { "authors": [ "Shraddha Pai", "Shirley Hui", "Ruth Isserlin", "Muhammad A 
Shah", "Hussam Kaka", "Gary D Bader" ], "title": "netdx: Interpretable patient classification using integrated patient similarity networks", "venue": "Molecular Systems Biology,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Siyuan Qiao", "Chenxi Liu", "Wei Shen", "Alan L Yuille" ], "title": "Few-shot image recognition by predicting parameters from activations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Anis Sharafoddini", "Joel A Dubin", "Joon Lee" ], "title": "Patient similarity in prediction models based on health data: a scoping review", "venue": "JMIR Medical Informatics,", "year": 2017 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Nils Strodthoff", "Patrick Wagner", "Tobias Schaeffter", "Wojciech Samek" ], "title": "Deep learning for ecg analysis: Benchmarks and insights from ptb-xl", "venue": "arXiv preprint arXiv:2004.13701,", "year": 2020 }, { "authors": [ "Solomon Strouse", "Louis N. Katz", "Herbert F. Binswanger" ], "title": "The clinical value of the electrocardiogram: an analysis of 100 private cases", "venue": "Journal of the American Medical Association, 113(7):576–579,", "year": 1939 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qiuling Suo", "Fenglong Ma", "Ye Yuan", "Mengdi Huai", "Weida Zhong", "Aidong Zhang", "Jing Gao" ], "title": "Personalized disease prediction using a cnn-based similarity learning method", "venue": "IEEE International Conference on Bioinformatics and Biomedicine (BIBM),", "year": 2017 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Patrick Wagner", "Nils Strodthoff", "Ralf-Dieter Bousseljot", "Wojciech Samek", "Tobias Schaeffter" ], "title": "PTBXL, a large publicly available electrocardiography dataset, 2020", "venue": "URL https://physionet. org/content/ptb-xl/1.0.1/", "year": 2020 }, { "authors": [ "Tongzhou Wang", "Jun-Yan Zhu", "Antonio Torralba", "Alexei" ], "title": "A Efros. Dataset distillation", "venue": "arXiv preprint arXiv:1811.10959,", "year": 2018 }, { "authors": [ "Jianwei Zheng", "Jianming Zhang", "Sidy Danioko", "Hai Yao", "Hangyuan Guo", "Cyril Rakovski" ], "title": "A 12-lead electrocardiogram database for arrhythmia research covering more than 10,000 patients", "venue": "Scientific Data,", "year": 2020 }, { "authors": [ "Zihao Zhu", "Changchang Yin", "Buyue Qian", "Yu Cheng", "Jishang Wei", "Fei Wang" ], "title": "Measuring patient similarities via a deep architecture with medical concept embedding", "venue": "In 2016 IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern medical research is arguably anchored around the “gold standard” of evidence provided by randomized control trials (RCTs) (Cartwright, 2007). However, RCT-derived conclusions are population-based and fail to capture nuances at the individual patient level (Akobeng, 2005). This is primarily due to the complex mosaic that characterizes a patient from demographics, to physiological state, and treatment outcomes. Similarly, despite the success of deep learning algorithms in automating clinical diagnoses (Galloway et al., 2019; Attia et al., 2019a;b; Ko et al., 2020), network-generated predictions remain population-based and difficult to interpret. Such properties are a consequence of a network’s failure to incorporate patient-specific structure during training or inference. As a result, physicians are reluctant to integrate such systems into their clinical workflow. In contrast to such reluctance, personalized medicine, the ability to deliver the right treatment to the right patient at the right time, is increasingly viewed as a critical component of medical diagnosis (Hamburg & Collins, 2010).\nThe medical diagnosis of cardiac signals such as the electrocardiogram (ECG) is of utmost importance in a clinical setting (Strouse et al., 1939). For example, such signals, which convey information about potential abnormalities in a patent’s heart, also known as cardiac arrhythmias, are used to guide medical treatment both within and beyond the cardiovascular department (Carter, 1950). In this paper, we conceptually borrow insight from the field of personalized medicine in order to learn patient representations which allow for a high level of network interpretability. Such representations have several potential clinical applications. First, they allow clinicians to quantify the similarity of patients. By doing so, network-generated predictions for a pair of patients can be traced back to this similarity, and in turn, their corresponding ECG recordings. Allowing for this inspection of ECG recordings aligns well with the existing clinical workflow. An additional application of patient similarity is the exploration of previously unidentified patient relationships, those which may lead to the discovery of novel patient sub-cohorts. Such discoveries can lend insight into particular diseases and appropriate medical treatments. In contrast to existing patient representation learning methods (Zhu et al., 2016; Suo et al., 2017), we concurrently optimize for a predictive task (cardiac arrhythmia classification), leverage patient similarity, and design a system specifically for 12-lead ECG signals.\nContributions. Our contributions are the following:\n1. Patient cardiac prototypes (PCPs) - we learn representations that efficiently summarize the cardiac state of a patient in an end-to-end manner via contrastive learning.\n2. Patient similarity quantification - we show that, by measuring the Euclidean distance between PCPs and representations, we can identify similar patients across different datasets.\n3. Dataset distillation - we show that PCPs can be used to train a network, in lieu of the original dataset, and maintain strong generalization performance." }, { "heading": "2 RELATED WORK", "text": "Contrastive learning is a self-supervised method that encourages representations of instances with commonalities to be similar to one another. 
This is performed for each instance and its perturbed counterpart (Oord et al., 2018; Chen et al., 2020a;b; Grill et al., 2020) and for different visual modalities (views) of the same instance (Tian et al., 2019). Such approaches are overly reliant on the choice of perturbations and necessitate a large number of comparisons. Instead, Caron et al. (2020) propose to learn cluster prototypes. Most similar to our work are those of Cheng et al. (2020) and CLOCS (Kiyasseh et al., 2020), which both show the benefit of encouraging patient-specific representations to be similar to one another. Although DROPS (Anonymous, 2021) leverages contrastive learning, it does so at the patient-attribute level. In contrast to existing methods, we learn patient-specific representations, PCPs, in an end-to-end manner.\nMeta-learning designs learning paradigms that allow for the fast adaptation of networks. Prototypical Networks (Snell et al., 2017) average representations to obtain class-specific prototypes. During inference, the similarity of representations to these prototypes determines the classification. Relational Networks (Sung et al., 2018) build on this idea by learning the similarity of representations to prototypes through a parametric function. Gidaris & Komodakis (2018) and Qiao et al. (2018) exploit hypernetworks (Ha et al., 2016) and propose to generate the parameters of the final linear layer of a network for few-shot learning on visual tasks. In contrast, during inference only, we compute the cosine similarity between representations and PCPs and use the latter as the input to a hypernetwork.\nPatient similarity aims at discovering relationships between patient data (Sharafoddini et al., 2017). To quantify these relationships, Pai & Bader (2018) and Pai et al. (2019) propose Patient Similarity Networks for cancer survival classification. Exploiting electronic health record data, Zhu et al. (2016) use Word2Vec to learn patient representations, and Suo et al. (2017) propose to exploit patient similarity to guide the re-training of models, an approach which is computationally expensive. Instead, our work naturally learns PCPs as efficient descriptors of the cardiac state of a patient." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 LEARNING PATIENT CARDIAC PROTOTYPES VIA CONTRASTIVE LEARNING", "text": "We assume the presence of a dataset, D = {xi, yi}_{i=1}^{N}, comprising N ECG recordings, x, and cardiac arrhythmia labels, y, for a total of Ptot patients. Typically, multiple recordings are associated with a single patient, p. This could be due to multiple recordings within the same hospital visit or multiple visits to a hospital. Therefore, each patient is associated with N/Ptot recordings on average. We learn a feature extractor fθ : x ∈ R^D → h ∈ R^E, parameterized by θ, that maps a D-dimensional recording, x, to an E-dimensional representation, h. In the quest to learn patient-specific representations, we associate each patient, p, out of a total of P patients in the training set, with a unique and learnable embedding, v ∈ R^E, in a set of embeddings, V, where |V| = P ≪ N. Such embeddings are designed to be efficient descriptors of the cardiac state of a patient, and we thus refer to them as patient cardiac prototypes or PCPs.\nWe propose to learn PCPs in an end-to-end manner via contrastive learning. More specifically, given an instance, xi, that belongs to a particular patient, k, we encourage its representation, hi = fθ(xi), to be similar to the same patient’s PCP, vk, and dissimilar to the remaining PCPs, vj, j ≠ k. We quantify this similarity, s(hi, vk), by using the cosine similarity with a temperature parameter, τ. The intuition is that each PCP, in being attracted to a diverse set of representations that belong to the same patient, should become invariant to insidious intra-patient differences. For a mini-batch of size B, the contrastive loss is as follows.\nLcontrastive = − Σ_{i=1}^{B} log [ exp(s(hi, vk)) / Σ_{j=1}^{P} exp(s(hi, vj)) ]    (1)\ns(hi, vj) = (fθ(xi) · vj) / (||fθ(xi)|| ||vj||) · (1/τ)    (2)
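\nA minimal PyTorch sketch of Eqs. (1)–(2) (ours, not from the original; tensor shapes and variable names are illustrative):\nimport torch\nimport torch.nn.functional as F\ndef pcp_contrastive_loss(h, v, patient_ids, tau=0.1):\n    # h: (B, E) representations; v: (P, E) patient cardiac prototypes;\n    # patient_ids: (B,) index of each instance's patient in v.\n    sim = F.normalize(h, dim=1) @ F.normalize(v, dim=1).T / tau  # (B, P) scaled cosine similarities, Eq. (2)\n    return F.cross_entropy(sim, patient_ids)  # Eq. (1) is a cross-entropy over patients\nh = torch.randn(4, 16, requires_grad=True)\nv = torch.randn(3, 16, requires_grad=True)  # learnable PCPs\nloss = pcp_contrastive_loss(h, v, torch.tensor([0, 0, 1, 2]))\nloss.backward()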
" }, { "heading": "3.2 GENERATING PATIENT-SPECIFIC PARAMETERS VIA HYPERNETWORKS", "text": "Network parameters are typically updated during training and fixed during inference. This allows the parameters to exploit population-based information in order to learn high-level features useful for solving the task at hand. Such an approach, however, means that all instances are exposed to the same set of parameters during inference, regardless of instance-specific information. Such information can be related to any meta-label including, but not limited to, patient ID, geographical location, and even temporal period. As an exemplar, and motivated by the desire to generate patient-specific diagnoses, we focus on patient-specific information. We are essentially converting a traditional classification task to one that is conditioned on patient-specific information. To perform such conditioning, we propose to exploit both PCPs and hypernetworks, as explained next.\nWe assume the presence of a hypernetwork, gφ : h ∈ R^E → ω ∈ R^{E×C}, parameterized by φ, that maps an E-dimensional representation, h, to a matrix of classification parameters, ω, where C is the number of class labels. During training, we feed a representation, hi, to the hypernetwork and generate instance-specific parameters, ωi (see Fig. 1, left). During inference, however, we retrieve, and feed into the hypernetwork, the PCP, vk, most similar to the current representation, hi (based on the similarity metric s). We chose this strategy after having experimented with several alternatives (see Sec. 5.2). It is worthwhile to note that although this approach bears some resemblance to clustering, it is distinct from it. In a clustering scenario, we would have assigned labels to instances based on their proximity to PCPs. In contrast, we are leveraging this proximity to determine the input of a hypernetwork (see Fig. 1, right).\nωi = gφ(hi) for training; ωi = gφ(vk) for inference, where vk = argmax_{vj} s(hi, vj)    (3)\nBy performing this retrieval, we exploit the similarity between patients in the training and inference sets. As a result, the hypernetwork generates patient-specific parameters that parameterize the linear classifier, pω : h ∈ R^E → y ∈ R^C, which maps a representation, h, to a posterior class distribution, y. We train the entire network in an end-to-end manner using a combined contrastive and supervised loss.\nLsupervised = − Σ_{i=1}^{B} log p_{ωi}(yi = c | hi)    (4)\nLcombined = Lcontrastive + Lsupervised    (5)
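\nA sketch of the inference path in Eq. (3) (ours; for illustration we take gφ to be a single linear map, which is an assumption rather than the paper's stated architecture):\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nE, C, P = 16, 4, 100  # embedding dim, classes, patients\ng_phi = nn.Linear(E, E * C)  # hypernetwork g_phi: R^E -> R^{E x C}\nv = torch.randn(P, E)  # learned PCPs\ndef predict(h):  # h: (E,) representation of a test instance\n    sims = F.cosine_similarity(h.unsqueeze(0), v, dim=1)  # s(h, v_j) for all j\n    v_k = v[sims.argmax()]  # retrieve the most similar PCP\n    omega = g_phi(v_k).view(E, C)  # patient-specific classifier parameters\n    return F.softmax(h @ omega, dim=0)  # posterior over classes\nprobs = predict(torch.randn(E))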
" }, { "heading": "4 EXPERIMENTAL DESIGN", "text": "" }, { "heading": "4.1 DATASETS", "text": "We conduct experiments using PyTorch (Paszke et al., 2019) on three large-scale ECG datasets that contain a significant number of patients. PhysioNet 2020 ECG consists of 12-lead ECG recordings from 6,877 patients alongside labels corresponding to 9 different classes of cardiac arrhythmia. Each recording can be associated with multiple labels. Chapman ECG (Zheng et al., 2020) consists of 12-lead ECG recordings from 10,646 patients alongside labels corresponding to 11 different classes of cardiac arrhythmia. As suggested by Zheng et al. (2020), we group these labels into 4 major classes. PTB-XL ECG (Wagner et al., 2020) consists of 12-lead ECG recordings from 18,885 patients alongside 71 different types of annotations provided by two cardiologists. We follow the training and evaluation protocol presented by Strodthoff et al. (2020), where we leverage the 5 diagnostic class labels. We alter the original setup to only consider ECG segments with one label assigned to them and convert the task into a binary classification problem. Further details can be found in Appendix A.1.\nUnless otherwise mentioned, datasets were split into training, validation, and test sets according to patient ID using a 60/20/20 configuration. In other words, patients appeared in only one of the sets. Further details about the dataset splits can be found in Appendix A.2." }, { "heading": "4.2 HYPERPARAMETERS", "text": "When calculating the contrastive loss, we chose τ = 0.1 as in Kiyasseh et al. (2020). We also use the same neural network architecture for all experiments. Further implementation details can be found in Appendix B." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "5.1 PATIENT CARDIAC PROTOTYPES ARE DISCRIMINATIVE", "text": "During training, we optimize an objective function that consists of both a supervised and a contrastive loss term (see Eq. 5). Based on the former, we expect representations to exhibit discriminative behaviour for the task at hand. The latter term encourages these representations to be similar to PCPs, and thus we also expect PCPs to be discriminative. In Fig. 2, we illustrate the representations of instances in the training set and the PCPs after being projected to a 2-dimensional subspace using t-SNE and colour-coded according to their class label. We find that both training representations, h, and PCPs, v, are class-discriminative. This can be seen by the high separability of the projected representations along class boundaries. Based on this finding alone, one could make the argument that PCPs are trivially detecting class label differences between patients." }, { "heading": "5.2 EFFECT OF HYPERNETWORK INPUT STRATEGY ON PERFORMANCE", "text": "As described, our pipeline uses the PCP nearest to each representation as input to the hypernetwork. This approach places a substantial dependency on that single chosen PCP. Therefore, we explore three additional input strategies that incorporate PCPs differently. Nearest 10 searches for, and takes the average of, the 10 PCPs that are nearest to the representation. Mean simply takes the average of all PCPs. Similarity-Weighted Mean takes a linear combination of all PCPs, weighted according to their cosine similarity to the representation. In Fig. 3, we show the effect of these strategies on the test set AUC as the embedding dimension, E, is changed.
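\nThe four input strategies can be summarized in a few lines (this sketch is ours; in particular, the softmax normalization of the similarity weights is our assumption, since the paper only specifies a similarity-weighted linear combination):\nimport torch\nimport torch.nn.functional as F\ndef hypernetwork_input(h, v, strategy=\"nearest\"):\n    # h: (E,) representation; v: (P, E) PCPs. Returns the vector fed to g_phi.\n    sims = F.cosine_similarity(h.unsqueeze(0), v, dim=1)  # (P,)\n    if strategy == \"nearest\":\n        return v[sims.argmax()]\n    if strategy == \"nearest_10\":\n        return v[sims.topk(10).indices].mean(dim=0)\n    if strategy == \"mean\":\n        return v.mean(dim=0)\n    if strategy == \"sim_weighted_mean\":\n        return F.softmax(sims, dim=0) @ v  # similarity-weighted combination of all PCPs\n    raise ValueError(strategy)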
\nWe find that exploiting the similarity of representations during inference benefits the generalization performance of the network. This is shown by the inferiority of the Mean strategy relative to the remaining strategies. For example, at E = 256, the Mean strategy achieves an AUC ≈ 0.50, equivalent to a random guess. However, simply weighting those PCPs according to their similarity to representations, as exemplified by Similarity-Weighted Mean, achieves an AUC ≈ 0.65. This implies that representations are capturing patient-specific information.\nWe also find that it is more advantageous to exploit similarity to identify the nearest PCPs than to weight many PCPs. In Fig. 3, the Nearest and Nearest 10 input strategies perform best, with the latter achieving an AUC ≈ 0.90, regardless of the embedding dimension. We hypothesize that such behaviour can be attributed to the notion that fewer PCPs are less likely to overwhelm the hypernetwork. This, in turn, allows the hypernetwork to generate reasonable parameters for the linear classification layer. Moreover, the strong performance of these strategies despite their high dependence on so few PCPs reaffirms the utility of the learned representations." }, { "heading": "5.3 PATIENT CARDIAC PROTOTYPES ARE PATIENT-SPECIFIC", "text": "So far, we have shown that PCPs are class-discriminative and can assist in achieving strong generalization performance. In this section, we aim to validate our initial claim that PCPs are patient-specific. To do so, we calculate the Euclidean distance between each PCP and two sets of representations. The first includes representations corresponding to the same patient as that of the PCP (PCP to Same Training Patient). The second includes representations that correspond to all remaining patients (PCP to Different Training Patients). In Fig. 4, we illustrate the distribution of these distances.\nWe find that PCPs are indeed patient-specific. This can be seen by the smaller average distance between PCPs and representations of the same patient (≈ 4.5) than between PCPs and representations of different patients (≈ 9.5). Such a finding implies that PCPs are, on average, a factor of two more similar to representations from the same patient than they are to those from other patients.\nWe also set out to investigate whether computing their similarity to representations of instances in the validation set (as is done in Fig. 1) was appropriate. In Fig. 4, we overlay the distribution of distances between the PCPs and representations of instances from the validation set (PCP to Validation Patients). We find that comparing PCPs to representations of instances in the validation set is appropriate. This is emphasized by how the average Euclidean distance between these two entities (≈ 9) is of the same order as the average Euclidean distance between PCPs and representations of instances in the training set (≈ 4). Based on the distributions in Fig. 4, we can also confirm that patients in the validation set are not present in the training set, as was enforced by design. This can be seen by the minimal overlap between the blue and purple distributions. Such a finding suggests that PCPs can be deployed to detect out-of-distribution patients or distribution shift. We also illustrate the generalizability of these findings by arriving at the same conclusion on the PTB-XL and PhysioNet 2020 datasets (see Appendix C)." }, { "heading": "5.4 DISCOVERY OF SIMILAR PATIENTS VIA PATIENT CARDIAC PROTOTYPES", "text": "Having established that PCPs are patient-specific and class-discriminative, we now investigate whether they can be exploited to quantify patient similarity. From a clinical perspective, such information can allow physicians to discover similar patient sub-cohorts and guide medical treatment. This is particularly consequential when patients are located across different healthcare institutions. Patient similarity quantification also has the potential to add a layer of interpretability to exigent network-generated diagnoses. In this section, we exploit, and validate the ability of, PCPs to identify similar (and dissimilar) patients.\nTo quantify patient similarity, we compute the pairwise distance (e.g., Euclidean) between each PCP and each representation in the validation set. The distribution of these distances can be found in Fig. 5 (top). We average these distances across representations that belong to the same patient, and generate a matrix of distances between pairs of patients (see Fig. 5, centre, for a subset of that matrix). Validating our findings, however, is non-trivial since similarity can be based on patient demographics, physiology, or treatment outcomes. With this in mind, we decide to validate our findings both qualitatively and quantitatively. For the former, we locate the cell in the distance matrix with the lowest distance, and in turn, identify the most similar pair of patients. We then visualize their corresponding 12-lead ECG recordings (Fig. 5, bottom).
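\nThe patient-to-patient distance matrix described above can be computed along these lines (this sketch is ours; names and shapes are illustrative):\nimport torch\ndef patient_distance_matrix(v, h_val, val_patient_ids, n_val_patients):\n    # v: (P, E) PCPs; h_val: (M, E) validation representations;\n    # val_patient_ids: (M,) patient index of each validation representation.\n    d = torch.cdist(v, h_val)  # (P, M) Euclidean distances\n    D = torch.zeros(v.shape[0], n_val_patients)\n    for q in range(n_val_patients):\n        D[:, q] = d[:, val_patient_ids == q].mean(dim=1)  # average over each patient's recordings\n    return D  # D[p, q] is the distance between training patient p and validation patient q\nThe most similar pair of patients then corresponds to the smallest entry of D.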
\nWe find that PCPs are able to sufficiently distinguish between unseen patients and thus act as reasonable patient-similarity tools. In Fig. 5 (centre), we see that there exists a large range of distance values for any chosen PCP (row). In other words, it is closer to some representations than to others, implying that a chosen PCP is not trivially equidistant to all other representations. However, distinguishing between patients is not sufficient for a patient-similarity tool. We show that PCPs can also correctly capture the relative similarity to these patients. In Fig. 5 (bottom), we show that the two patients identified as being most similar to one another, using our method, have ECG recordings with a similar morphology and arrhythmia label, supra-ventricular tachycardia. We hypothesize that this behaviour arises due to the ability of PCPs to efficiently summarize the cardiac state of a patient. Such a finding reaffirms the potential of PCPs as patient-similarity tools. We also repeat the above procedure in an attempt to discover similar and dissimilar patients across different datasets. In doing so, we arrive at positive results and similar conclusions to those above (see Appendix D).\nWe now move on to quantitatively validate the PCP-derived patient similarity values. Conceptually, we build on our qualitative findings and assume that a pair of patients, identified as being similar by our method, is in fact similar if they happen to be associated with the same cardiac arrhythmia label. More specifically, we retrieve all pairs of patients that are more similar than some threshold distance, d_E, and determine what percentage of such retrieved pairs consist of patients with a matching cardiac arrhythmia label (Precision). In Fig. 6, we illustrate this precision as a function of different threshold distances.\nWe find that PCP-derived similarity values are able to identify patients with matching cardiac arrhythmia labels. For example, > 90% of the pairs of patients that are deemed very similar to one another (i.e., d_E < 6.0) exhibit a perfect cardiac arrhythmia label match. As we increase the threshold distance, d_E → 8.5, we see that Precision → 0. Such a decay is expected of a reasonable similarity metric, where patients that are deemed dissimilar do not match according to their cardiac arrhythmia labels. Moreover, based on an acceptable level of precision (e.g., 0.90), we can identify an appropriate threshold distance (e.g., d_E ≈ 6.2). It is worthwhile to note that this specific threshold coincides with the region of minimal distribution overlap we observed in Fig. 4. This suggests that a simple threshold can be derived from those distributions.
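\nThe precision-at-threshold computation can be sketched as follows (ours; D is the patient distance matrix from the previous sketch, and the per-patient arrhythmia labels are assumed to be available as integer tensors):\nimport torch\ndef precision_at_threshold(D, labels_train, labels_val, d_E):\n    # D: (P, Q) patient distance matrix; labels_train: (P,); labels_val: (Q,).\n    pairs = (D < d_E).nonzero()  # all pairs of patients closer than the threshold\n    if len(pairs) == 0:\n        return float(\"nan\")\n    match = labels_train[pairs[:, 0]] == labels_val[pairs[:, 1]]\n    return match.float().mean().item()  # fraction of retrieved pairs with matching labels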
" }, { "heading": "5.5 DATASET DISTILLATION VIA PATIENT CARDIAC PROTOTYPES", "text": "Our interpretation and the growing evidence we have presented in support of PCPs as efficient descriptors of the cardiac state of a patient led us to investigate the following question: could PCPs be sufficient for training classification tasks, in lieu of the original dataset? This idea is analogous to dataset distillation, which focuses on obtaining a coreset of instances that does not compromise the generalization capabilities of a model (Feldman et al., 2018; Wang et al., 2018).\nTo investigate the role of PCPs as dataset distillers, we train a Support Vector Machine (SVM) on them and evaluate the model on representations of held-out instances. We compare PCPs to three coreset construction methods: 1) Lucic (Lucic et al., 2016), 2) Lightweight (Bachem et al., 2018), and 3) Archetypal (Mair & Brefeld, 2019). In constructing the coreset, these methods generate a categorical proposal distribution over all instances in the dataset before sampling k instances and assigning them weights. For a fair comparison to our method, we chose k = P, where P is the number of PCPs. In addition to exploring the effect of these coreset construction methods based on raw instances, we do so based on representations of instances learned via our network. In Table 1, we illustrate the performance of these methods on various datasets.
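\nThe evaluation protocol can be sketched with scikit-learn (this snippet is ours; random arrays stand in for the PCPs, their per-patient labels, and the held-out representations):\nimport numpy as np\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import roc_auc_score\nrng = np.random.default_rng(0)\nv, y_v = rng.normal(size=(100, 16)), rng.integers(0, 2, 100)  # PCPs and their labels\nh_test, y_test = rng.normal(size=(50, 16)), rng.integers(0, 2, 50)  # held-out representations\nclf = SVC(probability=True).fit(v, y_v)  # train on PCPs only, in lieu of the full dataset\nauc = roc_auc_score(y_test, clf.predict_proba(h_test)[:, 1])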
\nIn Table 1, we find that coresets of raw instances generated by traditional coreset construction methods are insufficient for achieving strong generalization performance. For example, the Archetypal method achieves an AUC = 54.8 on Chapman. Such poor performance is likely attributed to the poor class separability of the input features. Nonetheless, we show that the exact same set of methods still performs poorly, albeit slightly better, even when constructing coresets from network-derived representations that have been shown to be separable (see Fig. 2a). For example, the Archetypal method now achieves an AUC = 58.1 on Chapman. In contrast, we show that PCPs are relatively more effective coresets than those constructed by the remaining methods. On Chapman, for example, PCPs achieve an AUC = 88.7. These findings suggest that PCPs have the potential to effectively summarize datasets in a compact manner and act as dataset distillation tools.\nHaving shown the utility of PCPs as dataset distillers, we wanted to investigate the extent to which further distillation was possible. In Fig. 2, we illustrate the generalization performance of models trained on a different fraction of the available PCPs. For comparison's sake, we also show the AUC of our network when trained on all instances in the training set (Full Training Set), which is severalfold larger than the number of PCPs. We find that PCPs do indeed act as effective dataset distillers. In Fig. 2, we show that training on 100% of the PCPs (N = 6,387) achieves an AUC ≈ 0.89, which is similar to that achieved when training on the full training set (N = 76,614). In other words, we achieve similar generalization performance despite a 12-fold decrease in the number of training instances. We also show that further reducing the number of PCPs, by selecting a random subset for training, does not significantly hurt performance. For example, training with only 5% of the available PCPs (N = 319) achieves an AUC ≈ 0.82. Concisely, this corresponds to a 7% reduction in performance despite a 240-fold decrease in the number of training instances relative to that found in the standard training procedure. We arrive at similar conclusions when changing the embedding dimension, E (see Appendix F). Such a finding supports the potential of PCPs as dataset distillers. We hypothesize that this behaviour arises due to our patient-centric contrastive learning approach. By encouraging each PCP to be similar to several representations of instances that belong to the same patient, it is able to capture the most pertinent information and shed that which is least useful." }, { "heading": "5.6 DISCUSSION AND FUTURE WORK", "text": "In this paper, we proposed to learn efficient representations of the cardiac state of a patient, entitled patient cardiac prototypes, using a combination of contrastive and supervised learning. We showed that patient cardiac prototypes are both patient-specific and discriminative for the task at hand. We successfully deployed PCPs for the discovery of similar patients within the same dataset and across different datasets. This opens the door to leveraging clinical information that is available in disparate healthcare institutions. Lastly, we illustrated the potential of PCPs as dataset distillers, where they can be used to train models in lieu of larger datasets without compromising generalization performance. We now elucidate several future avenues worth exploring.\nObtaining a multi-modal summary of the cardiac state of a patient. Although our approach was validated on multiple, large, time-series datasets, it was limited to a single modality, the ECG. Incorporating additional modalities into our approach, such as coronary angiograms, cardiac MRI, and cardiac CT, may provide a more reliable summary of the cardiac state of the patient. This could ultimately lead to more reliable patient similarity quantification.\nGuiding the design of graph neural networks. Arriving at ground-truth values for the similarity of a pair of patients is non-trivial. Recently, graph neural networks have been relatively successful at discovering and quantifying the similarity of instances, but most necessitate a pre-defined graph structure, which may be difficult to design in the context of physiological signals. We believe that designing this graph structure can be facilitated by our patient-similarity scores, for instance, by using them as an initialization of the edge weights between nodes." } ]
null
PCPS: PATIENT CARDIAC PROTOTYPES
SP:b7a45906d972644e9d0e757a83ff50fd3ad7cde3
[ "Either putting the uncertainty on the weights (e.g., Bayes by BP) or on the activation (e.g., fast dropout or variants of natural-parameter networks [2,3] or Bayesian dark knowledge [4]) or both [1] have been investigated before. The idea of moving the uncertainty from the weight to the activation function is not new. One could argue that VAE-style parameterization or local reparameterization trick is also a kind of methods that put uncertainty in the activation function. In fact the proposed method does involve the reprarameterization trick in each layer as shown in Eq. 7." ]
Current approaches for uncertainty estimation in deep learning often produce overconfident results. Bayesian Neural Networks (BNNs) model uncertainty in the space of weights, which is usually high-dimensional and limits the quality of variational approximations. The more recent functional BNNs (fBNNs) address this only partially because, although the prior is specified in the space of functions, the posterior approximation is still defined in terms of stochastic weights. In this work we propose to move uncertainty from the weights (which are deterministic) to the activation function. Specifically, the activations are modelled with simple 1D Gaussian Processes (GPs), for which a triangular kernel inspired by the ReLU non-linearity is explored. Our experiments show that activation-level stochasticity provides more reliable uncertainty estimates than BNNs and fBNNs, while performing competitively in standard prediction tasks. We also study the connection with deep GPs, both theoretically and empirically. More precisely, we show that activation-level uncertainty requires fewer inducing points and is better suited for deep architectures.
[ { "affiliations": [], "name": "Pablo Morales-Álvarez" }, { "affiliations": [], "name": "Daniel Hernández-Lobato" } ]
[ { "authors": [ "F. Agostinelli", "M. Hoffman", "P. Sadowski", "P. Baldi" ], "title": "Learning activation functions to improve deep neural networks", "venue": "arXiv preprint arXiv:1412.6830,", "year": 2014 }, { "authors": [ "P. Baldi", "P. Sadowski", "D. Whiteson" ], "title": "Searching for exotic particles in high-energy physics with deep learning", "venue": "Nature communications,", "year": 2014 }, { "authors": [ "D. Barber", "C.M. Bishop" ], "title": "Ensemble learning for multi-layer networks", "venue": "In Advances in neural information processing systems,", "year": 1998 }, { "authors": [ "C. Blundell", "J. Cornebise", "K. Kavukcuoglu", "D. Wierstra" ], "title": "Weight uncertainty in neural network", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "T. Bui", "D. Hernández-Lobato", "J.M. Hernández-Lobato", "Y. Li", "R.E. Turner" ], "title": "Deep Gaussian processes for regression using approximate expectation propagation", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "A. Damianou", "N.D. Lawrence" ], "title": "Deep Gaussian processes", "venue": "In International conference on artificial intelligence and statistics,", "year": 2013 }, { "authors": [ "A.G. De G. Matthews", "M. Van Der Wilk", "T. Nickson", "K. Fujii", "A. Boukouvalas", "P. León-Villagrá", "Z. Ghahramani", "J. Hensman" ], "title": "Gpflow: A Gaussian process library using tensorflow", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "N. Durrande", "D. Ginsbourger", "O. Roustant" ], "title": "Additive kernels for gaussian process modeling", "venue": "arXiv preprint arXiv:1103.4023,", "year": 2011 }, { "authors": [ "D.K. Duvenaud", "H. Nickisch", "C.E. Rasmussen" ], "title": "Additive Gaussian processes", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "A. Esteva", "B. Kuprel", "R.A. Novoa", "J. Ko", "S.M. Swetter", "H.M. Blau", "S. Thrun" ], "title": "Dermatologist-level classification of skin cancer", "venue": "with deep neural networks. Nature,", "year": 2017 }, { "authors": [ "D. Flam-Shepherd", "J. Requeima", "D. Duvenaud" ], "title": "Mapping Gaussian process priors to Bayesian neural networks", "venue": "In NIPS Bayesian deep learning workshop,", "year": 2017 }, { "authors": [ "A.Y.K. Foong", "Y. Li", "J.M. Hernández-Lobato", "R.E. Turner" ], "title": "in-between’uncertainty in Bayesian neural networks", "venue": "ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning,", "year": 2019 }, { "authors": [ "A.Y.K. Foong", "D.R. Burt", "Y. Li", "R.E. Turner" ], "title": "On the expressiveness of approximate inference in Bayesian neural networks", "venue": "In Advances in neural information processing systems,", "year": 2020 }, { "authors": [ "S. Fort", "H. Hu", "B. Lakshminarayanan" ], "title": "Deep ensembles: A loss landscape perspective", "venue": "arXiv preprint arXiv:1912.02757,", "year": 2019 }, { "authors": [ "Y. Gal" ], "title": "Uncertainty in Deep Learning", "venue": "PhD thesis, University of Cambridge,", "year": 2016 }, { "authors": [ "Y. Gal", "Z. Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "X. Glorot", "Y. 
Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In International conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "X. Glorot", "A. Bordes", "Y. Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In International conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "T. Gneiting", "A.E. Raftery" ], "title": "Strictly proper scoring rules, prediction, and estimation", "venue": "Journal of the American Statistical Association,", "year": 2007 }, { "authors": [ "C. Guo", "G. Pleiss", "Y. Sun", "K.Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In International conference on machine learning,", "year": 2017 }, { "authors": [ "D. Hafner", "D. Tran", "T. Lillicrap", "A. Irpan", "J. Davidson" ], "title": "Noise contrastive priors for functional uncertainty", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "J. Hensman", "N. Fusi", "N.D. Lawrence" ], "title": "Gaussian processes for big data", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2013 }, { "authors": [ "J. Hensman", "A.G. De G. Matthews", "Z. Ghahramani" ], "title": "Scalable variational Gaussian process classification", "venue": "In International conference on artificial intelligence and statistics,", "year": 2015 }, { "authors": [ "D. Hernández-Lobato", "J.M. Hernández-Lobato", "P. Dupont" ], "title": "Robust multi-class Gaussian process classification", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "G.E. Hinton", "D. Van Camp" ], "title": "Keeping the neural networks simple by minimizing the description length of the weights", "venue": "In Proceedings of the sixth annual conference on Computational learning theory,", "year": 1993 }, { "authors": [ "G.E. Hinton", "L. Deng", "D. Yu", "G.E. Dahl", "A.R. Mohamed", "N. Jaitly", "A. Senior", "V. Vanhoucke", "P. Nguyen", "T.R. Sainath", "B. Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "S. Ioffe", "C. Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "A. Kendall", "Y. Gal" ], "title": "What uncertainties do we need in Bayesian deep learning for computer vision", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "G. Klambauer", "T. Unterthiner", "A. Mayr", "S. Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. 
Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "B. Lakshminarayanan", "A. Pritzel", "C. Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "D.J.C. MacKay" ], "title": "A practical Bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "W.J. Maddox", "P. Izmailov", "T. Garipov", "D.P. Vetrov", "A.G. Wilson" ], "title": "A simple baseline for Bayesian uncertainty in deep learning", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "T. Mikolov", "K. Chen", "G. Corrado", "J. Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "A. Mobiny", "A. Singh", "H. Van Nguyen" ], "title": "Risk-aware machine learning classifier for skin lesion diagnosis", "venue": "Journal of clinical medicine,", "year": 2019 }, { "authors": [ "V. Nair", "G.E. Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In International conference on machine learning,", "year": 2010 }, { "authors": [ "R.M. Neal" ], "title": "Bayesian Learning for Neural Networks", "venue": "PhD thesis, University of Toronto,", "year": 1995 }, { "authors": [ "A. Nguyen", "J. Yosinski", "J. Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "K. Osawa", "S. Swaroop", "M.E.E. Khan", "A. Jain", "R. Eschenhagen", "R.E. Turner", "R. Yokota" ], "title": "Practical deep learning with Bayesian principles", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "T. Pearce", "R. Tsuchida", "M. Zaki", "A. Brintrup", "A. Neely" ], "title": "Expressive priors in Bayesian neural networks: Kernel combinations and periodic functions", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "J. Postels", "F. Ferroni", "H. Coskun", "N. Navab", "F. Tombari" ], "title": "Sampling-free epistemic uncertainty estimation using approximated variance propagation", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2019 }, { "authors": [ "J. Ren", "P.J. Liu", "E. Fertig", "J. Snoek", "R. Poplin", "M. Depristo", "J. Dillon", "B. Lakshminarayanan" ], "title": "Likelihood ratios for out-of-distribution detection", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "T. Salimans", "D.P. Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "H. Salimbeni", "M. Deisenroth" ], "title": "Doubly stochastic variational inference for deep Gaussian processes", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "A. Shekhovtsov", "B. 
Flach" ], "title": "Feed-forward propagation in probabilistic neural networks with categorical and max layers", "venue": "In International conference on learning representations,", "year": 2018 }, { "authors": [ "J. Shi", "S. Sun", "J. Zhu" ], "title": "A spectral approach to gradient estimation for implicit distributions", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "J. Snoek", "Y. Ovadia", "E. Fertig", "B. Lakshminarayanan", "S. Nowozin", "D. Sculley", "J. Dillon", "J. Ren", "Z. Nado" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "S. Sun", "G. Zhang", "J. Shi", "R. Grosse" ], "title": "Functional variational Bayesian neural networks", "venue": "In International conference on learning representations,", "year": 2019 }, { "authors": [ "M. Titsias" ], "title": "Variational learning of inducing variables in sparse Gaussian processes", "venue": "In International conference on artificial intelligence and statistics,", "year": 2009 }, { "authors": [ "S. Urban", "P. van der Smagt" ], "title": "Gaussian process neurons", "venue": "https://openreview.net/ forum?id=By-IifZRW,", "year": 2018 }, { "authors": [ "H. Wang", "X. Shi", "D.Y. Yeung" ], "title": "Natural-parameter networks: A class of probabilistic neural networks", "venue": "Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "S. Wang", "C. Manning" ], "title": "Fast dropout training", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "F. Wenzel", "K. Roth", "B.S. Veeling", "J. Świątkowski", "L. Tran", "S. Mandt", "J. Snoek", "T. Salimans", "R. Jenatton", "S. Nowozin" ], "title": "How good is the bayes posterior in deep neural networks really", "venue": "arXiv preprint arXiv:2002.02405,", "year": 2020 }, { "authors": [ "C.K.I. Williams", "C.E. Rasmussen" ], "title": "Gaussian processes for machine learning, volume 2. MIT press", "venue": null, "year": 2006 }, { "authors": [ "A. Wu", "S. Nowozin", "E. Meeds", "R.E. Turner", "Hernández-Lobato J.M", "A.L. Gaunt" ], "title": "Deterministic variational inference for robust bayesian neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "J. Yao", "W. Pan", "S. Ghosh", "F. Doshi-Velez" ], "title": "Quality of uncertainty quantification for bayesian neural network inference", "venue": null, "year": 1906 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Neural Networks (DNNs) have achieved state-of-the-art performance in many different tasks, such as speech recognition (Hinton et al., 2012), natural language processing (Mikolov et al., 2013) or computer vision (Krizhevsky et al., 2012). In spite of their predictive power, DNNs are limited in terms of uncertainty estimation. This has been a classical concern in the field (MacKay, 1992; Hinton & Van Camp, 1993; Barber & Bishop, 1998), which has attracted a lot of attention in the last years (Lakshminarayanan et al., 2017; Guo et al., 2017; Sun et al., 2019; Wenzel et al., 2020). Indeed, this ability to “know what is not known” is essential for critical applications such as medical diagnosis (Esteva et al., 2017; Mobiny et al., 2019) or autonomous driving (Kendall & Gal, 2017; Gal, 2016).\nBayesian Neural Networks (BNNs) address this problem through a Bayesian treatment of the network weights1 (MacKay, 1992; Neal, 1995). This will be refered to as weight-space stochasticity. However, dealing with uncertainty in weight space is challenging, since it contains many symmetries and is highly dimensional (Wenzel et al., 2020; Sun et al., 2019; Snoek et al., 2019; Fort et al., 2019). Here we focus on two specific limitations. First, it has been recently shown that BNNs with well-established inference methods such as Bayes by Backprop (BBP) (Blundell et al., 2015) and MC-Dropout (Gal & Ghahramani, 2016) underestimate the predictive uncertainty for instances located in-between two clusters of training points (Foong et al., 2020; 2019; Yao et al., 2019). Second, the weight-space prior does not allow BNNs to guide extrapolation to out-of-distribution (OOD) data (Sun et al., 2019; Nguyen et al., 2015; Ren et al., 2019). Both aspects are illustrated graphically in Figure 3, more details in Section 3.1.\n∗Work developed mostly while visiting Cambridge University, UK. 1The bias term will be absorbed within the weights throughout the work.\nAs an alternative to standard BNNs, Functional Bayesian Neural Nets (fBNN) specify the prior and perform inference directly in function space (Sun et al., 2019). This provides a mechanism to guide the extrapolation in OOD data, e.g. predictions can be encouraged to revert to the prior in regions of no observed data. However, the posterior stochastic process is still defined by a factorized Gaussian on the network weights (i.e. as in BBP), see (Sun et al., 2019, Sect. 3.1). We will show that this makes fBNN inherit the problem of underestimating the predictive uncertainty for in-between data.\nIn this work, we adopt a different approach by moving stochasticity from the weights to the activation function, see Figure 1. This will be referred to as auNN (activation-level uncertainty for Neural Networks). The activation functions are modelled with (one-dimensional) GP priors, for which a triangular kernel inspired by the ReLu non-linearity (Nair & Hinton, 2010; Glorot et al., 2011) is used. Since non-linearities are typically simple functions (e.g. ReLu, sigmoid, tanh), our GPs are sparsified with few inducing points. The network weights are deterministic parameters which are estimated to maximize the marginal likelihood of the model. The motivation behind auNN is to avoid inference in the complex space of weights. 
We hypothesise that it could be enough to introduce stochasticity in the activation functions that follow the linear projections to provide sensible uncertainty estimations.\nWe show that auNN obtains well-calibrated estimations for in-between data, and its prior allows guiding the extrapolation to OOD data by reverting to the empirical mean. This will be visualized in a simple 1D example (Figure 3 and Table 1). Moreover, auNN obtains competitive performance in standard benchmarks, is scalable (datasets of up to ten million training points are used), and can be readily used for classification. The use of GPs for the activations establishes an interesting connection with deep GPs (DGPs) (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017). The main difference is the linear projection before the GP, recall Figure 1(c-d). This allows auNN units to model simpler mappings between layers, which are defined along one direction of the input space, similarly to neural networks. However, DGP units model more complex mappings defined on the whole input space, see also Figure 2a. We will show that auNN units require fewer inducing points and are better suited for deep architectures, achieving superior performance. Also, a thorough discussion on additional related work will be provided in Section 4.\nIn summary, the main contributions of this paper are: (1) a new approach to model uncertainty in DNNs, based on deterministic weights and simple stochastic non-linearities (in principle, not necessarily modelled by GPs); (2) the specific use of non-parametric GPs as a prior, including the triangular kernel inspired by the ReLu; (3) auNN addresses a well-known limitation of BNNs and fBNNs (uncertainty underestimation for in-between data), can guide the extrapolation to OOD data by reverting to the empirical mean, and is competitive in standard prediction tasks; (4) auNN units require fewer inducing points and are better suited for deep architectures than DGP ones, achieving superior performance." }, { "heading": "2 PROBABILISTIC MODEL AND INFERENCE", "text": "Model specification. We focus on a supervised task (e.g. regression or classification) with training data $\{\mathbf{x}_{n,:}, \mathbf{y}_{n,:}\}_{n=1}^N$ (the output is represented as a vector since all the derivations apply for the multi-output case). The graphical model in Figure 2b will be useful throughout this section. We assume a model of $L$ layers, each one with $D_l$ units as in Figure 1c. Each activation is modelled with a (1D) GP prior, i.e. $f^l_d(a^l_d) \sim \mathcal{GP}(\mu^l_d, k^l_d)$, with $\mu^l_d : \mathbb{R} \to \mathbb{R}$ and $k^l_d : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$. The GP hyperparameters $\theta^l_d$ will be omitted for clarity (for the kernels used here, $\theta^l_d$ includes the amplitude and the lengthscale). Assuming independence between units, each layer depends on the previous one as:\n$$p(\mathbf{F}^l \mid \mathbf{F}^{l-1}, \mathbf{W}^l) = p(\mathbf{F}^l \mid \mathbf{A}^l) = \prod_{d=1}^{D_l} p(\mathbf{f}^l_d \mid \mathbf{a}^l_d), \qquad (1)$$\nwhere $\mathbf{F}^l$ is the $N \times D_l$ matrix of outputs of the $l$-th layer for $N$ inputs, $\mathbf{W}^l$ is the $D_{l-1} \times D_l$ matrix of weights in that layer, and $\mathbf{A}^l$ is the $N \times D_l$ matrix of pre-activations, i.e. $\mathbf{A}^l = \mathbf{F}^{l-1} \cdot \mathbf{W}^l$. As usual, the columns and rows of $\mathbf{F}^l$ are denoted as $\mathbf{f}^l_d$ and $\mathbf{f}^l_{n,:}$, respectively (and analogously for the other matrices). Since the activation is defined by a GP, we have $p(\mathbf{f}^l_d \mid \mathbf{a}^l_d) = \mathcal{N}(\mathbf{f}^l_d \mid \boldsymbol{\mu}^l_d, \mathbf{K}^l_d)$, with $\boldsymbol{\mu}^l_d$ (resp. $\mathbf{K}^l_d$) the result of evaluating $\mu^l_d$ (resp. $k^l_d$) on $\mathbf{a}^l_d$ (that is, $\boldsymbol{\mu}^l_d$ is an $N$-dimensional vector and $\mathbf{K}^l_d$ is an $N \times N$ matrix). To fully specify the model, the output $\mathbf{Y}$ is defined from the last layer with a distribution that factorizes across data points, i.e. $p(\mathbf{Y} \mid \mathbf{F}^L) = \prod_{n=1}^N p(\mathbf{y}_{n,:} \mid \mathbf{f}^L_{n,:})$.
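To make the generative structure of eq. (1) concrete, the following is a minimal sketch (our own illustration, not the paper's code; all names are assumptions, a zero mean function and a standard RBF kernel are used):

```python
# A minimal sketch of the layer prior in eq. (1): a deterministic linear
# projection followed by independent 1D GPs on the pre-activations.
import numpy as np

def rbf(a, b, amplitude=1.0, lengthscale=1.0):
    """Isotropic RBF kernel on scalar (pre-activation) inputs."""
    d = a[:, None] - b[None, :]
    return amplitude * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_layer_prior(F_prev, W, kernel=rbf, jitter=1e-6):
    """Draw F^l ~ p(F^l | F^{l-1}, W^l) as in eq. (1)."""
    A = F_prev @ W                       # N x D_l pre-activations, A^l = F^{l-1} W^l
    N, D_l = A.shape
    F = np.empty((N, D_l))
    for d in range(D_l):                 # one independent 1D GP per unit
        K = kernel(A[:, d], A[:, d]) + jitter * np.eye(N)
        F[:, d] = np.random.multivariate_normal(np.zeros(N), K)
    return F

X = np.random.randn(8, 3)                # N = 8 inputs with D_0 = 3
W1 = np.random.randn(3, 5) / np.sqrt(3)  # D_0 x D_1 weight matrix
F1 = sample_layer_prior(X, W1)           # a stochastic forward pass of one layer
```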
This formulation resembles that of DGPs (Damianou & Lawrence, 2013; Salimbeni & Deisenroth, 2017). The main difference is that we model $\mathbf{F}^l \mid \mathbf{F}^{l-1}$ through $D_l$ 1D GPs evaluated on the pre-activations $\mathbf{A}^l$ (i.e. the projections of $\mathbf{F}^{l-1}$ through $\mathbf{W}^l$), whereas DGPs use $D_l$ GPs of dimension $D_{l-1}$ evaluated directly on $\mathbf{F}^{l-1}$, recall Figure 1(c-d).\nVariational Inference. Inference in the proposed model is intractable. To address this, we follow standard sparse variational GP approaches (Titsias, 2009; Hensman et al., 2013; 2015), similarly to the Doubly Stochastic Variational Inference (DSVI) for DGPs (Salimbeni & Deisenroth, 2017). Specifically, in each unit of each layer we introduce $M^l$ inducing values $\mathbf{u}^l_d$, which are the result of evaluating the GP on the one-dimensional inducing points $\mathbf{z}^l_d$. We naturally write $\mathbf{U}^l$ and $\mathbf{Z}^l$ for the corresponding $M^l \times D_l$ matrices associated to the $l$-th layer, respectively. Following eq. (1), the augmented model for one layer is\n$$p(\mathbf{F}^l, \mathbf{U}^l \mid \mathbf{F}^{l-1}, \mathbf{W}^l, \mathbf{Z}^l) = p(\mathbf{F}^l \mid \mathbf{U}^l, \mathbf{A}^l, \mathbf{Z}^l)\, p(\mathbf{U}^l \mid \mathbf{Z}^l) = \prod_{d=1}^{D_l} p(\mathbf{f}^l_d \mid \mathbf{u}^l_d, \mathbf{a}^l_d, \mathbf{z}^l_d)\, p(\mathbf{u}^l_d \mid \mathbf{z}^l_d). \qquad (2)$$\nVariational inference (VI) involves the approximation of the true posterior $p(\{\mathbf{F}^l, \mathbf{U}^l\}_l \mid \mathbf{Y})$. Following (Hensman et al., 2013; Salimbeni & Deisenroth, 2017), we propose a posterior given by $p(\mathbf{F} \mid \mathbf{U})$ and a parametric Gaussian on $\mathbf{U}$:\n$$q(\{\mathbf{F}^l, \mathbf{U}^l\}_l) = \prod_{l=1}^{L} p(\mathbf{F}^l \mid \mathbf{U}^l, \mathbf{A}^l, \mathbf{Z}^l)\, q(\mathbf{U}^l) = \prod_{l=1}^{L} \prod_{d=1}^{D_l} p(\mathbf{f}^l_d \mid \mathbf{u}^l_d, \mathbf{a}^l_d, \mathbf{z}^l_d)\, q(\mathbf{u}^l_d), \qquad (3)$$\nwhere $q(\mathbf{u}^l_d) = \mathcal{N}(\mathbf{u}^l_d \mid \mathbf{m}^l_d, \mathbf{S}^l_d)$, with $\mathbf{m}^l_d \in \mathbb{R}^{M^l}$ and $\mathbf{S}^l_d \in \mathbb{R}^{M^l \times M^l}$ variational parameters to be estimated. Minimizing the KL divergence between $q(\{\mathbf{F}^l, \mathbf{U}^l\}_l)$ and the true posterior is equivalent to maximizing the following evidence lower bound (ELBO):\n$$\log p(\mathbf{Y} \mid \{\mathbf{W}^l, \mathbf{Z}^l\}_l) \geq \mathrm{ELBO} = \sum_{n=1}^{N} \mathbb{E}_{q(\mathbf{f}^L_{n,:})}\left[\log p(\mathbf{y}_{n,:} \mid \mathbf{f}^L_{n,:})\right] - \sum_{l=1}^{L} \sum_{d=1}^{D_l} \mathrm{KL}\left(q(\mathbf{u}^l_d) \,\|\, p(\mathbf{u}^l_d)\right). \qquad (4)$$\nIn the ELBO, the KL term can be computed in closed-form, as both $q(\mathbf{u}^l_d)$ and $p(\mathbf{u}^l_d)$ are Gaussians. The log likelihood term can be approximated by sampling from the marginal posterior $q(\mathbf{f}^L_{n,:})$, which can be done efficiently through univariate Gaussians as in (Salimbeni & Deisenroth, 2017). Specifically, $\mathbf{U}^l$ can be analytically marginalized in eq. (3), which yields $q(\{\mathbf{F}^l\}_l) = \prod_l q(\mathbf{F}^l \mid \mathbf{F}^{l-1}, \mathbf{W}^l) = \prod_{l,d} \mathcal{N}(\mathbf{f}^l_d \mid \tilde{\boldsymbol{\mu}}^l_d, \tilde{\boldsymbol{\Sigma}}^l_d)$, with:\n$$[\tilde{\boldsymbol{\mu}}^l_d]_i = \mu^l_d(a^l_{id}) + \boldsymbol{\alpha}^l_d(a^l_{id})^\top (\mathbf{m}^l_d - \mu^l_d(\mathbf{z}^l_d)), \qquad (5)$$\n$$[\tilde{\boldsymbol{\Sigma}}^l_d]_{ij} = k^l_d(a^l_{id}, a^l_{jd}) - \boldsymbol{\alpha}^l_d(a^l_{id})^\top (k^l_d(\mathbf{z}^l_d) - \mathbf{S}^l_d)\, \boldsymbol{\alpha}^l_d(a^l_{jd}), \qquad (6)$$\nwhere $\boldsymbol{\alpha}^l_d(x) = k^l_d(x, \mathbf{z}^l_d)[k^l_d(\mathbf{z}^l_d)]^{-1}$ and $\mathbf{a}^l_{n,:} = \mathbf{W}^l \mathbf{f}^{l-1}_{n,:}$. Importantly, the marginal posterior $q(\mathbf{f}^l_{n,:})$ is a Gaussian that depends only on $\mathbf{a}^l_{n,:}$, which in turn only depends on $q(\mathbf{f}^{l-1}_{n,:})$. Therefore, sampling from $\mathbf{f}^l_{n,:}$ is straightforward using the reparametrization trick (Kingma & Welling, 2013):\n$$f^l_{nd} = [\tilde{\boldsymbol{\mu}}^l_d]_n + \varepsilon \cdot [\tilde{\boldsymbol{\Sigma}}^l_d]^{1/2}_{nn}, \quad \text{with } \varepsilon \sim \mathcal{N}(0, 1), \text{ and } \mathbf{f}^0_{n,:} = \mathbf{x}_{n,:}. \qquad (7)$$\nTraining consists in maximizing the ELBO, eq. (4), w.r.t. variational parameters $\{\mathbf{m}^l_d, \mathbf{S}^l_d\}$, inducing points $\{\mathbf{z}^l_d\}$, and model parameters (i.e. weights $\{\mathbf{w}^l_d\}$ and kernel parameters $\{\theta^l_d\}$). This can be done in batches, allowing for scalability to very large datasets. The complexity to evaluate the ELBO is $O(NM^2(D_1 + \cdots + D_L))$, the same as DGPs with DSVI (Salimbeni & Deisenroth, 2017). (As in Salimbeni & Deisenroth (2017), there exists also a cubic term $O(M^3(D_1 + \cdots + D_L))$ that is dominated by the former, since the batch size $N$ is typically larger than $M$; moreover, in auNN we have the multiplication by weights, with complexity $O(N D_{l-1} D_l)$ for each layer, which is also dominated by the former.)\nPredictions. Given a new $\mathbf{x}_{*,:}$, we want to compute $p(\mathbf{f}^L_{*,:} \mid \mathbf{X}, \mathbf{Y}) \approx \mathbb{E}_{q(\{\mathbf{U}^l\})}\left[p(\mathbf{f}^L_{*,:} \mid \{\mathbf{U}^l\})\right]$. (The distribution $p(\mathbf{y}^L_{*,:} \mid \mathbf{X}, \mathbf{Y})$ is obtained as the expectation of the likelihood over $p(\mathbf{f}^L_{*,:} \mid \mathbf{X}, \mathbf{Y})$; a Gaussian likelihood is used for regression, and the Robust-Max (Hernández-Lobato et al., 2011) for classification.) As in (Salimbeni & Deisenroth, 2017), this can be approximated by sampling $S$ values up to the $(L-1)$-th layer with the same eq. (7), but starting with $\mathbf{x}_{*,:}$. Then, $p(\mathbf{f}^L_{*,:} \mid \mathbf{X}, \mathbf{Y})$ is given by the mixture of the $S$ Gaussian distributions obtained from eqs. (5)-(6).
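Concretely, eqs. (5)-(7) can be sketched for a single unit as follows (our own illustration; names are assumptions, a zero mean function is used, and `kernel` is any callable like the `rbf` above):

```python
# Sketch of the marginal posterior of eqs. (5)-(6) and the reparametrized
# sample of eq. (7) for one unit of one layer.
import numpy as np

def sample_unit(a, z, m, S, kernel, jitter=1e-6):
    """a: (N,) pre-activations; z: (M,) inducing points; m, S: q(u) params."""
    Kzz = kernel(z, z) + jitter * np.eye(len(z))
    alpha = np.linalg.solve(Kzz, kernel(z, a)).T            # row i is alpha(a_i)^T
    mu_tilde = alpha @ m                                    # eq. (5), zero mean fn
    kaa = np.diag(kernel(a, a))                             # only the diagonal of
    var_tilde = kaa - np.einsum('nm,mk,nk->n',              # eq. (6) is needed to
                                alpha, Kzz - S, alpha)      # sample via eq. (7)
    eps = np.random.randn(len(a))                           # reparametrization trick
    return mu_tilde + eps * np.sqrt(np.maximum(var_tilde, 0.0))
```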
Triangular kernel. One of the most popular kernels in GPs is the RBF (Williams & Rasmussen, 2006), which produces very smooth functions. However, the ReLu non-linearity led to a general boost in performance in DNNs (Nair & Hinton, 2010; Glorot et al., 2011), and we aim to model similar activations. Therefore, we introduce the use of the triangular (TRI) kernel. Just like RBF, TRI is an isotropic kernel, i.e. it depends on the distance between the inputs, $k(x, y) = \gamma \cdot g(|x - y| / \ell)$, with $\gamma$ and $\ell$ the amplitude and lengthscale. For RBF, $g(t) = e^{-t^2/2}$. For TRI, $g(t) = \max(1 - t, 0)$. This is a valid kernel (Williams & Rasmussen, 2006, Section 4.2.1). Similarly to the ReLu, the functions modelled by TRI are piecewise linear, see Figure 6a in the main text and Figure 8 in Appendix C.\nComparison with DGP. The difference between auNN and DGP units is graphically illustrated in Figure 2a. Whereas DGP mappings from one layer to the next are complex functions defined on $D_{l-1}$ dimensions ($D_{l-1} = 2$ in the figure), auNN mappings are defined just along one direction via the weight projection. This is closer in spirit to NNs, whose mappings are also simpler and better suited for feature extraction and learning more abstract concepts. Moreover, since the GP is defined on a 1D space, auNN requires fewer inducing points than DGP (which, intuitively, can be interpreted as inducing (hyper)planes in the $D_{l-1}$-dimensional space before the projection)." }, { "heading": "3 EXPERIMENTS", "text": "In this section, auNN is compared to BNN, fBNN (Sun et al., 2019) and DSVI DGP (Salimbeni & Deisenroth, 2017). BNNs are trained with BBP (Blundell et al., 2015), since auNN also leverages a simple VI-based inference approach. In each section we will highlight the most relevant experimental aspects, and all the details can be found in Appendix B. In the sequel, NLL stands for Negative Log Likelihood. Anonymized code for auNN is provided in the supplementary material, along with a script to run it for the 1D illustrative example of Section 3.1." }, { "heading": "3.1 AN ILLUSTRATIVE EXAMPLE", "text": "Here we illustrate the two aspects that were highlighted in the introduction: the underestimation of predictive uncertainty for instances located in-between two clusters of training points and the extrapolation to OOD data.\n[Figure 3 here: five panels (NN, BNN, fBNN, auNN-RBF, auNN-TRI) showing predictive distributions over the input range from -5 to 5.]\nFigure 3: Predictive distribution (mean and one standard deviation) after training on a 1D dataset with two clusters of points. This simple example illustrates the main limitations of NN, BNN and fBNN, which are overcome by the novel auNN. See Table 1 for a summary and the text for details.\nTable 1: Visual overview of conclusions from the 1D experiment in Figure 3. This shows that NN, BNN, fBNN and auNN increasingly expand their capabilities.\n       Epistemic uncertainty   Reverts to the mean   In-between uncertainty\nNN     ✗                       ✗                     ✗\nBNN    ✓                       ✗                     ✗\nfBNN   ✓                       ✓                     ✗\nauNN   ✓                       ✓                     ✓
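For reference, the two activation kernels compared in these panels (auNN-RBF vs. auNN-TRI) reduce to the scalar profiles defined in Section 2; a minimal sketch with our own helper names, not the released code:

```python
# The two isotropic activation kernels: k(x, y) = gamma * g(|x - y| / lengthscale),
# with the scalar profiles g from Section 2.
import numpy as np

def kernel_matrix(x, y, g, gamma=1.0, lengthscale=1.0):
    t = np.abs(x[:, None] - y[None, :]) / lengthscale
    return gamma * g(t)

g_rbf = lambda t: np.exp(-0.5 * t ** 2)      # smooth samples
g_tri = lambda t: np.maximum(1.0 - t, 0.0)   # piecewise-linear, ReLu-like samples

x = np.linspace(-2.0, 2.0, 5)
K_rbf = kernel_matrix(x, x, g_rbf)
K_tri = kernel_matrix(x, x, g_tri)           # valid p.d. kernel (see Section 2)
```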
Figure 3 shows the predictive distribution of NN, BNN, fBNN and auNN (with RBF and TRI kernels) after training on a simple 1D dataset with two clusters of points. All the methods have one hidden layer with 25 units, and 5 inducing points are used for auNN.\nIn Figure 3, the deterministic nature of NNs prevents them from providing epistemic uncertainty (i.e. the one originating from the model (Kendall & Gal, 2017)). Moreover, there is no prior to guide the extrapolation to OOD data. BNNs provide epistemic uncertainty. However, the prior in the complex space of weights does not allow for guiding the extrapolation to OOD data (e.g. by reverting to the empirical mean). Moreover, note that BNNs underestimate the predictive uncertainty in the region between the two clusters, where there is no observed data (this region is usually called the gap). More specifically, as shown in (Foong et al., 2020), the predictive uncertainty for data points in the gap is limited by that on the extremes. By specifying the prior in function space, fBNN can induce properties in the output, such as reverting to the empirical mean for OOD data through a zero-mean GP prior. However, the underestimation of in-between uncertainty persists, since the posterior stochastic process for fBNN is based on a weight-space factorized Gaussian (as BNN with BBP), see (Sun et al., 2019, Section 3.1) for details. Finally, auNN (either with RBF or TRI kernel) addresses both aspects through the novel activation-level modelling of uncertainty, which utilizes a zero-mean GP prior for the activations. Table 1 summarizes the main characteristics of each method. Next, a more comprehensive experiment with deeper architectures and more complex multidimensional datasets is provided." }, { "heading": "3.2 UCI REGRESSION DATASETS WITH GAP SPLITS", "text": "Standard splits are not appropriate to evaluate the quality of uncertainty estimates for in-between data, since both train and test sets may cover the space equally. This motivated the introduction of gap splits (Foong et al., 2019). Namely, a set with D dimensions admits D such train-test partitions by considering each dimension, sorting the points according to its value, and selecting the middle 1/3 for test (and the outer 2/3 for training), see Figure 2c. With these partitions, overconfident predictions for data points in the gap manifest as very high values of test negative log likelihood.\nUsing the gap splits, it was recently shown that BNNs yield overconfident predictions for in-between data (Foong et al., 2019). The authors highlight the case of Energy and Naval datasets, where BNNs fail catastrophically. Figure 4a reproduces these results for BNNs and checks that fBNNs also obtain overconfident predictions, as theoretically expected. However, notice that activation-level stochasticity performs better, specially through the triangular kernel, which dramatically improves the results (see the plot scale). Figure 4b confirms that the difference is due to the underestimation of uncertainty, since the predictive performance in terms of RMSE is on a similar scale for all the methods. In all cases, D = 50 hidden units are used, and auNN uses M = 10 inducing points.\nTo further understand the intuition behind the different results, Figure 5 shows the predictive distribution over a segment that crosses the gap, recall Figure 2c. We observe that activation-level approaches obtain more sensitive (less confident) uncertainties in the gap, where there is no observed data. 
For instance, BNN and fBNN predictions in Naval are unjustifiably overconfident, since the output in that dataset ranges from 0.95 to 1. Also, to illustrate the internal mechanism of auNN, Figure 6a shows one example of the activations learned when using each kernel. Although it is just one example, it allows for visualising the different nature: smoother for RBF and piecewise linear for TRI. All the activations for a particular network and for both kernels are shown in Appendix C (Figure 8).\nIn addition to the paradigmatic cases of Energy and Naval illustrated here, four more datasets are included in Appendix C. Figure 7 there is analogous to Figure 4 here, and Tables 4 and 5 there show the full numeric results and ranks. We observe that auNN, specially through the triangular kernel, obtains the best results and does not fail catastrophically in any dataset (unlike BNN and fBNN, which do in Energy and Naval). Finally, the performance on the gap splits is complemented by that on standard splits, see Tables 6 and 7 in Appendix C. This shows that, in addition to the enhanced uncertainty estimation, auNN is a competitive alternative in general practice." }, { "heading": "3.3 COMPARISON WITH DGPS", "text": "As explained in Section 2, the choice of a GP prior for activation stochasticity establishes a strong connection with DGPs. The main difference is that auNN performs a linear projection from Dl−1 to Dl dimensions before applying Dl 1D GPs, whereas DGPs define Dl GPs directly on the Dl−1 dimensional space. This means that auNN units are simpler than those of DGP, recall Figure 2a. Here we show two practical implications of this.\nFirst, it is reasonable to hypothesise that DGP units may require a higher number of inducing points M than auNN, since they need to cover a multi-dimensional input space. By contrast, auNN may require a higher number of hidden units D, since these are simpler. Importantly, the computational cost is not symmetric in M and D, but significantly cheaper on D, recall Section 2. Figure 6b shows the performance of auNN and DGP for different values of M and D on the UCI Kin8 set (with one hidden layer; depth will be analyzed next). As expected, note the different influence by M and D: whereas auNN improves “by rows” (i.e. as D grows), DGP does it “by columns” (i.e. as M grows)5. Next section (Section 3.4), will show that this makes auNN faster than DGP in practice. An analogous figure for RMSE and full numeric results are in Appendix C (Figure 9 and Tables 9-10).\nSecond, auNN simpler units might be better suited for deeper architectures. Figure 6c shows the performance on the UCI Power dataset when depth is additionally considered. It can be observed that auNN is able to take greater advantage of depth, which translates into better overall performance.\n5Interestingly, the fact that DGP is not greatly influenced by D could be appreciated in its recommended value in the original work (Salimbeni & Deisenroth, 2017). They set D = min(30, D0), where D0 is the input dimension. This limits D to a maximum value of 30.\nMoreover, the aforementioned different influence ofD andM on DGP and on auNN is also confirmed here. The results on RMSE are similar, see Figure 10 and Tables 11-12 in Appendix C.\nFinally, it may be argued that auNN closely resembles a DGP with additive kernel (Duvenaud et al., 2011; Durrande et al., 2011) (DGP-add hereafter). Recall that an additive kernel models functions that are decomposed as f(x) = f1(x1) + · · ·+ fD(xD). 
Therefore, the model for $\mathbf{a}^{l+1} \mid \mathbf{a}^l$ in auNN is very similar to that of $\mathbf{f}^{l+1} \mid \mathbf{f}^l$ in DGP-add, see Figure 11 in Appendix C. Specifically, in both cases, the input ($\mathbf{a}^l$ in auNN, $\mathbf{f}^l$ in DGP-add) goes through 1D GPs and then these are aggregated (linear combination through $\mathbf{W}$ in auNN, summation in DGP-add) to yield the output ($\mathbf{a}^{l+1}$ in auNN, $\mathbf{f}^{l+1}$ in DGP-add). However, there exists a key difference. In auNN, all the nodes in the $(l+1)$-th layer (i.e. $a^{l+1}_i$) aggregate a shared set of distinct functions (namely, $f^l_i$), each node using its own weights to aggregate them. In DGP-add, by contrast, there is no such shared set of functions, and each node in the $(l+1)$-th layer (i.e. $f^{l+1}_i$) aggregates a different set of GP realizations (i.e. the unlabelled blue nodes in Figure 11c). This subtle theoretical difference has empirical implications, since many more functions need to be learned for DGP-add. Indeed, Figures 12 and 13 in Appendix C compare the performance of DGP-add and auNN-RBF (the experimental setting is analogous to that of Figure 6c; for a fair comparison, we use auNN-RBF and not TRI, because DGP-add leverages a RBF kernel). We observe that the results obtained by DGP-add are worse than those by auNN-RBF, probably due to the larger number of functions that need to be learned in DGP-add." }, { "heading": "3.4 CLASSIFICATION, SCALABILITY, AND ADDITIONAL METRICS", "text": "So far, we have experimented with small to medium regression datasets, and uncertainty estimation has been measured through the (negative) log likelihood and the visual inspection of the predictive distribution (Figures 3 and 5). Here we focus on two large scale classification datasets (up to $10^7$ instances), and additional metrics that account for uncertainty calibration are reported. We use the well-known particle physics binary classification sets HIGGS (N = 11M, D = 28) and SUSY (N = 5M, D = 18) (Baldi et al., 2014). We consider DGP as a baseline, as it obtained state-of-the-art results for these datasets (Salimbeni & Deisenroth, 2017). For all the methods, we consider a Robust-Max classification likelihood (Hernández-Lobato et al., 2011).\nThe metrics to be used are the Brier score (Gneiting & Raftery, 2007) and the Expected Calibration Error (ECE) (Guo et al., 2017). The former is a proper score function that measures the accuracy of probabilistic predictions for categorical variables. In practice, it is computed as the mean squared difference between a one-dimensional vector with the probability for each class label and the one-hot encoding of the actual class. The latter measures miscalibration as the difference in expectation between confidence and accuracy. This is done by partitioning the predictions in M equally spaced bins and taking a weighted average of the bins' accuracy/confidence difference, see (Guo et al., 2017, Eq.(3)) for details; a sketch of both metrics is given below.\nTable 2 shows the Brier score and ECE for auNN and DGP for different values of L (depth). We observe that auNN outperforms DGP in both metrics, achieving superior uncertainty estimation. Both TRI and RBF kernels obtain similar results for auNN. Notice that the Brier score generally improves with the network depth, whereas the performance in ECE decreases with depth. Interestingly, this behavior was also observed for standard NNs (Guo et al., 2017, Figure 2a).\nFinally, as was theoretically justified in Section 2, auNN can scale up to very large datasets (HIGGS has more than $10^7$ training instances).
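As referenced above, both calibration metrics admit a short implementation; this is our own illustration (exact normalizations may vary across implementations):

```python
# Sketch of the Brier score and the Expected Calibration Error (ECE).
import numpy as np

def brier_score(probs, labels):
    """probs: (N, C) class probabilities; labels: (N,) integer classes."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))

def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted average of |accuracy - confidence| over equally spaced bins."""
    conf = probs.max(axis=1)                 # confidence of the predicted class
    correct = probs.argmax(axis=1) == labels
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```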
Regarding the practical computational cost, Table 3 shows the average training time per batch for both auNN and DGP in the previous datasets. Although the theoretical complexity is analogous for both methods (recall Section 2), the experiments in Figures 6b-c showed that DGP requires larger values of M , whereas auNN needs larger D 7. Since the computational cost is not symmetric on M and D, but significantly cheaper in the latter (recall Section 2), auNN is faster than DGP in practice." }, { "heading": "4 RELATED WORK", "text": "Activation-level uncertainty is introduced here as an alternative to weight-space stochasticity. The expressiveness of the latter has been recently analyzed in the recent work (Wenzel et al., 2020), where the authors advocate a modified BNN objective. Alternatively, different prior specifications are studied in (Hafner et al., 2020; Pearce et al., 2019; Flam-Shepherd et al., 2017), in addition to the fBNN discussed here (Sun et al., 2019). However, none of these works consider stochasticity on the activations.\nSince we present a straightforward use of VI for auNN, in this work we have compared empirically with the well-known VI-based BBP for BNNs. Yet, we expect auNN to benefit from independent inference refinements like those proposed over the last years for BNNs. For instance, natural-gradient VI allows for leveraging techniques such as BatchNorm or data augmentation (Osawa et al., 2019), and the information contained in the SGD trajectory can be exploited as well (Maddox et al., 2019). Also, getting rid of the gradient variance through deterministic approximate moments has provided enhanced results in BNNs (Wu et al., 2019).\n7In this section, both DGP and auNN are trained with one hidden layer and their optimal configuration according to the previous experiment: large M for DGP (M = 100, D is set as recommended by the authors, i.e. D = min(30, D0)), and large D for auNN (D = 50, M is set to the intermediate value of M = 25).\nA key aspect of auNN is the modelling of the activation function. This element of neural nets has been analyzed before. For instance, self-normalizing neural nets (Klambauer et al., 2017) induce the normalization that is explicitly performed in related approaches such as BatchNorm (Ioffe & Szegedy, 2015) and weight and layer normalization (Salimans & Kingma, 2016; Ba et al., 2016). Learnable deterministic activations have been explored too, e.g. (He et al., 2015; Agostinelli et al., 2014). However, as opposed to auNN, in all these cases the activations are deterministic.\nProbabilistic neural networks such as Natural-Parameter Networks (NPN) (Wang et al., 2016) propagate probability distributions through layers of transformations. Therefore, the values of the activations are also described by probability distributions (specifically, the exponential family is used in NPN). Fast dropout training (Wang & Manning, 2013) and certain variants of NPNs can be also viewed in this way (Shekhovtsov & Flach, 2018; Postels et al., 2019). However, in auNN the activations are modelled themselves as stochastic learnable components that follow a GP prior. Along with the deterministic weights, this provides a conceptually different approach to model uncertainty.\nA very preliminary study on GP-based activation functions is proposed in (Urban & van der Smagt, 2018). However, the method is not empirically evaluated, no connection with deep GPs is provided, and the inference approach is limited. 
Namely, the output of each unit is approximated with a Gaussian whose mean and covariance are computed in closed-form, as was done in (Bui et al., 2016) for DGPs. However, this is only tractable for the RBF kernel (in particular, it cannot leverage the more convenient TRI kernel studied here), and the Gaussian approximation typically yields worse results than Monte Carlo approximations to the ELBO as used here (indeed, DSVI (Salimbeni & Deisenroth, 2017) substantially improved the results for DGPs compared to (Bui et al., 2016))." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "We proposed a novel approach for uncertainty estimation in neural network architectures. Whereas previous methods are mostly based on a Bayesian treatment of the weights, here we move the stochasticity to the activation functions, which are modelled with a simple 1D GP and a triangular kernel inspired by the ReLu. Our experiments show that the proposed method obtains better calibrated uncertainty estimates and is competitive in standard prediction tasks. Moreover, the connection with deep GPs is analyzed. Namely, our approach requires fewer inducing points and is better suited for deep architectures, achieving superior performance.\nWe hope this work raises interest in alternative approaches to model uncertainty in neural networks. One of the main directions of future research is to deeply understand the properties induced by each one of the kernels considered here (i.e. the triangular one and RBF). In particular, it would be interesting to automatically learn the optimal kernel for each unit in a probabilistic way. Also, the use of a GP prior for the activation function may hamper the scalability of auNN to wider and/or deeper networks. In these cases, the GP-based activation model could be substituted by a simpler Bayesian parametric one. This would allow for a cheaper modelling of uncertainty within the activations. Finally, since only the activation function is modified, important deep learning elements such as convolutional layers can still be incorporated." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by the “Agencia Estatal de Investigación” of the Spanish “Ministerio de Ciencia e Innovación” under contract PID2019-105142RB-C22/AEI/10.13039/501100011033, and the Spanish “Ministerio de Economía, Industria y Competitividad” under contract DPI2016-77869C2-2-R. DHL acknowledges support from the Spanish “Ministerio de Ciencia e Innovación” (projects TIN2016-76406-P and PID2019-106827GB-I00/AEI/10.13039/501100011033). PMA was funded by La Caixa Banking Foundation (ID 100010434, Barcelona, Spain) through La Caixa Fellowship for Doctoral Studies LCF/BQ/ES17/11600011, and the University of Granada through the program “Proyectos de Investigación Precompetitivos para Jóvenes Investigadores del Plan Propio 2019” (ref. PPJIB2019-03)." }, { "heading": "A PRACTICAL SPECIFICATIONS FOR AUNN", "text": "Whitening transformation for $q(\mathbf{u}^l_d)$. The proposed parametric posterior for each unit is given by the Gaussian $q(\mathbf{u}^l_d) = \mathcal{N}(\mathbf{u}^l_d \mid \mathbf{m}^l_d, \mathbf{S}^l_d)$. The GP prior on $\mathbf{u}^l_d$ is $p(\mathbf{u}^l_d) = \mathcal{N}(\mathbf{u}^l_d \mid \boldsymbol{\mu}^l_d, \mathbf{K}^l_d)$, with $\boldsymbol{\mu}^l_d = \mu^l_d(\mathbf{z}^l_d)$ and $\mathbf{K}^l_d = k^l_d(\mathbf{z}^l_d, \mathbf{z}^l_d)$. For numerical stability and to reduce the amount of operations, we use a white representation for $q(\mathbf{u}^l_d)$, as is common practice in (D)GPs (De G. Matthews et al., 2017; Salimbeni & Deisenroth, 2017). That is, we consider the variable $\mathbf{v}^l_d \sim \mathcal{N}(\tilde{\mathbf{m}}^l_d, \tilde{\mathbf{S}}^l_d)$, with $\mathbf{u}^l_d = \boldsymbol{\mu}^l_d + (\mathbf{K}^l_d)^{1/2} \mathbf{v}^l_d$.
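A hypothetical sketch of this whitened parameterization (our own illustration; the Cholesky factor plays the role of $(\mathbf{K}^l_d)^{1/2}$, and the variable names anticipate the naming described next):

```python
# Whitened parameterization u = mu + K^{1/2} v, with q(v) = N(q_mu, q_sqrt q_sqrt^T).
import numpy as np

def whitened_sample(z, kernel, mean_fn, q_mu, q_sqrt, jitter=1e-6):
    """Draw u ~ q(u) for one unit through the white variable v."""
    Kzz = kernel(z, z) + jitter * np.eye(len(z))
    L = np.linalg.cholesky(Kzz)                   # K^{1/2} via Cholesky (assumed)
    v = q_mu + q_sqrt @ np.random.randn(len(z))   # v ~ N(q_mu, q_sqrt q_sqrt^T)
    return mean_fn(z) + L @ v
```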
Specifically, in the code the variable $\tilde{\mathbf{m}}^l_d$ is denoted as q_mu, and $\tilde{\mathbf{S}}^l_d$ is represented through its Cholesky factorization $(\tilde{\mathbf{S}}^l_d)^{1/2}$, which is named q_sqrt.\nInitialization of the variational parameters $\{\mathbf{m}^l_d\}$. These are the mean of the posterior distribution on the inducing points. Therefore, their value determines the initialization of the activation function. If the RBF kernel is used, $\{\mathbf{m}^l_d\}$ are initialized to the prior $\boldsymbol{\mu}^l_d = \mu^l_d(\mathbf{z}^l_d)$ (since we are using the aforementioned white representation, q_mu is initialized to zero). This is the most standard initialization in the GP literature. For the TRI kernel, $\{\mathbf{m}^l_d\}$ are initialized according to the ReLu which TRI is inspired by, i.e. $\mathbf{m}^l_d = \mathrm{ReLu}(\mathbf{z}^l_d)$.\nInitialization of the variational parameters $\{\mathbf{S}^l_d\}$. The posterior distribution covariance matrices are initialized to the prior $\mathbf{K}^l_d = k^l_d(\mathbf{z}^l_d, \mathbf{z}^l_d)$ (that is, q_sqrt is initialized to the identity matrix). Following common practice for DGPs (Salimbeni & Deisenroth, 2017), the covariance matrices of inner layers are scaled by $10^{-5}$.\nInitialization of the weights. The Glorot uniform initializer (Glorot & Bengio, 2010), also called Xavier uniform initializer, is used for the weights. The biases are initialized to zero.\nInitialization of the kernel hyperparameters. The kernels used (RBF and TRI) have two hyperparameters: the variance $\gamma$ and the lengthscale $\ell$. Both are always initialized to 1 (except for the lengthscale in the 1D example in Section 3.1, where $\ell$ is initialized to 0.1).\nInitialization of the inducing points. In order to initialize $\mathbf{z}^l_d$, the $N$ input data points are propagated through the network with the aforementioned initial weights, biases, and activation function. Then, in each layer and unit, $\mathbf{z}^l_d$ is initialized with a linspace between the minimum and maximum of the $N$ values there (the minimum (resp. the maximum) is decreased (resp. increased) by 0.1 to strictly contain the interval of interest).\nInitialization of the regression likelihood noise. In the regression problems, we use a Gaussian likelihood $p(y \mid f) = \mathcal{N}(y \mid f, \sigma^2)$. The standard deviation of the noise is initialized to $\sigma = 0.1$.\nMean function. We always use a zero mean function. Since data is normalized to have zero mean (and standard deviation equal to one), a zero mean function allows for reverting to the empirical mean for OOD data, as explained in the main text.\nOptimizer and learning rate. Throughout the work, we use the Adam Optimizer (Kingma & Ba, 2014) with default parameters and a learning rate of 0.001." }, { "heading": "B EXPERIMENTAL DETAILS FOR THE EXPERIMENTS", "text": "All the experiments were run on an NVIDIA Tesla P100. In order to predict, all the methods utilize 100 test samples in all the experiments. Details for each section are provided below.\nAn illustrative example (Section 3.1 in the main text). All the methods use two layers (i.e. one hidden layer). The hidden layer has D = 25 units in all cases. BNN and fBNN use ReLu activations. The auNN methods use M = 10 inducing points in each unit (the rest of the methods do not have such inducing points). The methods are trained for 5000 epochs with the whole dataset (no mini-batches). The dataset is synthetically generated to have two clusters of points around $x = \pm 1$. More specifically, 30 points are sampled uniformly in each interval $(x - 0.3, x + 0.3)$ for $x = \pm 1$, and the output is given by the sin function plus Gaussian noise of standard deviation 0.1. We have also trained DGP and GP on this dataset, see Figure 14.
Both methods use M = 10 inducing points, and are trained for 5000 epochs with the whole dataset (no mini-batches). DGP has one hidden layer with D = 25 units.\nUCI regression datasets with gap (and standard) splits (Section 3.2 in the main text). The methods use L = 2, 3 layers. In all cases, the hidden layers have D = 50 units. BNN and fBNN use ReLu activations. The methods are trained for 10000 epochs, with a mini-batch size that depends on the size of the dataset. For those with fewer than 5000 instances (i.e. Boston, Concrete, Energy, Wine and Yacht), the mini-batch size is 500. For those with more than 5000 (i.e. Naval), the mini-batch size is 5000. Recall from the main text that each dataset has as many gap splits as its dimensionality, with 2/3 for train and 1/3 for test. In the case of standard splits, each dataset uses 10 random 90%-10% train-test splits. Regarding the segment used in Figure 5, each extreme of the segment is a point from a different connected component of the training set. These are chosen so that the function is well-known at the extremes (but not along the segment, which crosses the gap). Namely, the extremes are chosen as the training points that have minimum average distance to the closest five points in their connected component.\nComparison with DGPs (Section 3.3 in the main text). Here, different values of the depth L, the number of inducing points M and the number of hidden units D are studied (see the main text). auNN is trained for 5000 epochs, with a mini-batch size of 5000 (20000 epochs are used for DGP, as proposed by the authors (Salimbeni & Deisenroth, 2017)). Each experiment is repeated on five random 90%-10% train-test splits. DGP uses a RBF kernel. The experimental details for DGP-add are the same as for DGP, the only difference being the kernel. Namely, an additive kernel using RBF components is used for DGP-add.\nLarge scale experiments (Section 3.4 in the main text). Since we are dealing with classification datasets, a Robust-Max likelihood is used in all cases (Hernández-Lobato et al., 2011). The values of D and M are chosen following the conclusions from Section 3.3. That is, DGP needs large M (the largest M = 100 is used), but is less influenced by D (this is chosen as recommended by the authors (Salimbeni & Deisenroth, 2017): D = min(30, D0), with D0 the dimensionality of the input data). auNN needs large D (the largest D = 50 is used), but is less influenced by M (the intermediate value M = 25 is chosen). All the methods are trained for 100 epochs, with a mini-batch size of 5000. Three random train-test splits are used. In both datasets, 500000 instances are used for test (which leaves 10.5M and 4.5M training instances for HIGGS and SUSY, respectively)." }, { "heading": "C ADDITIONAL FIGURES AND TABLES", "text": "Finally, additional material is provided here. Every figure and table is referenced from the main text.\nTable 9: Test NLL of auNN and DGP for different values of M (number of inducing points) and D (number of hidden units). Mean and one standard error over 5 independent runs on the UCI Kin8 dataset are shown.
The lower the better.\n[Table 9 data: a mean±s.e. NLL value for each combination of D ∈ {5, 10, 25, 50} (rows) and M ∈ {5, 10, 25, 50, 75, 100} (columns), reported separately for auNN-RBF, auNN-TRI and DGP.]\nTable 10: Test RMSE of auNN and DGP for different values of M (number of inducing points) and D (number of hidden units). Mean and one standard error over 5 independent runs on the UCI Kin8 dataset are shown. The lower the better.\n[Table 10 data: the same D × M grid of mean±s.e. RMSE values for auNN-RBF, auNN-TRI and DGP.]\nTable 11: Test NLL of auNN and DGP for different values of M (number of inducing points) and D (number of hidden units) as the depth increases from L = 2 to L = 4. Mean and one standard error over 5 independent runs on the UCI Power dataset are shown. The lower the better.\n[Table 11 data: mean±s.e. NLL values for each combination of L ∈ {2, 3, 4}, D ∈ {5, 10, 25, 50} and M ∈ {5, 10, 25, 50, 75, 100}, reported separately for auNN-RBF, auNN-TRI and DGP.]\nTable 12: Test RMSE of auNN and DGP for different values of M (number of inducing points) and D (number of hidden units) as the depth increases from L = 2 to L = 4. Mean and one standard error over 5 independent runs on the UCI Power dataset are shown. The lower the better.\n[Table 12 data: the same L × D × M grid of mean±s.e. RMSE values for auNN-RBF, auNN-TRI and DGP.]" }, { "heading": "D COMPUTATIONAL COST SUMMARY", "text": "Table 15 shows the training computational complexity for the methods compared in this paper. Moreover, in order to evaluate the computational cost in practice, the table also shows the actual running time for the experiment of Section 3.1. BNN is the fastest algorithm, since it utilizes a factorized Gaussian for the approximate posterior. Although fBNN has the same theoretical complexity, the Spectral Stein Gradient Estimator (Shi et al., 2018) is used to compute the KL divergence gradients. Moreover, a GP prior is specified in function space, for which a GP must be trained as a previous step. DGP and auNN have the same theoretical complexity. In practice, auNN is typically faster because it requires fewer inducing points, recall Section 3.3 and Table 3. The running time in Table 15 is very similar for both because the same number of inducing points (M = 10) is used in this simple experiment." } ]
2021
ACTIVATION-LEVEL UNCERTAINTY IN DEEP NEURAL NETWORKS
SP:4d94ef57fdaf5f1100b6b09331d5cff5264fcdf6
[ "In this paper, the authors argue that the mini-batch method and local SGD method suffers generalization performance degradation for large local mini-batch size. An asynchronous method is proposed to improve the generalization performance. A sublinear convergence rate is provided for the non-convex objective. As there are some missing definitions and little explanation of the proposed method, the reviewer finds the paper hard to read." ]
Distributed variants of stochastic gradient descent (SGD) are central to training deep neural networks on massive datasets. Several scalable versions of data-parallel SGD have been developed, leveraging asynchrony, communication compression, and local gradient steps. Current research seeks a balance between distributed scalability (seeking to minimize the amount of synchronization needed) and generalization performance (seeking to achieve the same or better accuracy relative to the sequential baseline). However, a key issue in this regime is largely unaddressed: if "local" data-parallelism is aggressively applied to better utilize the computing resources available to workers, the generalization performance of the trained model degrades. In this paper, we present a method to improve the "local scalability" of decentralized SGD. In particular, we propose two key techniques: (a) shared-memory based asynchronous gradient updates at decentralized workers that keep the local minibatch size small, and (b) asynchronous non-blocking in-place averaging that overlaps the local updates, thus essentially utilizing all compute resources at all times without the need for large minibatches. Empirically, the additional noise introduced in the procedure proves to be a boon for better generalization. On the theoretical side, we show that this method guarantees ergodic convergence for non-convex objectives, and achieves the classic sublinear rate under standard assumptions. On the practical side, we show that it improves upon the performance of local SGD and related schemes, without compromising accuracy.
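The two ingredients described in this abstract can be sketched schematically as follows; this is our own reading of the abstract (all names and the toy gradient are assumptions, not the authors' algorithm):

```python
# Schematic sketch: Hogwild-style asynchronous small-minibatch updates on a
# shared model, overlapped with non-blocking in-place averaging with a peer.
import threading
import numpy as np

w = np.zeros(10)        # model shared by the local update threads
w_peer = np.zeros(10)   # a (possibly stale) copy of a remote worker's model

def local_updates(grad_fn, lr=0.01, steps=1000):
    for _ in range(steps):
        w[:] -= lr * grad_fn(w)       # no locks: races are tolerated by design

def inplace_average(rounds=10):
    for _ in range(rounds):
        w[:] = 0.5 * (w + w_peer)     # non-blocking, overlaps the updates above

grad_fn = lambda x: x + 0.1 * np.random.randn(10)   # toy quadratic + noise
threads = [threading.Thread(target=local_updates, args=(grad_fn,)),
           threading.Thread(target=inplace_average)]
for t in threads: t.start()
for t in threads: t.join()
```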
[ { "affiliations": [], "name": "MEETS ASYNCHRONY" } ]
[ { "authors": [ "Dan Alistarh", "Demjan Grubic", "Jerry Li", "Ryota Tomioka", "M. Vojnovic" ], "title": "Qsgd: Communicationefficient sgd via gradient quantization and encoding", "venue": null, "year": 2017 }, { "authors": [ "Nils Berglund" ], "title": "Kramers’ law: Validity, derivations and generalisations", "venue": "arXiv preprint arXiv:1106.5799,", "year": 2011 }, { "authors": [ "Léon Bottou" ], "title": "Stochastic gradient descent tricks", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Coralia Cartis", "Katya Scheinberg" ], "title": "Global convergence rate analysis of unconstrained optimization methods based on probabilistic models", "venue": "Mathematical Programming,", "year": 2018 }, { "authors": [ "Xiaowu Dai", "Yuhua Zhu" ], "title": "Towards theoretical understanding of large batch training in stochastic gradient descent", "venue": "arXiv preprint arXiv:1812.00542,", "year": 2018 }, { "authors": [ "Noah Golmant", "Nikita Vemuri", "Zhewei Yao", "Vladimir Feinberg", "A. Gholami", "Kai Rothauge", "M. Mahoney", "Joseph E. Gonzalez" ], "title": "On the computational inefficiency of large batch sizes for stochastic gradient", "venue": "descent. ArXiv,", "year": 2018 }, { "authors": [ "Priya Goyal", "P. Dollár", "Ross B. Girshick", "P. Noordhuis", "L. Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Y. Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet", "venue": "hour. ArXiv,", "year": 2017 }, { "authors": [ "Kaiming He", "X. Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Forrest N. Iandola", "Matthew W. Moskewicz", "K. Ashraf", "Song Han", "W. Dally", "K. Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and ¡1mb model", "venue": "size. ArXiv,", "year": 2017 }, { "authors": [ "Nitish Shirish Keskar", "Jorge Nocedal", "Ping Tak Peter Tang", "Dheevatsa Mudigere", "Mikhail Smelyanskiy" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In 5th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "A. Krogh", "J. Hertz" ], "title": "A simple weight decay can improve generalization", "venue": "In NIPS,", "year": 1991 }, { "authors": [ "V. Kungurtsev", "Malcolm Egan", "Bapi Chatterjee", "Dan Alistarh" ], "title": "Asynchronous stochastic subgradient methods for general nonsmooth nonconvex optimization. arXiv: Optimization and Control, 2019", "venue": null, "year": 2019 }, { "authors": [ "Tao Lin", "S. Stich", "M. Jaggi" ], "title": "Don’t use large mini-batches, use local sgd", "venue": "ArXiv, abs/1808.07217,", "year": 2020 }, { "authors": [ "I. Loshchilov", "F. 
Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Giorgi Nadiradze", "Ilia Markov", "Bapi Chatterjee", "Vyacheslav Kungurtsev", "Dan Alistarh" ], "title": "Elastic consistency: A general consistency model for distributed stochastic gradient descent", "venue": "arXiv preprint arXiv:2001.05918,", "year": 2020 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "https://www.nvidia.com/de-at/geforce/graphics-cards/", "year": 2020 }, { "authors": [ "Benjamin Recht", "Christopher Re", "Stephen Wright", "Feng Niu" ], "title": "Hogwild: A lock-free approach to parallelizing stochastic gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Olga Russakovsky", "J. Deng", "H. Su", "J. Krause", "S. Satheesh", "S. Ma", "Zhiheng Huang", "A. Karpathy", "A. Khosla", "M. Bernstein", "A. Berg", "Li Fei-Fei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "C.D. Sa", "Ce Zhang", "K. Olukotun", "C. Ré" ], "title": "Taming the wild: A unified analysis of hogwild-style algorithms", "venue": "Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Christopher J. Shallue", "Jaehoon Lee", "J. Antognini", "Jascha Sohl-Dickstein", "Roy Frostig", "G. Dahl" ], "title": "Measuring the effects of data parallelism on neural network training", "venue": "J. Mach. Learn. Res.,", "year": 2019 }, { "authors": [ "Samuel L Smith", "Quoc V Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sebastian U Stich" ], "title": "Local sgd converges fast and communicates little", "venue": "arXiv preprint arXiv:1805.09767,", "year": 2018 }, { "authors": [ "Ilya Sutskever", "J. Martens", "G. Dahl", "Geoffrey E. Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In ICML,", "year": 2013 }, { "authors": [ "Jianyu Wang", "Hao Liang", "G. Joshi" ], "title": "Overlap local-sgd: An algorithmic approach to hide communication delays in distributed sgd", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Chris Ying", "S. Kumar", "Dehao Chen", "T. Wang", "Youlong Cheng" ], "title": "Image classification at supercomputer", "venue": "scale. ArXiv,", "year": 2018 }, { "authors": [ "Yang You", "Igor Gitman", "B. Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv: Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Huan Zhang", "Cho-Jui Hsieh", "V. 
Akella" ], "title": "Hogwild++: A new mechanism for decentralized asynchronous stochastic gradient descent", "venue": "IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "Huan Zhang", "Cho-Jui Hsieh", "Venkatesh Akella" ], "title": "Hogwild++: A new mechanism for decentralized asynchronous stochastic gradient descent", "venue": "In 2016 IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "Jian Zhang", "C.D. Sa", "Ioannis Mitliagkas", "C. Ré" ], "title": "Parallel sgd: When does averaging help", "venue": "ArXiv, abs/1606.07365,", "year": 2016 }, { "authors": [ "Fan Zhou", "Guojing Cong" ], "title": "On the convergence properties of a k-step averaging stochastic gradient descent algorithm for nonconvex optimization", "venue": "arXiv preprint arXiv:1708.01012,", "year": 2017 }, { "authors": [ "Martin Zinkevich", "M. Weimer", "Alex Smola", "L. Li" ], "title": "Parallelized stochastic gradient descent", "venue": "In NIPS,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "In this paper, we consider the classic problem of minimizing an empirical risk, defined simply as min x∈Rd ∑ i∈[I] fi(x), (1) where d is the dimension, x ∈ Rd denotes the set of model parameters, [I] is the training set, and fi(x) : Rd → R is the loss on the training sample i ∈ [I]. Stochastic gradient descent (SGD) (Robbins & Monro, 1951) is an extremely popular iterative approach to solving this problem: xk+1 = xk − αk∇fBk(xk), (2) where∇fBk(xk) = 1|Bk| ∑ i∈Bk ∇fi(xk) is the sum of gradients computed over samples, typically selected uniformly and randomly as a minibatch Bk ⊆ [I], and αk is the learning rate at iteration k." }, { "heading": "1.1 BACKGROUND ON DECENTRALIZED DATA-PARALLEL SGD", "text": "For better or worse, SGD and its variants currently represent the computational backbone for many large-scale optimization tasks, most notably the training of deep neural networks (DNNs). Arguably the most popular SGD variant is minibatch SGD (MB-SGD) (Bottou (2012)). In a distributed setting with decentralized workers q ∈ [Q], it follows the iteration\nxk+1 = xk − αk 1\nQ Q∑ q=1 ∇fBqk , (3)\nwhere Bqk ⊆ [I] is a local minibatch selected by worker q ∈ [Q] at iteration k. This strategy is straightforward to scale in a data-parallel way, as each worker can process a subset of the samples in parallel, and the model is then updated by the average of the workers’ gradient computations. For convenience, we assume the same batch size per worker. This approach has achieved tremendous popularity recently, and there has been significant interest in running training with increasingly large batch sizes aggregated over a large number of GPUs, e.g. Goyal et al. (2017).\nAn alternative approach is parallel or local SGD (L-SGD) (Zinkevich et al. (2010); Zhang et al. (2016c); Lin et al. (2020)):\nxqj,t+1 = x q j,t − αj,t∇fBqj,t , 0 ≤ t < Kj ; x q j+1,0 =\n1\nQ ∑ q xqj,Kj , (4)\nwhere xqj,t denotes the local model at worker q ∈ [Q] after j synchronization rounds followed by t local gradient updates and Bqj,t is the local minibatch sampled at the same iteration. Kj denotes the number of local gradient update steps before the jth synchronization. Essentially, workers run SGD without any communication for several local steps, after which they globally average the resulting local models. This method is intuitively easy to scale, since it reduces the frequency of the communication. Recently, a variant called post local SGD (PL-SGD) (Lin et al. (2020)), was introduced to address the issue of loss in generalization performance of L-SGD, wherein the averaging frequency during the initial phase of training is high and is reduced later when optimization stabilizes.\nMethod Bloc Train Loss Train Acc. Test Loss Test Acc Time (Sec) Quality/ Perf.\nMB-SGD 128 0.016 99.75 0.234 92.95 1754 Baseline MB-SGD 1024 0.023 99.51 0.293 91.38 1201 OK PL-SGD 128 0.018 99.69 0.245 92.98 1603 Good PL-SGD 1024 0.154 94.69 0.381 87.81 1159 Poor\nBloc , where Bloc = 128, Q is\nthe number of workers, 2 here, and α0 = 0.1. In PL-SGD, we average the model after each gradient update for first 150 epochs and thereafter averaging frequencyK is set to 16 as in Lin et al. (2020); other HPs are identical to theirs. The listed results are average of 3 runs with different seeds.\nwith PL-SGD. Clearly, these methods can not tolerate a larger Bloc, though the GPUs can support them. 
This shortcoming of the existing methods in harnessing the growing data-parallelism is also identified via empirical studies (Golmant et al. (2018); Shallue et al. (2019)) existing in literature. To our knowledge no effective remedy (yet) exists to address this challenge.\nNotice that, here our core target is maximally harnessing the local data-parallelism and therefore the larger local batch size, as against the existing trend in the literature wherein large number of GPUs are deployed to have a large aggregated global batch size with a relatively small Bloc. For example, refer to the performance of MB-SGD and PL-SGD as listed in Table 1 of Lin et al. (2020). Notice that with 16 GPUs, each with Bloc = 128, thus totaling the minibatch size as 2048, identical to the one with 2 GPUs each with Bloc = 1024 as above, with exactly the same LR scaling and warmup strategy, both MB-SGD and PL-SGD do not face generalization degradation. However, unfortunately, such an implementation setting would incur excessive wastage of available data-parallel compute resources on each of the GPUs. Indeed, the existing specific techniques such as LARS (You et al. (2017)) to address the issue of poor generalization for global large batch training are insufficient for the larger local minibatch size; we empirically describe it in Section 3 (Table 11)." }, { "heading": "1.2 LOCALLY-ASYNCHRONOUS PARALLEL SGD", "text": "Now, consider an implementation scheme as the following:\n1. In a decentralized setting of L-SGD, i.e. wherein each worker q ∈ [Q] has a local model xq undergoing local SGD updates as described earlier, multiple local concurrent processes u ∈ Uq share the model xq . Processes u ∈ Uq perform asynchronous concurrent gradient updates locally. 2. The workers average their models whenever any one of them would have had at least Kj local shared updates, where Kj is as that in Equation 4. The averaging is performed asynchronously and in a non-blocking way by the (averaging-) processes aq on behalf of each worker q ∈ [Q].\nEssentially, the decentralized workers run shared-memory-based asynchronous SGD locally and periodically synchronize in a totally non-blocking fashion.\nMore formally, consider Algorithm 1. The model xq on a GPU q ∈ [Q] is shared by the processes p ∈ P q = {{aq}∪Uq} locally. The processes p ∈ P q also maintain a shared counter Sq , initialized to 0. The operation read-and-inc implements an atomic (with lock) read and increment of Sq , whereas, read provides an atomic read. Sq essentially enables ordering the shared gradient updates. In turn, this order streamlines the synchronization among workers, thereby determines the averaging rounds j. The (updater) processes u ∈ Uq asynchronously and lock-freely update xq with\ngradients computed over a non-blocking, potentially inconsistent, snapshot va,q of xq , essentially going Hogwild! (Recht et al. 
(2011)), see Algorithm 1a.\n1 Initialize s = 0; 2 while s ≤ T do 3 vu,q[i] := xq[i], ∀ 1 ≤ i ≤ d; 4 s := read-and-inc(S); 5 Compute∇fBqs (v\nu,q); 6 xq[i] −= αs∇fBqs (v\nu,q)[i], ∀ 1 ≤ i ≤ d;\n(a) Local asynchronous gradient update by process u ∈ Uq .\n1 Initialize scur = spre = |Uq|, j = 0; 2 while scur ≤ T do 3 scur := read(S); Compute j corresponding to scur; 4 if scur − spre ≥ Kj then 5 va,qj [i] := x\nq[i], ∀ 1 ≤ i ≤ d; 6 Synchronize across ar , r ∈ [Q] \\ {q} to compute\nvj := 1 Q ∑ q∈[Q] v a,q j ;\n7 Compute ∆vqj = vj − v a,q j ; spre := scur; 8 xq[i] += ∆vqj [i], ∀ 1 ≤ i ≤ d; j = j + 1;\n(b) Asynchronous non-blocking in-place averaging.\nAlgorithm 1: Locally-asynchronous Parallel SGD (LAP-SGD)\nThe process aq , which performs averaging for the worker q ∈ [Q], concurrently keeps on atomically reading Sq , see Algorithm 1b. As soon as it notices an increment Kj in Sq , i.e. xq got concurrently updated with Kj number of gradients , it takes a non-blocking snapshot v a,q j of x\nq and synchronizes with ar of peers r ∈ [Q]/q to compute the average vj of the snapshots. Thereafter, aq adds the difference of the average with the snapshot va,qj to the model x\nq without blocking the concurrent asynchronous local gradient updates. We call this method locally-asynchronous parallel SGD (LAP-SGD). This method closely resembles Hogwild++ (Zhang et al. (2016a)), which targets the heterogeneous NUMA based multi-core machines, though there are key differences which we describe in Section 4. Results of the same training task as before by LAP-SGD is given in Table 2. The distinction of this implementation is that it harnesses the compute power of the GPUs not by increasing the size of Bloc but by concurrently computing many minibatch gradients. Evidently, LAP-SGD provides speed-up without losing the quality of optimization in comparison to the baseline.\nRecently, Kungurtsev et al. (2019) presented a shared-memory based method wherein they showed that partitioned gradient updates for some iterations during the course of training over a shared model can reasonably save on total computation cost by means of restricted backpropagation without necessarily losing on optimization quality. Their method is limited to a centralized sharedmemory setting. Moreover, aiming to establish\nconvergence under non-smoothness assumption, they ensure write consistency under a model-wide lock. Having designed our asynchronous parallel SGD, it inspires us to adapt the partitioned gradient update strategy to our lock-free decentralized setting.\nMore specifically, building on LAP-SGD, we consider locally partitioned gradient computation along with asynchronous lock-free updates. Essentially, we partition the model xq to {xqi(u)} for u ∈ Uq , i(u) ∩ i(w) = ∅, ∀u,w ∈ Uq (i.e., non-overlapping block components of the vector x). With that, a partitioned gradient computation\nwill amount to computing ∇i(u)fBqs (vq,u), the minibatch gradient with respect to the partition xqi(u) at line 5 in Figure 1a. Accordingly, the update step at line 6 in Algorithm 1a transforms to xq[i] −= αs∇fBqs (vq,u)[i], ∀ i ∈ i(u). It is to be noted that we do not use write lock for iterations at any stage. Having devised a partitioned update scheme, we propose locally-partitionedasynchronous parallel SGD (LPP-SGD) as described below.\n1. Processes u ∈ Uq maintain a process-local variable last iter which can take two values PARTITIONED and FULL. Each u ∈ Uq initializes last iter as FULL. 2. 
While s ≤ Tst, each process u ∈ Uq performs LAP-SGD updates as lines 3 to 6 of Algorithm 1a. 3. If Tst < s ≤ T , each process u ∈ Uq performs\n(a) a partitioned gradient computation and update: xq,u[i] −= αs∇fBqs (vu,q)[i], ∀i ∈ i(u) if last iter = FULL, and sets last iter = PARTITIONED. (b) an LAP-SGD update if last iter = PARTITIONED, and sets last iter = FULL.\nEssentially, after some initial stabilizing epochs each process u ∈ Uq alternates between a full and a partitioned lock-free asynchronous gradient updates to the model xq . Our experiments showed that Tst = T 10 was almost always sufficient to obtain a competitive optimization result. The results of a sample implementation of LPP-SGD are available in Table 3. It is clear that LPP-SGD handsomely speeds up the computation and provides equally competitive optimization results." }, { "heading": "2 CONVERGENCE THEORY", "text": "At a naive first glance, studying the convergence properties of locally asynchronous SGD would be an incremental to existing analyses for local SGD, e.g. Stich (2018); Zhou & Cong (2017), in particular the local stochastic gradient evaluations at delayed lock-free Hogwild!-like parameter vectors. However, there is one significant difference that presents a theoretical challenge: sometimes the vectors used for gradient computation or components thereof, have been read from the local shared memory before the last averaging across GPUs had taken place. Especially in the nonconvex case, a priori it is impossible to place a reasonable bound on the accuracy of these gradient evaluations relative to what they “should be” in order to achieve descent.\n1 Initialize x̄0 = xq0 for all q; 2 for j = 1, ..., J do 3 for all q do 4 Set xq,j0 = x̄j ; 5 for t = 1, ...,Kj do 6 Let xq,jt = x q,j t−1 −\nαj,t,q∇̃(i(j,t,q)f(vq,jt );\n7 Let x̄j+1 = 1Q Q∑\nq=1\nxq,jKj ;\nAlgorithm 2: Iterations of the view x̄j .\nIn order to present a convergence rate result, we need to define an anchor point on which we can consider convergence to some approximate stationary point in expectation. This is not trivial as both local iterations and averaging is performed asynchronously across different GPUs at distinct moments in time, with the time at which each iteration occurs potentially varying with system-dependent conditions, while the averaged iterates are what is important to consider for convergence.\nWe seek to define the major iteration x̄j as consistent with the analysis presented for the convergence of local SGD in the nonconvex setting in Zhou & Cong (2017). In this case, with asynchrony, x̄j is a theoretical construct, i.e., it may\nnot exist at any particular GPU’s memory at any moment in time. Let sqj := scur − |Uq| be the current state of the shared counter before the jth synchronization round at the GPU q, then x̄j is defined as xq\nsqj + ∆vj where x q sqj is the state of the model for GPU q when va,qj was saved and made available for averaging for “major iteration” j. Thus although de facto averaging could have taken place after a set of local updates in time, these local updates correspond to updates after iteration j conceptually. This makes x̄j properly correspond to the equivalent iterate in Zhou & Cong (2017).\nWith that, we consider a sequence of major iteration views {x̄j} with associated inter-averaging local iteration quantities Kj and local views {vq,jt } at which an unbiased estimate of the (possibly partitioned) gradient is computed, with 0 ≤ t < Kj as well as the local model in memory {xq,jt }. 
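A toy simulation makes this bookkeeping concrete. The sketch below runs one synchronous major round: each worker takes Kj local steps, a snapshot v^{a,q}_j is saved, snapshots are averaged, and the difference is added back in place, so the returned average is exactly the view x̄_{j+1}. All quantities are synthetic and the gradient is a placeholder; in the real system local updates keep landing between the snapshot and the correction, which is precisely why x̄_j is only a theoretical construct:

import numpy as np

rng = np.random.default_rng(1)
d, Q = 5, 3
x = [np.zeros(d) for _ in range(Q)]              # local models x^q

def major_round(x, K_j, lr=0.1):
    snaps = []
    for q in range(Q):
        for _t in range(K_j):                    # K_j local (noisy) gradient steps
            x[q] -= lr * (x[q] + rng.normal(size=d))   # placeholder stochastic gradient
        snaps.append(x[q].copy())                # snapshot v^{a,q}_j
    v = np.mean(snaps, axis=0)                   # all-to-all average of the snapshots
    for q in range(Q):
        x[q] += v - snaps[q]                     # non-blocking in-place correction
    return v                                     # the view x̄_{j+1}

views = [major_round(x, K_j=4) for _ in range(10)]

Partitioned updates are omitted in this sketch; the notation introduced next covers them.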
The partition, which could be the entire vector in general, is denoted by i(j, t, q). As each GPU has its own corresponding annealed stepsize, we denote it in general as αj,t,q as well.\nWe state the formal mathematical algorithm as Algorithm 2. Note that this is the same procedure as the practical “what the processor actually does” Algorithm 1, however, with the redefined terms in order to obtain a precise mathematical sequence well-defined for analysis.\nWe make the following standard assumptions on unbiased bounded variance for the stochastic gradient, a bounded second moment, gradient Lipschitz continuity, and lower boundedness of f . Assumption 2.1. 1. It holds that ∇̃if(vq,jt ) satisfies, independent of i, E [ ∇̃if(vq,jt ) ] =\n∇if(vq,jt ); E [∥∥∥∇̃if(vq,jt )−∇if(vq,jt )∥∥∥2] ≤ σ2; E [∥∥∥∇̃if(vq,jt )∥∥∥2] ≤ G.\n2. f is Lipschitz continuously differentiable with constant L and is bounded below by fm.\nWe must also define some assumptions on the probabilistic model governing the asynchronous computation. As these are fairly technical we defer them to the appendix.\nTheorem 2.1. Given assumption 2.1, it holds that, 1 Q ∑J j=1 ∑Q q=1 ∑Kj−1 t=0 [αj,t,qC1 − α2j,t,qC2]E ∥∥∥∇i(j,t,q)f(vq,jt )∥∥∥2 + 1Q ∑J j=1 ∑Q q=1 ∑Kj−1 t=0 [αj,t,qC3 − α2j,t,qC4]E\n∥∥∇i(j,t,q)f(x̄j)∥∥2 ≤ f(x̄0)− fm\n(5)\nwhere C1, C2, C3, and C4 depend on L, B and probabilistic quantities defining the asynchronous computation (see Appendix. Thus there exists a set of such constants such that if αj,t,q = Θ ( 1√ J ) then Algorithm 2 ergodically converges with the standard O(1/ √ J) rate for nonconvex objectives.\nProof Summary: The proof follows the structure of the ergodic convergence proof of K-step local SGD given in Zhou & Cong (2017), wherein at each round of averaging there are QKj total updates to the model vector associated with the Q GPUs and Kj minor iterations.\nInsofar as these updates are close (stochastically as an unbiased estimate, and based on the local models not having changed too much) to the globally synchronized model vector at the last averaging step, there is an expected amount of descent achieved due to the sum of these steps. This is balanced with the amount of possible error in this estimate based on how far the model vector had moved.\nIn cases wherein vq,jt,i = x q,j s,i for s < 0 (i.e., the stochastic gradients are taken, due to local asynchrony, at model vectors with components which existed in memory before the last averaging step), we simply bound the worst-case increase in the objective.\nTo balance these two cases, the analysis takes an approach, inspired partially by the analysis given in Cartis & Scheinberg (2018) of separating these as “good” and ”bad” iterates, with ”good” iterates corresponding to views read after the last model was stored for averaging, with some associated guaranteed descent in expectation, and “bad” iterates those read beforehand.\nBy considering the stochastic process governing the amount of asynchrony as being governed by probabilistic laws, we can characterize the probability of a “good” and “bad” iterate and ultimately seek to balance the total expected descent from one, and worst possible ascent in the other, as a function of these probabilities.\nRemark 2.1. [Speedup due to concurrent updates] Consider the case of classical vanilla local SGD, in which there is complete symmetry in the number of local gradient computations between averaging steps and block sizes across the processors. 
In this case, for every major iteration there are Q gradient norms on the left hand side, and at the same time it is divided by Q. Thus local SGD as a parallel method does not exhibit classical speedup, rather it can be considered as an approach of using parallelism to have a more robust and stable method of performing gradient updates with multiple batches computed in parallel. However, due to the idle time that exists between the slowest and fastest processors, it will exhibit negative speedup, relative to the fastest processor. With the approach given in this paper, this negative speedup is corrected for in that this potential idleness is filled with additional stochastic gradient computations by the fastest process. Alternatively, one can also consider this as exhibiting positive speedup relative to the slowest process, whereas standard local SGD has zero speedup relative to the slowest process. Above and beyond this, we can consider that as more processes introduces additional latency and delay, which has a mixed effect: on the one hand, we expect that gradient norms at delayed iterates to be larger as the process is converging, thus by having more delayed gradients on the left hand side, convergence is faster, and on the other hand, such error in the degree to which the gradient decreases the objective, would specifically increase the constants C2 and C4." }, { "heading": "3 IMPLEMENTATION DETAILS AND NUMERICAL RESULTS", "text": "" }, { "heading": "3.1 EXPERIMENTAL SET-UP", "text": "Decentralized training. We evaluate the proposed methods LAP-SGD and LPP-SGD comparing them against existing MB-SGD and PL-SGD schemes, using CNN models RESNET-20 (He et al. (2016)), SQUEEZENET (Iandola et al. (2017)), and WIDERESNET-16x8 (Zagoruyko & Komodakis (2016)) for the 10-/100-class image classification tasks on datasets CIFAR-10/CIFAR-100 (Krizhevsky (2009)). We also train RESNET-18 for a 1000-class classification problem on IMAGENET (Russakovsky et al. (2015)) dataset. We keep the sample processing budget identical across the methods. We use the typical approach of partitioning the sample indices among the workers that can access the entire training set; the partition indices are reshuffled every epoch following a seeded\nrandom permutation based on epoch-order. To this effect we use a shared-counter among concurrent process u ∈ Uq in asynchronous methods. Thus, our data sampling is i.i.d.\nPlatform specification. Our experiments are based on a set of Nvidia GeForce RTX 2080 Ti GPUs (Nvidia (2020)) (referred to as GPUs henceforth) with 11 GB on-device memory. We use the following specific settings: (a) S1: a workstation with two GPUs and an Intel(R) Xeon(R) E5-1650 v4 CPU running @ 3.60 GHz with 12 logical cores, (b) S2: a workstation with four GPUs and two Intel(R) Xeon(R) E5-2640 v4 CPUs running @ 2.40 GHz totaling 40 logical cores, and (c) S3: two S2 workstations connected with a 100 GB/s infiniband link. The key enabler of our implementation methodology are multiple independent client connection between a CPU and a GPU. Starting from early 2018 with release of Volta architecture, Nvidia’s technology Multi-process Service (MPS) efficiently support this. For more technical specifications please refer to their doc-pages MPS (2020).\nImplementation framework. We used open-source Pytorch 1.5 (Paszke et al. (2017)) library for our implementations. For cross-GPU/cross-machine communication we use NCCL (NCCL (2020)) primitives provided by Pytorch. 
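To make concrete the style of non-blocking synchronization these primitives enable, here is a hedged sketch of the snapshot-average-correct cycle of Algorithm 1b on top of an already initialized NCCL process group; the function names begin_average and finish_average are ours:

import torch
import torch.distributed as dist

def begin_average(model):
    # Take the snapshot v^{a,q}_j of the shared model, flattened into one buffer.
    local = torch.cat([p.detach().reshape(-1) for p in model.parameters()])
    buf = local.clone()
    work = dist.all_reduce(buf, op=dist.ReduceOp.SUM, async_op=True)  # non-blocking
    return local, buf, work

def finish_average(model, local, buf, work):
    work.wait()                                   # only the averaging process waits here
    delta = buf / dist.get_world_size() - local   # Delta v^q_j = v_j - v^{a,q}_j
    i = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p.add_(delta[i:i + n].view_as(p))     # in-place; concurrent local gradient
            i += n                                # updates since the snapshot survive

Because only the difference v_j - v^{a,q}_j is applied, gradient steps that land on the shared model between the snapshot and the correction are preserved rather than overwritten.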
MB-SGD is based on DistributedDataParallel Pytorch module. PL-SGD implementation is derived from author’s code (LocalSGD (2020)) and adapted to our setting. Having generated the computation graph of the loss function of a CNN, the autograd package of Pytorch allows a user to specify the leaf tensors with respect to which gradients are needed. We used this functionality in implementing partitioned gradient computation in LPP-SGD.\nLocally-asynchronous Implementation. One key requirement of the proposed methods is to support a non-blocking synchronization among GPUs. This is both a challenge and an opportunity. To specify, we use a process on each GPU, working as a parent, to initialize the CNN model and share it among spawned child-processes. The child-processes work as u ∈ Uq, ∀q ∈ [Q] to compute the gradients and update the model. Concurrently, the parent-process, instead of remaining idle as it happens commonly with such concurrency models, acts as the averaging process aq ∈ P q, ∀q ∈ [Q], thereby productively utilizing the entire address space occupied over the GPUs. The parent- and child-processes share the iteration and epoch counters. Notice that, here we are using the process-level concurrency which is out of the purview of the thread-level global interpreter lock (GIL) (Python (2020)) of python multi-threading framework.\nHyperparameters (HPs). Each of the methods use identical momentum (Sutskever et al. (2013)) and weight-decay (Krogh & Hertz (1991)) for a given CNN/dataset case; we rely on their previously used values (Lin et al. (2020)). The learning rate (LR) schedule for MB-SGD and PL-SGD are identical to Lin et al. (2020). For the proposed methods we used cosine annealing schedule without any intermediate restart (Loshchilov & Hutter (2017)). Following the well-accepted practice, we warm up the LR for the first 5 epochs starting from the baseline value used over a single worker training. In some cases, a grid search (Pontes et al. (2016)) suggested that for LPP-SGD warming up the LR up to 1.25× of the warmed-up LR of LAP-SGD for the given case, improves the results. 3.2 EXPERIMENTAL RESULTS\nIn the following discussion we use these abbreviated notations: U : |Uq|, B : Bloc, Tr.L.: Training Loss, Tr.A.: Training Accuracy, Te.L.: Test Loss, Te.A.: Test Accuracy, and T : Time in Seconds. The asynchronous methods have inherent randomization due to process scheduling by the operation system. Therefore, each micro-benchmark presents the mean of 3 runs unless otherwise mentioned.\nConcurrency on GPUs. We allocate the processes on a GPU up to the availability of the on-device memory. However, once the data-parallel compute resources get saturated, allocating more processes degrades the performance. For example, see Table 4 which lists the average performance of 5 runs for different combinations of U and B for training RESNET20/CIFAR-10 by LAP-SGD on the setting S1.\nAsynchronous Averaging Frequency. Following PL-SGD, as a general rule, for the first half of the training, call it up until P epochs, we set the averaging frequency K = 1. However, notice that, unlike PL-SGD, setting a K < Q in LAP-SGD and LPP-SGD may not necessarily increase the averaging rounds j in aggregation. 
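Concretely, the shared model and the shared counter S that drive these averaging rounds take only a few lines to set up with PyTorch multiprocessing. In the hedged sketch below, MyCNN, make_sgd, and compute_minibatch_loss are hypothetical placeholders for the model class, the optimizer factory, and the minibatch loss used in our experiments:

import torch.multiprocessing as mp

def updater(model, counter, T):
    optimizer = make_sgd(model.parameters())       # hypothetical optimizer factory
    s = 0
    while s <= T:
        with counter.get_lock():                   # atomic read-and-inc of S^q
            s = counter.value
            counter.value += 1
        loss = compute_minibatch_loss(model, s)    # hypothetical: samples B^q_s and
        optimizer.zero_grad()                      # reads the shared model lock-free
        loss.backward()
        optimizer.step()                           # lock-free update of shared x^q

if __name__ == "__main__":
    model = MyCNN()                                # hypothetical model class
    model.share_memory()                           # one copy of x^q shared locally
    counter = mp.Value("i", 0)                     # shared counter S^q
    procs = [mp.Process(target=updater, args=(model, counter, 10_000))
             for _ in range(6)]                    # |U^q| = 6 updater processes
    for p in procs:
        p.start()
    for p in procs:
        p.join()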
Intuitively, in a locally-asynchronous setting, along with the nonblocking (barrier-free) synchronization among GPUs, the increment events on the shared-counter S would be “grouped” on the real-time scale if the processes u ∈ Uq do not face delays in scheduling, which we control by allocating an optimal number of processes to maximally utilize the compute\nresources. For instance, Table 5 lists the results of 5 random runs of RESNET-20/CIFAR-10 training with B = 128 and U = 6 with different combinations of K and P over S1. This micro-benchmark indicates that the overall latency and the final optimization result of our method may remain robust under small changes in K, which it may not be the case with PL-SGD.\nScalability. Table 6 presents the results of WIDERESNET16x8/CIFAR-10 training in the three system settings that we consider. We observe that in each setting the relative speed-up of different methods are approximately consistent. In particular, we note the following: (a) reduced communication cost helps PL-SGD marginally outperform MB-SGD, (b) interestingly, increasing Bloc from 128 to 512 does not improve the latency by more than ∼4%; this non-linear\nscalability of data-parallelism was also observed by Lin et al. (2020) , (c) each of the methods scale by more than 2x as the implementation is moved to S2, which has 40 CPU cores, from S1 which has 12 CPU cores, furthermore, this scalability is approximately 2x with respect to performance on S3 in comparison to S2, this shows that for each of the methods we utilize available compute resources maximally, (d) in each setting LAP-SGD achieves ∼30% better throughput compared to MB-SGD standard batch, (e) in each setting LPP-SGD outperforms LAP-SGD by ∼12% making it the fastest method, (f) the training and test accuracy of local large minibatch is poor, and (g) the methods LAP-SGD and LPP-SGD consistently improve on the baseline generalization accuracy.\nOther CIFAR-10/CIFAR-100 Results. Performance of the proposed methods in comparison to the baselines for other training tasks on CIFAR-10/CIFAR-100 datasets are available in Tables 7, 8, and 9. In each case we use K = 16 in LAP-SGD, LPP-SGD, and PL-SGD after 50% of total sample processing budget. As an overview, the relative latency of the methods are as seen in Table 6, whereas in each case LAP-SGD and LPP-SGD recovers or improves the baseline training results.\nImagenet Training Results. Having comprehensively evaluated the proposed methods on CIFAR-10/CIFAR-100, here we present their performance on 1000-classes Imagenet dataset.\n0 20 40 60 80 Epochs\n(1)\n(2)\n(3)\n(4)\n(5)\n(6)\nFigure 2: Top-1 Test Accurcy.\nNotice that for this training task, with 8 commodity GPUs at our disposal we are very much in the small minibatch setting. Plethora of existing work in the literature efficiently train a RESNET on IMAGENET with BS up to multiple thousands. Other system-dependent constraint that our considered setting faces is that there is hardly any leftover compute resources for us to exploit in the local setting of a worker. Yet, we see for RESNET-18, see Table 10, that LAP-SGD improves on generalization accuracy of the baseline with speed-up.\nLR Tuning strategy. It is pertinent to mention that the existing techniques, such as LARS (You et al. (2017)), which provide an adaptive LR tuning strategy for large minibatch settings over a large number of GPUs, wherein each worker locally processes a small minibatch, are insufficient in the case Bloc is increased. 
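For reference, the layer-wise scaling that LARS applies can be sketched as follows; this is a simplified version of the rule of You et al. (2017), with momentum and weight-decay bookkeeping omitted and the function name ours:

import torch

def lars_scaled_lr(param, base_lr, eta=0.001, weight_decay=5e-4):
    # Layer-wise trust ratio: eta * ||w|| / (||grad w|| + wd * ||w||),
    # assuming param.grad has already been populated by backward().
    w_norm = param.detach().norm()
    g_norm = param.grad.detach().norm()
    if w_norm > 0 and g_norm > 0:
        return base_lr * eta * w_norm / (g_norm + weight_decay * w_norm)
    return base_lr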
For example, see Table 11 which lists the\naverage performance of 5 runs on the setting S1 for Q = 2 for training RESNET-20/CIFAR-10 using the compared methods combined with LARS with η = 0.001 (You et al. (2017)). We scale the LR proportionately: α0 × B×QBloc , where Bloc = 128, Q is the number of workers, 2 here, and α0 = 0.1. Evidently, LARS did not help MB-SGD and PL-SGD in checking the poor generalization due to larger B." }, { "heading": "3.3 ON THE GENERALIZATION OF THE PROPOSED METHODS", "text": "Let us comment on a theoretical basis of the remarkable performance in terms of generalization error of our methods. In Lin et al. (2020), the model of SGD algorithms as a form of a Euler-Maruyama discretization of a Stochastic Differential Equation (SDE) presents the perspective that the batch-size can correspond to the inverse of the injected noise. Whereas distributed SGD combined stochastic gradients as such effectively results in an SGD step with a larger batch-size, local SGD, by averaging models rather than gradients, maintains the noise associated with the local small-batch gradients. Given the well-established benefits of greater noise in improving generalization accuracy (e.g. Smith & Le (2018) and others), this presents a heuristic argument as to why local SGD tends to generalize better than distributed SGD. In the Appendix we present an additional argument for why local SGD generalizes well.\nHowever, we see that our particular variation with asynchronous local updates and asynchronous averaging seems to provide additional generalization accuracy above and beyond local SGD. We provide the following explanation as to why this could be the case, again from the perspective of noise as it would appear in a SDE. Let us recall three facts,\n1. The noise appearing as a discretization of the Brownian motion term in the diffusion SDE, and, correspondingly the injected noise studied as a driver for increased generalization in previous works on neural network training is i.i.d., 2. Clearly, the covariances of a mini-batch gradients as statistical estimates of the gradient at x and x′ are going to be more similar when x and x′ are closer together, 3. (see Section 2) A challenging property from the perspective of convergence analysis with locally asynchronous updates is gradients taken at models taken at snapshots before a previous all-to-all averaging step, and thus far from the current model in memory.\nThus, effectively, the presence of these “highly asynchronous” stochastic gradients, while being potentially problematic from the convergence perspective, effectively brings the analogy of greater injected noise for local SGD over distributed data-parallel closer to accuracy by inducing greater probabilistic independence, i.e., the injected noise, for these updates, is far close to the i.i.d. noise that appears in a discretized SDE." }, { "heading": "4 RELATED WORK", "text": "In the previous sections we cited the existing literature wherever applicable, in this section we present a brief overview of closely related works and highlight our novelty. In the shared-memory\nsetting, HOGWILD! (Recht et al. (2011)) is now the classic approach to implement SGD. However, it remains applicable to a centralized setting of a single worker and therefore is not known to have been practically utilized for large scale DNN training. Its success led to designs of variants which targeted specific system aspects of a multi-core machine. For example, Buckwild! (Sa et al. 
(2015)) proposed using restricted precision training on a CPU. Another variant, called HOGWILD!++ (Zhang et al. (2016b)), harnesses the non-uniform-memory-access (NUMA) architecture based multi-core computers. In this method, threads pinned to individual CPUs on a multi-socket mainboard with access to a common main memory, form clusters.\nIn principle, the proposed LAP-SGD can be seen as deriving from HOGWILD!++. However, there are important differences: (a) at the structural level, the averaging in HOGWILD!++ is binary on a ring graph of thread-clusters, further, it is a token based procedure where in each round only two neighbours synchronize, whereas in LAP-SGD it is all-to-all, (b) in HOGWILD!++ each cluster maintains two copies of the model: a locally updating copy and a buffer copy to store the last synchronized view of the model, whereby each cluster essentially passes the “update” in the local model since the last synchronization to its neighbour, however, this approach has a drawback as identified by the authors: the update that is passed on a ring of workers eventually “comes back” to itself thereby leading to divergence, to overcome this problem they decay the sent out update; as against this, LAP-SGD uses no buffer and does not track updates as such, averaging the model with each peer, similar to L-SGD, helps each of the peers to adjust their optimization dynamics, (c) it is not known if the token based model averaging of HOGWILD!++ is sufficient for training DNNs where generalization is the core point of concern, in place of that we observed that our asynchronous averaging provides an effective protocol of synchronization and often results in improving the generalization, (d) comparing the HOGWILD!++ thread-clusters to concurrent processes on GPUs in LAP-SGD, the latter uses a dedicated process that performs averaging without disturbing the local gradient updates thereby maximally reducing the communication overhead, (e) finally, the convergence theory of LAP-SGD guarantees its efficacy for DNN training, which we demonstrated experimentally, by contrast, HOGWILD!++ does not have any convergence guarantee.\nRecently, Wang et al. (2020) proposed Overlap-local-SGD, wherein they suggested to keep a model copy at each worker, very similar to HOGWILD!++, which is simultaneously averaged when sequential computation for multiple iterations happen locally. They showed by limited experiments that it reduced the communication overhead in a non-iid training case based on CIFAR-10, however, not much is known about its performance in general cases. The asynchronous partitioned gradient update of LPP-SGD derives from Kungurtsev et al. (2019), however, unlike them we do not use locks and our implementation setting is decentralized, thus scalable." }, { "heading": "5 CONCLUSION", "text": "Picking from where Golmant et al. (2018) concluded referring to their findings: “These results suggest that we should not assume that increasing the batch size for larger datasets will keep training times manageable for all problems. 
Even though it is a natural form of data parallelism for largescale optimization, alternative forms of parallelism should be explored to utilize all of our data more efficiently”, our work introduces a fresh approach in this direction to addressing the challenge.\nIn our experiments, we observed that the natural system-generated noise in some cases effectively improved the generalization accuracy, which we could not obtain using the existing methods irrespective of any choice of seed for random sampling. The empirical findings suggest that the proposed variant of distributed SGD has a perfectly appropriate place to fit in the horizon of efficient optimization methods for training deep neural networks. As a general guideline for the applicability of our approach, we would suggest the following: monitor the resource consumption of a GPU that trains a CNN, if there is any sign that the consumption was less than 100%, try out LAP-SGD and LPP-SGD instead of arduously, and at times unsuccessfully, tuning the hyperparameters in order to harness the data-parallelism. The asynchronous averaging protocol makes LAP-SGD and LPP-SGD specially attractive to settings with large number of workers.\nThere are a plethora of small scale model and dataset combinations, where the critical batch size– after which the returns in terms of convergence per wall-clock time diminish–is small relative to existing system capabilities (Golmant et al. (2018)). To such cases LAP-SGD and LPP-SGD become readily useful. Yet, exploring the efficiency of LAP-SGD and LPP-SGD to train at massive scales, where hundreds of GPUs enable training IMAGENET in minutes (Ying et al. (2018)), is an ideal future goal. We also plan to extend the proposed methods to combine with communication optimization approaches such as QSGD (Alistarh et al. (2017))." }, { "heading": "A APPENDIX A: CONVERGENCE THEORY", "text": "" }, { "heading": "A.1 PROBABILISTIC ASSUMPTIONS GOVERNING THE ASYNCHRONOUS COMPUTATION", "text": "Now let us discuss the formalities of the asynchronous computation. We consider that the presence of local HogWild-like asynchronous computation introduces stochastic delays, i.e., at each stochastic gradient computation, the set of parameters at which the stochastic gradient is evaluated is random, it follows some distribution. Thus, considering that, in the general case,\nvq,jt,i ∈ ∪k∈{0,...,j}{x q,k s,i }s∈{0,...,t}\nwe can now define a probability that this parameter view block is equal to each of these potential historical parameter values. To this end we define It,i,q,js,k as the event that block i in the view for GPU q at major iteration j and minor iteration t is equal to the actual parameter xq,ks,i , and p t,i,q,j s,k is its probability. Now, vq,jt,i could be, e.g., x q,j−1 s,i for some s ∈ {0, ...,Kj−1}, i.e., it could have been evaluated at a parameter before the last averaging took place.\nIn the nonconvex setting, it would be entirely hopeless to bound ‖xq,j−1s,i − v q,j t,i ‖ in general, in this case we can simply hope that the objective function decrease achieved by iterations with a gradient computed after an averaging step outweighs the worst-case increase that takes place before it. In order to perform such an analysis, we will need bound the probability that this worst case occurs. In order to facilitate the notation for this scenario, let us define xq,j−1,i = x q,j−1 Kj−1−1,i, x q,j −2,i = x q,j−1 Kj−1−2,i, etc., and then pt,i,q,j−1,j correspondingly. 
We can extend this analogously to delays from before two averaging steps, etc. Note that with this notation, pt,i,q,jl,j is well defined for any l ≤ t, l ∈ Z, of\ncourse as long as |l| ≤ j−1∑ k=0 Kk.\nIn order to derive an expected decrease in the objective, we need to bound the probability of an increase, which means bounding the probability that the view is older than the previous averaging, which can be bounded by a bound on the probability that a particular read is more than some number τ iterations old. We thus make the following assumptions,\nAssumption A.1. It holds that,\n1. pt,i,q,jl,j = 0 for l ≤ t−D (Maximum Delay) 2. There exists {pτ}τ∈{1,...,D} such that for all (q, j, t), it holds that P [ ∪iIt,i,q,jl,j ] ≤ pt−τ for\nl = t− τ (Uniform Bound for Components’ Delays)\nWith these, we can make a statement on the error in the view. In particular it holds that, E [∥∥∥vq,jt − xq,jt ∥∥∥ | ∩i ∪l≥0It,i,q,jl,j ] ≤ α2j,t,qB (6) for some B > 0. This bound comes from Section A.B.6 in Nadiradze et al. (2020). Thus, if the view is taken such that all components were read after the last averaging step then we can bound the error as given." }, { "heading": "A.2 PROOF OF MAIN CONVERGENCE THEOREM", "text": "Recall the major iterations x̄j is defined as the value of the parameters after an averaging step has taken place, which is of course well-defined as every GPU will have the same set of values for the parameters. The update to x̄j can be written as,\nx̄j+1 = 1\nQ Q∑ q=1 x̄j − Kj−1∑ t=0 αj,t,q∇̃i(q,j,t)f(vq,jt ) (7) Where we define, ∇̃if(vq,jt ) as the vector of size n whose i’ith components are the calculated stochastic gradient vector defined at vq,jt , with the rest of the components padded with zeros. We indicate the component chosen i(q, j, t) depends on the GPU and minor and major iteration, allowing for flexibility in the choice of block update (including the entire vector).\nWe are now ready to prove the convergence Theorem. 
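One way to build intuition for these assumptions before entering the proof is a toy Monte-Carlo estimate of the probability that an entire view is fresh, i.e., that every component of v^{q,j}_t was read after the last averaging step. The geometric per-component delay law below is purely our illustrative assumption and plays no role in the analysis:

import numpy as np

rng = np.random.default_rng(2)

def prob_fresh_view(t, n_blocks=8, p=0.7, D=10, trials=100_000):
    # tau_i: how many minor iterations old the read of component i is, capped at D.
    tau = np.minimum(rng.geometric(p, size=(trials, n_blocks)) - 1, D)
    return (tau <= t).all(axis=1).mean()   # every block read after last averaging

for t in [0, 1, 2, 4, 8]:
    print(t, prob_fresh_view(t))

For the analysis to yield descent, this probability must stay above one half plus a margin, which in the sketch already happens for small t once the delay tail is thin.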
The structure of the proof will follow Zhou & Cong (2017), who derives the standard sublinear convergence rate for local SGD in a synchronous environment for nonconvex objectives.\nWe begin with the standard application of the Descent Lemma,\nf(x̄j+1)− f(x̄j) ≤ −〈∇f(x̄j), 1Q ∑Q q=1 ∑Kj−1 t=0 αj,t,q∇̃i(j,t,q)f(v q,j t )〉\n+L2 ∥∥∥ 1Q∑Qq=1∑Kj−1t=0 αj∇̃i(j,t,q)f(vq,jt )∥∥∥2 (8) Now, since E∇̃i(j,t,q)f(vq,jt ) = ∇i(j,t,q)f(v q,j t ),\n−E〈∇f(x̄j), ∇̃i(j,t,q)f(vq,jt )〉 = − 12 ( ‖∇i(j,t,q)f(x̄j)‖2 + E‖∇i(j,t,q)f(vq,jt )‖2 − E‖∇i(j,t,q)f(v q,j t )−∇i(j,t,q)f(x̄j)‖2 ) (9)\nWe now split the last term by the two cases and use equation 6,\nE‖∇i(j,t,q)f(vq,jt )−∇i(j,t,q)f(x̄j)‖2 = E [ ‖∇i(j,t,q)f(vq,jt )−∇i(j,t,q)f(x̄j)‖2| ∩i ∪l≥0I t,i,q,j l,j ] P [ ∩i ∪l≥0 It,i,q,jl,j ] +E [ ‖∇i(j,t,q)f(vq,jt )−∇i(j,t,q)f(x̄j)‖2| ( ∩i ∪l≥0 It,i,q,jl,j )c] P [( ∩i ∪l≥0 It,i,q,jl,j\n)c] ≤ α2j,t,qBP [ ∩i ∪l≥0 It,i,q,jl,j\n] +2 ( ‖∇i(j,t,q)f(x̄j)‖2 + E‖∇i(j,t,q)f(vq,jt )‖2 ) P [( ∩i ∪l≥0 It,i,q,jl,j\n)c] and thus combining with equation 22 we get the overall bound, −E〈∇f(x̄j), ∇̃i(j,t,q)f(vq,jt )〉 ≤ − 12 ( ‖∇i(j,t,q)f(x̄j)‖2 + E‖∇i(j,t,q)f(vq,jt )‖2 − α2j,t,qB ) P [ ∩i ∪l≥0 It,i,q,jl,j\n] + 12 ( ‖∇i(j,t,q)f(x̄j)‖2 + E‖∇i(j,t,q)f(vq,jt )‖2 ) P [( ∩i ∪l≥0 It,i,q,jl,j\n)c] ≤ − ( ‖∇i(j,t,q)f(x̄j)‖2 + E‖∇i(j,t,q)f(vq,jt )‖2 ) P [ ∩i ∪l≥0 It,i,q,jl,j\n] + 12 ( ‖∇i(j,t,q)f(x̄j)‖2 + E‖∇i(j,t,q)f(vk,jt )‖2 ) + α2j,t,qB 2\n(10)\nIt can be seen from this expression that we must have, P [ ∩i ∪l≥0 It,i,q,jl,j ] ≥ 1\n2 + δ (11)\nfor some δ > 0 to achieve descent in a expectation for iteration j for a sufficient small stepsizes. Since we are taking the sum of such iterations, we must have, ultimately,\n1 Q ∑Q q=1 ∑Kj−1 t=0 αj,t,q [ P [ ∩i ∪l≥0 It,i,q,jl,j ] − 12 − αj,t,q ( QKj + αj,t,qB 2 )] E‖∇i(j,t,q)f(vq,jt )‖2\n+ 1Q ∑Q q=1 ∑Kj−1 t=0 αj,t,q [ P [ ∩i ∪l≥0 It,i,q,jl,j ] − 12 − α2j,t,qB 2 ] E‖∇i(j,t,q)f(x̄j)‖2 ≥ δ̂j\n(12) with ∞∑ j=0 δ̂j ≥ f(x̄0) − fm, where recall that fm is a lower bound on f , in order to achieve asymptotic convergence. The standard sublinear SGD convergence rate is recovered with any choice with αj,t,q = Θ ( 1√ J ) and thus ultimately δ̂j = Ω ( 1√ J ) .\nLet us now consider the quantity P [ ∩i ∪l≥0 It,i,q,jl,j ] in more detail and study how the nature of the concurrency affects the possibility and rate of convergence. In particular notice that, P [ ∩iIt,i,q,jl,j ] ≤ 1− pt−τ for l ≥ t− τ . In general of course we expect this quantity to increase as l is closer to t. Consider two extreme cases: if there is always only one SG iteration for all processes for all major iterations j, i.e.,Kj ≡ 1, any delay means reading a vector in memory before the last major iteration,\nand thus the probability of delay greater than zero must be very small in order to offset the worst possible ascent.\nOn the other hand, if in general Kj >> τ , then while the first τ minor could be as problematic at a level depending on the probability of the small delay times, for t > τ clearly the vector vq,jt satisfies equation 6.\nThus we can sum up our conclusions in the following statements:\n1. Overall, the higher the mean, variance, and thickness of the tails of the delay, the more problematic convergence would be,\n2. The larger the quantity of local iterations each GPU performs in between averaging, the more likely a favorable convergence would occur.\nThe first is of course standard and obvious. 
The second presents the interesting finding that if you are running HogWild-type SG iterations on local shared memory, performing local SGD with a larger gap in time between averaging results in more robust performance for local SGD.\nThis suggests a certain fundamental harmony between asynchronous concurrency and local SGD, more “aggressive” locality, in the sense of doing more local updates between averaging, coincides with expected performance gains and robustness of more “aggressive” asynchrony and concurrency, in the sense of delays in the computations associated with local processes.\nIn addition, to contrast the convergence in regards to the block size, clearly the larger the block the faster the overall convergence, since the norms of the gradient vectors appear. An interesting possibility to consider is if a process can roughly estimate or predict when averaging could be triggered, robustness could be gained by attempting to do block updates right after an expected averaging step, and full parameter vector updates later on in the major iteration." }, { "heading": "A.3 CONVERGENCE - SIMPLER CASE", "text": "In reference to the classic local SGD theory in particular Stich (2018) for the strongly convex case and Zhou & Cong (2017) for the nonconvex case, we consider the simpler scenario wherein i(q, j, t) = [n] and vq,jt,i = x q,j s,i with s ≥ 0 for all v q,j t,i , i.e., at no point are local updates computed at gradients evaluated at model components existing in memory prior to the last averaging step. We shall see the local asynchrony introduces a mild adjustment in the constants in the strongly convex case, relative to the classic result, and results in no change whatsoever in the nonconvex case." }, { "heading": "A.3.1 STRONGLY CONVEX CASE", "text": "The proof of convergence will be based on Stich (2018), the synchronous case.\nThe formalism of the argument changes to Algorithm 4. Note that this is functionally the same, and simply the concept of major iterations is dispensed with, except to define Kj .\n1 Initialize xq0 for all q for t = 1, ..., T do 2 for all q do 3 Let xqt = x q t−1 − αt,q∇̃f(v q t )\n4 if (tMOD ∑J j=1Kj = 0) for some J then\n5 Let x̄t+1 = 1Q Q∑ q=1 xqt\nThe only change in the procedure is that the stochastic gradients are computed as evaluated at a vector vqt , so we shall see how the convergence results contrasts with Stich (2018) for synchronous averaging when computations are performed in this manner.\nLet, x̄t = 1 Q ∑Q q=1 x q t ,\ngt = 1 Q ∑Q q=1 ∇̃f(x q t ), ḡt = 1 Q ∑Q q=1∇f(x q t ), ĝt = 1 Q ∑Q q=1 ∇̃f(v q t ), g̊t = 1 Q ∑Q q=1∇f(v q t )\nWe have, as in the proof of Lemma 3.1 Stich (2018) ‖x̄t+1 − x∗‖2 = ‖x̄t − αtĝt − x∗‖2 = ‖x̄t − x∗ − αtg̊t‖2 + α2t ‖ĝt − g̊t‖2\n+2αt〈x̄t − x∗ − αtg̊t, g̊t − ĝt〉 (13)\nContinuing, ‖x̄t − x∗ − αtg̊t‖2 = ‖x̄t − x∗‖2 + α2t ‖̊gt‖2 − 2αt〈x̄t − x∗, g̊t〉\n= ‖x̄t − x∗‖2 + α 2 t\nQ ∑Q q=1 ‖∇f(x q t )‖2 − 2αtQ ∑Q q=1〈x̄t − v q t + v q t − x∗,∇f(vt)〉\n= ‖x̄t − x∗‖2 + α 2 t\nQ ∑Q q=1 ‖∇f(x q t )−∇f(x∗)‖2\n− 2αtQ ∑Q q=1〈v q t − x∗,∇f(vt)〉 − 2αtQ ∑Q q=1〈x̄t − v q t ,∇f(vt)〉\nUsing Young’s inequality and L-smoothness, −2〈x̄t − vqt ,∇f(vt)〉 ≤ 2L‖x̄t − v q t ‖2 + 12L‖∇f(v q t )‖2\n= 2L‖x̄t − vqt ‖2 + 12L‖∇f(v q t )−∇f(x∗)‖2 ≤ 2L‖x̄t − vqt ‖2 + (f(v q t )− f∗)\nApplying this to the above estimate of ‖x̄t − x∗ − αtg̊t‖2, we get, ‖x̄t − x∗ − αtg̊t‖2 ≤ ‖x̄t − x∗‖2 + 2αtLQ ∑Q q=1 ‖x̄t − v q t ‖2\n+ 2αtQ ∑Q q=1 (( αtL− 12 ) (f(vqt )− f∗)− µ 2 ‖v q t − x∗‖2 ) Let αt ≤ 14L so ( αtL− 12 ) ≤ − 14 . 
By the convexity of 1 4 (f(x)− f(x ∗)) + µ2 ‖x− x ∗‖2,\n− 2αtQ ∑Q q=1 ( 1 4 (f(v q t )− f∗) + µ 2 ‖v q t − x∗‖2 ) ≤ − 2αtQ ∑Q q=1 ( 1 4 (f(x q t )− f∗) + µ 2 ‖x q t − x∗‖2\n) + αt2Q ∑Q q=1 ( ‖vqt − x q t‖+ 2µ‖v q t − x q t‖2 )\nPutting this in equation 13 and taking expectations, we get, E‖x̄t+1 − x∗‖2 ≤ (1− µαt)E‖x̄t − x∗‖2 + α2tE‖̊gt − ĝt‖2\n−αt2 E(f(x̄t)− f ∗) + 2αtLQ ∑Q q=1 ‖x̄t − v q t ‖2\n+ αt2Q ∑Q q=1 ( ‖vqt − x q t‖+ 2µ‖v q t − x q t‖2 ) (14)\nBy Assumption 2.1, we have,\nE‖̊gt − ĝt‖2 = E‖ 1\nQ Q∑ q=1 ( ∇̃f(vqt )−∇f(v q t ) ) ‖2 ≤ σ 2 Q (15)\nWe have that, ∑Q q=1 ‖v q t − x q t‖ ≤ ∑Q q=1 ‖v q t − x q t‖1\n≤ ∑Q q=1 ∑t−1 s=max t−τ,t0 αs‖∇̃f(v q s)‖1\n≤ αtQτ √ nG\n(16)\nand similarly, ∑Q q=1 ‖v q t − x q t‖2 ≤ ∑Q q=1 ∑t−1 s=max t−τ,t0 α 2 s‖∇̃f(vqs)‖2\n≤ Qα2t τG2 (17)\nLetting index t0 be such that t − t0 ≤ H := max{Kj} when averaging takes place, i.e. x̄t0 = x q t0 for all q, we have, 1 Q ∑Q q=1 E‖x̄t − v q t ‖2\n= 1Q ∑Q q=1 E‖v q t − x q t + x q t − xt0 − (x̄t − xt0)‖2\n≤ 2Q ∑Q q=1 E [ ‖vqt − x q t‖2 + ‖x q t − xt0 − (x̄t − xt0)‖2 ] ≤ 2Q ∑Q q=1 E‖x q t − xt0‖2 + 2α2t τG2\n≤ 2Q ∑Q q=1Hα 2 t0 ∑t−1 s=t0 E‖∇̃f(xqs)‖2 + 2α2t τG2\n≤ 2Q ∑Q q=1H 2α2t0G 2 ≤ H2α2t0G 2 + 2α2t τG 2 ≤ 8H2α2tG2 + 2α2t τG2\n(18)\nwhere we use E‖X −EX‖2 = E‖X‖2 −‖EX‖2 and equation 17 to go from the third to the fourth line.\nFinally, putting equation 15, equation 18, equation 16 and equation 17 into equation 14 we get that, E‖x̄t+1 − x∗‖2 ≤ (1− µαt)E‖x̄t − x∗‖2 + α2t σ 2\nQ\n−αt2 E(f(x̄t)− f ∗) + 16α3tLH 2G2 + α2tτ √ nG\n2 + 2(µ+ 2L)τα 3 tG 2\nFinally, using Lemma 3.4 Stich (2018), we obtain, with a > max{16κ,H} for κ = L/µ, and wt = (a+ t) 2, Ef (\n1 QSQ ∑Q q=1 ∑T−1 t=0 wtx q t ) − f∗ ≤ µa 3\n2ST ‖x0 − x∗‖2 + 4T (T+2a)µST\n( σ2\nQ + τ √ nG 2 ) + 256Tµ2ST ( 16LH2G2 + 2(µ+ 2L)τG2\n) (19) which simplifies to, using Eµ‖x0 − x∗‖ ≤ 2G,\nEf (\n1 QSQ ∑Q q=1 ∑T−1 t=0 wtx q t ) − f∗\n= O (\n1 µQT + κ+H µQT 2\n) σ2 +O ( τ √ n\nµT\n) G\n+O ( τ √ n(κ+H) µT 2 ) G+O ( κH2+τ(µ+2L) µT 2 + κ3+H3 µT 3 ) G2\n(20)" }, { "heading": "A.3.2 NONCONVEX CASE", "text": "This proof will again follow Zhou & Cong (2017). In this case the step-sizes {α(j, t, q)} are independent of t and q, i.e., they are simple {αj}. Thus,\nx̄j+1 = 1\nQ Q∑ q=1 x̄j − Kj−1∑ t=0 αj∇̃f(vq,jt ) And thus,\nf(x̄j+1)− f(x̄j) ≤ −〈∇f(x̄j), 1Q ∑Q q=1 ∑Kj−1 t=0 αj∇̃f(v q,j t )〉\n+L2 ∥∥∥ 1Q∑Qq=1∑Kj−1t=0 αj∇̃f(vq,jt )∥∥∥2 (21) Now, since E∇̃f(vq,αt ) = ∇f(v q,α t ),\n−E〈∇f(x̄j), ∇̃f(vq,jt )〉 = − 12 ( ‖∇f(x̄j)‖2 + E‖∇f(vq,jt )‖2 − E‖∇f(v q,j t )−∇f(xj)‖2 ) ≤ − 12 ( ‖∇f(x̄j)‖2 + E‖∇f(vq,jt )‖2 − L2E‖v q,j t − x̄j‖2\n) (22) We now continue the proof along the same lines as in Zhou & Cong (2017). In particular, we get\nE‖vq,jt − x̃j‖2 ≤ t2α2jσ2 + tα2jE t−1∑ s=0 ‖∇f(vq,js )‖2\nLet us define K̄ = maxj{Kj} and K = minj{Kj}.\nWe now have, −αj ∑Kj−1 t=0 E〈∇f(x̄j), ∇̃f(v q,j t )〉 ≤ − (K+1)αj 2 ( 1− L 2α2jK(K̄−1) 2(K+1) ) ‖∇f(x̃j)‖2\n−αj2 ( 1− L 2α2j (K̄+1)(K̄−2) 2 )∑K−1 t=0 E‖∇f(v q,j t )‖2 + L2α3jσ 2(2K̄−1)K(K̄−1) 12\nSimilarly, it also holds that,\nL\n2 ∥∥∥∥∥∥ 1Q Kj−1∑ t=0 αj∇̃f(vq,jt ) ∥∥∥∥∥∥ 2 ≤ LK2jα 2 jσ 2 2Q + LKjα 2 j 2 Kj−1∑ t=0 E‖∇f(vq,jt )‖2\nAnd so, finally, Ef(x̄j+1)− f(x̄j) ≤ − (K+1)αj2 ( 1− L 2α2jK̄(K̄−1) 2(K+1) − LαjK̄ K+1) ) ‖∇f(x̃j)‖2\n−αj2 ( 1− L 2α2j (K̄+1)(K̄−2) 2 − LαjK̄ )∑Q q=1 ∑Kj−1 t=1 E‖∇f(v q,j t )‖2 + L2α3jσ\n2(2K̄−1)K̄(K̄−1) 12 + LK2α2jσ 2 2Q\nNow, if/once αj is small enough such that,\n1 ≥ L2α2j (K̄ + 1)(K̄ − 2)\n2 + LαjK̄\nthen the second term above disappears, and the result is exactly the same as in Zhou & Cong (2017). 
Specifically, if 1− δ ≥ L2α2j\nE ∑J j=1 αj‖∇f(x̄j)‖2∑J l=1 αj ≤ 2(f(x̃1)−F ∗) (K−1+δ) ∑J j=1 αj\n+ ∑J j=1 LK̄α2jM∑J l=1 αl(K−1+δ) ( K̄ Q + L(2K̄−1)(K̄−1)αj 6 )" }, { "heading": "B APPENDIX B: AN ARGUMENT FOR INCREASED GENERALIZATION ACCURACY FOR LOCAL SGD", "text": "" }, { "heading": "B.1 WIDE AND NARROW WELLS", "text": "In general it has been observed that whether a local minimizer is shallow or deep, or how “flat” it is, seems to affect its generalization properties Keskar et al. (2019). Motivated by investigating the impact of batch size on generalization Dai & Zhu (2018) analyzed the generalization properties of SGD by considering the escape time from a “well”, i.e., a local minimizer in the objective landscape, for a constant stepsize variant of SGD by modeling it as an overdamped Langevin-type diffusion process,\ndXt = −∇f(Xt)dt+ √\n2 dWt In general “flatter” minima have longer escape times than shallow ones, where the escape time is the expectation in the number of iterations (defined as a continuous parameter in this sense) until the iterates leave the well to explore the rest of the objective landscape. Any procedure that increases the escape time for flatter minima as compared to shallower ones should, in theory, result in better generalization properties, as it is more likely then that the procedure will return an iterate that is in a shallow minimizer upon termination.\nDenote with indexes w for a “wide” valley local minimizer and n for a “narrow” value, which also corresponds to smaller and larger minimal Hessian eigenvalues, respectively.\nThe work Berglund (2011) discusses the ultimately classical result that as → 0, the escape time from a local minimizer valley satisfies, E[τe] = HeC/ and letting the constant H depend on the type of minimizer, it holds that that Hw > Hn, i.e., this time is longer for a wider valley.\nWe also have from the same reference, P[τe > sE[τe]] = e−s" }, { "heading": "B.2 AVERAGING", "text": "We now contrast two procedures and compare the difference in their escape times for shallow and wider local minimizers. One is the standard SGD, and in one we perform averaging every τa time. In each case there are Q processors, in the first case running independent instances of SGD, and in the other averaging their iterates. We model averaging probabilistically as it resulting in a randomized initialization within the well, and thus the escape time is a sequence of independent trials of length τa with an initial point in the well, i.e., escaping at time τe means that there areQ ⌈ τe τa ⌉ trials wherein none of the Q sequences escaped within τa, and then one of them escaped in the next set of Q trials.\nFor ease of calculation, let us assume that τa = 12E[τ w e ] = 2E[τne ], where τwe and τne are the calculated single process escape time from a wide and shallow well, respectively.\nIf any one of the local runs escapes, then there is nothing that can be said about the averaged point, so a lack of escape is indicated by the case for which all trajectories, while periodically averaged, stay within the local minimizer value.\nNow consider that if no averaging takes place, we sum up the probabilities for the wide valley that they all escape after time (i− 1)τ time and, given that they do so, not all of them escape after iτa. 
$$E[\tau^w_e] \le \sum_{i=1}^{\infty} P(\tau^w_e > (i-1)\tau_a)^Q\big(1 - P(\tau^w_e > i\tau_a \mid \tau^w_e > (i-1)\tau_a)^Q\big)\,i\tau_a \le \sum_{i=1}^{\infty} e^{-\frac{Q(i-1)}{2}}\big(1 - e^{-\frac{Q}{2}}\big)\,i\tau_a$$
For the narrow well this is
$$E[\tau^n_e] \ge \sum_{i=1}^{\infty} P(\tau^n_e > (i-1)\tau_a)^Q\big(1 - P(\tau^n_e > i\tau_a \mid \tau^n_e > (i-1)\tau_a)^Q\big)(i-1)\tau_a \ge \sum_{i=1}^{\infty} e^{-2Q(i-1)}\big(1 - e^{-2Q}\big)(i-1)\tau_a$$
The difference in the expected escape times satisfies
$$E[\tau^w_e - \tau^n_e] \le \sum_{i=1}^{\infty}\Big[\Big(e^{-\frac{Q(i-1)}{2}}\big(1 - e^{-\frac{Q}{2}}\big) - e^{-2Q(i-1)}\big(1 - e^{-2Q}\big)\Big)(i-1) + e^{-\frac{Q(i-1)}{2}}\big(1 - e^{-\frac{Q}{2}}\big)\Big]\tau_a$$
Recall that in the case of averaging, if escape takes place between $(i-1)\tau_a$ and $i\tau_a$, there were no escapes within $\tau_a$ for the $Q$ processors over $i-1$ rounds of trials, and at least one escape between $(i-1)\tau_a$ and $i\tau_a$, i.e., not all trajectories failed to escape between these two times. The expected first escape time of any trajectory among the $Q$ from a wide valley is thus
$$E[\tau^{a,w}_e] \le \sum_{i=1}^{\infty} P[\tau^w_e > \tau_a]^{(i-1)Q}\big(1 - P[\tau^w_e > \tau_a]^{Qi}\big)\,i\tau_a \le \sum_{i=1}^{\infty} e^{-\frac{(i-1)Q}{2}}\big(1 - e^{-\frac{iQ}{2}}\big)\,i\tau_a$$
And now, with averaging, the escape time from a narrow valley satisfies
$$E[\tau^{a,n}_e] \ge \sum_{i=1}^{\infty} P[\tau^n_e > \tau_a]^{(i-1)Q}\big(1 - P[\tau^n_e > \tau_a]^{Qi}\big)(i-1)\tau_a \ge \sum_{i=1}^{\infty} e^{-2(i-1)Q}\big(1 - e^{-2iQ}\big)(i-1)\tau_a$$
with which the difference in the expected escape times satisfies
$$E[\tau^{a,w}_e - \tau^{a,n}_e] \le \sum_{i=1}^{\infty}\Big[\Big(e^{-\frac{(i-1)Q}{2}}\big(1 - e^{-\frac{iQ}{2}}\big) - e^{-2(i-1)Q}\big(1 - e^{-2iQ}\big)\Big)(i-1) + e^{-\frac{(i-1)Q}{2}}\big(1 - e^{-\frac{iQ}{2}}\big)\Big]\tau_a$$
It is clear from these expressions that the upper bound on the difference is larger in the case of averaging. This implies that averaging results in a greater gap between the escape times of wide and narrow local minimizers, suggesting that, on average, if one were to stop a training process and use the resulting iterate as the estimate for the parameters, this iterate would more likely come from a flatter local minimizer if it was generated with a periodic averaging procedure than with standard SGD. Thus, at least by this argument, better generalization should be expected to be more likely with periodic averaging.

Note that since both are upper bounds, this is not a formal proof that the escape times are in all cases more favorable for generalization under averaging, but a guide to the mathematical intuition for how this could be the case." } ]
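As a worked check on the no-averaging bounds above (an added step that uses only the survival probabilities already stated, nothing beyond them): writing $p$ for the per-round probability that all $Q$ trajectories survive one period $\tau_a$, the mean-of-a-geometric identity $\sum_{i=1}^{\infty} p^{\,i-1}(1-p)\,i = \frac{1}{1-p}$ evaluates the wide-well bound in closed form as $E[\tau^w_e] \le \frac{\tau_a}{1 - e^{-Q/2}}$, and, since $\sum_{i=1}^{\infty} p^{\,i-1}(1-p)(i-1) = \frac{p}{1-p}$, the narrow-well bound evaluates as $E[\tau^n_e] \ge \frac{\tau_a\, e^{-2Q}}{1 - e^{-2Q}}$, which makes the scales of the two bounds explicit. (The averaging-case sums do not reduce this way because the factor $1 - p^{Qi}$ depends on $i$.)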
2020
null
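To make concrete the local-SGD-with-periodic-averaging procedure analyzed in the appendices of the paper above, here is a minimal, self-contained Python sketch. The function name, the gradient oracle `grad`, the step-size schedule, and the toy quadratic objective are illustrative assumptions; they are not the paper's code or experimental setup.

```python
import numpy as np

def local_sgd(grad, x0, Q, T, tau_a, alpha):
    """Minimal sketch of local SGD with periodic averaging.

    grad(x, q): stochastic gradient oracle for worker q (assumed for illustration)
    x0: shared initial point; Q: number of workers; T: total steps
    tau_a: averaging period; alpha(t): step-size schedule
    """
    # Each of the Q workers keeps its own copy of the iterate.
    x = np.tile(x0, (Q, 1)).astype(float)
    for t in range(T):
        for q in range(Q):
            x[q] -= alpha(t) * grad(x[q], q)   # independent SGD steps
        if (t + 1) % tau_a == 0:
            x[:] = x.mean(axis=0)              # periodic averaging: all workers reset to the mean
    return x.mean(axis=0)

# Example: Q workers minimizing f(x) = 0.5 * ||x||^2 with additive gradient noise (a toy setup).
rng = np.random.default_rng(0)
noisy_grad = lambda x, q: x + 0.1 * rng.standard_normal(x.shape)
x_final = local_sgd(noisy_grad, np.ones(5), Q=4, T=200, tau_a=10, alpha=lambda t: 0.1)
print(np.linalg.norm(x_final))
```

Averaging every `tau_a` steps is exactly the mechanism whose escape-time consequences Appendix B reasons about: between averaging points the $Q$ workers behave as independent SGD runs.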
SP:3dffd0add054e13be141cfe939e367f6f6785eb8
[ "This paper deals with the problem of natural language generation for a dialogue system involved in complex communication tasks such as negotiation or persuasion. The proposed architecture consists of two encoders: one for the utterance and the other for dialogue acts and negotiation strategies. The decoder is an RNN that converts the encoded vectors to the output utterance. Each utterance is first passed through BERT to get an utterance-level encoding. The sequence of utterance encodings is then passed through an RNN to generate a conversation level encodings. The negotiation strategies and dialogue acts in a conversation are represented using a node-edge graph, where the nodes are one of the N different strategies/acts and there exists an edge from node a to node b if an utterance with strategy A precedes any utterance with strategy B. The entire architecture is trained in a multi-task setup where the loss function accounts for both the predictions of the model and generated language. The proposed architecture is evaluated on the CraigslistBargain dataset and compared against Zhou et al. 2020. " ]
To successfully negotiate a deal, it is not enough to communicate fluently: pragmatic planning of persuasive negotiation strategies is essential. While modern dialogue agents excel at generating fluent sentences, they still lack pragmatic grounding and cannot reason strategically. We present DIALOGRAPH, a negotiation system that incorporates pragmatic strategies in a negotiation dialogue using graph neural networks. DIALOGRAPH explicitly incorporates dependencies between sequences of strategies to enable improved and interpretable prediction of next optimal strategies, given the dialogue context. Our graph-based method outperforms prior state-of-the-art negotiation models both in the accuracy of strategy/dialogue act prediction and in the quality of downstream dialogue response generation. We qualitatively show further benefits of learned strategy-graphs in providing explicit associations between effective negotiation strategies over the course of the dialogue, leading to interpretable and strategic dialogues.1
[ { "affiliations": [], "name": "NEGOTIATION DIALOGUES" }, { "affiliations": [], "name": "Rishabh Joshi" }, { "affiliations": [], "name": "Vidhisha Balachandran" }, { "affiliations": [], "name": "Shikhar Vashishth" }, { "affiliations": [], "name": "Alan W Black" }, { "affiliations": [], "name": "Yulia Tsvetkov" } ]
[ { "authors": [ "Nicholas Asher", "Julie Hunter", "Mathieu Morey", "Benamara Farah", "Stergos Afantenos" ], "title": "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus", "venue": "In Proceedings of the Tenth International Conference on Language Resources and Evaluation", "year": 2016 }, { "authors": [ "Joost Bastings", "Ivan Titov", "Wilker Aziz", "Diego Marcheggiani", "Khalil Sima’an" ], "title": "Graph convolutional encoders for syntax-aware neural machine translation", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Max H Bazerman", "Margaret Ann Neale" ], "title": "Negotiating rationally", "venue": null, "year": 1993 }, { "authors": [ "Štefan Beňuš", "Agustı́n Gravano", "Julia Hirschberg" ], "title": "Pragmatic aspects of temporal accommodation in turn-taking", "venue": "Journal of Pragmatics,", "year": 2011 }, { "authors": [ "Taylor Berg-Kirkpatrick", "David Burkett", "Dan Klein" ], "title": "An empirical investigation of statistical significance in NLP", "venue": "In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning,", "year": 2012 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on", "venue": "graphs. CoRR,", "year": 2013 }, { "authors": [ "Keke Chen", "Ling Liu" ], "title": "Clustermap: Labeling clusters in large datasets via visualization", "venue": "In Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management,", "year": 2004 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder–decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Heriberto Cuayáhuitl", "Simon Keizer", "Oliver Lemon" ], "title": "Strategic dialogue management via deep reinforcement learning", "venue": "NIPS’15 Workshop on Deep Reinforcement Learning,", "year": 2015 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "CoRR, abs/1606.09375,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language", "year": 2019 }, { "authors": [ "Ritam Dutt", "Rishabh Joshi", "Carolyn Rose" ], "title": "Keeping up appearances: Computational modeling of face acts in persuasion oriented discussions", "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 
7473–7485,", "year": 2020 }, { "authors": [ "Ritam Dutt", "Sayan Sinha", "Rishabh Joshi", "Surya Shekhar Chakraborty", "Meredith Riggs", "Xinru Yan", "Haogang Bao", "Carolyn Penstein Rosé" ], "title": "Resper: Computationally modelling resisting strategies in persuasive conversations", "venue": "In 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL),", "year": 2021 }, { "authors": [ "Roger Fisher", "William L Ury", "Bruce Patton" ], "title": "Getting to yes: Negotiating agreement without giving", "venue": "in. Penguin,", "year": 2011 }, { "authors": [ "Deepanway Ghosal", "Navonil Majumder", "Soujanya Poria", "Niyati Chhaya", "Alexander Gelbukh" ], "title": "DialogueGCN: A graph convolutional neural network for emotion recognition in conversation", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "He He", "Derek Chen", "Anusha Balakrishnan", "Percy Liang" ], "title": "Decoupling strategy and generation in negotiation dialogues", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Wenxiang Jiao", "Haiqin Yang", "Irwin King", "Michael R. Lyu" ], "title": "HiGRU: Hierarchical gated recurrent units for utterance-level emotion recognition", "venue": null, "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Philipp Koehn" ], "title": "Statistical significance tests for machine translation evaluation", "venue": "In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing,", "year": 2004 }, { "authors": [ "Gaurav Kumar", "Rishabh Joshi", "Jaspreet Singh", "Promod Yenigalla" ], "title": "AMUSED: A multi-stream vector representation method for use in natural dialogue", "venue": "In Proceedings of The 12th Language Resources and Evaluation Conference,", "year": 2020 }, { "authors": [ "George Larionov", "Zachary Kaden", "Hima Varsha Dureddy", "Gabriel Bayomi T Kalejaiye", "Mihir Kale", "Srividya Pranavi Potharaju", "Ankit Parag Shah", "Alexander I Rudnicky" ], "title": "Tartan: A retrieval-based socialbot powered by a dynamic finite-state machine architecture", "venue": "arXiv preprint arXiv:1812.01260,", "year": 2018 }, { "authors": [ "David A Lax", "James K" ], "title": "Sebenius. 3-D Negotiation: Powerful tools to change the game in your most important deals", "venue": null, "year": 2006 }, { "authors": [ "Junhyun Lee", "Inyeop Lee", "Jaewoo Kang" ], "title": "Self-attention graph pooling", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Wenqiang Lei", "Xisen Jin", "Min-Yen Kan", "Zhaochun Ren", "Xiangnan He", "Dawei Yin" ], "title": "Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Mike Lewis", "Denis Yarats", "Yann Dauphin", "Devi Parikh", "Dhruv Batra" ], "title": "Deal or no deal? endto-end learning of negotiation dialogues", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Jiwei Li", "Michel Galley", "Chris Brockett", "Georgios Spithourakis", "Jianfeng Gao", "Bill Dolan" ], "title": "A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Yu Li", "Kun Qian", "Weiyan Shi", "Zhou Yu" ], "title": "End-to-end trainable non-collaborative dialog system", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Weixin Liang", "Youzhi Tian", "Chengcai Chen", "Zhou Yu" ], "title": "MOSS: end-to-end dialog system framework with modular supervision", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Diego Marcheggiani", "Ivan Titov" ], "title": "Encoding sentences with graph convolutional networks for semantic role labeling", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Masahiro Mizukami", "Koichiro Yoshino", "Graham Neubig", "David Traum", "Satoshi Nakamura" ], "title": "Analyzing the effect of entrainment on dialogue acts", "venue": "In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue,", "year": 2016 }, { "authors": [ "Will Norcliffe-Brown", "Stathis Vafeias", "Sarah Parisot" ], "title": "Learning conditioned graph structures for interpretable visual question answering", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Amin Parvaneh", "Ehsan Abbasnejad", "Qi Wu", "Javen Shi" ], "title": "Show, price and negotiate: A hierarchical attention recurrent visual negotiator", "venue": "CoRR, abs/1905.03721,", "year": 2019 }, { "authors": [ "P.E. Pope", "S. Kolouri", "M. Rostami", "C.E. Martin", "H. Hoffmann" ], "title": "Explainability methods for graph convolutional neural networks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Dean G Pruitt" ], "title": "Negotiation behavior", "venue": null, "year": 2013 }, { "authors": [ "Jinghui Qin", "Zheng Ye", "Jianheng Tang", "Xiaodan Liang" ], "title": "Dynamic knowledge routing network for target-guided open-domain conversation", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Ekagra Ranjan", "Soumya Sanyal", "Partha P. Talukdar" ], "title": "ASAP: adaptive structure aware pooling for learning hierarchical graph representations", "venue": "In The Thirty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Siva Reddy", "Danqi Chen", "Christopher D. Manning" ], "title": "CoQA: A conversational question answering challenge. 
Transactions of the Association for Computational Linguistics, 7:249–266, March 2019. doi: 10.1162/tacl a 00266", "venue": "URL https://www.aclweb.org/anthology/ Q19-1016", "year": 2019 }, { "authors": [ "Alan Ritter", "Colin Cherry", "Bill Dolan" ], "title": "Unsupervised modeling of twitter conversations", "venue": "In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics,", "year": 2010 }, { "authors": [ "M. Schlichtkrull", "Thomas Kipf", "P. Bloem", "R.V. Berg", "Ivan Titov", "M. Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In ESWC,", "year": 2018 }, { "authors": [ "Weiyan Shi", "Tiancheng Zhao", "Zhou Yu" ], "title": "Unsupervised dialog structure learning", "venue": null, "year": 2019 }, { "authors": [ "Alessandro Sordoni", "Michel Galley", "Michael Auli", "Chris Brockett", "Yangfeng Ji", "Margaret Mitchell", "Jian-Yun Nie", "Jianfeng Gao", "Bill Dolan" ], "title": "A neural network approach to context-sensitive generation of conversational responses", "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2015 }, { "authors": [ "Jianheng Tang", "Tiancheng Zhao", "Chenyan Xiong", "Xiaodan Liang", "Eric Xing", "Zhiting Hu" ], "title": "Targetguided open-domain conversation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ming Tu", "Kevin Huang", "Guangtao Wang", "Jing Huang", "Xiaodong He", "Bowen Zhou" ], "title": "Select, answer and explain: Interpretable multi-hop reading comprehension over multiple documents", "venue": "In AAAI 2020 (accepted),", "year": 2020 }, { "authors": [ "Yi-Lin Tuan", "Yun-Nung Chen", "Hung-yi Lee" ], "title": "DyKgChat: Benchmarking dialogue generation grounding on dynamic knowledge graphs", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Shikhar Vashishth", "Rishabh Joshi", "Sai Suman Prayaga", "Chiranjib Bhattacharyya", "Partha Talukdar" ], "title": "RESIDE: Improving distantly-supervised neural relation extraction using side information", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Shikhar Vashishth", "Naganand Yadati", "Partha Talukdar" ], "title": "Graph-based deep learning in natural language processing", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts,", "year": 2019 }, { "authors": [ "Shikhar Vashishth", "Soumya Sanyal", "Vikram Nitin", "Partha Talukdar" ], "title": "Composition-based multirelational graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Quoc Le" ], "title": "A neural conversational model", "venue": "arXiv preprint 
arXiv:1506.05869,", "year": 2015 }, { "authors": [ "Xuewei Wang", "Weiyan Shi", "Richard Kim", "Yoojung Oh", "Sijia Yang", "Jingwen Zhang", "Zhou Yu" ], "title": "Persuasion for good: Towards a personalized persuasive dialogue system for social good", "venue": null, "year": 1906 }, { "authors": [ "Wei Wei", "Quoc Le", "Andrew Dai", "Jia Li" ], "title": "AirDialogue: An environment for goal-oriented dialogue research", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "Philip S. Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems, pp", "year": 2020 }, { "authors": [ "Shangsheng Xie", "Mingming Lu" ], "title": "Interpreting and Understanding Graph Convolutional Neural Network using Gradient-based Attribution Method", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Ji Soo Yi", "Rachel Melton", "John Stasko", "Julie A. Jacko" ], "title": "Dust & magnet: Multivariate information visualization using a magnet metaphor", "venue": "Information Visualization,", "year": 2005 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "Will Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "S. Young" ], "title": "Using pomdps for dialog management", "venue": "IEEE Spoken Language Technology Workshop,", "year": 2006 }, { "authors": [ "Ke Zhai", "Jason D. Williams" ], "title": "Discovering latent structure in task-oriented dialogues. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2014 }, { "authors": [ "Saizheng Zhang", "Emily Dinan", "Jack Urbanek", "Arthur Szlam", "Douwe Kiela", "Jason Weston" ], "title": "Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Tianyi Zhang", "Varsha Kishore", "Felix Wu", "Kilian Q. 
Weinberger", "Yoav Artzi" ], "title": "Bertscore: Evaluating text generation with bert", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhen Zhang", "Jiajun Bu", "Martin Ester", "Jianfeng Zhang", "Chengwei Yao", "Zhi Yu", "Can Wang" ], "title": "Hierarchical graph pooling with structure learning", "venue": null, "year": 1911 }, { "authors": [ "Hao Zhou", "Tom Young", "Minlie Huang", "Haizhou Zhao", "Jingfang Xu", "Xiaoyan Zhu" ], "title": "Commonsense knowledge aware conversation generation with graph attention", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Yiheng Zhou", "He He", "Alan W Black", "Yulia Tsvetkov" ], "title": "A dynamic strategy coach for effective negotiation", "venue": "In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue,", "year": 2019 }, { "authors": [ "Yiheng Zhou", "Yulia Tsvetkov", "Alan W Black", "Zhou Yu" ], "title": "Augmenting non-collaborative dialog systems with explicit semantic and strategic dialog history", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [], "title": "HYPERPARAMETERS We present the hyper-parameters for all the experiments, their corresponding search space and their final values in Table 10. We also present additional details of our experiments below. We use most of the hyperparameters from Zhou et al. (2020)", "venue": null, "year": 2020 }, { "authors": [ "Listing Price" ], "title": "Buyer’s Target Price: 36 Title: 2017 NEW Stans 24 and 26 Tubeless Tire Kit", "venue": null, "year": 2017 }, { "authors": [ "Listing Price" ], "title": "Buyer’s Target Price: 36 Title: 2017 NEW Stans 24 and 26 Tubeless Tire Kit", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Negotiation is ubiquitous in human interaction, from e-commerce to the multi-billion dollar sales of companies. Learning how to negotiate effectively involves deep pragmatic understanding and planning the dialogue strategically (Thompson; Bazerman et al., 2000b; Pruitt, 2013).\nModern dialogue systems for collaborative tasks such as restaurant or flight reservations have made considerable progress by modeling the dialogue history and structure explicitly using the semantic content, like slot-value pairs (Larionov et al., 2018; Young, 2006), or implicitly with encoder-decoder architectures (Sordoni et al., 2015; Li et al., 2016). In such tasks, users communicate explicit intentions, enabling systems to map the utterances into specific intent slots (Li et al., 2020). However, such mapping is less clear in complex non-collaborative tasks like negotiation (He et al., 2018) and persuasion (Wang et al., 2019), where user intent and most effective strategies are hidden. Hence, along with the generated dialogue, the strategic choice of framing and the sequence of chosen strategies play a vital role, as depicted in Figure 1. Indeed, prior work on negotiation dialogues has primarily focused on optimizing dialogue strategies—from highlevel task-specific strategies (Lewis et al., 2017), to more\nspecific task execution planning (He et al., 2018), to fine-grained planning of linguistic outputs given\n1Code, data and a demo system is released at https://github.com/rishabhjoshi/ DialoGraph_ICLR21\nstrategic choices (Zhou et al., 2019). These studies have confirmed that it is crucial to control for pragmatics of the dialogue to build effective negotiation systems.\nTo model the explicit dialogue structure, prior work incorporated Hidden Markov Models (HMMs) (Zhai & Williams, 2014; Ritter et al., 2010), Finite State Transducers (FSTs) (Zhou et al., 2020) and RNNs (He et al., 2018; Shi et al., 2019). While RNN-based models lack interpretability, HMMand FST-based approaches may lack expressivity. In this paper, we hypothesize that Graph Neural Networks (GNNs) (Wu et al., 2020) can combine the benefits of interpretability and expressivity because of their effectiveness in encoding graph-structured data through message propagation. While being sufficiently expressive to model graph structures, GNNs also provide a natural means for interpretation via intermediate states (Xie & Lu, 2019; Pope et al., 2019).\nWe propose DIALOGRAPH, an end-to-end negotiation dialogue system that leverages Graph Attention Networks (GAT) (Veličković et al., 2018) to model complex negotiation strategies while providing interpretability for the model via intermediate structures. DIALOGRAPH incorporates the recently proposed hierarchical graph pooling based approaches (Ranjan et al., 2020) to learn the associations between negotiation strategies, including conceptual and linguistic strategies and dialogue acts, and their relative importance in predicting the best sequence. We focus on buyer–seller negotiations in which two individuals negotiate on the price of an item through a chat interface, and we model the seller’s behavior on the CraigslistBargain dataset (He et al., 2018).2 We demonstrate that DIALOGRAPH outperforms previous state-of-art methods on strategy prediction and downstream dialogue responses. This paper makes several contributions. First, we introduce a novel approach to model negotiation strategies and their dependencies as graph structures, via GNNs. 
Second, we incorporate these learned graphs into an end-to-end negotiation dialogue system and demonstrate that they consistently improve future-strategy prediction and downstream dialogue generation, leading to better negotiation deals (sale prices). Finally, we demonstrate how to interpret intermediate structures and learned sequences of strategies, opening up the black box of end-to-end strategic dialogue systems." }, { "heading": "2 DIALOGRAPH", "text": "We introduce DIALOGRAPH, a modular end-to-end dialogue system that incorporates GATs with hierarchical pooling to learn pragmatic dialogue strategies jointly with the dialogue history. DIALOGRAPH is based on a hierarchical encoder-decoder model and consists of three main components: (1) a hierarchical dialogue encoder, which learns a representation for each utterance and encodes its local context; (2) a structure encoder for encoding sequences of negotiation strategies and dialogue acts; and (3) an utterance decoder, which finally generates the output utterance. Formally, our dialogue input consists of a sequence of tuples, $D = [(u_1, da_1, ST_1), (u_2, da_2, ST_2), \ldots, (u_n, da_n, ST_n)]$, where $u_i$ is the utterance, $da_i$ is the coarse dialogue act and $ST_i = \{st_{i,1}, st_{i,2}, \ldots, st_{i,k}\}$ is the set of $k$ fine-grained negotiation strategies for the utterance $u_i$. [3] The dialogue context forms the input to (1) and the previous dialogue acts and negotiation strategies form the input to (2). The overall architecture is shown in Figure 2. In what follows, we describe DIALOGRAPH in detail." }, { "heading": "2.1 HIERARCHICAL DIALOGUE ENCODER", "text": "A dialogue context typically comprises multiple dialogue utterances which are sequential in nature. We use hierarchical encoders for modeling such sequential dialogue contexts (Jiao et al., 2019). To encode the utterance $u_t$ at time $t$, we use the pooled representations from BERT (Devlin et al., 2019) to obtain the corresponding utterance embedding $e_t$. We then pass the utterance embeddings through a GRU to obtain the dialogue context encoding up to time $t$, denoted by $h^U_t$.

[2] We focus on the seller's side following Zhou et al. (2019), who devised a set of strategies specific to maximizing the seller's success. Our proposed methodology, however, is general.

[3] For example, in an utterance "Morning! My bro destroyed my old kit and I'm looking for a new pair for $10", the coarse dialogue act is Introduction, and the finer-grained negotiation strategies include Proposing price, Being informal and Talking about family for building rapport." }, { "heading": "2.2 STRUCTURE ENCODER", "text": "Our structure encoder is designed to model the graph representations of the strategies and dialogue acts using GATs and to output their structural representations. These structural representations are used to predict the next set of strategies and dialogue acts and to enrich the encoded dialogue representation. Below, we describe the structure encoder for negotiation strategies.

We model the sequence of negotiation strategies, $ST = [ST_1, ST_2, \ldots, ST_t]$, by creating a directed graph, where $ST_i$ is the set of $k$ fine-grained negotiation strategies for the utterance $u_i$. Formally, we define a graph $G(\mathcal{V}, \mathcal{E}, X)$ with $|\mathcal{E}|$ edges and $N = |\mathcal{V}|$ nodes, where each node $v_i \in \mathcal{V}$ represents a particular negotiation strategy for an utterance and has a $d$-dimensional feature representation denoted by $z_i$.
$Z \in \mathbb{R}^{N \times d}$ denotes the feature matrix of the nodes and $A \in \mathbb{R}^{N \times N}$ represents the adjacency matrix, where $N$ is the total number of nodes (strategies) that have occurred in the conversation up to that point. Therefore, each node represents a strategy-utterance pair.

We define the set of edges as $\mathcal{E} = \{(a, b)\},\ a, b \in \mathcal{V}$, where $a$ and $b$ denote strategies at utterances $u_a$ and $u_b$, present at turns $t_a$ and $t_b$, such that $t_b > t_a$. In other words, we make a directed edge from a particular node (a strategy in an utterance) to all the consecutive nodes. This ensures a direct connection from all the previous strategies to the more recent ones. [4] In the same way, we form the graph out of the sequence of dialogue acts. These direct edges and learned edge attention weights help us interpret the dependence and influence of strategies on each other.

To get the structural representations from the strategy graphs, we pass them through a hierarchical graph pooling based encoder, which consists of $l$ layers of GAT, each followed by the Adaptive Structure Aware Pooling (ASAP) layer (Ranjan et al., 2020). As part of the ASAP layer, the model first runs GAT over the input graph representations to obtain structurally informed representations of the nodes. Then a cluster assignment step is performed, which generates a cluster assignment matrix $S$ that tells the model which nodes occur in a similar structural context. After that, the clusters are ranked, and the graph is pooled by taking the top few clusters as new nodes and forming edges between them using the existing graph. This way the size of the graph is reduced at every step, which leads to a structurally informed graph representation. We take advantage of the cluster formulation to obtain the associations between the negotiation strategies, as identified from the cluster assignment matrix $S$. These association scores can later be used to interpret which strategies are associated with each other and tend to co-occur in similar contexts. Moreover, we also use the node attention scores from GAT to interpret the influence of different strategies on the representation of a particular strategy, which essentially gives the dependence information between strategies.

[4] Appendix C shows an example of the graph obtained from a sequence of strategies.

In this way, the structure representation is learned and accumulated in a manner that preserves the structural information (Ying et al., 2018; Lee et al., 2019). After each pooling step, the graph representation is summarized using the concatenation of the mean and max of the node representations. The summaries are then added and passed through fully connected layers to obtain the final structural representation of the strategies, $h^{ST}_t$. We employ a similar structure encoder to encode the graph obtained from the sequence of dialogue acts, to obtain $h^{da}_t$." }, { "heading": "2.3 UTTERANCE DECODER", "text": "The utterance decoder uses the dialogue context representation and the structural representations of dialogue acts and negotiation strategies to produce the dialogue response (next utterance). We enrich the dialogue representation by concatenating the structural representations before passing it to a standard greedy GRU (Cho et al., 2014) decoder. This architecture follows Zhou et al. (2020), who introduced a dynamic negotiation system that incorporates negotiation strategies and dialogue acts via FSTs. We thus follow their utterance decoder architecture to enable direct baseline comparison.
For the $j$th word of utterance $u_{t+1}$, $w^j_{t+1}$, we condition on the previous word $w^{j-1}_{t+1}$ to calculate the probability distribution over the vocabulary as $p_{w^j_{t+1}} = \mathrm{softmax}(\mathrm{GRU}(h_t, w^{j-1}_{t+1}))$, where $h_t = [h^u_t; h^{ST}_t; h^{da}_t]$ and $[;]$ represents the concatenation operator. For encoding the price, we replace all price information in the dataset with placeholders representing the percentage of the offer price. For example, we would replace $35 with $\langle price{-}0.875\rangle$ if the original selling price is $40. The decoder generates these placeholders, which are then replaced with the calculated price before generating the utterance." }, { "heading": "2.4 MODEL TRAINING", "text": "We use $h^{ST}_t$ to predict the next set of strategies $ST_{t+1}$, a binary-valued vector which represents the $k$-hot representation of negotiation strategies for the next turn. We compute the probability of the $j$th strategy occurring in $u_{t+1}$ as $p(st_{t+1,j} \mid h^{ST}_t) = \sigma(h^{ST}_t)$, where $\sigma$ denotes the sigmoid operator. We threshold the probability at 0.5 to obtain the $k$-hot representation. We denote by $\mathcal{L}_{ST}$ the weighted negative log-likelihood loss for the task of next-strategy prediction, $\mathcal{L}_{ST} = -\sum_j \delta_j \log(p(st_{t+1,j})) - \sum_k \log(1 - p(st_{t+1,k}))$, where the summations over $j$ and $k$ run over the strategies present ($st'_{t+1,j} = 1$) and not present ($st'_{t+1,k} = 0$) in the ground-truth strategy set $ST'$. Here $\delta_j$ is the positive weight associated with the particular strategy. We add this weight to the positive examples to trade off precision and recall. We set $\delta_j = (\#\text{ of instances not having strategy } j)\,/\,(\#\text{ of instances having strategy } j)$.

Similarly, we use $h^{da}_t$ to predict the dialogue act for the next utterance, $da_{t+1}$. Given the target dialogue act $da'_{t+1}$ and the class weights $\rho_{da}$ for the dialogue acts, we denote the class-weighted cross-entropy loss over the set of possible dialogue acts as $\mathcal{L}_{DA} = -\rho_{da}\log(\mathrm{softmax}(h^{da}_t))$. We pass $h_t = [h^u_t; h^{ST}_t; h^{da}_t]$ through a linear layer to predict the negotiation success, which is denoted by the sale-to-list ratio $r = (\text{sale price} - \text{buyer target price})/(\text{listed price} - \text{buyer target price})$ (Zhou et al., 2019). We split the ratios into 5 negotiation classes of equal sizes using the training data and use those to predict the success of the negotiation. Therefore, given the predicted probabilities for the target utterance $u'_{t+1}$ from §2.3, the target ratio class $y'_r$ and the learnable parameters $W_r$ and $b_r$, we use the cross-entropy loss for both the generation task ($\mathcal{L}_{NLG}$) and the negotiation outcome prediction task ($\mathcal{L}_R$); thus $\mathcal{L}_{NLG} = -\sum_{w_j \in u'_{t+1}} \log(p_{w^j_{t+1}})$ and $\mathcal{L}_R = -\sum_{r \in [1,5]} y'_r \log(\mathrm{softmax}(W_r h_t + b_r))$. The $\mathcal{L}_R$ loss optimizes the encoding of negotiation strategies to enable accurate prediction of the negotiation outcome.

We use hyperparameters $\alpha$, $\beta$ and $\gamma$ to optimize the joint loss $\mathcal{L}_{joint}$ of strategy prediction, dialogue act prediction, utterance generation and outcome prediction together, using the Adam optimizer (Kingma & Ba, 2014): $\mathcal{L}_{joint} = \mathcal{L}_{NLG} + \alpha\mathcal{L}_{ST} + \beta\mathcal{L}_{DA} + \gamma\mathcal{L}_R$." }, { "heading": "3 EXPERIMENTAL SETUP", "text": "Dataset: We use the CraigslistBargain dataset [5] (He et al., 2018) to evaluate our model. The dataset was created using Amazon Mechanical Turk (AMT) in a negotiation setting where two workers were assigned the roles of buyer and seller, respectively, and were tasked to negotiate the price of an item on sale. The buyer was additionally given a target price. Both parties were encouraged to reach an agreement while each of the workers tried to get a better deal.
We remove all conversations with fewer than 5 turns. Dataset statistics are listed in Table 11 in the Appendix.

We extract from the dataset the coarse dialogue acts as described by He et al. (2018). This includes a list of 10 utterance dialogue acts, e.g., inform, agree, counter-price. We augment this list with 4 outcome dialogue acts, namely 〈offer〉, 〈accept〉, 〈reject〉 and 〈quit〉, which correspond to the actions taken by the users. Negotiation strategies are extracted from the data following Zhou et al. (2019). These include 21 fine-grained strategies grounded in prior economics/behavioral science research on negotiation (Pruitt, 2013; Bazerman & Neale, 1993; Bazerman et al., 2000a; Fisher et al., 2011; Lax & Sebenius, 2006; Bazerman et al., 2000b), e.g., negotiate side offers, build rapport, show dominance. All dialogue acts and strategies are listed in Appendices A and B.

Baselines: DIALOGRAPH refers to our proposed method. To corroborate the efficacy of DIALOGRAPH, we compare it against our implementation of the present state-of-the-art model for the negotiation task: the FST-enhanced hierarchical encoder-decoder model (FeHED) (Zhou et al., 2020), which utilizes FSTs for encoding sequences of strategies and dialogue acts. [6] We also conduct an ablation study and evaluate variants of DIALOGRAPH with different ways of encoding negotiation strategies, namely HED, HED+RNN, and HED+Transformer. HED completely ignores the strategy and dialogue act information, whereas HED+RNN and HED+Transformer encode them using an RNN and Transformers (Vaswani et al., 2017), respectively. While HED+RNN is based on the dialogue manager of He et al. (2018), HED+Transformer has not been proposed earlier for this task. For a fair comparison, we use a pre-trained BERT (Devlin et al., 2019) model as the utterance encoder (§2.1) and a common utterance decoder (§2.3) in all the models, and only vary the structure encoders as described above. The strategies and dialogue acts in the RNN- and Transformer-based encoders are fed as sequences of k-hot vectors.

Evaluation Metrics: For evaluating the performance on the next-strategy prediction and the next dialogue act prediction tasks, we report the F1 and ROC AUC scores for all the models. For these metrics, macro scores tell us how well the model performs on less frequent strategies/dialogue acts, and micro scores tell us how well the model performs overall while taking the label imbalance into account. Strategy prediction is a multi-label prediction problem since each utterance can have multiple strategies. For the downstream task of utterance generation, we compare the models using BLEU score (Papineni et al., 2002) and BERTScore (Zhang et al., 2020). Finally, we also evaluate on another downstream task of predicting the outcome of the negotiation, using the ratio class prediction accuracy (RC-Acc) (1 out of 5 negotiation outcome classes, as described in §2.4). Predicting the sale outcome provides better interpretability over the progression of a sale and potentially the control to intervene when a negotiation has a bad predicted outcome. Additionally, being able to predict the sale outcome with high accuracy shows that the model encodes the sequence of negotiation strategies well." }, { "heading": "4 RESULTS", "text": "We evaluate (1) strategy and dialogue act prediction (intrinsic evaluation), and (2) dialogue generation and negotiation outcome prediction (downstream evaluation).
For all metrics, we perform bootstrapped statistical tests (Berg-Kirkpatrick et al., 2012; Koehn, 2004) and we bold the best results for a metric in all tables (several results are in bold if their differences are statistically insignificant).

Strategy and Dialogue Act Prediction: We compare DIALOGRAPH's effectiveness in encoding the explicit sequence of strategies and dialogue acts with the baselines, using the metrics described in §3. Table 1 shows that DIALOGRAPH performs on par with the Transformer-based encoder in strategy prediction macro scores and outperforms it on other metrics. Moreover, both significantly outperform the FST-based method, the prior state-of-the-art. We hypothesize that the lower gains for dialogue acts are due to the limited structural dependencies between them. Conversely, we validate that for negotiation strategies, RNNs are significantly worse than DIALOGRAPH. We also observe that the higher macro scores show that DIALOGRAPH and Transformers are able to capture the sequences containing the less frequent strategies/dialogue acts as well. These results support our hypothesis about the importance of encoding the structure in a more expressive model. Moreover, DIALOGRAPH also provides interpretable structures, which the other baselines do not. We will discuss these findings in §5.

[5] https://github.com/stanfordnlp/cocoa/tree/master/craigslistbargain
[6] We replace the utterance encoder with BERT for a fair comparison. This slightly improved the performance of the FeHED model compared to the results published in Zhou et al. (2020).

Automatic Evaluation on Downstream Tasks: In this section, we analyze the impact of DIALOGRAPH on the downstream task of negotiation dialogue based on the automatic evaluation metrics described in §3. In Table 2, we show that DIALOGRAPH helps improve the generation of the dialogue response. Even though DIALOGRAPH attains higher BLEU scores, we note that single-reference BLEU assumes only one possible response, while dialogue systems can have multiple possible responses to the same utterance. BERTScore alleviates this problem by scoring semantically similar responses equally high (Zhang et al., 2020). We also find that both Transformer and DIALOGRAPH have comparable performance for negotiation outcome prediction, which is significantly better than the previously published baselines (FeHED and HED+RNN). A higher performance on this metric demonstrates that our model is able to encode the strategy sequence better and consequently predict the negotiation outcome more accurately. Additionally, ablation results in Table 3 show that both strategy and dialogue act information help DIALOGRAPH in improving the dialogue response. The difference in BERTScore F1 scores between Tables 2 and 3 arises due to different metrics chosen for early stopping. More details are in Appendix D.

Although both HED+Transformer and DIALOGRAPH are based on attention mechanisms, DIALOGRAPH has the added advantage of structural attention, which helps encode the pragmatic structure of negotiation dialogues and in turn provides an interpretable interface. The components in our graph-based encoder, such as the GAT and the ASAP layer, provide strategy influence and cluster association information, which is useful for understanding and controlling negotiation systems. This is described in more detail in §5. Though Transformers have self-attention, the architecture is limited and does not model the structure/dependence between strategies, providing only limited understanding.
Further, our results show that DIALOGRAPH maintains or improves performance over strong models like the Transformer and has much more transparent interpretability. We later show that DIALOGRAPH performs significantly better than HED+Transformer in human evaluation.

Human Evaluation: Since automatic metrics only give us a partial view of the system, we complement our evaluation with a detailed human evaluation. For that, we set up DIALOGRAPH and the baselines on Amazon Mechanical Turk (AMT) and asked workers to role-play the buyer and negotiate with a single bot. After their chat is over, we ask them to fill out a survey to rate the dialogue on how persuasive (My task partner was persuasive.), coherent (My task partner's responses were on topic and in accordance with the conversation history.), natural (My task partner was humanlike.) and understandable (My task partner perfectly understood what I was typing.) the bot was. [7] Prior research on entrainment has shown that humans tend to get better as they chat (Mizukami et al., 2016; Beňuš et al., 2011), and so we restrict one user to chat with just one of the bots. We further prune conversations which were incomplete, potentially due to dropped connections. Finally, we manually inspect the conversations extracted from AMT to extract the agreed sale price and remove conversations that were not trying to negotiate at all.

[7] We use the setup of https://github.com/stanfordnlp/cocoa/. Screenshots are in Appendix H.

Table 2: Downstream evaluation of negotiation dialogue generation and negotiation outcome prediction. The best results (along with all statistically insignificant values relative to those) are bolded.

Model | BLEU | BERTScore Precision | BERTScore Recall | BERTScore F1 | Outcome Prediction (RC-Acc)
HED | 20.9 | 21.8 | 22.3 | 22.1 | 35.2
FeHED | 23.7 | 27.1 | 26.8 | 27.0 | 42.3
HED+RNN | 22.5 | 22.9 | 22.7 | 22.8 | 47.9
HED+Transformer | 24.4 | 27.4 | 28.1 | 27.7 | 53.7
DIALOGRAPH | 24.7 | 27.8 | 28.3 | 28.1 | 53.1

Table 3: DIALOGRAPH ablation analysis. This shows that all the different components provide complementary benefits. We also evaluate without BERT for comparison with previously published works.

Model | BERTScore F1
DIALOGRAPH | 27.4
w/o Strategy (ST) | 26.8
w/o ST, Dialogue Acts (DA) | 26.3
w/o ST, DA, BERT | 22.7

[Figure 3 shows a buyer (human)–seller (bot) exchange over utterances u1–u7 (e.g., "Hey, you need a router?", "Yeah, I was looking at the Apple router."), annotated with strategy nodes (Informal, 3rd Person, Trade In, Propose, Hedge, Family, Positive, Concern) and normalized attention weights on the edges between them.] Figure 3: Visualization of the learnt latent strategy sequences in DIALOGRAPH, where bolder edges represent higher influence. Here we present only a few edges for brevity and visualize min-max normalized attention values as edge weights to analyze the relative ranking of strategies. For example, for family at u7, informal of u5 has the most influence, followed by propose. We present the full attention map for this example in Figure 5 in the Appendix.

The results of human evaluations of the resulting 90 dialogues (about 20 per model) are presented in Table 4. We find that the baselines are more likely to accept unfair offers and apply inappropriate strategies.
Additionally, the DIALOGRAPH bot attained a significantly higher Sale Price Ratio, which is the outcome of the negotiation, showing that effectively modeling strategy sequences leads to more effective negotiation systems. Our model also had a higher average total number of turns and words-per-turn (for just the bots) compared to all baselines, signifying engagement. It was also more persuasive and coherent while being more understandable to the user. From qualitative inspection we observe that the HED model generates utterances that are shorter and less coherent. They are natural responses like "Yes it is", but generic and contextually irrelevant. We hypothesize that this is due to the HED model not being optimized to encode the sequence of negotiation strategies and dialogue acts. We believe that this is the reason for the high natural score for HED. From manual inspection we see that HED is not able to produce very persuasive responses. We provide an example of a dialogue in Appendix F. We see that although the HED+Transformer model performs well, DIALOGRAPH achieves a better sale price outcome as it repeatedly tries to offer deals to negotiate the price. We see that HED is unable to understand the user responses well and tends to repeat itself. Both the FeHED and HED baselines tend to agree with the buyer's proposal more readily, whereas HED+Transformer and DIALOGRAPH provide counter-offers and trade-ins to persuade the user." }, { "heading": "5 INTERPRETING LEARNED STRATEGY GRAPHS", "text": "We visualize the intermediate attention scores generated by the GATs while obtaining the strategy node representations. These attention scores tell us which strategies influenced the representation of a particular strategy and can be used to observe the dependence between strategies (cf. Xie & Lu, 2019; Norcliffe-Brown et al., 2018). We show an example in Figure 3 where, for brevity, we present a subset of turns and only the top few most relevant edges in the figure. For visualization, we re-scale the attention values for all incoming edges of a node (strategy) using min-max normalization. This is done because the range of raw attention values would differ based on the number of edges, and this allows us to normalize any difference in scales and visualize the relative ranking of strategies (Yi et al., 2005; Chen & Liu, 2004). We notice that as soon as the first propose at u5 happens, the strategies completely change and become independent of the strategies before the propose point. From Figure 3, we see that the edge weight from u4 to u6 is 0.01, signifying very low influence. We noticed this trend in other examples as well, wherein the influence of strategies coming before the first propose turn on strategies coming after it is very low. A similar phenomenon was also observed by Zhou et al. (2019), who study the conversations by splitting them into two parts based on the first propose turn. Another interesting thing we note is that the trade-in and propose strategies at u5 seem to be heavily influenced by informal from u3. Similarly, the informal of u5 was influenced by positive sentiment from u4. This indicates that the seller was influenced by previous informal interactions to propose and trade-in at this turn, and that sellers tend to be more informal if the conversation partner is positive.
In other examples, we see that at a particular utterance, different strategies depend on separate past strategies, and we also observe that the attention maps usually demonstrate the strategy switch as soon as the first propose happens, which is similar to what has been observed in prior work. These examples demonstrate that DIALOGRAPH can model fine-grained strategies, learn dependence beyond just utterances and give interpretable representations, which previous baselines, including the FSTs, lack. Specifically, each state of the FST is explicitly represented by an action distribution, which can only be used to see the sequence of strategies and not to observe the association or dependence information which DIALOGRAPH provides.

We utilize the cluster attention scores from the ASAP pooling layer to observe the association between various strategies, which can help us identify strategies with similar contextual behaviour and structural co-occurrence. We take the average normalized value of the cluster attention scores between two strategies to obtain the association score between them. In Table 5, we show some examples of strategies and their obtained association scores. We observe that negative sentiment tends to be most associated with propose. We hypothesize that this is because people who disagree more tend to get better deals. We observe that people do not tend to associate negative sentiment with trade-in, which is in fact highly associated with positive sentiment, because people might want to remain positive while offering something. Similarly, people tend to give vague proposals by hedging (for instance, "I could go lower if you can pick it up") more than when suggesting a trade-in. Concern also seems to be least associated with certainty, and most with politeness-based strategies. Thus, we observe that our model is able to provide meaningful insights which corroborate prior observations, justifying its ability to learn strategy associations well." }, { "heading": "6 RELATED WORK", "text": "Dialogue Systems: Goal-oriented dialogue systems have a long history in the NLP community. Broadly, goal-oriented dialogue can be categorized into collaborative and non-collaborative systems. The aim of agents in a collaborative setting is to achieve a common goal, such as travel and flight reservation (Wei et al., 2018) and information-seeking (Reddy et al., 2019). Recent years have seen a rise in non-collaborative goal-oriented dialogue systems such as persuasion (Wang et al., 2019; Dutt et al., 2020; 2021), negotiation (He et al., 2018; Lewis et al., 2017) and strategy games (Asher et al., 2016), due to the challenging yet interesting nature of the task. Prior work has also focused on decision-making games such as Settlers of Catan (Cuayáhuitl et al., 2015), which mainly involve decision-making skills rather than communication. Lewis et al. (2017) developed the DealOrNoDeal dataset, in which agents had to reach a deal to split a set of items. Extensive work has been done on capturing the explicit semantic history in dialogue systems (Kumar et al., 2020; Vinyals & Le, 2015; Zhang et al., 2018). Recent work has shown the advantage of modeling the dialogue history in the form of belief spans (Lei et al., 2018) and state graphs (Bowden et al., 2017). He et al. (2018) proposed a bargaining scenario that can leverage semantic and strategic history. Zhou et al. (2020) used FSTs learned in an unsupervised manner to capture dialogue structure.
This approach, however, although effective in explicitly incorporating pragmatic strategies, does not leverage the expressive power of neural networks. Our model, in contrast, combines the interpretability of graph-based approaches and the expressiveness of neural networks, improving the performance and interpretability of negotiation agents.

Graph Neural Networks: The effectiveness of GNNs (Bruna et al., 2013; Defferrard et al., 2016; Kipf & Welling, 2017) has been corroborated in several NLP applications (Vashishth et al., 2019), including semantic role labeling (Marcheggiani & Titov, 2017), machine translation (Bastings et al., 2017), relation extraction (Vashishth et al., 2018), and knowledge graph embeddings (Schlichtkrull et al., 2018; Vashishth et al., 2020). Hierarchical graph pooling based structure encoders have been successful in encoding graphical structures (Zhang et al., 2019). We leverage the advances in GNNs and propose to use a graph-based explicit structure encoder to model negotiation strategies. Unlike HMM- and FST-based encoders, GNN-based encoders can be trained by optimizing the downstream loss and have superior expressive capabilities. Moreover, they provide better interpretability of the model, as they can be interpreted based on observed explicit sequences (Tu et al., 2020; Norcliffe-Brown et al., 2018). In dialogue systems, graphs have been used to guide dialogue policy and response selection. However, they have been used to encode external knowledge (Tuan et al., 2019; Zhou et al., 2018) or speaker information (Ghosal et al., 2019), rather than to compose dialogue strategies on the fly. Other works (Tang et al., 2019; Qin et al., 2020) focused on keyword prediction using RNN-based graphs. Our work is the first to incorporate GATs with hierarchical pooling, learning pragmatic dialogue strategies jointly with the end-to-end dialogue system. Unlike prior work, our model leverages hybrid end-to-end and modularized architectures (Liang et al., 2020; Parvaneh et al., 2019) and can be plugged in as an explicit sequence encoder into other models." }, { "heading": "7 CONCLUSION", "text": "We present DIALOGRAPH, a novel modular negotiation dialogue system which models pragmatic negotiation strategies using Graph Attention Networks with hierarchical pooling and learns an explicit strategy graph jointly with the dialogue history. DIALOGRAPH outperforms strong baselines in downstream dialogue generation, while providing the capability to interpret and analyze the intermediate graph structures and the interactions between different strategies contextualized in the dialogue. As future work, we would like to extend our work to discover successful (e.g., good for the seller) and unsuccessful strategy sequences using our interpretable graph structures." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors are grateful to the anonymous reviewers for their invaluable feedback, and to Alissa Ostapenko, Shruti Rijhwani, Ritam Dutt, and members of the Tsvetshop at CMU for their helpful feedback on this work. The authors would also like to thank Yiheng Zhou for helping with negotiation strategy extraction and the FeHED model. This material is based upon work supported by the National Science Foundation under Grant No. IIS2007960 and by the Google faculty research award. We would also like to thank Amazon for providing GPU credits." }, { "heading": "A DIALOGUE ACTS", "text": "Here we provide the details about the dialogue acts that we have used to annotate the utterances.
10 of these are taken from He et al. (2018) and 4 are based on the actions taken by the users. The rule-based acts are extracted using the code provided by the authors. The details are in Table 6." }, { "heading": "B NEGOTIATION STRATEGIES", "text": "Here we provide the details about the 15 Negotiation Strategies (Zhou et al., 2019) and 21 Negotiation Strategies (Zhou et al., 2020) in Tables 7 and 8." }, { "heading": "C STRATEGY-GRAPH VISUALIZATION", "text": "A visualization of a strategy sequence graph. Refer to §2.2 for more details. We also provide additional details regarding the number of nodes and edges in our strategy graphs in Table 9." }, { "heading": "D HYPERPARAMETERS", "text": "We present the hyper-parameters for all the experiments, their corresponding search spaces and their final values in Table 10. We also present additional details of our experiments below. We use most of the hyperparameters from Zhou et al. (2020). Each training run took at most 3 hours on a single Nvidia GeForce GTX 1080Ti GPU, and all the models were saved based on Strategy Macro F1 performance.\nFor the experiments in Tables 1 and 2, we saved the best models based on Strategy Macro F1 performance (HED was saved based on outcome class prediction). This is because we wanted to prioritize and optimize our final model to capture sequence-structural information, owing to our focus on interpretability. In the ablation studies for Table 3, not all models have structure encoders; hence, for a fair comparison, we chose a metric independent of the individual modules for all models in the ablations. We use the negotiation outcome class prediction (RC-Acc) scores, as this optimizes the dialogue for a good negotiation outcome, which indirectly helps train the model to capture the sequence of strategies." }, { "heading": "E NEGOTIATION DATASET STATISTICS", "text": "In Table 11 we provide the CraigslistBargain dataset statistics along with data sizes after filtering conversations with fewer than 5 turns. The maximum and average numbers of turns in any conversation are 47 and 9.2, respectively. Also, the maximum and average numbers of strategies in an utterance are 13 and 3, respectively." }, { "heading": "F EXAMPLE CONVERSATIONS", "text": "" }, { "heading": "G INFLUENCE VISUALIZATION", "text": "Refer to Figure 5." }, { "heading": "H HUMAN EVALUATION INTERFACE", "text": "" } ]
2021
null
SP:3b3e7833784c53527eb32d5f6ac8d720f9d764bd
[ "The paper studies the problem of learning a step-size policy for the L-BFGS algorithm. This paper falls into the general category of meta-learning algorithms that try to derive a data-driven approach to learn one of the parameters of the learning algorithm. In this case, it is the learning rate of L-BFGS. The paper is very similar in nature to the papers of Ravi & Larochelle, MAML and Andrychowicz." ]
We consider the problem of how to learn a step-size policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. This is a limited computational memory quasi-Newton method widely used for deterministic unconstrained optimization, but currently avoided in large-scale problems because it requires step sizes to be provided at each iteration. Existing methodologies for the step-size selection in L-BFGS use heuristic tuning of design parameters and massive re-evaluations of the objective function and gradient to find appropriate step lengths. We propose a neural network architecture with local information of the current iterate as the input. The step-length policy is learned from data of similar optimization problems, avoids additional evaluations of the objective function, and guarantees that the output step remains inside a pre-defined interval. The corresponding training procedure is formulated as a stochastic optimization problem using the backpropagation through time algorithm. The performance of the proposed method is evaluated on the training of classifiers for the MNIST database of handwritten digits and for CIFAR-10. The results show that the proposed algorithm outperforms heuristically tuned optimizers such as ADAM, RMSprop, L-BFGS with a backtracking line search, and L-BFGS with a constant step size. The numerical results also show that a learned policy can be used as a warm start to train new policies for different problems after a few additional training steps, highlighting its potential use in multiple large-scale optimization problems.
[]
[ { "authors": [ "A. Agrawal", "B. Amos", "S. Barratt", "S. Boyd", "S. Diamond", "Z. Kolter" ], "title": "Differentiable convex optimization layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "M. Andrychowicz", "M. Denil", "S. Gomez", "M.W. Hoffman", "D. Pfau", "T. Schaul", "B. Shillingford", "N. de Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "arXiv preprint arXiv:1606.04474,", "year": 2016 }, { "authors": [ "Y. Bengio" ], "title": "Gradient-based optimization of hyperparameters", "venue": "Neural computation,", "year": 2000 }, { "authors": [ "J. Bergstra", "Y. Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "J.S. Bergstra", "R. Bardenet", "Y. Bengio", "B. Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "R. Bollapragada", "D. Mudigere", "J. Nocedal", "H.-J.M. Shi", "P.T.P. Tang" ], "title": "A progressive batching L-BFGS method for machine learning", "venue": "arXiv preprint arXiv:1802.05374,", "year": 2018 }, { "authors": [ "C. Daniel", "J. Taylor", "S. Nowozin" ], "title": "Learning step size controllers for robust neural network training", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "X. Dong", "J. Shen", "W. Wang", "Y. Liu", "L. Shao", "F. Porikli" ], "title": "Hyperparameter optimization for tracking with continuous deep Q-learning", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "S. Hochreiter", "A.S. Younger", "P.R. Conwell" ], "title": "Learning to learn using gradient descent", "venue": "In International Conference on Artificial Neural Networks,", "year": 2001 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Y. LeCun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "K. Li", "J. Malik" ], "title": "Learning to optimize", "venue": "arXiv preprint arXiv:1606.01885,", "year": 2016 }, { "authors": [ "D.C. Liu", "J. Nocedal" ], "title": "On the limited memory BFGS method for large scale optimization", "venue": "Mathematical programming,", "year": 1989 }, { "authors": [ "L. Metz", "N. Maheswaranathan", "J. Nixon", "D. Freeman", "J. Sohl-Dickstein" ], "title": "Understanding and correcting pathologies in the training of learned optimizers", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "P. Moritz", "R. Nishihara", "M. Jordan" ], "title": "A linearly-convergent stochastic L-BFGS algorithm", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "A. Paszke", "S. Gross", "F. Massa", "A. Lerer", "J. Bradbury", "G. Chanan", "T. Killeen", "Z. Lin" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "J. Snoek", "O. Rippel", "K. Swersky", "R. Kiros", "N. Satish", "N. Sundaram", "M. Patwary", "M. Prabhat", "R. 
Adams" ], "title": "Scalable bayesian optimization using deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "S. Thrun", "L. Pratt" ], "title": "Learning to learn", "venue": "Springer Science & Business Media,", "year": 1998 }, { "authors": [ "A. Wills", "T. Schön" ], "title": "Stochastic quasi-newton with line-search regularization", "venue": "arXiv preprint arXiv:1909.01238,", "year": 2019 }, { "authors": [ "Z. Xu", "A.M. Dai", "J. Kemp", "L. Metz" ], "title": "Learning an adaptive learning rate schedule", "venue": "arXiv preprint arXiv:1909.09712,", "year": 2019 }, { "authors": [ "M.D. Zeiler" ], "title": "Adadelta: an adaptive learning rate method", "venue": "arXiv preprint arXiv:1212.5701,", "year": 2012 }, { "authors": [ "Z. Zhang" ], "title": "Derivation of backpropagation in convolutional neural network (cnn)", "venue": null, "year": 2016 }, { "authors": [ "C. Zhou", "W. Gao", "D. Goldfarb" ], "title": "Stochastic adaptive quasi-newton methods for minimizing expected values", "venue": "In International Conference on Machine Learning,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Consider the unconstrained optimization problem\nminimize_x f(x) (1)\nwhere f : R^n → R is an objective function that is differentiable for all x ∈ R^n, with n being the number of decision variables forming x. Let ∇_x f(x_0) be the gradient of f(x) evaluated at some x_0 ∈ R^n. A general quasi-Newton algorithm for solving this problem iterates\nx_{k+1} = x_k − t_k H_k g_k (2)\nfor an initial x_0 ∈ R^n until a given stop criterion is met. At the k-th iteration, g_k = ∇_x f(x_k) is the gradient, H_k is a positive-definite matrix satisfying the secant equation (Nocedal and Wright, 2006, p. 137), and t_k is the step size.\nIn this paper, we develop a policy that learns to suitably determine step sizes t_k when the product H_k g_k is calculated by the Limited-Memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm (Liu and Nocedal, 1989). The main contributions of the paper are:\n1. We propose a neural network architecture defining this policy, taking as input local information of the current iterate. In contrast with more standard strategies, this policy is tuning-free and avoids re-evaluations of the objective function and gradients at each step. The training procedure is formulated as a stochastic optimization problem and can be performed by easily applying truncated backpropagation through time (TBPTT).\n2. When training classifiers on the MNIST database (LeCun et al., 1998), our approach is competitive with heuristically tuned optimization procedures. Our tests show that the proposed policy is not only able to outperform competitors such as ADAM and RMSprop in wall-clock time and optimal/final value, but also performs better than L-BFGS with backtracking line searches, which is the gold standard, and with constant step sizes, which is the baseline.\n3. According to subsequent experiments on CIFAR-10 (Krizhevsky et al., 2009), the proposed policy can generalize to different classes of problems after a few additional training steps on examples from these classes. This indicates that learning may be transferable between distinct types of tasks, allowing us to explore transfer learning strategies.\nThis result is a step towards the development of optimization methods that free the designer from tuning control parameters, as will be motivated in Section 2. The remaining parts of this paper are organized as follows: Section 3 presents the classical L-BFGS algorithm and discusses some methodologies to determine step sizes; Section 4 contains the architecture for the proposed policy and also discussions on how it was implemented; Section 5 describes the training procedure; and, finally, Section 6 presents experiments training classifiers on the MNIST and CIFAR-10 databases. The notation is mainly standard. Scalars are plain lower-case letters, vectors are bold lower-case, and matrices are bold upper-case. The clip function is defined as clip_l^u(y) := min(u, max(l, y))." }, { "heading": "2 MOTIVATION", "text": "Most algorithms used in artificial intelligence and statistics are based on optimization theory, which has contributed widely to the success of machine learning applications in the last decades. However, this two-way bridge does not currently seem to be leveraging its full potential in the other direction, that is, to learn how to automate optimization procedures. Indeed, performing satisfactory optimization, or solving learning problems, still relies upon the appropriate tuning of parameters of the chosen algorithm, which are often grouped with other hyper-parameters of the learning task. 
Despite the existence of several methodologies to obtain good values for these parameters (Bengio, 2000; Bergstra et al., 2011; Bergstra and Bengio, 2012; Snoek et al., 2015; Daniel et al., 2016; Dong et al., 2018), the search for tuning-free algorithms that perform better than heuristically designed ones is of great interest among practitioners and theoreticians. Indeed, besides the generally desirable faster convergence, the ready-to-use nature of such algorithms allows the user to focus on other problem-level hyper-parameters while the optimization procedure is automatically performed, resulting in better time and effort allocation. As recent advances in machine learning have helped automate the solution of countless problems, optimization theory should benefit equally from them, balancing the flow across this bridge.\nFrom a wider viewpoint, most optimization problems require the user to select an algorithm and tune it to some extent. Although intuition and knowledge about the problem can speed up these processes, trial-and-error methodologies are often employed, which can be a time-consuming and inefficient task. With that in mind, the concept of learned optimizers has been gathering attention in the last few years and, basically, refers to optimization policies and routines that were learned by looking at instances of optimization problems, here called tasks. This idea was introduced by Li and Malik (2016) and Andrychowicz et al. (2016), building upon previous results on “learning to learn” or “meta-learning” (Thrun and Pratt, 1998; Hochreiter et al., 2001). In the former, the authors presented an optimization policy based on a neural network trained by reinforcement learning, taking as input the history of gradient vectors at previous iterations. The latter adopts a long short-term memory (LSTM) to achieve a similar task, but the learning is done by truncated backpropagation through time after unrolling the proposed optimizer for a certain number of steps. Subsequently, it was shown in Metz et al. (2019) how multilayer perceptrons (MLP), adequately trained using a combined gradient estimation method, can perform faster in wall-clock time compared to current algorithms of choice. Also within this scenario, Xu et al. (2019) present a reinforcement learning-based methodology to auto-learn an adaptive learning rate. Following the same fashion, in the present paper, instead of completely learning an optimizer from data, we propose a mixture of these ideas within a classical optimization procedure. Thus, the resulting optimizer, composed of a combination of L-BFGS and the proposed policy, will be learned in a constrained domain that assures valuable mathematical properties. The idea is to leverage both frameworks, inheriting the theoretical aspects assured by optimization theory while learning a policy to rule out the hand-design of parameters.\nAlgorithm 1: L-BFGS algorithm\nInput: s_i = x_{i+1} − x_i, y_i = g_{i+1} − g_i and ρ_i = 1/(s_i^T y_i) for all i ∈ {k − m, . . . , k − 1}; and current gradient g_k\nResult: update direction d_k = −H_k g_k\n1 q ← g_k;\n2 for i = k − 1, . . . , k − m do\n3 α_i ← ρ_i s_i^T q;\n4 q ← q − α_i y_i;\n5 end\n6 γ ← |s_{k−1}^T y_{k−1}| / (y_{k−1}^T y_{k−1});\n7 r ← γ q;\n8 for i = k − m, . . . , k − 1 do\n9 β ← ρ_i y_i^T r;\n10 r ← r + s_i (α_i − β);\n11 end\n12 d_k ← −r;" }, { "heading": "3 L-BFGS ALGORITHM", "text": "The L-BFGS algorithm was originally presented in Liu and Nocedal (1989) and is here transcribed into Algorithm 1. 
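To make Algorithm 1 concrete, the following is a minimal NumPy sketch of the two-loop recursion; the function name and the use of Python lists for the curvature-pair history are our own illustrative choices, not part of the original paper. It also skips pairs with non-positive curvature (ρ_i ≤ 0), a safeguard for non-convex objectives that is discussed below.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Two-loop recursion of Algorithm 1: returns d_k = -H_k g_k.

    g      : current gradient g_k, shape (n,)
    s_hist : list of s_i = x_{i+1} - x_i, oldest first (at most m entries)
    y_hist : list of y_i = g_{i+1} - g_i, oldest first
    """
    q = g.copy()
    records = []
    # First loop (lines 2-5): newest to oldest curvature pairs.
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / (s @ y)
        if rho <= 0:            # skip pairs with non-positive curvature
            records.append(None)
            continue
        a = rho * (s @ q)
        q -= a * y
        records.append((rho, a))
    # Initial Hessian scaling (lines 6-7).
    s_last, y_last = s_hist[-1], y_hist[-1]
    gamma = abs(s_last @ y_last) / (y_last @ y_last)
    r = gamma * q
    # Second loop (lines 8-11): oldest to newest.
    for (s, y), rec in zip(zip(s_hist, y_hist), reversed(records)):
        if rec is None:
            continue
        rho, a = rec
        b = rho * (y @ r)
        r += s * (a - b)
    return -r
```

For a full optimizer, this direction is combined with a step size t_k, which is precisely what the policy proposed in Section 4 provides.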
L-BFGS is a quasi-Newton method derived from the BFGS algorithm (Nocedal and Wright, 2006), lowering the space complexity from quadratic to linear in the problem dimension at the expense of precision. This algorithm calculates a descent direction in the search space taking into account an estimate of the inverse Hessian matrix of f(x), given by H_k. This matrix is not explicitly constructed; rather, the product d_k := −H_k g_k is obtained from the past m values of x_k and g_k, which have to be stored. This property often makes it the algorithm of choice for large-scale deterministic non-linear optimization problems. If f(x) is convex in x, this algorithm is guaranteed to provide a descent update direction, but the same does not hold for non-convex objective functions. However, a simple way to circumvent this is to remove, in lines 2 and 8 of Algorithm 1, the iterations i for which ρ_i ≤ 0 (Nocedal and Wright, 2006, p. 537), which is done in this paper.\nA matter of great relevance within this scope is how to choose an appropriate step size t_k to apply the update rule in Eq. (2). To the best of our knowledge, there does not seem to exist a consensus on how to choose t_k in a general way for non-convex objective functions. The scaling factor γ in lines 6-7 of Algorithm 1 is known to assure that the step size t_k = 1 is accepted in most iterations in the convex optimization context, but not always. We will refer to a constant step-size policy that outputs t_k = 1 as the baseline L-BFGS. However, a line search (LS) procedure is often combined with L-BFGS to assure its convergence. Ideally, this should be performed by solving t_k = arg min_{t>0} f(x_k + t d_k), but this exact approach is often too expensive to be adopted, motivating the use of inexact ones. An example is the backtracking line search (BTLS), which takes an initial length t_k for the step size and shrinks it repeatedly until the so-called sufficient decrease Wolfe condition f(x_k + t_k d_k) ≤ f(x_k) + c_1 t_k g_k^T d_k is fulfilled, where c_1 ∈ (0, 1) is a control parameter to be tuned. Another parameter that has to be designed is the contraction factor c_2 ∈ (0, 1) that shrinks the step size, i.e., t_k ← c_2 t_k; see Nocedal and Wright (2006, p. 37). This method assures convergence to a local minimum at the cost of re-evaluating the objective function several times per iteration. This is a price that the user is, in some cases, willing to pay, but for high-dimensional problems this procedure is likely to become the bottleneck of the optimization task. It is important to highlight that the method to be presented may also apply to other optimization algorithms that deeply rely on line searches to perform well. However, this paper focuses on L-BFGS, as it is often the algorithm of choice in large-scale deterministic optimization.\nIn the context of stochastic optimization, many modified versions of Algorithm 1, together with methodologies for choosing t_k, are available (Moritz et al., 2016; Zhou et al., 2017; Bollapragada et al., 2018; Wills and Schön, 2019), but for the sake of simplicity, our work deals exclusively with deterministic non-linear optimization problems." }, { "heading": "4 LEARNED POLICY FOR SELECTING STEP SIZES", "text": "Recalling the definition of s_k and y_k in Algorithm 1, our policy is defined as t_k = π(d_k, g_k, s_{k−1}, y_{k−1}; θ) and selects an adequate step size for L-BFGS while neither relying on any parameter tuning nor requiring additional evaluations of the objective function. Instead, its parameters, represented by θ, should be learned from data. 
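For later reference, the gold-standard baseline that π is compared against, the backtracking line search of Section 3, can be sketched as follows (the function name and default values are our own illustrative choices):

```python
def backtracking_line_search(f, x, g, d, c1=0.25, c2=0.5, t0=1.0, max_iter=50):
    """Shrink t until the sufficient decrease condition
    f(x + t*d) <= f(x) + c1 * t * g^T d holds."""
    t = t0
    fx = f(x)
    slope = g @ d  # should be negative for a descent direction d
    for _ in range(max_iter):
        if f(x + t * d) <= fx + c1 * t * slope:
            break
        t *= c2  # contract the step size
    return t
```

Note that every trial step costs one extra evaluation of f, which is exactly the per-iteration overhead the learned policy avoids.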
Let us, from now on, define the policy π combined with Algorithm 1 as the L-BFGS-π approach. The architecture of the policy π(d_k, g_k, s_{k−1}, y_{k−1}; θ) is shown in Fig. 1. To allow the policy to be independent of the problem size n, only the inner products between its inputs are used. These values define u_0 = dotln(d_k, g_k, s_{k−1}, y_{k−1}), where dotln(·) returns the component-wise application of f(x) = ln(max(x, ε)) to the elements of X = [d_k g_k s_{k−1} y_{k−1}]^T [d_k g_k s_{k−1} y_{k−1}], but with the superdiagonal entries having their signs reversed. We have chosen ε = 10^{−8} to avoid imaginary-valued entries.\nThe vector u_0 is the input to two parallel input layers, which are fully connected linear layers that transport the information in u_0 to another vector space R^{n_h} (in our tests, we adopted n_h = 6). Their outputs, as usual, are defined as u_1 = W_{01} u_0 + b_{01} and u_2 = W_{02} u_0 + b_{02}. The logarithm operation was adopted to let the linear layers evaluate products and divisions between powers of the inputs by simply summing and subtracting them. Moreover, as the output is positive, working in the logarithmic vector space allows us to use a wider range of numerical values. Subsequently, let us define the normalized vectors ū_1 = u_1/‖u_2‖ and ū_2 = u_2/‖u_2‖ to calculate the scalar projection of ū_1 onto ū_2 and clip the result to some interval [τ_m, τ_M], yielding the log-step size\nτ_k = clip_{τ_m}^{τ_M}(ū_2^T ū_1) =: p(u_1, u_2) (3)\nFinally, the selected step size is obtained as t_k = e^{τ_k}. To geometrically interpret this, we sketch three different scenarios in Fig. 2. The dashed lines represent orthogonal axes spanned by some arbitrary ū_2, and the gray strip represents the interval [τ_m, τ_M] along the direction of ū_2 whence τ_k should be taken. When Linear Layer 1 maps u_0 into u′_1, the scalar projection of ū′_1 onto ū_2 is beyond the maximal τ_M, so τ_k is clipped to it. In the same way, for ū′′′_1 the step size will be the minimal one, t_k = e^{τ_m}, whereas for the intermediate ū′′_1 we have τ_k ∈ (τ_m, τ_M). The two layers, jointly trained, should learn how to position ū_1 and ū_2 in the lifted space to represent important directional information of d_k and g_k by looking at similar optimization tasks, being thus able to produce suitable step sizes.\n[Figure 2: sketch of the three scenarios ū′_1, ū′′_1, ū′′′_1 projected onto the axis spanned by ū_2, with the clipping interval [τ_m, τ_M].]\nThe inner products in u_0 involve the same local quantities that appear, for example, in the sufficient decrease Wolfe condition for backtracking line search, which makes our policy comparable to such methods in the sense that π(·; θ) does not require additional information to operate.\nHowever, notice that the clip function is not suitable for training, given that it is non-differentiable and gradients cannot be backpropagated through it. Fortunately, the clip operation (3) can be cast as a convex optimization problem\nτ_k = arg min_{τ∈R} ‖u_2 τ − u_1‖² (4)\ns.t. τ_m ≤ τ ≤ τ_M (5)\nallowing τ_k to be calculated by a convex optimization layer, defined here as a CVX Layer (Agrawal et al., 2019). This last layer can output the solution to a parameter-dependent convex optimization problem. For the special case where a solution is not differentiable with respect to the input (e.g., in our case, when an inequality constraint is active), the automatic differentiation procedure delivers a heuristic quantity that can be employed as a gradient. The use of a CVX Layer is therefore convenient for training our policy but, on the other hand, using Eq. 
(3) in its place when applying the already-trained policy significantly speeds up the step-size evaluation, compared to solving (4).\nIt is important to remark that this policy is defined as independent of both the memory length m of Algorithm 1 and the problem dimension n. Additionally, the lower and upper limits for the log-step size are τ_m and τ_M, respectively, and can also be learned. In this work, however, we chose τ_m = −3 and τ_M = 0, letting t_k ∈ [0.0497, 1]. This interval is comprehensive enough to let our method be compared in a fair way to backtracking line searches. Moreover, when we allowed τ_M to be learned in our tests, it converged to values that were very close to τ_M = 0, indicating that 1 was already an adequate upper limit for the step size." }, { "heading": "5 TRAINING THE POLICY", "text": "The L-BFGS-π procedure can be trained by truncated backpropagation through time (TBPTT), in a similar way to Andrychowicz et al. (2016). From this point on, training the optimizer is referred to as the outer optimization problem, whereas an instance of a task in the form of (1) is called the inner optimization problem. Therefore, this outer problem is defined as\nminimize_θ F(θ) := E_{x_0∼R^n} E_{f∼T} [ Σ_{k=1}^{K} w_k f(x_k) ] (6)\ns.t. x_{k+1} = x_k + π(d_k, g_k, s_{k−1}, y_{k−1}; θ) d_k (7)\nwhere d_k is given by Algorithm 1, K ∈ N is the truncated horizon over which optimization steps are unrolled, w_k, k = 1, . . . , K, are weight scalars, herein considered w_k = 1, and T is some set of tasks formed by inner objective functions f(x) to be optimized. In (6), the innermost expected value is approximated by sampling tasks within a training set T_train, one at a time, and unrolling the optimization for K inner steps for some random x_0 with i.i.d. components. One outer optimization step consists of performing K inner steps, computing a gradient ∇_θ F(θ) for the outer optimization problem, and updating θ, in our case by ADADELTA with a learning rate equal to 1 (Zeiler, 2012). To assure that different orders of magnitude of x are seen during this training, we set the initial point for the next outer step to be the last iterate from the previous one, i.e., x_0 ← x_K, and perform another whole outer optimization step. This is repeated for T outer steps or until ‖g_k‖ < ε = 10^{−10}, when a new random x_0 is then sampled. Backpropagation to calculate ∇_θ F(θ) happens through all operations with the exception of the inner gradient evaluation g_k, which is considered an external input. Double floating-point precision is used to assure accurate results in the comparisons." }, { "heading": "6 EXAMPLE: TRAINING A CLASSIFIER ON MNIST", "text": "All tests were carried out with the aid of PyTorch (Paszke et al., 2019) to backpropagate gradients and of cvxpylayers (Agrawal et al., 2019) to implement the CVX layers. They were run on an Intel Xeon Gold 6132 equipped with an NVidia V100. First, we carried out tests on convex optimization problems, namely quadratic functions and logistic regression problems, but no significant difference was noticed and these were, therefore, omitted.\nIn this example, we trained an MLP with n_l = 1 hidden layer with n_u = 20 units and sigmoid activation functions to classify images of digits in the MNIST database (LeCun et al., 1998). This model is adapted from one example in Andrychowicz et al. (2016). We used a full-batch gradient at every iteration, even though stochastic optimization is generally the most common strategy employed in similar tasks. 
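Before discussing the results, the components of Section 4 can be summarized in a minimal PyTorch sketch of the policy forward pass. It uses the hard clip of Eq. (3), which the paper replaces during training by the differentiable CVX Layer solving (4)-(5). The module and variable names are ours; we also assume that u_0 consists of the 10 unique entries of the symmetric 4×4 Gram matrix X, and that the columns of X are ordered as [d g s y], so the specific sign-reversed positions are an assumption consistent with the text.

```python
import torch
import torch.nn as nn

class StepSizePolicy(nn.Module):
    """Sketch of pi(d_k, g_k, s_{k-1}, y_{k-1}; theta) from Section 4."""

    def __init__(self, n_h=6, tau_m=-3.0, tau_M=0.0, eps=1e-8):
        super().__init__()
        self.lin1 = nn.Linear(10, n_h)  # 10 = unique entries of the 4x4 Gram matrix
        self.lin2 = nn.Linear(10, n_h)
        self.tau_m, self.tau_M, self.eps = tau_m, tau_M, eps

    def forward(self, d, g, s, y):
        V = torch.stack([d, g, s, y], dim=1)   # (n, 4)
        X = V.T @ V                            # (4, 4) matrix of inner products
        iu = torch.triu_indices(4, 4)
        x = X[iu[0], iu[1]]                    # upper triangle, 10 entries
        sign = torch.ones_like(x)
        # reverse the signs of the superdiagonal entries, as described in the text
        # (positions of (0,1), (1,2), (2,3) in the flattened upper triangle)
        sign[[1, 5, 8]] = -1.0
        u0 = torch.log(torch.clamp(sign * x, min=self.eps))  # dotln(.)
        u1, u2 = self.lin1(u0), self.lin2(u0)
        nrm = u2.norm()
        tau = (u2 / nrm) @ (u1 / nrm)          # scalar projection as in Eq. (3)
        tau = torch.clamp(tau, self.tau_m, self.tau_M)
        return torch.exp(tau)                  # step size t_k
```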
In any case, our main interest in this example is not the classification problem itself but rather to analyze the optimization problem and how our deterministic algorithm performs on it.\nThe n = 16,280 parameters defining an MLP are concatenated in x, and f(x) is the associated cross-entropy loss function for a given set of images. This is known to be a non-convex optimization problem, mainly because of the presence of non-linear activation functions. A training set of tasks T_train was constructed by randomly grouping images in the MNIST training set into 60 batches of N = 1,000 images. For each of these batches one initial condition x_0 was sampled, which altogether compose T_train with 60 tasks. The policy π(·; θ) was trained for 50 epochs, with K = 50 and T = 8, and its performance was compared to other methods, namely, ADAM, RMSProp, L-BFGS with a BTLS, and the baseline L-BFGS (referred to as L-BFGS-B). For running Algorithm 1, we selected m = 5. The learning rates of ADAM and RMSProp were heuristically tuned to yield fast convergence by exhaustive search within the set {i × 10^j : i ∈ {1, 3}, j ∈ {−3, . . . , −1}}, and the values 0.03 and 0.01 were used, respectively. The BTLS parameters c_1 and c_2 were searched in the set {0.25, 0.5, 0.75}², and c_1 = 0.25, c_2 = 0.5 were chosen, associated with the best results, i.e., the fastest convergence on tasks in T_train. The initial step size for the BTLS was t_k = 1. The following comparisons were performed on a test set of tasks T_test built similarly to T_train but considering all images in the MNIST test set split into 10 batches of N = 1,000 images, and 100 random samples of x_0 were generated for each batch, resulting in 1,000 tasks. The optimization was performed for K = 800 steps or until ‖g_k‖ < 10^{−8}. The first 3 samples for each optimizer were considered “warm-up” runs and, therefore, were discarded to avoid having the time measurement affected by any initial overhead.\nThe objective function value for three selected tasks is shown in the upper plots of Fig. 3, along with the corresponding step sizes selected by π(·; θ) and the BTLS in the bottom ones. For Task 1, L-BFGS-π was successful in attaining lower values of f(x) when compared to the other algorithms. For some tasks, such as Task 2, poorer performance was noticed when compared to the other L-BFGS approaches and, for some other tasks such as Task 3, Adam and RMSprop are more successful than the others. This suggests that none of these methods outperforms the others in the general case. Notice that in these figures, the spikes in the curves associated with L-BFGS-π and with the baseline L-BFGS-B represent steps at which t_k = π(·; θ) and t_k = 1, respectively, were not step sizes that provided a decrease. Results for other tasks are presented in Appendix B.\nFor each individual task, the first instant of time t_f at which the optimization procedure attained the precision-based stop criterion ‖g_k‖ < ε, for different values of ε, was computed for all four optimization procedures, and a comparison between our methodology and the others is shown in Fig. 4. These plots compare algorithms two-by-two, and the way to interpret them is by observing that a point above the blue line represents a task for which the algorithm associated with the x axis reached the precision criterion faster, and vice-versa. If no such t_f exists, we define t_f = ∞, and the two subplots, on the right and on the top, are used to represent tasks for which the given precision was reached by one of the algorithms but not by the other. 
Tasks for which the criterion was not reached by either algorithm are not displayed. Notice that better precision values were reached by our approach when compared to ADAM and RMSProp, but a similar performance was obtained when compared to the heuristically designed backtracking line search method, which is the gold standard.\nAdditionally, Table 1 presents the percentage of times that L-BFGS-π reached the defined precision before the other methods, characterizing a “win”, and that both methods reached the precision at the exact same time (which is very unlikely) or did not reach this precision after K = 800 inner steps, denoting a “tie”. To investigate whether our policy is able to generalize and perform well on higher-dimensional problems, we also present these results for (n_l, n_u) equal to (2, 400) and (4, 800), characterizing problems of size n = 637,600 and n = 128,317,600, respectively. Different values of ε were considered, as smaller values of ‖g_k‖ were reached in these last two cases. For (n_l, n_u) = (1, 20), which contains problems of the same dimension as those seen during training, L-BFGS-π clearly outperforms RMSProp and ADAM, whereas it presents a slightly faster convergence than L-BFGS-B and L-BFGS-BTLS for smaller ε. Unfortunately, for higher-dimensional problems, the proposed policy did not achieve the same level of performance as on the problem it was trained for.\nIn spite of that, given the non-convexity of this problem, it is also important to observe the minimum values obtained for f(x) by each algorithm. As the proposed policy does not assure a decrease of the objective at each iteration, instead of the final value f(x_K) we looked at f_* := min_k f(x_k), which can easily be stored and updated during the optimization. An analogous discussion is presented in Appendix C considering only the final values f(x_K), and the same conclusions are drawn. More than simply looking at the minimum values, we would like to verify whether L-BFGS-π attains lower values f_* when compared to other algorithms. To this end we present the index\nI_a(f) = ln( f_*^{[a]} / f_*^{[L-BFGS-π]} ) (8)\nwhere f_*^{[a]} represents the minimum value reached by some algorithm a for f(x). Hence, I_a(f) > 0 implies that L-BFGS-π performs better than a on the task associated with f(x) and its initial condition. Box plots of the obtained values for all tasks and each one of the other algorithms are presented in Fig. 5. In these plots we can notice that L-BFGS-π reaches, on average, better values than all of its competitors. Also, ADAM and RMSprop generalized very poorly to higher-dimensional problems, indicating that some re-tuning is required for these algorithms. Under this perspective, L-BFGS-π had a similar performance to L-BFGS-BTLS and L-BFGS-B, despite the presence of some outliers indicating cases where our policy reached bad local minima. This shows how the proposed policy was successful in learning to provide step sizes in a single shot that are as good as those generated by a heuristically designed line search, which benefits from the possibility of re-evaluating the objective function as much as needed.\nFinally, as a last experiment, we applied the learned policy and these competitors to a class of tasks comprising the training of a Convolutional Neural Network (CNN) to classify images in CIFAR-10; see (Krizhevsky et al., 2009). 
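As a small aside, the index in Eq. (8) is straightforward to compute from the per-task minima (the array names below are illustrative):

```python
import numpy as np

def performance_index(f_star_a, f_star_pi):
    """I_a(f) = ln(f_*^[a] / f_*^[L-BFGS-pi]) per task; positive values
    mean L-BFGS-pi reached a lower minimum than algorithm a."""
    return np.log(np.asarray(f_star_a) / np.asarray(f_star_pi))
```

The box plots in Fig. 5 (and Fig. 6 below) summarize these values over all test tasks.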
The adopted CNN architecture is described in Zhang (2016), but the sigmoid activation functions were replaced by ReLU to make this problem even more distant from the one π was trained on. A training and a test set of tasks, T^C_train and T^C_test, were built similarly to T_train and T_test but using images from CIFAR-10 instead. Evaluating these algorithms on T^C_test and computing the index I_a(f) for each task allows us to present the first box plot in Fig. 6. This figure indicates that π does not perform as well as before on these problems. This could be expected, as a different architecture directly affects the nature of the objective function. To investigate whether the learned policy π can be used as a warm start for training a new policy π^C, we perform additional training steps on π corresponding to 10 epochs on the training set T^C_train, but eliminating 5/6 of its tasks. This is done to show that, even with very low effort placed in this retraining phase and considering less training data, we can benefit from previous learning to speed up new training procedures. The corresponding results are presented in the second box plot of Fig. 6, which shows that the new policy π^C performs comparably to the competitors. Certainly, further investigation is required, but this suggests that some learning can be transferred across distinct problem domains." }, { "heading": "7 CONCLUSIONS", "text": "In this work we demonstrate how to build and train a neural network to work as a step-size policy for the L-BFGS algorithm. The step sizes provided by this policy are of the same quality as those of a backtracking line search, hence making the latter superfluous. Moreover, L-BFGS with our step-size policy outperforms, in wall-clock time and optimal/final value, ADAM and RMSprop with heuristically tuned parameters in training classifiers for the MNIST database. Also, we showed how a learned policy can be used to warm-start the training of new policies to operate on different classes of problems. In future work, we intend to extend this result to stochastic optimization, allowing us to learn policies to determine, for example, learning rates in other classic machine learning algorithms." }, { "heading": "B SAMPLES OF TEST TASKS", "text": "In this appendix we provide in Fig. 7 the objective function values f(x_k) obtained in our tests for the first 10 tasks in T_test. Differently from the results in Fig. 3, which were chosen by inspection, the plots in Figure 7 should represent a more uniform visualization of the policy behavior in this set." }, { "heading": "C FINAL VALUE ANALYSIS", "text": "Here we present the results regarding the index I_a(f) defined in (8), but in the case where one chooses to define f_*^{[a]} based on the final value f(x_K) obtained after applying the algorithm a for K = 800 iterations. The box plot in Fig. 5 is reconstructed and presented in Fig. 8. The conclusion drawn from this analysis is the same as the one obtained with the former definition of f_*^{[a]}, based on the minimum value of f(x_k) over all k. However, in the context of deterministic non-linear optimization it is a good idea to keep the best visited iterate so far and allow the algorithm to explore other areas of the decision space. This is the reasoning that motivates considering the minimum over the iterations in the main analysis." } ]
2020
null
SP:7a92beaba926a93a627208abebe4a455ae3e0400
[ "This paper proposes a new calibration error measurement named UCE (Uncertainty Calibration Error) for deep classification models. It consists in measuring calibration against \"perfect calibration\" (i.e., the uncertainty provided is equivalent to the classification error at all levels in [0, 1]), relying on the normalized entropy for multi-class classification. This UCE is well justified for classification problems with several classes to process, where the entropy is demonstrated to be asymptotically equivalent to the classification (top-1) error. A point in favor of this UCE metric is that it has some interpretability properties in terms of its value, and it is said to be robust to the number of bins used." ]
Various metrics have recently been proposed to measure uncertainty calibration of deep models for classification. However, these metrics either fail to capture miscalibration correctly or lack interpretability. We propose to use the normalized entropy as a measure of uncertainty and derive the Uncertainty Calibration Error (UCE), a comprehensible calibration metric for multi-class classification. In our experiments, we focus on uncertainty from variational Bayesian inference methods and compare UCE to established calibration errors on the task of multi-class image classification. UCE avoids several pathologies of other metrics, but does not sacrifice interpretability. It can be used for regularization to improve calibration during training without penalizing predictions with justified high confidence.
[]
[ { "authors": [ "Arsenii Ashukha", "Alexander Lyzhov", "Dmitry Molchanov", "Dmitry Vetrov" ], "title": "Pitfalls of in-domain uncertainty estimation and ensembling in deep learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern Recognition and Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural network", "venue": "In ICML, pp", "year": 2015 }, { "authors": [ "Glenn W Brier" ], "title": "Verification of forecasts expressed in terms of probability", "venue": "Monthly Weather Review,", "year": 1950 }, { "authors": [ "Chenyi Chen", "Ari Seff", "Alain Kornhauser", "Jianxiong Xiao" ], "title": "Deepdriving: Learning affordance for direct perception in autonomous driving", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Andre Esteva", "Brett Kuprel", "Roberto A Novoa", "Justin Ko", "Susan M Swetter", "Helen M Blau", "Sebastian Thrun" ], "title": "Dermatologist-level classification of skin cancer", "venue": null, "year": 2017 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical Reparameterization with Gumbel-Softmax", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Max Welling" ], "title": "Variational dropout and the local reparameterization trick", "venue": "In NeurIPS, pp", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny", "venue": null, "year": 2009 }, { "authors": [ "Meelis Kull", "Miquel Perello Nieto", "Markus Kängsepp", "Telmo Silva Filho", "Hao Song", "Peter Flach" ], "title": "Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Ananya Kumar", "Percy S Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Sunita Sarawagi", "Ujjwal Jain" ], "title": "Trainable calibration measures for neural networks from kernel mean embeddings", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" 
], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Multiplicative normalizing flows for variational bayesian neural networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "In Bayesian Deep Learning Workshop,", "year": 2016 }, { "authors": [ "Wesley J Maddox", "Pavel Izmailov", "Timur Garipov", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory F Cooper", "Milos Hauskrecht" ], "title": "Obtaining Well Calibrated Probabilities Using Bayesian Binning", "venue": "In AAAI,", "year": 2015 }, { "authors": [ "Jeremy Nixon", "Michael W Dusenberry", "Linchuan Zhang", "Ghassen Jerfel", "Dustin Tran" ], "title": "Measuring calibration in deep learning", "venue": "In CVPR Workshops,", "year": 2019 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "D. Sculley", "Sebastian Nowozin", "Joshua Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Gabriel Pereyra", "George Tucker", "Jan Chorowski", "Łukasz Kaiser", "Geoffrey Hinton" ], "title": "Regularizing neural networks by penalizing confident output distributions", "venue": "In ICLR Workshop,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": null, "year": 1929 }, { "authors": [ "Juozas Vaicenavicius", "David Widmann", "Carl Andersson", "Fredrik Lindsten", "Jacob Roll", "Thomas Schön" ], "title": "Evaluating model calibration in classification", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "Sida Wang", "Christopher Manning" ], "title": "Fast dropout training", "venue": "In ICML, pp", "year": 2013 }, { "authors": [ "Backprop Blundell" ], "title": "Gaussian distribution with diagonal covariance matrix as variational posterior q(w|θ), parameterized by mean μ and standard deviation σ, where θ = {μ,σ}. A sample of the weights can be obtained by sampling a multivariate unit Gaussian and", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Advances in deep learning have led to superior accuracy in classification tasks, making deep learning classifiers an attractive choice for safety-critical applications like autonomous driving (Chen et al., 2015) or computer-aided diagnosis (Esteva et al., 2017). However, the high accuracy of recent deep learning models alone is not sufficient for such applications. In cases where serious decisions are made upon a model's predictions, it is essential to also consider the uncertainty of these predictions. We need to know if a prediction is likely to be incorrect or if invalid input data is presented to a deep model, e.g., data that is far away from the training domain or obtained from a defective sensor. The consequences of a false decision based on an uncertain prediction can be fatal.\nA natural expectation is that the certainty of a prediction should be directly correlated with the quality of the prediction. In other words, predictions with high certainty are more likely to be accurate than uncertain predictions, which are more likely to be incorrect. A common misconception is the assumption that the estimated softmax likelihood can be directly used as a confidence measure for the predicted class. This expectation is dangerous in the context of critical decision-making. The estimated likelihood of models trained by minimizing the negative log-likelihood (i.e., cross entropy) is highly overconfident; that is, the estimated likelihood is considerably higher than the observed frequency of accurate predictions with that likelihood (Guo et al., 2017)." }, { "heading": "2 UNCERTAINTY ESTIMATION", "text": "In this work, we focus on uncertainty from approximately Bayesian methods. We assume a general multi-class classification task with C classes. Let the input x ∈ X be a random variable with corresponding label y ∈ Y = {1, . . . , C}. Let f_w(x) be the output (logits) of a neural network with weight matrices w, and with model likelihood p(y = c | f_w(x)) for class c, which is obtained from the probability vector p = σ_SM(f_w(x)), computed by passing the model output through the softmax function σ_SM(·). From a frequentist perspective, the softmax likelihood is often interpreted as the confidence of a prediction. Throughout this paper, we follow this definition.\nThe frequentist approach assumes a single best point estimate of the parameters (or weights) of a neural network. In frequentist inference, the weights of a deep model are obtained by maximum likelihood estimation (Bishop, 2006), and the normalized output likelihood for an unseen test input does not consider uncertainty in the weights (Kendall & Gal, 2017). Weight uncertainty (also referred to as model or epistemic uncertainty) is a considerable source of predictive uncertainty for models trained on data sets of limited size (Bishop, 2006; Kendall & Gal, 2017). Bayesian neural networks and recent advances in their approximation provide valuable mathematical tools for the quantification of model uncertainty (Gal & Ghahramani, 2016; Kingma & Welling, 2014). Instead of assuming the existence of a single best parameter set, we place distributions over the parameters and want to consider all possible parameter configurations, weighted by their posterior. More specifically, given a training data set D and an unseen test sample x with class label y, we are interested in evaluating the predictive distribution p(y | x, D) = ∫ p(y | x, w) p(w | D) dw. 
This integral requires evaluating the posterior p(w | D), which involves the intractable marginal likelihood. A possible solution to this is to approximate the posterior with a simpler, tractable distribution q(w) by optimization.\nIn this work, we incorporate the following approximately Bayesian methods, which we use in our experiments to obtain weight uncertainty: Monte Carlo (MC) dropout (Gal & Ghahramani, 2016), Gaussian dropout (Wang & Manning, 2013; Kingma et al., 2015), Bayes by Backprop (Blundell et al., 2015), SWA-Gaussian (Maddox et al., 2019), and (although not Bayesian) deep ensembles (Lakshminarayanan et al., 2017). A short review of each of the methods can be found in Appendix A.2." }, { "heading": "3 RELATED CALIBRATION METRICS", "text": "Expected Calibration Error The expected calibration error (ECE) is one of the most popular calibration error metrics and estimates model calibration by binning the predicted confidences p̂ = max_c p(y = c | x) into M bins from equidistant intervals and comparing them to the average accuracies per bin (Naeini et al., 2015; Guo et al., 2017):\nECE = Σ_{m=1}^{M} (|B_m| / n) |acc(B_m) − conf(B_m)| , (1)\nwith number of test samples n and acc(B) and conf(B) denoting the accuracy and confidence of bin B, respectively. Several recent works have described severe pathologies of the ECE metric (Ashukha et al., 2020; Nixon et al., 2019; Kumar et al., 2019). Most notably, the ECE metric is minimized by a model constantly predicting the marginal distribution of the majority class, which makes it impossible to directly optimize it (Kumar et al., 2018). Additionally, the ECE only considers the maximum class probability and ignores the remaining entries of the probability vector p(x).\nAdaptive Calibration Error Nixon et al. (2019) proposed the adaptive calibration error (ACE) to address the issue of fixed bin widths of ECE-like metrics. For models with high accuracy or overconfidence, most of the predictions fall into the rightmost bins, whereas only very few predictions fall into the rest of the bins. ACE spaces the bins such that an equal number of predictions contributes to each bin. The final ACE is computed by averaging over per-class ACE values to address the issue raised by Kull et al. (2019). However, this makes the metric more sensitive to the manually selected number of bins M, as the number of bins effectively becomes C · M, with number of classes C. Using fixed bin widths, the number of samples in the sparsely populated bins is further reduced, which increases the variance of each measurement per bin. Using adaptive bins, this results in the lower-confidence bins spanning a wide range of values, which increases the bias of the bin's measurement.\nNegative Log-Likelihood Deep models for classification are usually trained by minimizing the average negative log-likelihood (NLL):\nNLL = (1/N) Σ_{i=1}^{N} −log p(y = y_i | x_i) . (2)\nThe NLL is also commonly used as a metric for measuring the calibration of uncertainty. However, the NLL is minimized by increasing the confidence max_c p(y = c | x), which favors over-confident models and models with higher accuracy (Ashukha et al., 2020). 
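For concreteness, minimal NumPy sketches of the two estimators defined so far, the ECE of Eq. (1) and the average NLL of Eq. (2), could look as follows (function and variable names are our own; probs is an (N, C) array of predicted probabilities and labels an (N,) array of class indices):

```python
import numpy as np

def ece(probs, labels, n_bins=15):
    """Expected calibration error, Eq. (1), with equidistant confidence bins."""
    conf = probs.max(axis=1)                  # predicted confidence p_hat
    pred = probs.argmax(axis=1)               # predicted class y_hat
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(labels)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |B_m|/n * |acc(B_m) - conf(B_m)|
            err += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return err

def nll(probs, labels, eps=1e-12):
    """Average negative log-likelihood, Eq. (2)."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))
```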
The NLL is therefore unable to compare the calibration of models with different accuracies, and training a model by minimizing the NLL does not necessarily lead to good calibration.\nBrier Score The average Brier score is another popular metric for assessing the quality of predictive uncertainty and is defined as (Brier, 1950; Lakshminarayanan et al., 2017)\nBS = (1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} (1(y_i = c) − p(y = c | x_i))² . (3)\nSimilarly to the NLL, the Brier score favors high probabilities for correct predictions and low probabilities for incorrect predictions. Thus, models with higher accuracy tend to show a better Brier score, which makes the metric unsuitable for comparing the quality of uncertainty of models with different accuracies.\nMaximum Mean Calibration Error Common recalibration methods are applied post-hoc, e.g., temperature scaling on a separate calibration set. Kumar et al. (2018) proposed the maximum mean calibration error (MMCE), a trainable surrogate for the calibration error. It is defined as\nMMCE²(D) = Σ_{i,j∈D} (1(ŷ_i = y_i) − p̂_i)(1(ŷ_j = y_j) − p̂_j) k(p̂_i, p̂_j) / m² (4)\nover a batch D ⊂ D with batch size m, matrix-valued universal kernel k, and ŷ = argmax_c p(y = c | x). Trainable calibration metrics are used in joint optimization with the negative log-likelihood\nargmin_w Σ_D NLL(D, w) + λ MMCE(D, w) . (5)\nKumar et al. (2018) claim to have addressed the issue that the ECE is unsuitable for direct optimization due to its high discontinuity in w. However, MMCE is also minimized by a model constantly predicting the marginal distribution of the classes. This leads to a subpar logit temperature when training with MMCE, and temperature scaling can further reduce miscalibration (Kumar et al., 2018)." }, { "heading": "4 UNCERTAINTY CALIBRATION ERROR", "text": "To give an insight into our general approach to measuring the calibration of uncertainty, we will first revisit the definition of perfect calibration of confidence (Guo et al., 2017) and show how this concept can be extended to our definition of calibration of uncertainty.\nLet ŷ = argmax p be the most likely class prediction of input x with confidence p̂ = max p and true label y. Then, following Guo et al. (2017), perfect calibration of confidence is defined as\nP[ŷ = y | p̂ = α] = α, ∀α ∈ [0, 1] . (6)\nThat is, the probability of a correct prediction ŷ = y given the prediction confidence p̂ should exactly correspond to the prediction confidence. Instead of using only the probability of the predicted class, we use the entropy of p to express prediction uncertainty:\nH(p) = −Σ_{c=1}^{C} p(c) log p(c) . (7)\nLet\nq(k) := (P[y = 1 | argmax p(x) = k], . . . , P[y = C | argmax p(x) = k]) (8)\nbe a probability vector of true marginal class probabilities for all inputs x predicted with class k. Consider the following example: Three i.i.d. inputs x_{1:3} in a binary classification task with ground truth labels {1, 1, 2} have all been predicted with argmax p(x_{1:3}) = 1. Then, q(1) = (2/3, 1/3). With this, we define a model to be perfectly calibrated if\nH(q(k)) = H(p | argmax p = k) ∀ k ∈ {1, . . . , C} . (9)\nFrom this, we derive an error metric for the calibration of uncertainty:\nE_p[ |H(q) − H(p)| ] . (10)\nHowever, this metric and the use of the entropy as a measure of uncertainty lack interpretability, as the entropy scales with the number of classes C. This does not allow comparing the uncertainty or the calibration of models trained on different data sets. 
Therefore, we propose to use the normalized entropy to scale the values to a range between 0 and 1:\nH̃(p) := −(1/log C) Σ_{c=1}^{C} p(c) log p(c) , H̃ ∈ [0, 1] . (11)\nWe further increase interpretability and argue that the normalized entropy should correlate with the model error. From Eq. (6) and Eq. (11), we define perfect calibration of uncertainty as\nP[ŷ ≠ y | H̃(p) = α] = α, ∀α ∈ [0, 1] . (12)\nThat is, in a batch of inputs that are all predicted with an uncertainty of, e.g., 0.2, a top-1 error of 20% is expected. The confidence is interpreted as the probability of belonging to a particular class, which should naturally correlate with the model error of that class. This characteristic does not generally apply to the entropy, and thus the question arises why the entropy should correspond with the model error.\nProposition 1. The normalized entropy (uncertainty) H̃(p) approaches the top-1 error in the limit of the number of classes C if the model p is well-calibrated.\nProof.\nlim_{C→∞} H̃(p) = (1 − p̂) (13)\nThe top-1 error equals (1 − p̂) if the model is perfectly calibrated in the sense of Eq. (6). For a detailed proof, see Appendix A.1.\nThus, the normalized entropy gives us an intuitive and interpretable measure of uncertainty. If a model is perfectly calibrated, H̃ corresponds to the top-1 error. We propose the following notion to quantify miscalibration of uncertainty:\nE_{H̃}[ |P[ŷ ≠ y | H̃(p) = α] − α| ], ∀α ∈ [0, 1] . (14)\nWe refer to this as the Expected Uncertainty Calibration Error (UCE) and approximate it with\nUCE := Σ_{m=1}^{M} (|B_m| / n) |err(B_m) − uncert(B_m)| , (15)\nusing the same binning scheme as in the ECE estimation. The error per bin is defined as\nerr(B_m) := (1/|B_m|) Σ_{i∈B_m} 1(ŷ_i ≠ y_i) , (16)\nwhere 1(ŷ_i ≠ y_i) = 1 for an incorrect prediction and 1(ŷ_i = y_i) = 0 otherwise. The uncertainty per bin is defined as\nuncert(B_m) := (1/|B_m|) Σ_{i∈B_m} H̃(p_i) . (17)\nProperties of UCE The proposed UCE metric solves several problems of other metrics. First, the UCE is not zero for a model constantly predicting the marginal class distribution. Estimators of metrics with this pathology (e.g., ECE, MMCE) suffer from varying bias and therefore do not allow comparing the miscalibration of different models (Ashukha et al., 2020; Vaicenavicius et al., 2019). In contrast to ACE, UCE is not highly sensitive to the number of bins and provides a consistent ranking of different models for the same classification task (see Fig. 1). Additionally, UCE can be used as a trainable regularizer in a similar manner to MMCE. During training, we compute the UCE over mini-batches D ⊂ D and add it to the NLL training objective\nargmin_w Σ_D NLL(D, w) + λ UCE(D, w) , (18)\nweighted by a factor λ. UCE is zero for an optimal model and thus does not penalize highly confident predictions of models with high accuracy, which is a major disadvantage of plain entropy regularization (Pereyra et al., 2017). Predictions with low uncertainty but high top-1 error are penalized, whereas predictions with high accuracy are encouraged to have low uncertainty." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate the proposed uncertainty calibration error on multi-class image classification on CIFAR-10 with ResNet-34 and on CIFAR-100 with ResNet-50 (He et al., 2016; Krizhevsky & Hinton, 2009). The feature extractor of ResNet is used as implemented in PyTorch 1.6 (Paszke et al., 2019), and the last linear layer is implemented using the different Bayesian approximations from § 2. All models were trained from random initialization. We employed early stopping at the highest validation set accuracy. 
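Putting Eqs. (11) and (15)-(17) together, a minimal NumPy sketch of the UCE estimator could look as follows (names are our own illustrative choices; probs is an (N, C) array of predicted probabilities, e.g., averaged over MC samples, and labels an (N,) array of class indices):

```python
import numpy as np

def normalized_entropy(probs, eps=1e-12):
    """H_tilde(p) from Eq. (11), in [0, 1]."""
    C = probs.shape[1]
    return -np.sum(probs * np.log(probs + eps), axis=1) / np.log(C)

def uce(probs, labels, n_bins=15):
    """Uncertainty calibration error, Eqs. (15)-(17)."""
    unc = normalized_entropy(probs)
    errors = (probs.argmax(axis=1) != labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(labels)
    total = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # first bin is closed on the left so that H_tilde = 0 is counted
        mask = (unc <= hi) & ((unc > lo) | (i == 0))
        if mask.any():
            # |B_m|/n * |err(B_m) - uncert(B_m)|
            total += mask.sum() / n * abs(errors[mask].mean() - unc[mask].mean())
    return total
```

In practice, probs would come from the approximate predictive distribution, e.g., the mean softmax output over several stochastic forward passes with MC dropout.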
More details on the training procedure and a link to our source code can be found in Appendix A.3.\nFirst, we compute the accuracies and all calibration error metrics from § 3, together with the UCE, on the test sets of CIFAR-10/100 for all models. We investigate the effect of the number of bins in the estimators of the metrics involving binning and analyze the ranking of different models under a varying softmax temperature \tau, where p = \sigma_{\mathrm{SM}}(\tau^{-1} f_w(x)). Finally, we train a ResNet on CIFAR-10/100, SVHN, and Fashion-MNIST with added calibration error regularization as in Eq. (5) and (18). We compare UCE regularization (\lambda = 10) to regularization with MMCE (\lambda = 10) and to the confidence penalty \operatorname*{argmin}_{w} \sum_{B} \mathrm{NLL}(B, w) + \lambda H(B, w) with \lambda = 0.1, which penalizes the entropy of the probability vector p of each prediction (Pereyra et al., 2017). We combine the regularization experiments with post-hoc calibration using temperature scaling (Guo et al., 2017).\nAdditionally, we analyze the utility of the normalized entropy as a measure of uncertainty and perform rejection and out-of-distribution (OoD) detection experiments using \tilde{H}. We define an uncertainty threshold H_max and reject all predictions from the test set where \tilde{H}(p) > H_max. A decrease in false predictions on the remaining test set is expected. To demonstrate the OoD detection ability, we provide images from CIFAR-100 to a deep model trained on CIFAR-10 (note that the two CIFAR data sets share no classes). In this experiment, we compose a batch of 100 random samples from the test set of the training domain and stepwise replace images with out-of-distribution data. In practice, it is expected that models are applied to a mix of known and unknown classes. After each step, we evaluate the mean batch uncertainty and expect that the mean uncertainty increases monotonically as a function of the fraction of OoD data." }, { "heading": "5.1 RESULTS", "text": "In this section, the results of the experimental setups described above are presented and discussed.\nComparison of Calibration Error Metrics Table 2 shows test set accuracy and all calibration error results for all model/data set configurations. Without any post-hoc calibration, such as temperature scaling, all metrics provide the same ranking of the models. The deep ensemble and SWAG perform best in terms of test set accuracy and calibration of uncertainty. The Brier score and the NLL are both highly sensitive to the model accuracy, which is especially apparent on CIFAR-10. For the first three models with similar accuracy, the Brier scores differ only marginally. Thus, both the Brier score and the NLL are unsuitable for comparing the calibration of different models. Ashukha et al. (2020) propose to use the calibrated NLL at optimal temperature for model comparison. However, Fig. 5-6 plot the metrics over a varying softmax temperature and show that the models with the highest accuracy have the lowest Brier score and NLL, regardless of the temperature. From this we deduce that neither the Brier score nor the NLL should be used for comparison of multi-class calibration, even at optimal temperature. The remaining metrics show a consistent ranking before and after the point of optimal temperature. The metrics ECE, UCE and MMCE have a narrow region in which the optimal temperature for all models can be found. This allows comparing the calibration of models if they are all over- or underconfident. However, all metrics fail at comparing underconfident models to overconfident models (see the model ranking left and right of the optimal temperature in Fig. 6). 
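Since several of these comparisons are made at the optimal softmax temperature, the following sketch shows one simple way to scale logits and grid-search \tau by validation NLL; it is an illustration only, and the helper names are ours:

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll_at(logits, labels, tau):
    # Mean NLL of the temperature-scaled predictions p = softmax(logits / tau).
    probs = softmax(logits / tau)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def find_temperature(logits, labels, taus=np.linspace(0.25, 4.0, 64)):
    # Grid search on a held-out calibration split; any metric (NLL, Brier,
    # UCE, ...) can then be evaluated at the returned optimal temperature.
    return taus[np.argmin([nll_at(logits, labels, t) for t in taus])]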
Fig. 1 shows the effect of the number of bins M in the estimators of ECE, ACE and UCE. Both ECE and ACE are more sensitive to the number of bins and do not provide a consistent ranking of models under a varying bin count. UCE is more robust because fewer bins are populated when \tilde{H} is used as the uncertainty measure (cf. Fig. 10 in the appendix). This can be interpreted as a possible downside of the UCE metric, as the adaptive binning scheme of ACE explicitly addresses sparsely populated bins. However, we argue that a consistent ranking due to robustness against the bin count results in a metric that is more useful in practice.\nUncertainty Regularization Tab. 1 shows the results of ResNet-50 with SWAG trained on CIFAR-100. All regularization methods considerably reduce miscalibration compared to unregularized models. Plain entropy regularization is surprisingly effective on CIFAR-100; however, on CIFAR-10 (see Tab. 1), it increased miscalibration and is generally outperformed by MMCE and UCE regularization at optimal temperature. Therefore, when performing post-hoc temperature scaling, MMCE and UCE regularization are preferable to entropy regularization. UCE regularization can be interpreted as entropy penalization for predictions with low accuracy. As UCE is zero for an optimal model, it encourages the model to reach high accuracy.\nRejection & OoD Detection Fig. 2 (left) shows the top-1 error as a function of a decreasing uncertainty threshold H_max, and (right) shows the mean batch uncertainty under an increasing fraction of OoD data. Robust rejection of uncertain predictions and detection of OoD data based on the normalized entropy \tilde{H}(p) is possible, and \tilde{H} is generally more sensitive to OoD data than the confidence \max p." }, { "heading": "6 CONCLUSION", "text": "We have proposed to measure uncertainty based on the normalized entropy. From this, we derived the uncertainty calibration error, a new metric that avoids several pathologies of existing calibration errors. In our experimental evaluation, we focused on uncertainty from approximate Bayesian methods and deep ensembles. The UCE considers more than just the class with the highest probability, and it is not minimized by a constant model predicting the marginal class distribution. In contrast to the Brier score and the NLL, it allows comparing models with different accuracies. It is not sensitive to a varying number of bins and provides a consistent ranking of models. However, we follow the suggestion of Ashukha et al. (2020) and state that a comparison of calibration between different models should only be done at the optimal softmax temperature. Regularization with UCE during training reduces miscalibration and does not penalize high accuracy or predictions with justified high confidence. UCE regularization with temperature scaling often performed best in our experiments in terms of calibration. The normalized entropy itself is a useful measure of uncertainty and allows for robust rejection of uncertain predictions and detection of OoD data.\nWe hope to have provided a useful new metric for the reliable evaluation of uncertainty estimation. UCE is easy to implement and interpretable, as it expresses the discrepancy between the uncertainty and the model error, which increases the chance of adoption by deep learning practitioners." }, { "heading": "7 REBUTTAL", "text": "In this section, new or updated results from the rebuttal phase are presented. 
We identified the main question of the reviewers as why and where our calibration metric (UCE) is beneficial compared to previous metrics (ECE, MMCE, ACE), and how the normalized entropy is beneficial over the vanilla entropy. We plan the following main changes for the final manuscript:\n• illustrate the advantage of normalized entropy over vanilla entropy\n• add toy experiments to show that UCE is able to capture miscalibration where ECE, ACE and MMCE fail (see results below)\n• add results from experiments on additional data sets (SVHN, Fashion-MNIST)\nTo make room for this content, we propose the removal/reduction (approx. -1 page) of\n• the review of Bayesian methods, referring to previous work instead\n• the experiments regarding temperature (remove Table 1 and Figure 1)\nAdditionally, we will address all minor comments of the reviewers in the final manuscript." }, { "heading": "7.1 UNCERTAINTY FROM NORMALIZED ENTROPY VS. ENTROPY", "text": "The advantages of the normalized entropy over the vanilla entropy in our definition of UCE are twofold: First, the domain of the metric does not scale with the number of classes C, which helps to compare the calibration of models trained on different data sets. Second, the value of the metric is more interpretable: rejecting test samples with \tilde{H} > 0.2 will result in a classification error below 0.2 on the remaining samples if the model is well-calibrated (see Fig. 2). We argue that using the normalized entropy as an uncertainty measure is as interpretable as \max p, but avoids the pathologies of \max p when used in a calibration metric." }, { "heading": "7.2 UCE VS. OTHER CALIBRATION METRICS", "text": "UCE is more reliable than ECE and MMCE because it is based on the normalized entropy and incorporates the predictions of all classes. It is more robust than ACE, as it is significantly less sensitive to binning.\nReliable Uncertainty Detection ECE and MMCE can be minimized by models with an uninformative, constant output (Ovadia et al., 2019). Consider a data set with two classes, 60% belonging to class 1 and 40% to class 2, and a degenerate model that constantly predicts the marginal probabilities p = (0.6, 0.4). This yields 60% correctly classified samples (all of class 1), leading to perfect calibration scores for ECE and MMCE. Class 2, however, is misclassified in 100% of the cases, not in 60% as expected. The miscalibration of class 2 is not reflected in ECE or MMCE. ACE is computed class-wise and is able to capture this miscalibration. UCE is based on the normalized entropy to determine uncertainty and therefore incorporates the predictions of all classes. Fig. 3 visualizes these results; a small numeric check is sketched below. 
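To make this degenerate example concrete, the following check (reusing the uce sketch from Section 5) simulates a constant marginal predictor; note that class 1 of the text corresponds to index 0 here, and that the single accuracy/confidence gap stands in for ECE, so this is an illustration of the claimed behavior rather than the exact binned ECE estimator:

import numpy as np

rng = np.random.default_rng(0)
labels = (rng.random(10_000) > 0.6).astype(int)   # ~60% index 0 ("class 1"), ~40% index 1
probs = np.tile([0.6, 0.4], (len(labels), 1))     # constant marginal prediction

acc = (probs.argmax(axis=1) == labels).mean()     # ~0.6
conf = probs.max(axis=1).mean()                   # exactly 0.6
print(abs(acc - conf))    # ~0: the constant model looks perfectly confidence-calibrated
print(uce(probs, labels)) # clearly > 0: UCE flags the miscalibration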
Robustness to Varying Number of Bins From a reliable calibration metric, we would expect a constant ranking of two models under comparison, independent of varying calibration parameters. However, ACE is highly sensitive to the number of bins and produces arbitrary rankings of models. Fig. 4 highlights this by comparing two differently calibrated random models." }, { "heading": "7.3 UCE REGULARIZATION", "text": "At optimal temperature (as suggested by Ashukha et al. (2020)), UCE and MMCE regularization considerably reduce miscalibration under all employed calibration metrics, outperforming entropy regularization, with UCE achieving the highest accuracy on CIFAR-100, SVHN and Fashion-MNIST (see Tab. 1). We want to stress that UCE, in contrast to MMCE, was not specifically designed for use as a calibration regularizer (Kumar et al., 2018).\nWhy UCE Regularization Works UCE regularization works best when computed class-wise (in a similar manner to ACE): \mathrm{UCE} = \frac{1}{C} \sum_{c=1}^{C} \mathrm{UCE}^{(c)}, where \mathrm{UCE}^{(c)} is computed over the predictions of class c. Consider the following binary classification example: all samples of a batch containing mainly class-1 samples and a few class-2 samples are predicted as class 1 with high confidence. The NLL further pushes the confidence of these predictions towards 1.0, favoring overconfidence, whereas the UCE is only reduced if the confidence of the overconfidently false predictions is reduced." }, { "heading": "ACKNOWLEDGMENTS", "text": "Acknowledgments withheld." }, { "heading": "A APPENDIX", "text": "A.1 PROOFS\nProposition 1. The normalized entropy (uncertainty) \tilde{H}(p) approaches the top-1 error in the limit of the number of classes C if the model p is well-calibrated.\nProof. With Lemma 1 and \hat{p} = \max p, we rewrite the normalized entropy as\n\tilde{H}(p) = - \frac{\hat{p} \log \hat{p}}{\log C} - \frac{(1 - \hat{p}) \log \frac{1 - \hat{p}}{C - 1}}{\log C} . (19)\nNow, in the limit of the number of classes C,\n\lim_{C \to \infty} \tilde{H}(p) = \lim_{C \to \infty} - \frac{(1 - \hat{p}) \log \frac{1 - \hat{p}}{C - 1}}{\log C} (20)\n= \lim_{C \to \infty} -(1 - \hat{p}) \left( \frac{\log(1 - \hat{p})}{\log C} - \frac{\log(C - 1)}{\log C} \right) (21)\n= (1 - \hat{p}) . (22)\nThe top-1 error equals (1 - \hat{p}) if the model is perfectly calibrated in the sense of Eq. (6).\nLemma 1. Given a softmax output p with C entries and the most likely prediction \hat{y} = \operatorname{argmax} p with likelihood \hat{p} = \max p, the remaining entries p_i, i \neq \hat{y}, are approximately uniformly distributed with probability \frac{1 - \hat{p}}{C - 1}.\nProof. This assumption is approximately correct (1) if \hat{p} \to 1 or (2) if C \to \infty. Let \tilde{p}_j = p_i for all i \neq \hat{y} and \tilde{q}_j = \frac{1 - \hat{p}}{C - 1}. Note that \tilde{p} and \tilde{q} are not proper probability distributions, since \sum_j \tilde{p}_j = \sum_j \tilde{q}_j = (1 - \hat{p}).\n(1) Consider \mathrm{KL}[\tilde{p} \,\|\, \tilde{q}] as \hat{p} approaches 1:\n\lim_{\hat{p} \to 1} \mathrm{KL}[\tilde{p} \,\|\, \tilde{q}] = \lim_{\hat{p} \to 1} \sum_{j=1}^{C-1} \tilde{p}_j \log \frac{\tilde{p}_j}{\tilde{q}_j} (23)\n= \lim_{\hat{p} \to 1} \sum_{j=1}^{C-1} \tilde{p}_j \log \tilde{p}_j - \sum_{j=1}^{C-1} \tilde{p}_j \log \tilde{q}_j (24)\n= \lim_{\hat{p} \to 1} \sum_{j=1}^{C-1} \tilde{p}_j \log \tilde{p}_j - (1 - \hat{p}) \log \frac{1 - \hat{p}}{C - 1} (25)\n= 0 . (26)\n(2) Let z_i be the logits of a model trained with L2 regularization. The magnitude of the logits |z_i| cannot become arbitrarily large, and due to the normalizing nature of the softmax\n\lim_{C \to \infty} \frac{\exp z_i}{\sum_{j=1}^{C} \exp z_j} = \lim_{C \to \infty} \frac{1}{C} . (27)\nAlternatively, let z \in A^C and z' \in B^K be two logit vectors with C < K. If both models have been trained with L2 regularization, the magnitudes of the logits |z_i|, |z'_i| cannot become arbitrarily large; more specifically, A = B \subset \mathbb{R}. Due to the normalizing nature of the softmax, z' corresponds to a lower softmax temperature, and as the temperature decreases with an increasing number of classes, the softmax approaches a uniform distribution (Jang et al., 2017).\nA.2 BAYESIAN DEEP LEARNING METHODS\nIn the following, we briefly describe the common approximately Bayesian methods that we use in our experiments to obtain weight uncertainty.\nMonte Carlo Dropout One practical approximation of the posterior is variational inference with Monte Carlo (MC) dropout (Gal & Ghahramani, 2016). To determine model uncertainty, dropout variational inference is performed by training the model f_w with dropout (Srivastava et al., 2014) and using dropout at test time to sample from the approximate posterior distribution by performing N stochastic forward passes per test sample (Gal & Ghahramani, 2016; Kendall & Gal, 2017). This is also referred to as MC dropout. 
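A minimal PyTorch-style sketch of this test-time sampling is given below; the MC-integration step it performs is written out in Eq. (28) that follows. In a real implementation one would enable only the dropout modules instead of calling model.train() globally, since the latter also switches batch-norm statistics:

import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=25):
    # Keep dropout stochastic at test time and average N softmax outputs.
    model.train()  # caution: also affects BatchNorm; see the note above
    probs = torch.stack([torch.softmax(model(x), dim=-1)
                         for _ in range(n_samples)])
    return probs.mean(dim=0)  # MC integration over weight samples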
In MC dropout, the final probability vector of the predictive distribution is computed by MC integration:\np(x) = \frac{1}{N} \sum_{i=1}^{N} \sigma_{\mathrm{SM}}\left( f_{w_i}(x) \right) . (28)\nGaussian Dropout Gaussian dropout was first proposed by Wang & Manning (2013) and linked to variational inference by Kingma et al. (2015). Dropout introduces Bernoulli noise during optimization and reduces overfitting of the training data. The resulting output y of a layer with dropout is a weighted sum of Bernoulli random variables. The central limit theorem then states that y is approximately normally distributed. Instead of sampling the weights and computing the resulting output, we can directly sample from the implicit Gaussian distribution of dropout\ny \sim \mathcal{N}(\mu_y, \sigma_y^2) (29)\nwith\n\mu_{y_k} = \mathrm{E}[y_k] = \sum_j w_{j,k} x_j , (30)\n\sigma_{y_k}^2 = \mathrm{Var}[y_k] = \frac{p}{1 - p} \sum_j w_{j,k}^2 x_j^2 , (31)\nusing the reparameterization trick (Kingma et al., 2015)\ny_j = \mu_j + \sigma_j \varepsilon_j \quad \text{with} \quad \varepsilon_j \sim \mathcal{N}(0, 1) . (32)\nGaussian dropout is a continuous approximation to Bernoulli dropout; in comparison, it better approximates the true posterior distribution and is expected to provide improved uncertainty estimates (Louizos & Welling, 2017). To obtain the final probability vector p(x), we again use MC integration with N stochastic forward passes.\nThe dropout rate p is now a learnable parameter and does not need to be chosen carefully by hand. In fact, p could be optimized w.r.t. uncertainty calibration, scaling the variance of the implicit Gaussian of dropout. A similar approach was presented by Gal et al. (2017) using the Concrete distribution (Maddison et al., 2016; Jang et al., 2017). However, we focus on metrics for measuring calibration and therefore fix p in our subsequent experiments.\nBayes by Backprop Blundell et al. (2015) assume a Gaussian distribution with diagonal covariance matrix as the variational posterior q(w|\theta), parameterized by mean \mu and standard deviation \sigma, where \theta = \{\mu, \sigma\}. A sample of the weights can be obtained by sampling from a multivariate unit Gaussian, shifting it by \mu and scaling it by \sigma. The network is then trained by minimizing\n\mathcal{L}(\theta) = \mathrm{KL}[q(w \mid \theta) \,\|\, p(w)] - \mathrm{E}_q[\log p(\mathcal{D} \mid w)] . (33)\nIn the case of a zero-mean Gaussian prior, the first term can be implemented by weight decay. In contrast to Gaussian dropout, which operates on the implicit distribution of the activations, Bayes by Backprop (BBB) directly operates on the weights. This doubles the number of trainable parameters in practice. MC integration is used to obtain the final probability vector p(x).\nSWA-Gaussian Stochastic weight averaging (SWA) performs stochastic gradient descent steps around a local loss optimum of a trained network and averages the weights w_{\mathrm{SWA}} = \frac{1}{T} \sum_{i=1}^{T} w_i of the model from each step i (Izmailov et al., 2018). This explores the loss landscape, and averaging helps to find a better weight estimate than converging to a single local optimum. SWA-Gaussian (SWAG) is closely related to Bayes by Backprop (Maddox et al., 2019). It assumes a Gaussian distribution with diagonal covariance matrix as the approximate variational posterior. Instead of using backpropagation to directly optimize \mu and \sigma, it fits a Gaussian by setting \mu = w_{\mathrm{SWA}} and\n\Sigma_{\mathrm{diag}} = \mathrm{diag}\left( \overline{w^2} - w_{\mathrm{SWA}}^2 \right) , \quad \overline{w^2} = \frac{1}{T} \sum_{i=1}^{T} w_i^2 . (34)\nThis doubles the number of parameters at test time. The approximate Gaussian posterior is then \mathcal{N}(w_{\mathrm{SWA}}, \Sigma_{\mathrm{diag}}), and MC integration with samples w_i \sim \mathcal{N}(w_{\mathrm{SWA}}, \Sigma_{\mathrm{diag}}) is used to compute the final probability vector p(x). 
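As an illustration of the SWAG sampling step, the sketch below draws one weight sample from the diagonal Gaussian of Eq. (34), assuming flattened weight tensors and running averages collected during the SWA phase; the variable names are ours:

import torch

def swag_diag_sample(w_swa, w_sq_mean):
    # One sample from N(w_SWA, Sigma_diag), Sigma_diag = diag(mean(w^2) - w_SWA^2).
    var = (w_sq_mean - w_swa ** 2).clamp(min=1e-30)  # guard against tiny negatives
    return w_swa + var.sqrt() * torch.randn_like(w_swa)

# Each sample is loaded into the network; the predictions are then averaged
# over samples exactly as in the MC integration of Eq. (28).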
Deep Ensembles Training multiple randomly initialized copies of a deep network by maximum a posteriori estimation and ensembling their predictions for a single input is not a variational inference method. However, deep ensembles have been reported to produce surprisingly useful and comparatively well-calibrated uncertainty estimates in practice (Lakshminarayanan et al., 2017). Deep ensembles considerably increase the number of parameters at train and test time. We use deep ensembles as a non-Bayesian baseline for uncertainty estimation.\nA.3 TRAINING SETTING\nThe base model implementations from PyTorch 1.6 (Paszke et al., 2019) are used and trained with the following settings:\n• Adam optimizer with an initial learning rate of 3e-4, \beta_1 = 0.9, \beta_2 = 0.999, and a mini-batch size of 256 (Kingma & Ba, 2015)\n• weight decay of 1e-6\n• negative log-likelihood (cross-entropy) loss\n• reduce-on-plateau learning rate scheduler (patience of 20 epochs) with a factor of 0.1\n• an additional validation set randomly extracted from the training set (5,000 samples)\n• ResNet-34 for CIFAR-10 and ResNet-50 for CIFAR-100 experiments\n• only the last linear layer is implemented in a Bayesian manner for MC dropout, Gaussian dropout, Bayes by Backprop and SWAG\n• the deep ensemble comprises 3 independently trained networks\n• N = 25 forward passes were used for Monte Carlo integration\n• in MC dropout and Gaussian dropout, a dropout rate of p = 0.2 was used\n• in SWAG, a learning rate of 3e-6 was used during weight averaging\nOur code is available at: https://github.com/link-withheld.\nA.4 ADDITIONAL RESULTS\nA.5 ADDITIONAL FIGURES" } ]
2020
null
SP:92d112388a1eac20c2208f0596cdfcdcca685c8f
[ "This paper presents a model and a corresponding training approach for multi-scale invertible models. The presented model is defined on multiple scales with information on finer scales being conditioned on coarser scales. Data generation is hence done sequentially from a coarser to finer scale. The authors argue that this multi-scale sampling helps in addressing the curse of dimensionality problem by allowing to sample from high density regions more efficiently." ]
High-dimensional Bayesian inference problems pose a long-standing challenge for sample generation, especially when the posterior has multiple modes. For a wide class of Bayesian inference problems equipped with a multiscale structure, in which a low-dimensional (coarse-scale) surrogate can approximate the original high-dimensional (fine-scale) problem well, we propose to train a Multiscale Invertible Generative Network (MsIGN) for sample generation. A novel prior conditioning layer is designed to bridge networks at different resolutions, enabling coarse-to-fine multi-stage training. Jeffreys divergence is adopted as the training objective to avoid mode dropping. On two high-dimensional Bayesian inverse problems, MsIGN approximates the posterior accurately and clearly captures multiple modes, showing superior performance compared with previous deep generative network approaches. On the natural image synthesis task, MsIGN achieves superior performance in bits-per-dimension compared with our baseline models and yields great interpretability of its neurons in intermediate layers.
[]
[ { "authors": [ "Lynton Ardizzone", "Jakob Kruse", "Sebastian Wirkert", "Daniel Rahner", "Eric W Pellegrini", "Ralf S Klessen", "Lena Maier-Hein", "Carsten Rother", "Ullrich Köthe" ], "title": "Analyzing inverse problems with invertible neural networks", "venue": "arXiv preprint arXiv:1808.04730,", "year": 2018 }, { "authors": [ "Lynton Ardizzone", "Carsten Lüth", "Jakob Kruse", "Carsten Rother", "Ullrich Köthe" ], "title": "Guided image generation with conditional invertible neural networks", "venue": "arXiv preprint arXiv:1907.02392,", "year": 2019 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky TQ Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alexandros Beskos", "Gareth Roberts", "Andrew Stuart", "Jochen Voss" ], "title": "Mcmc methods for diffusion bridges", "venue": "Stochastics and Dynamics,", "year": 2008 }, { "authors": [ "Bo Chang", "Lili Meng", "Eldad Haber", "Frederick Tung", "David Begert" ], "title": "Multi-level residual networks from dynamical systems view", "venue": "arXiv preprint arXiv:1710.10348,", "year": 2017 }, { "authors": [ "Changyou Chen", "Nan Ding", "Lawrence Carin" ], "title": "On the convergence of stochastic gradient mcmc algorithms with high-order integrators", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Changyou Chen", "Ruiyi Zhang", "Wenlin Wang", "Bai Li", "Liqun Chen" ], "title": "A unified particleoptimization framework for scalable bayesian sampling", "venue": "arXiv preprint arXiv:1805.11659,", "year": 2018 }, { "authors": [ "Peng Chen", "Keyi Wu", "Joshua Chen", "Tom O’Leary-Roseberry", "Omar Ghattas" ], "title": "Projected stein variational newton: A fast and scalable bayesian inference method in high dimensions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tian Qi Chen", "Jens Behrmann", "David K Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tianqi Chen", "Emily Fox", "Carlos Guestrin" ], "title": "Stochastic gradient hamiltonian monte carlo", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Alexandre J Chorin", "Xuemin Tu" ], "title": "Implicit sampling for particle filters", "venue": "Proceedings of the National Academy of Sciences,", "year": 2009 }, { "authors": [ "Tiangang Cui", "Kody JH Law", "Youssef M Marzouk" ], "title": "Dimension-independent likelihood-informed mcmc", "venue": "Journal of Computational Physics,", "year": 2016 }, { "authors": [ "Gustavo Deco", "Wilfried Brauer" ], "title": "Nonlinear higher-order statistical decorrelation by volumeconserving neural architectures", "venue": "Neural Networks,", "year": 1995 }, { "authors": [ "Emily L Denton", "Soumith Chintala", "Rob Fergus" ], "title": "Deep generative image models using a laplacian pyramid of adversarial networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Tarek A El Moselhy", "Youssef M Marzouk" ], "title": "Bayesian inference with optimal maps", "venue": "Journal of Computational Physics,", 
"year": 2012 }, { "authors": [ "Yihao Feng", "Dilin Wang", "Qiang Liu" ], "title": "Learning to draw samples with amortized stein variational gradient descent", "venue": "arXiv preprint arXiv:1707.06626,", "year": 2017 }, { "authors": [ "Michael B Giles" ], "title": "Multilevel monte carlo path simulation", "venue": "Operations research,", "year": 2008 }, { "authors": [ "Will Grathwohl", "Ricky TQ Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "arXiv preprint arXiv:1810.01367,", "year": 2018 }, { "authors": [ "Eldad Haber", "Lars Ruthotto", "Elliot Holtham", "Seong-Hwan Jun" ], "title": "Learning across scales— multiscale methods for convolution neural networks", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": null, "year": 1902 }, { "authors": [ "Thomas Y Hou", "Ka Chun Lam", "Pengchuan Zhang", "Shumao Zhang" ], "title": "Solving bayesian inverse problems from the perspective of deep generative networks", "venue": "Computational Mechanics,", "year": 2019 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jakob Kruse", "Lynton Ardizzone", "Carsten Rother", "Ullrich Köthe" ], "title": "Benchmarking invertible architectures on inverse problems", "venue": null, "year": 2019 }, { "authors": [ "Chang Liu", "Jun Zhu" ], "title": "Riemannian stein variational gradient descent for bayesian inference", "venue": "In Thirty-second aaai conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Qiang Liu" ], "title": "Stein variational gradient descent as gradient flow", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Qiang Liu", "Dilin Wang" ], "title": "Stein variational gradient descent: A general purpose bayesian inference algorithm", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Hermann G Matthies", "Elmar Zander", "Bojana V Rosić", "Alexander Litvinenko", "Oliver Pajonk" ], "title": "Inverse problems in a bayesian setting", "venue": "In Computational Methods for Solids and Fluids,", "year": 2016 }, { "authors": [ "Matthias Morzfeld", "Xuemin Tu", "Ethan Atkins", "Alexandre J" ], "title": "Chorin. 
A random map implementation of implicit filters", "venue": "Journal of Computational Physics,", "year": 2012 }, { "authors": [ "Radford M Neal" ], "title": "Mcmc using hamiltonian dynamics", "venue": "Handbook of Markov Chain Monte Carlo,", "year": 2011 }, { "authors": [ "Frank Nielsen", "Richard Nock" ], "title": "Sided and symmetrized bregman centroids", "venue": "IEEE transactions on Information Theory,", "year": 2009 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In International conference on machine learning,", "year": 2017 }, { "authors": [ "Matthew Parno", "Tarek Moselhy", "Youssef Marzouk" ], "title": "A multiscale strategy for bayesian inference using transport", "venue": "maps. SIAM/ASA Journal on Uncertainty Quantification,", "year": 2016 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": null, "year": 2015 }, { "authors": [ "Alessio Spantini" ], "title": "On the low-dimensional structure of Bayesian inference", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 2017 }, { "authors": [ "Alessio Spantini", "Daniele Bigoni", "Youssef Marzouk" ], "title": "Inference via low-dimensional couplings", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Tao Xu", "Pengchuan Zhang", "Qiuyuan Huang", "Han Zhang", "Zhe Gan", "Xiaolei Huang", "Xiaodong He" ], "title": "Attngan: Fine-grained text to image generation with attentional generative adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Bayesian inference provides a powerful framework to blend prior knowledge, data generation process and (possibly small) data for statistical inference. With some prior knowledge ⇢ (distribution) for the quantity of interest x 2 Rd, and some (noisy) measurement y 2 Rdy , it casts on x a posterior\nq(x|y) / ⇢(x)L(y|x) , where L(y|x) = N (y F(x);0, \") . (1) where L(y|x) is the likelihood that compares the data y with system prediction F(x) from the candidate x, here F denotes the forward process. We can use different distributions to model the mismatch \" = y F(x), and for illustration simplicity, we assume Gaussian in Equation 1. For example, Bayesian deep learning generates model predicted logits F(x) from model parameters x, and compares it with discrete labels y through binomial or multinomial distribution.\nSampling or inferring from q is a long-standing challenge, especially for high-dimensional (high-d) cases. An arbitrary high-d posterior can have its importance regions (also called “modes”) anywhere in the high-d space, and finding these modes requires computational cost that grows exponentially with the dimension d. This intrinsic difficulty is the consequence of “the curse of dimensionality”, which all existing Bayesian inference methods suffer from, e.g., MCMC-based methods (Neal et al., 2011; Welling & Teh, 2011; Cui et al., 2016), SVGD-type methods (Liu & Wang, 2016; Chen et al., 2018; 2019a), and generative modeling (Morzfeld et al., 2012; Parno et al., 2016; Hou et al., 2019).\nIn this paper, we focus on Bayesian inference problems with multiscale structure and exploit this structure to sample from a high-d posterior. While the original problem has a high spatial resolution (fine-scale), its low resolution (coarse-scale) analogy is computationally attractive because it lies in a low-dimension (low-d) space. A problem has the multiscale structure if such coarse-scale low-d surrogate exists and gives good approximation to the fine-scale high-d problem, see Section 2.1. Such multiscale property is very common in high-d Bayesian inference problems. For example, inferring 3-D permeability field of subsurface at the scale of meters is a reasonable approximation of itself at the scale of centimeters, while the problem dimension is 106-times fewer.\nWe propose a Multiscale Invertible Generative Network (MsIGN) to sample from high-d Bayesian inference problems with multiscale structure. MsIGN is a flow-based generative network that can both generate samples and give density evaluation. It consists of multiple scales that recursively\nlifts up samples to a finer-scale (higher-resolution), except that the coarsest scale directly samples from a low-d (low resolution) distribution. At each scale, a fixed prior conditioning layer combines coarse-scale samples with some random noise according to the prior to enhance the resolution, and then an invertible flow modifies the samples for better accuracy, see Figure 1. The architecture of MsIGN makes it fully invertible between the final sample and random noise at all scales.\nMsIGN undergoes a multi-stage training that learns a hierarchy of distributions with dimensions growing from the lowest to the highest (the target posterior). Each stage gives a good initialization to the next stage thanks to the multiscale property. 
To capture multiple modes, we choose the Jeffreys divergence D_J(p \| q) as the training objective at each stage, which is defined as\nD_J(p \| q) = D_{KL}(p \| q) + D_{KL}(q \| p) = \mathrm{E}_{x \sim p}\left[ \log \frac{p(x)}{q(x)} \right] + \mathrm{E}_{x \sim q}\left[ \log \frac{q(x)}{p(x)} \right] . (2)\nThe Jeffreys divergence removes bad local minima of the single-sided Kullback-Leibler (KL) divergence to avoid mode missing. We build an unbiased estimator of it by leveraging the prior conditioning layer in importance sampling. A proper loss function and good initialization from multi-stage training solve the non-convex optimization stably and capture multiple modes of the high-d distribution.\nIn summary, we claim four contributions in this work. First, we propose a Multiscale Invertible deep Generative Network (MsIGN) with a novel prior conditioning layer, which can be trained in a coarse-to-fine scale manner. Second, the Jeffreys divergence is used as the objective function to avoid mode collapse and is estimated by importance sampling based on the prior conditioning layer. Third, when applied to two Bayesian inverse problems, MsIGN clearly captures multiple modes in the high-d posterior and approximates the posterior accurately, demonstrating its superior performance compared with previous methods following the generative modeling approach. Fourth, we also apply MsIGN to image synthesis tasks, where it achieves superior performance in bits-per-dimension among our baseline models, like Glow (Kingma & Dhariwal, 2018), FFJORD (Grathwohl et al., 2018), Flow++ (Ho et al., 2019), i-ResNet (Behrmann et al., 2019), and Residual Flow (Chen et al., 2019b). MsIGN also yields great interpretability of its neurons in intermediate layers." }, { "heading": "2 METHODOLOGY", "text": "We will abbreviate q(x|y) in Equation 1 as q(x) for simplicity in the following, because y only plays the role of defining the target distribution q(x) in MsIGN. In Section 2.1, we discuss the multiscale structure of the posterior q(x) in detail and derive a scale decoupling that can be utilized to divide and conquer the high-d challenge of Bayesian inference.\nAs a flow-based generative model like that of Dinh et al. (2016), MsIGN models a bijective map that transforms Gaussian noise z into a sample x whose distribution is denoted p_\theta(x), where \theta collects the network parameters. MsIGN allows fast generation of samples x and density evaluation p_\theta(x), so we train our working distribution p_\theta(x) to approximate the target distribution q(x). We present the architecture of MsIGN in Section 2.2 and the training algorithm in Section 2.3." }, { "heading": "2.1 MULTISCALE STRUCTURE AND SCALE DECOUPLING", "text": "We say a Bayesian inference problem has multiscale structure if the associated coarse-scale likelihood L_c approximates the original likelihood L well:\nL(y|x) \approx L_c(y|x_c) , \quad \text{where} \quad L_c(y|x_c) := \mathcal{N}(y - \mathcal{F}_c(x_c); 0, \Sigma_\varepsilon) . (3)\nHere x_c \in \mathbb{R}^{d_c} is a coarse-scale version of the fine-scale quantity x \in \mathbb{R}^d (d_c < d), given by a deterministic pooling operator A: x_c = A(x). The map \mathcal{F}_c : \mathbb{R}^{d_c} \to \mathbb{R}^{d_y} is a forward process that gives the system prediction based on the coarse-scale information x_c. A popular case of the multiscale structure is when A is the average pooling operator and \mathcal{F}(x) \approx \mathcal{F}_c(x_c), meaning that the system prediction mainly depends on the lower-resolution information x_c. Equation 3 motivates us to define a surrogate distribution q̃(x) \propto \rho(x) L_c(y|A(x)) that approximates the target posterior q(x) well¹:\nq̃(x) = \rho(x) L_c(y|A(x)) = \rho(x) L_c(y|x_c) \approx \rho(x) L(y|x) = q(x) . (4)\nWe also notice that the prior \rho allows an exact scale decoupling. 
To generate a sample x from \rho, one can first sample its coarse-scale version x_c = A(x) and then replenish the missing fine-scale details, without changing the coarse-scale structure, by sampling from the conditional distribution \rho(x|x_c) = \rho(x \mid A(x) = x_c). Using \rho_c to denote the distribution of x_c = A(x), conditional probability summarizes this scale decoupling as \rho(x) = \rho(x|x_c) \rho_c(x_c). Combining the scale effect in the likelihood and the scale decoupling in the prior, we decouple the surrogate q̃(x) = \rho(x) L_c(y|A(x)) into the prior conditional distribution \rho(x|x_c) and a coarse-scale posterior, defined as q_c(x_c) := \rho_c(x_c) L_c(y|x_c). The decoupling goes as\nq̃(x) = \rho(x) L_c(y|x_c) = \rho(x|x_c) \rho_c(x_c) L_c(y|x_c) = \rho(x|x_c) q_c(x_c) . (5)\nThe prior conditional distribution \rho(x|x_c) bridges the coarse-scale posterior q_c(x_c) and the surrogate q̃(x), which in turn approximates the original fine-scale posterior q(x). Parno et al. (2016) proposed a similar scale decoupling relation, and we leave the discussion and comparison to Appendix A.\nFigure 1 shows the integrated sampling strategy. To sample an x from q, we start with an x_c from q_c. The prior conditioning layer then performs random upsampling from the prior conditional distribution \rho(\cdot|x_c), and the output will be a sample x̃ of the surrogate q̃. Due to the approximation q̃ \approx q from Equation 4, we stack multiple invertible blocks to form the invertible flow F that modifies the sample x̃ \sim q̃ into a sample x \sim q: x = F(x̃). F is initialized as an identity map in training. Finally, to obtain the x_c from q_c, we apply the above procedure recursively until the dimension of the coarsest scale is small enough that q_c can be easily sampled by a standard method." }, { "heading": "2.2 MULTISCALE INVERTIBLE GENERATIVE NETWORK: ARCHITECTURE", "text": "Our proposed MsIGN has multiple levels that recursively apply the above strategy. We denote by L the number of levels, x_l \in \mathbb{R}^{d_l} the sample at level l, and A_l : \mathbb{R}^{d_l} \to \mathbb{R}^{d_{l-1}} the pooling operator from level l to l-1: x_{l-1} = A_l(x_l). Following the idea in Section 2.1, we can define the l-th level target q_l(x_l) and surrogate q̃_l(x̃_l); the last-level target q_L is our original target q in Equation 1. The l-th level of MsIGN uses a prior conditioning layer PC_l and an invertible transform F_l to capture q_l.\nPrior conditioning layer. The prior conditioning layer PC_l at level l lifts a coarse-scale sample x_{l-1} \in \mathbb{R}^{d_{l-1}} up to a random fine-scale one x_l \in \mathbb{R}^{d_l} following the conditional distribution \rho(x_l|x_{l-1}). The difference in dimension is compensated by a Gaussian noise z_l \in \mathbb{R}^{d_l - d_{l-1}}, which is the source of randomness: x_l = PC_l(x_{l-1}, z_l). PC_l depends only on the prior conditional distribution \rho(x_l|x_{l-1}) and thus can be pre-computed independently for different levels, regardless of the likelihood L. When the prior is Gaussian and the pooling operators are linear (e.g., average pooling), the prior conditional distribution is still Gaussian, with moments specified as follows.\nLemma 2.1 Suppose that \rho(x_l) = \mathcal{N}(x_l; 0, \Sigma_l) and A_l(x_l) = A_l x_l for some A_l \in \mathbb{R}^{d_{l-1} \times d_l}. Then, with U_{l-1} := \Sigma_l A_l^T (A_l \Sigma_l A_l^T)^{-1} and \Sigma_{l|l-1} := \Sigma_l - \Sigma_l A_l^T (A_l \Sigma_l A_l^T)^{-1} A_l \Sigma_l, we have\n\rho(x_l \mid x_{l-1} = A_l x_l) = \mathcal{N}(x_l; U_{l-1} x_{l-1}, \Sigma_{l|l-1}) .\n¹We omit normalizing constants. Equivalence and approximation are up to normalization in the following.\nWith the Cholesky decomposition (or eigen-decomposition) \Sigma_{l|l-1} = B_l B_l^T, we design the prior conditioning layer PC_l as below, which is invertible between x_l and (x_{l-1}, z_l):\nx_l = PC_l(x_{l-1}, z_l) := U_{l-1} x_{l-1} + B_l z_l , \quad z_l \sim \mathcal{N}(0, I_{d_l - d_{l-1}}) . (6)
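As an illustration of Lemma 2.1 and Equation 6, the following NumPy sketch precomputes U_{l-1} and B_l for a Gaussian prior N(0, Sigma) and a linear pooling matrix A, and then draws one conditional upsample. The names are ours; we use an eigen-decomposition rather than Cholesky here because Sigma_{l|l-1} is only positive semi-definite (its rank is d_l - d_{l-1}):

import numpy as np

def build_prior_conditioning(Sigma, A):
    # Precompute U and B of Eq. (6) for the prior N(0, Sigma) and pooling x_c = A x.
    M = np.linalg.inv(A @ Sigma @ A.T)               # (A Sigma A^T)^{-1}
    U = Sigma @ A.T @ M                              # conditional mean map U_{l-1}
    Sigma_cond = Sigma - Sigma @ A.T @ M @ A @ Sigma
    w, V = np.linalg.eigh(Sigma_cond)                # Sigma_cond = V diag(w) V^T
    keep = w > 1e-10                                 # drop the (numerically) zero modes
    B = V[:, keep] * np.sqrt(w[keep])                # B B^T = Sigma_cond
    return U, B

def prior_conditioning(U, B, x_coarse, rng):
    # One random upsample x ~ rho(x | A x = x_coarse), Eq. (6).
    z = rng.standard_normal(B.shape[1])
    return U @ x_coarse + B @ z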
We refer readers to Appendix B for the proof of Lemma 2.1 and the invertibility in Equation 6.\nWhen the prior is non-Gaussian or the pooling operators are nonlinear, there still exists a nonlinear invertible prior conditioning operator x_l = PC_l(x_{l-1}, z_l) such that x_l follows the prior conditional distribution \rho(x_l|x_{l-1}) given x_{l-1} and z_l \sim \mathcal{N}(0, I_{d_l - d_{l-1}}). We can pre-train an invertible network to approximate this sampling process and fix it as the prior conditioning layer.\nInvertible flow. The invertible flow F_l at level l modifies the surrogate q̃_l towards the target q_l. The more accurate the multiscale structure in Equation 3 is, the better q̃_l approximates q_l, and the closer F_l is to the identity map. Therefore, we parameterize F_l by a flow-based generative model and initialize it as an identity map. In practice, we utilize the invertible block of Glow (Kingma & Dhariwal, 2018), which consists of an actnorm, an invertible 1 \times 1 convolution, and an affine coupling layer, and we stack several such blocks as the invertible flow F_l in MsIGN.\nOverall model. MsIGN is a bijective map between the random noise inputs at different scales \{z_l\}_{l=1}^{L} and the finest-scale sample x_L. The forward direction of MsIGN maps \{z_l\}_{l=1}^{L} to x_L as below:\nx_1 = F_1(z_1) ,\nx̃_l = PC_l(x_{l-1}, z_l) , \quad x_l = F_l(x̃_l) , \quad 2 \le l \le L . (7)\nAs a flow-based generative model, sample generation as in Equation 7 and density evaluation p_\theta(x) by the change-of-variable rule are accessible and fast for MsIGN. In scenarios where a certain bound needs to be enforced on the output, we can append element-wise output activations at the end of MsIGN. For example, image synthesis can use the sigmoid function so that pixel values lie in [0, 1]. Such activations should be bijective to keep the invertible relation between the random noise and the sample.
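Schematically, the coarse-to-fine sampling pass of Equation 7 can be written as the following sketch, where flows and pc_layers are assumed to be callables for the F_l and the fixed PC_l, and zs[0] is the coarsest-scale noise:

def msign_forward(flows, pc_layers, zs):
    # x_1 = F_1(z_1); then alternate fixed prior conditioning and trainable flows.
    x = flows[0](zs[0])
    for l in range(1, len(flows)):
        x_tilde = pc_layers[l - 1](x, zs[l])  # sample from rho(x_l | x_{l-1}), Eq. (6)
        x = flows[l](x_tilde)                 # correct the surrogate toward q_l
    return x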
Nielsen & Nock (2009) suggests that DKL(p✓kq) is zero-forcing, meaning that it enforces p✓ be small whenever q is small. As a consequence, mode missing can still be a local minimum, see Appendix C. Therefore, we turn to the Jeffreys divergence defined in Equation 2 which penalizes mode missing much and can remove such local minima.\nEstimating the Jeffreys divergence requires computing an expectation with respect to the target q, which is normally prohibited. Since MsIGN constructs a good approximation q̃ to q, and q̃ can be constructed from coarser levels in multi-stage training, we do importance sampling with the\nsurrogate q̃ for the Jeffreys diveregence and its derivative (see Appendix D for detailed derivation):\nDJ(p✓kq) = Ex⇠p✓ log p✓(x)\nq(x)\n+ Ex⇠q̃\n q(x)\nq̃(x) log\nq(x)\np✓(x)\n. (8)\n@\n@✓ DJ(p✓kq) = Ex⇠p✓\n✓ 1 + log p✓(x)\nq(x)\n◆ @ log p✓(x)\n@✓\nEx⇠q̃\n q(x)\nq̃(x)\n@ log p✓(x)\n@✓\n. (9)\nWith the derivative estimate given above, we optimize the Jeffreys divergence by stochastic gradient descent. We remark that @ log p✓(x)/@✓ is computed by the backward propagation of MsIGN." }, { "heading": "3 RELATED WORK", "text": "Invertible generative models (Deco & Brauer, 1995) are powerful exact likelihood models with efficient sampling and inference. They have achieved great success in natural image synthesis, see, e.g., Dinh et al. (2016); Kingma & Dhariwal (2018); Grathwohl et al. (2018); Ho et al. (2019); Chen et al. (2019b), and variational inference in providing a tight evidence lower bound (ELBO), see, e.g, Rezende & Mohamed (2015). In this paper, we propose a new multiscale invertible generative network (MsIGN) structure, which utilizes the invertible block in Glow (Kingma & Dhariwal, 2018) as building piece for the invertible flow at each scale. The Glow block can be replaced by any other invertible blocks, without any algorithmic changes. Different from Glow, different scales of MsIGN can be trained separately, and thus features in its intermediate layers can be interpreted as lowresolution approximation of the final high-resolution output. This novel multiscale structure enables better explain-ability of its hidden neurons and makes training much more stable.\nDifferent from the image synthesis task where large amount of samples from target distribution are available, in Bayesian inference problems only an unnormalized density is available and i.i.d. samples from the posterior are the target. This paper’s main goal is to train MsIGN to approximate certain high-d Bayesian posteriors. Various kinds of parametric distributions have been proposed to approximate posteriors before, such as polynomials (El Moselhy & Marzouk, 2012), non-invertible generative networks (Feng et al., 2017; Hou et al., 2019), invertible networks (Rezende & Mohamed, 2015; Ardizzone et al., 2018; Kruse et al., 2019) and certain implicit maps (Chorin & Tu, 2009; Morzfeld et al., 2012). Generative modeling approach has the advantage that i.i.d. samples can be efficiently obtained by evaluating the model in the inference stage. However, due to the tricky non-convex optimization problem, this approach for both invertible (Chorin & Tu, 2009; Kruse et al., 2019) and non-invertible (Hou et al., 2019) generative models becomes increasingly challenging as the dimension grows. 
To overcome this difficulty, we propose (1) to use the Jeffreys divergence as loss function, which has fewer shallow local minima and better landscape compared with the commonly-used KL divergence (see Appendix C for a concrete example), and (2) to train MsIGN in a coarse-to-fine manner with coarse-scale solution serving as an initialization to fine-scale optimization problem. In Kruse et al. (2019), authors list some recent models for low-d inverse problems. We remark that their formulation of posterior assumes no observation or model error in Equation 1, and is different from ours. See Appendix J for detailed discussion and experimental comparison.\nOther than the generative modeling, various Markov Chain Monte Carlo (MCMC) methods have been the most popular in Bayesian inference, see, e.g., Beskos et al. (2008); Neal et al. (2011); Welling & Teh (2011); Chen et al. (2014; 2015); Cui et al. (2016). Particle-optimization-based sampling is a recently developed effective sampling technique with Stein variational gradient descent (SVGD) (Liu & Wang, 2016)) and many related works, e.g., Liu (2017); Liu & Zhu (2018); Chen et al. (2018; 2019a). The intrinsic difficulty of Bayesian inference displays itself as highly correlated samples, leading to undesired low sample efficiency, especially in high-d cases. The multiscale structure and multi-stage strategy proposed in this paper can also benefit these particle-based methods, as we can observe that they benefit the amortized-SVGD (Feng et al., 2017; Hou et al., 2019) in Section 4.1.3. We leave a more thorough study of this topic as a future work.\nWorks in Parno et al. (2016); Matthies et al. (2016) utilize the multiscale structure in Bayesian inference and build generative models with polynomials. They suffer from exponential growth of parameter number for high-d polynomial basis. The Markov property (Spantini et al., 2018) is used to alleviate this exponential growth. Different from these works, we leverage the great capacity of invertible generative networks to parametrize the high-d distribution, and we design novel network architecture to make use of the multiscale structure. The multiscale structure is a more general\nstructure than commonly-used intrinsic low-d structure (Spantini, 2017; Cui et al., 2016; Chen et al., 2019a), which assumes that the density of high-d posterior concentrates in a low-d subspace.\nIn the image synthesis task, this multiscale idea incorporates with various generative models. For example, Denton et al. (2015); Odena et al. (2017); Karras et al. (2017); Xu et al. (2018) uses it in generative adversarial networks (GANs) to grow a high-resolution image from low-resolution ones. But the lack of invertibility in these models makes it difficult for them to apply to Bayesian inference problems. Invertible generative models like Dinh et al. (2016); Kingma & Dhariwal (2018); Ardizzone et al. (2019) adopted this multiscale idea, but their multiscale strategy is not in the spatial sense: the intermediate neurons are not semantically interpret-able, as we show in Figure 6." }, { "heading": "4 EXPERIMENT", "text": "We study two high-d Bayesian inverse problems (BIPs) known to have at least two equally important modes in Section 4.1 as test beds for distribution approximation and multi-mode capture: one with true samples available in Section 4.1.1; one without true samples but close to real-world applications in subsurface flow in Section 4.1.2. We also report the ablation study of MsIGN in Section 4.1.3. 
In addition, we apply MsIGN to the image synthesis task to benchmark with flow-based generative models and demonstrate its interpret-ability in Section 4.2. We adopt the invertible block in Glow (Kingma & Dhariwal, 2018) as the building piece, and stack several of them to build our invertible flow F . We utilize average pooling with kernel size 2 and stride 2 as our pooling operator A." }, { "heading": "4.1 BAYESIAN INVERSE PROBLEMS", "text": "Sample x of our target posterior distribution q is a vector on a 2-D uniform 64 ⇥ 64 lattice, which means the problem dimension d is 4096. Every x is equivalent to a piece-wise constant function on the unit disk: x(s) for s 2 ⌦ = [0, 1]2, and we don’t distinguish between them thereafter. We place a centered Gaussian with a Laplacian-type covariance as the prior: N 0, 2( ) 1 ↵ , which is very popular in geophysics and electric tomography. See Appendix E for problem settings in detail.\nThe key to guarantee the multi-modality of our posteriors is the symmetry. Combining properties of the prior defined above and the likelihood defined afterwards, the posterior is mirror-symmetric: q(x(s1, s2)) = q(x(s1, 1 s2)). We carefully select the prior and the likelihood so that our posterior q has at least two modes. They are mirror-symmetric to each other and possess equal importance.\nAs in Figure 1, we plan to learn our 4096-D posteriors at the end of L = 6 levels, and set problem dimension at each level as dl = 2l ⇤ 2l = 4l. The training follows our multi-stage strategy, and the first stage l = 1 is initialized by minimizing the Jeffreys divergence without importance sampling, because samples to q1 is available since d1 = 4 is relatively small. See Appendix E for details.\nWe compare MsIGN with representatives of major approaches: amortized-SVGD (short as ASVGD) (Feng et al., 2017) and Hamilton Monte Carlo (short as HMC) (Neal et al., 2011), for high-d BIPs, see our discussion in Section 3. We measure the computational cost by the number of forward simulations (nFSs), because running the forward simulation F occupies most training time, especially in Section 4.1.2. We budget a same nFS for all methods for fair comparison." }, { "heading": "4.1.1 SYNTHETIC BAYESIAN INVERSE PROBLEMS", "text": "This problem allows access to ground-truth samples so the comparison is clear and solid. The forward process is given by F(x) = h',xi2 = ( R ⌦ '(s)x(s)ds)\n2, where '(s) = sin(⇡s1) sin(2⇡s2). Together with the prior, our posterior can be factorized into one-dimensional sub-distributions, namely q(x) = Qd k=1 qk(hwk,xi) for some orthonormal basis {wk}dk=1. This property gives us access to true samples via inversion cumulative function sampling along each direction wk. Furthermore, these 1-D sub-distributions are all single modal except that there’s one, denoted as qk⇤ , with two symmetric modes. In other words, the marginal distribution along wk⇤ is double-model and the rest are uni-model. This confirms our construction of two equally important modes. See Appendix E for more details in problem settings. The computation budget is fixed at 8⇥ 105 nFSs. Multi-mode capture. To visualize mode capture, we plot the marginal distribution of generated samples along the critical direction wk⇤ , which by construction is the source of double-modality of the posterior. The (visually) worst one in three independent experiments is shown in Figure 2(a).\nDistribution approximation. 
To measure distribution approximation, we report the errors of the mean, variance and correlation at or between all sub-distributions, as well as the Jeffreys divergence. Thanks to the factorization property, we compare the mean, variance and correlation estimates with theoretical ground truths and report the root mean square of the errors over all dimensions in Figure 2(b). For MsIGN and A-SVGD, which give access not only to samples but also to the density, we also report Monte Carlo estimates of the Jeffreys divergence with respect to the target posterior in Table 1. We can see that MsIGN has superior accuracy in approximating the target distribution." }, { "heading": "4.1.2 ELLIPTIC BAYESIAN INVERSE PROBLEMS", "text": "This problem originates from geophysics and fluid dynamics. The forward model is given by linear measurements of the solution of an elliptic partial differential equation associated with x. We define\n\mathcal{F}(x) = \left[ \int_\Omega \varphi_1(s) u(s) \, ds , \ \int_\Omega \varphi_2(s) u(s) \, ds , \ \ldots , \ \int_\Omega \varphi_m(s) u(s) \, ds \right]^T ,\nwhere the \varphi_k are fixed measurement functions and u(s) is the solution of\n\nabla \cdot \left( e^{x(s)} \nabla u(s) \right) = f(s) , \ s \in \Omega , \quad \text{with boundary condition} \ u(s) = 0 , \ s \in \partial\Omega . (10)\nThis model appears frequently in real applications; for example, x and u can be seen as the permeability field and the pressure in geophysics. However, there is no known access to true samples of q. Again, the symmetry trick introduced in Section 4.1 and explained in Appendix E guarantees at least two equally important modes in the posterior. We put a 5 \times 10^5-nFS budget on our computation cost.\nMulti-mode capture. Due to the lack of true samples, we check the marginal distribution of the posterior along eigenvectors of the prior and pick a particular one to demonstrate that we capture both modes in Figure 3(a). We also confirm the capture of multiple modes by embedding samples via principal component analysis (PCA) into a 2-D space. We report the clustering result (by K-means) and the means of each cluster in Figure 3(b), where we can see that A-SVGD failed to capture the two symmetric modes, while MsIGN achieves a more balanced capture of the symmetric posterior." }, { "heading": "4.1.3 ABLATION STUDY OF ARCHITECTURE DESIGN AND TRAINING STRATEGY", "text": "We run extensive experiments to study the effectiveness of the network architecture and training strategy of MsIGN; see Figure 4. We refer to Appendix G for details of the settings and more results.\nNetwork architecture. We replace the prior conditioning layer by two direct alternatives: a stochastic nearest-neighbor upsampling layer (model denoted “MsIGN-SNN”), or the split-and-squeeze layer of the Glow design (the model is then essentially Glow, so we also denote it “Glow”).\nFigure 4(a) shows that the prior conditioning layer design is crucial to the performance of MsIGN on both problems, because neither “MsIGN-SNN” nor “Glow” captures the modes successfully.\nTraining strategy. We study the effectiveness of the Jeffreys divergence objective and multi-stage training. We try substituting the Jeffreys divergence objective (no extra marks) with the KL divergence (models denoted with the suffix “-KL”) or the kernelized Stein discrepancy (which recovers the A-SVGD algorithm; models denoted with the suffix “-AS”), and switching between multi-stage (no extra marks) and single-stage training (models denoted with the suffix “-S”). We remark that single-stage training using the Jeffreys divergence is infeasible because of the difficulty of estimating D_{KL}(q \| p_\theta). 
Figure 4(b) and (c) show that all models trained in the single-stage manner (“MsIGN-KL-S”, “MsIGN-AS-S”) suffer from mode collapse. We also observe that our multi-stage training strategy can benefit training with other objectives; see “MsIGN-KL” and “MsIGN-AS”.\nWe also notice that the Jeffreys divergence leads to more balanced samples for these symmetric problems, especially for the complicated elliptic BIP in Section 4.1.2." }, { "heading": "4.2 IMAGE SYNTHESIS TASK", "text": "We train our MsIGN architecture with maximum likelihood estimation to benchmark against other flow-based generative models. The prior conditional distribution \rho(x|x_c) is modeled by a simple Gaussian with a scalar matrix as its covariance and is learned from a training set. We refer readers to Appendix H for more experimental details, and to Appendix I for additional results.\nWe report the bits-per-dimension values together with our baseline flow-based generative networks in Table 2. Our MsIGN is superior in this number and is also more parameter-efficient: for example, MsIGN uses 24.4% fewer parameters than Glow for CelebA 64, and 37.4% fewer parameters than Residual Flow for ImageNet 64.\nIn Figure 5, we show images synthesized by MsIGN trained on the CelebA 64 dataset, and linear interpolations of real images in the latent feature space. In Figure 6, we visualize internal activations at checkpoints of the invertible flow at different scales, which demonstrates the interpretability of MsIGN." }, { "heading": "5 CONCLUSION", "text": "For high-dimensional Bayesian inference problems with multiscale structure, we propose Multiscale Invertible Generative Networks (MsIGN) and associated training algorithms to approximate the high-dimensional posterior. In this paper, we demonstrate the capability of this approach on high-dimensional (up to 4096 dimensions) Bayesian inference problems with spatial multiscale structure, leaving several important directions as future work. The network architecture also achieves state-of-the-art performance on various image synthesis tasks. We plan to apply this methodology to other Bayesian inference problems, for example, Bayesian deep learning with multiscale structure in model width or depth (e.g., Chang et al. (2017); Haber et al. (2018)) and data assimilation problems with multiscale structure in the temporal variation (e.g., Giles (2008)). We also plan to develop theoretical guarantees for the posterior approximation performance of MsIGN." } ]
2,020
null
SP:077926a214f87b9fdcd5a5f9d818d6313437cd90
[ "This study is presented clearly, and the core idea is interesting. However, the presented novelty is limited to a globally (for all tasks) and locally (task-specific) learning paradigm using a framework inspired by (Badirli et al., 2020). The authors have presented experimental results for both regression and classification setups, which are interesting." ]
Meta-optimization is an effective approach that learns a shared set of parameters across tasks for parameter initialization in meta-learning. A key challenge for meta-optimization-based approaches is to determine whether an initialization condition can be generalized to tasks with diverse distributions to accelerate learning. To address this issue, we design a meta-gradient boosting framework that uses a base learner to learn shared information across tasks and a series of gradient-boosted modules to capture task-specific information to fit diverse distributions. We evaluate the proposed model on both regression and classification tasks with multi-mode distributions. The results demonstrate both the effectiveness of our model in modulating task-specific meta-learned priors and its advantages on multi-mode distributions.
[]
[ { "authors": [ "Ferran Alet", "Tomás Lozano-Pérez", "Leslie P Kaelbling" ], "title": "Modular meta-learning", "venue": "arXiv preprint arXiv:1806.10166,", "year": 2018 }, { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Neural module networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Sarkhan Badirli", "Xuanqing Liu", "Zhengming Xing", "Avradeep Bhowmik", "Sathiya S Keerthi" ], "title": "Gradient boosting neural networks: Grownet", "venue": "arXiv preprint arXiv:2002.07971,", "year": 2020 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": "In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Chrisantha Fernando", "Jakub Sygnowski", "Simon Osindero", "Jane Wang", "Tom Schaul", "Denis Teplyashin", "Pablo Sprechmann", "Alexander Pritzel", "Andrei Rusu" ], "title": "Meta-learning by the baldwin effect", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference Companion,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic model-agnostic meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jerome H Friedman" ], "title": "Greedy function approximation: a gradient boosting machine", "venue": "Annals of statistics,", "year": 2001 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Yoshua Bengio", "Paolo Frasconi", "Jürgen Schmidhuber" ], "title": "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies", "venue": null, "year": 2001 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-learning in neural networks: A survey", "venue": "arXiv preprint arXiv:2004.05439,", "year": 2020 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Max Jaderberg", "Wojciech M Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castaneda", "Charles Beattie", "Neil C Rabinowitz", "Ari S Morcos", "Avraham Ruderman" ], "title": "Humanlevel performance in 3d multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "stat, 1050:1,", "year": 2014 }, { "authors": [ "Brenden Lake", "Ruslan Salakhutdinov", "Jason Gross", "Joshua Tenenbaum" ], "title": "One shot learning of simple visual concepts", "venue": "In Proceedings of the annual meeting of the cognitive science society,", "year": 2011 }, { "authors": [ "Yoonho Lee", "Seungjin Choi" ], "title": "Gradient-based meta-learning with learned layerwise metric and 
subspace", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-sgd: Learning to learn quickly for few-shot learning", "venue": "arXiv preprint arXiv:1707.09835,", "year": 2017 }, { "authors": [ "Andrew L Maas", "Awni Y Hannun", "Andrew Y Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In in ICML Workshop on Deep Learning for Audio, Speech and Language Processing,", "year": 2013 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive metalearner", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alex Nichol", "John Schulman" ], "title": "Reptile: a scalable metalearning algorithm", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Matthew Olson", "Abraham Wyner", "Richard Berk" ], "title": "Modern neural networks generalize on small data sets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Boris Oreshkin", "Pau Rodriguez Lopez", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Siyuan Qiao", "Chenxi Liu", "Wei Shen", "Alan L Yuille" ], "title": "Few-shot image recognition by predicting parameters from activations", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": null, "year": 2019 }, { "authors": [ "Qianru Sun", "Yaoyao Liu", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Philip Tannor", "Lior Rokach" ], "title": "Augboost: gradient boosting enhanced with step-wise feature augmentation", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Andreas Veit", "Michael J Wilber", "Serge Belongie" ], "title": "Residual networks behave like ensembles of relatively shallow networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ricardo Vilalta", "Youssef Drissi" ], "title": "A perspective view and survey of meta-learning", "venue": "Artificial intelligence review,", "year": 2002 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Risto Vuorio", "Shao-Hua Sun", "Hexiang Hu", "Joseph J Lim" ], "title": "Multimodal model-agnostic metalearning via task-aware modulation", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yaqing Wang", "Quanming Yao", "James T Kwok", "Lionel M Ni" ], "title": "Generalizing from a few examples: A survey on few-shot learning", "venue": "ACM 
Computing Surveys (CSUR),", "year": 2020 }, { "authors": [ "Huaxiu Yao", "Ying Wei", "Junzhou Huang", "Zhenhui Li" ], "title": "Hierarchically structured meta-learning", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jaesik Yoon", "Taesup Kim", "Ousmane Dia", "Sungwoong Kim", "Yoshua Bengio", "Sungjin Ahn" ], "title": "Bayesian model-agnostic meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yabin Zhang", "Hui Tang", "Kui Jia" ], "title": "Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data", "venue": "In Proceedings of the european conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Yu Zhang", "Qiang Yang" ], "title": "A survey on multi-task learning", "venue": "arXiv preprint arXiv:1707.08114,", "year": 2017 }, { "authors": [ "Finn" ], "title": "2017), we use two fully-connected layers of size", "venue": null, "year": 2017 }, { "authors": [ "shkin" ], "title": "2018), we divide 100 classes into 60 training classes, 20 validation classes, and 20 testing", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "While humans can learn quickly with a few samples with prior knowledge and experiences, artificial intelligent algorithms face challenges in dealing with such situations. Learning to learn (or metalearning) (Vilalta & Drissi, 2002) emerges as the common practice to address the challenge by leveraging transferable knowledge learned from previous tasks to improve learning on new tasks (Hospedales et al., 2020).\nAn important direction in meta-learning research is meta-optimization frameworks (Lee & Choi, 2018; Nichol & Schulman, 2018; Rusu et al., 2019), a.k.a., model-agnostic meta-learning (MAML) (Finn et al., 2017). Such frameworks learn initial model parameters from similar tasks and commit to achieving superior performance on new tasks that conform to the same distribution through fast adaptation. They offer excellent flexibility in model choices and demonstrate appealing performance in various domains, such as image classification (Li et al., 2017; Finn et al., 2017), language modeling (Vinyals et al., 2016), and reinforcement learning (Fernando et al., 2018; Jaderberg et al., 2019).\nGenerally, such frameworks define a target model Fθ and a meta-learnerM. The learning tasks T = {T train, T test} are divided into training and testing tasks, where T are generated from the meta-datasetD, i.e., T ∼ P (D). Each task contains a support set DS and a query set DQ for training and evaluating a local model. The initialization of the model parameter θ is learned by the meta learner, i.e., θ ←M(T train). We denote the meta-learned parameter as φ so that θ ← φ. For each task, the model obtains locally optimal parameter θ̂ by minimizing the loss L(Fθ(DS)). The meta parameter φ will be updated across all training tasks by minimizing the loss ΣT∈T train(L(Fθ̂(D\nQ))). Generally, it takes only a small number of epochs to learn locally optimal parameters across training tasks so that meta-learned parameter φ can quickly converge to an optimal parameter for new tasks.\nMost methods assume some transferable knowledge across all tasks and rely on a single shared meta parameter. However, the success of the meta-learners are limited within similar task families, and the single shared meta parameter cannot well support fast learning on diverse tasks (e.g., a large meta-dataset) or task distributions (e.g., T are generated from multiple meta-datasets) due to conflicting gradients for those tasks (Hospedales et al., 2020). Recent efforts have studied multiple initial conditions to solve the above challenges. Some employ probabilistic models (Rusu et al., 2019; Finn et al., 2018; Yoon et al., 2018) while others incorporate task-specific information (Lee & Choi, 2018; Vuorio et al., 2019; Alet et al., 2018). The former learns to obtain an approximate posterior of an unseen task yet needs sufficient samples to get reliable data distributions; the latter conducts task-specific parameter initialization using multiple meta-learners yet requires expensive computation and cannot transfer knowledge across different modes of task distributions.\nIn this work, we aim to resolve the above challenges from a novel perspective by proposing a meta gradient boosting framework. Gradient boosting (Friedman, 2001) aims to build a new learner towards the residuals of the previous prediction result for each step. We call the learner for each step as weak learner and make predictions based on summing up the weak learners. 
Recent research (Badirli et al., 2020; Olson et al., 2018) has demonstrated the potential of decomposing deep neural nets into an ensemble of sub-networks, each achieving low training errors. We propose to use the first or the first few weak learners as the base learner, followed by a series of gradient boosting modules to cope with a diverse array of tasks—the base learner is responsible for inferring transferable knowledge by learning across all tasks; the gradient-boosting modules are designed to make task-specific updates to the base learner. Compared with existing work, which uses multiple initial conditions, our approach does not require specifying a set of initialization conditions and thus has better flexibility in dealing with multi-mode tasks. Our proposed framework is also more efficient than its counterparts as it does not require a large number of gradient boosting modules. We evaluate the proposed framework on few-shot learning scenarios for both regression and classification tasks. The experimental results show the strong performance of the proposed framework, which demonstrates the model's ability to learn from very few examples." }, { "heading": "2 RELATED WORK", "text": "Meta-learning has the potential of replicating the human ability to learn new concepts from one or very few instances. It has recently drawn increasing attention, given its broad applicability to different fields (Hospedales et al., 2020). Pioneers in this topic (Finn et al., 2017; Nichol & Schulman, 2018) propose optimization algorithms with learned parameters to automate the exploitation of the structures of learning problems. However, most of them initialize the same set of model parameters for all tasks, which may have different distributions, thus resulting in over-fitting.
Recent studies either model a mixture of multiple initial conditions via probabilistic modeling (Finn et al., 2018; Yoon et al., 2018) or incorporate task-specific knowledge (Lee & Choi, 2018; Alet et al., 2018) to address the above issues. Yoon et al. (2018) and Finn et al. (2018) use variational approximation to enable probabilistic extensions to MAML, but it is unclear how to extend MAML to a wide range of task distributions. Rusu et al. (2019) consider multiple conditions by borrowing the idea of variational autoencoders (Kingma & Welling, 2014), which encodes inputs into a low-dimensional latent embedding space and then decodes the learned latent code to generate task-specific parameters. Another line of research defines a set of initialization modules and incorporates task-specific information to select task-specific modules; this way, it can identify the mode of tasks sampled from a multimodal task distribution and adapt quickly through gradient updates (Vuorio et al., 2019). Yao et al. (2019) propose a Hierarchically Structured Meta-Learning (HSML) framework to perform soft clustering on tasks. HSML first learns the inputs and then obtains clustering results via a hierarchical clustering structure. HSML tailors the globally shared parameter initialization for each cluster via a parameter gate to initialize all tasks within the clusters.
The above approaches have common limitations in 1) requiring sufficient data samples to generalize the task distribution, and thus possibly failing in few-shot cases; 2) being computationally expensive, due to the globally stored initialization modules; and 3) facing challenges in exhaustively listing every possible initial condition.
Two closely related topics to meta-learning are modular approaches (Andreas et al., 2016) and multi-task learning (Zhang & Yang, 2017). Modular approaches are similar to meta-learning in that the input signal gives relatively direct information about a good structural decomposition of the problem. For example, Alet et al. (2018) adopt a modular structure and parameter adaptation method for learning reusable modules. Multi-task learning aims to learn a good shared parameter or to make the parameters for each task as similar as possible (Wang et al., 2020). For example, Zhang et al. (2018) propose two task networks that share the first few layers for generic information before applying different prediction layers to different tasks. These approaches differ from meta-learning in requiring fine-tuning of the models over all training samples and thus cannot adapt well to new tasks.
Our framework, the Meta Gradient Boosting (MGB) neural network, is based on the idea of gradient boosting (Friedman, 2001), which builds a new learner towards the residuals of the previous prediction result at each step. The learner at each step is called a weak learner, and the prediction is based on the summation of the weak learners. Weak learners may vary from traditional decision trees (Chen & Guestrin, 2016) to neural networks (Tannor & Rokach, 2019; Badirli et al., 2020).
Algorithm 1 Training of MGB
1: Randomly initialize global parameter φ
2: while not done do
3:   for T ∈ T do
4:     for (x, y) ∈ D^S do
5:       Initialize f_{θ_0} by θ_0 ← φ
6:       for k ∈ range(K) do
7:         θ ← θ − β∇_θ L(y, F_θ)
8:       end for
9:     end for
10:    Get updated parameter θ̂
11:    for (x, y) ∈ D^Q do
12:      Calculate predictions F_θ̂(x)
13:      Calculate task loss L(y, F_θ̂)
14:    end for
15:  end for
16:  Update φ by φ ← φ − γ∇_φ L_meta
17: end while
Figure 1: Example of the model with only one gradient-boosting module. Green lines are for the local update and red lines are for the global update.
A recent study (Badirli et al., 2020) proposes a general framework for gradient boosting on neural networks, which works for both regression and classification tasks. It uses the deep layers of neural nets as a bagging mechanism in a similar spirit to a random forest classifier (Veit et al., 2016). After only slight tuning, deep neural nets can perform well on a wide range of small real-world datasets (Olson et al., 2018). These findings demonstrate the potential of decomposing deep neural nets into an ensemble of sub-networks, each achieving low training errors. In our framework, we use the first weak learner or the first few weak learners as the base learner for learning the shared initialization parameter across tasks. The output of each weak learner is then aggregated into the inputs of the next step, yielding an end-to-end learning strategy up to the last gradient boosting module. This way, the base learner serves as transferable knowledge, and the gradient boosting modules following it are trained for task-specific predictions.
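To make Algorithm 1 and the cascading forward pass concrete, here is a minimal PyTorch-style sketch. The layer sizes, learning rates, and the first-order outer update are illustrative assumptions rather than the paper's exact implementation.

```python
import copy
import torch
import torch.nn as nn

class WeakLearner(nn.Module):
    # One weak learner f_{theta_k}: returns its hidden features and its output.
    def __init__(self, in_dim, hid=40, out=1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                  nn.Linear(hid, hid), nn.ReLU())
        self.head = nn.Linear(hid, out)

    def forward(self, z):
        h = self.body(z)
        return h, self.head(h)

class MGB(nn.Module):
    # F_theta = alpha_0 f_{theta_0}(x) + sum_k alpha_k f_{theta_k}([h_{k-1}, x]).
    def __init__(self, x_dim, K=2, hid=40, out=1):
        super().__init__()
        self.base = WeakLearner(x_dim, hid, out)
        self.boosts = nn.ModuleList(WeakLearner(x_dim + hid, hid, out)
                                    for _ in range(K))
        self.alpha = nn.Parameter(torch.ones(K + 1))  # learnable boosting rates

    def forward(self, x):
        h, y = self.base(x)
        pred = self.alpha[0] * y
        for k, f in enumerate(self.boosts, start=1):
            h, y = f(torch.cat([h, x], dim=-1))       # cascade on [h_{k-1}, x]
            pred = pred + self.alpha[k] * y
        return pred

def meta_train(phi, tasks, K=5, beta=1e-2, gamma=1e-3, loss=nn.MSELoss()):
    # Algorithm 1 with a first-order outer update (a simplifying assumption).
    meta_opt = torch.optim.Adam(phi.parameters(), lr=gamma)
    for (xs, ys), (xq, yq) in tasks:        # per-task support and query sets
        model = copy.deepcopy(phi)          # theta_0 <- phi
        opt = torch.optim.SGD(model.parameters(), lr=beta)
        for _ in range(K):                  # local update on the support set
            opt.zero_grad()
            loss(model(xs), ys).backward()
            opt.step()
        model.zero_grad()
        loss(model(xq), yq).backward()      # query loss of the adapted model
        meta_opt.zero_grad()
        for p, q in zip(phi.parameters(), model.parameters()):
            p.grad = q.grad                 # first-order gradient transfer
        meta_opt.step()
    return phi
```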
}, { "heading": "3 METHOD", "text": "We explore the problem in the context of supervised learning, where input-output pairs are available in both training and validation sets. Similar to previous meta-optimization based approaches (Finn et al., 2017; Nichol & Schulman, 2018), we assume the tasks are generated from an underlying distribution T ∼ P (D), where D is the meta-dataset, which is either a uni-mode dataset or multi-mode datasets. Given a set of tasks T = {T train, T test}, each task T ∈ T contains a support dataset DS and a query dataset DQ, both representing input-output pairs (x, y). We aim to learn a meta-learnerM to guide the initialization for a target model Fθ so that the target model can quickly adapt and perform well on a given new task. We propose a Meta Gradient Boosting (MGB) framework as the target modelFθ, which consists of several weak learners and can be represented asFθ ∼ ΣKk=0fθk . The first weak learner fθ0 or the first few weak learners are regarded as the base learner for learning the shared information across tasks; the weak learners are gradient boosting modules for capturing task-specific information. The meta learner aims to learn transferable knowledge and provides initialization details for the base learner so that the model can quickly adapt to task-specific predictions with a few gradient-boosting steps. Figure 1 shows an example of our MGB framework under K = 1, where we update the model locally for task-specific predictions and update the meta-learner globally for all tasks." }, { "heading": "3.1 LOCAL LEARNING: TASK-ADAPTIVE UPDATING VIA GRADIENT-BOOSTING", "text": "Gradient boosting machines hold out optimization in the function space (Friedman, 2001). In particular, the target model is an addition Fθ = ΣKk=0αkfθk , where K is the number of adaptions\n(gradient boosts), fθ0 is the first weak learner, which provides initial prediction of the inputs, fθk are the function increments (gradient boosting modules), and αk is the boosting rate. In each step, the new gradient boosting module is formulated in a greedy way. To start, the base-learner fθ0 minimizes a prediction loss L(y, fθ0) to predict the outputs ŷ ← fθ0(x). Then, at gradient boosting step k, the gradient boost module fθk minimizes the loss L(gk, fθk), where gk = −\n∂L(y,Fk−1θ (x)) ∂Fk−1θ (x) ,\nFkθ = Σk∗=0α∗fθ∗ denotes the ensemble of functions at gradient step k, and gk is the negative gradient along with the observed data. Traditional boosting frameworks learn each weak learner greedily; therefore, only parameters of k-th weak learner are updated at boosting step k while all the parameters of previous k − 1 weak learners remain unchanged. This together with the single shared meta parameter make it easy for the model to stuck in a local minimum. The fixed boosting rate αk further aggravates the issue. In response, we construct the gradient boosting neural network in a cascading structure (He et al., 2016). Similar to Badirli et al. (2020), at each step k, we take the concatenation of the inputs x and the hidden layer of the previous weak learner hk−1 = σθk−1(x) as the inputs for the current step gradient boost module, i.e., gk ← fθk([hk−1, x]). 
But our approach differs in optimizing the gradient boosting neural network from a macroscopic view—at each step k, we learn the ensemble of weak learners F_θ^k by minimizing the loss function
arg min_θ L(y, F_θ^k) → arg min_θ L(y, α_0 f_{θ_0}(x) + Σ_{k=1}^{K} α_k f_{θ_k}([h_{k−1}, x])),  (1)
We update the parameters of both the weak learners and the gradient boost modules via back-propagation. Generally, the parameter θ of F_θ is locally updated via θ ← θ − β∇_θ L(y, F_θ), where β is the task learning rate. The boosting rate α_k can be set in various forms—in the simplest case, we can use an increasing or decreasing boosting rate, e.g., α_k = α_{k−1}/c (c is a constant), to decrease or increase the contribution of the base learner. We will discuss model performance under different settings of the boosting rate later. Both the boosting rate and the number of gradient boost modules affect the sharing ability and prediction performance of the base learner. Hochreiter et al. (2001) found that the gradient for the earlier weak learners decays with an increasing number of gradient boost modules. On balance, we use the base learner of our proposed gradient boosting framework as a weak learner and a series of gradient boosting modules as strong learners for a specific task." }, { "heading": "3.2 GLOBAL LEARNING: META-OPTIMIZATION FOR TRANSFERABLE KNOWLEDGE LEARNING", "text": "The learning and updating strategy of the gradient boosting framework ensures a weak base learner. Since the base learner could be the first weak learner or the first few weak learners, we use f_{θ_0} to represent both cases for ease of illustration. We take the meta-optimization approach for initializing the base learner so that the model can provide an initial guess for the prediction based on the transferable knowledge over a wide range of tasks. Specifically, we learn a global sharing parameter φ such that θ_0 ← φ. Similar to other MAML-based approaches (Finn et al., 2017; Lee & Choi, 2018), we minimize the expected loss on the local query set D^Q for tasks T ∈ T^train in meta-optimization. Since the meta-gradient may involve higher-order derivatives, which are computationally expensive for deep neural nets, MAML (Finn et al., 2017) takes one-step gradient descent for meta-optimization. Following the above, we obtain updated model parameters θ̂ after updating the target model F_θ for K steps on the local support set D^S. Global learning aims at minimizing the loss
arg min_φ L_meta → arg min_φ Σ_{T∈T^train} Σ_{(x,y)∈D^Q} L(y, F_θ̂)  (2)
The global sharing parameter φ is updated via φ ← φ − γ∇_φ L_meta, where γ is the global learning rate. The pseudocode of the training process is described in Algorithm 1." }, { "heading": "4 EXPERIMENTS", "text": "We test our proposed framework on few-shot learning scenarios and compare it with three other meta-learning approaches: MAML (Finn et al., 2017), Multimodal Model-Agnostic Meta-Learning (MMAML) (Vuorio et al., 2019), and Meta-learning with Latent Embedding Optimization (LEO) (Rusu et al., 2019). Both MMAML and LEO model a mixture of multiple initial conditions. MMAML modulates its meta-learned prior parameters according to the identified mode to enable more efficient
}, { "heading": "4.1 REGRESSION TASKS", "text": "Setups We adopt the simple regression task with similar settings to Finn et al. (2018). In the 1D regression problem, we assume different tasks correspond to different underlying functions, and we aim to predict the outputs for new tasks by learning on very few samples. To construct a multi-mode task distribution, we set up four functions (sinusoidal, linear, quadratic, and abstract value functions) and treat them as discrete task modes. The function settings are detailed in Appendix A.1. Uni-mode training tasks are generated from a single function. For each task, input-output pairs are generated from the given function with randomly initialized parameters (e.g., the slope for the linear function). We generate multi-mode training tasks by first selecting the function type for each task and then initializing random parameters for the function to generate samples. Similar to the settings in Finn et al. (2017), we use two hidden layers of size 40, followed by ReLU as the activation function.\nLearning from multi-mode distributions (e.g., the above four distributions) is challenging. First, the function values may vary across those distributions. For example, the output of the quadratic function can range from zero to dozens (see Figure 2) while the other three functions are more likely to produce outputs within [-10,10]. Second, the few-shot samples that sit on a line might be generated from non-linear functions, which make it difficult to learn the real modality from such samples. Updating more steps for task-specific models could solve the first challenge yet may cause over-fitting. The second challenge can be mitigated by producing a set of models for task learning. Our proposed MGB can well handle the two challenges by a series of task-specific gradient boost modules and providing flexibility in updating the framework.\nResults We use Mean Absolute Error (MAE) as the evaluation metric to evaluate models’ performance in training on one-mode (sinusoidal function), two-mode (sinusoidal function and linear function) or four-mode (all the four functions) tasks. To ensure a fair comparison, the same basic settings of the network structure are configured for all the compared methods, and the results are learned within certain training epochs. Detailed settings are in A.3. Overall, the proposed MGB framework shows competitive and stable performance on multi-mode regression tasks. The results (shown in Table 1) reveal that it is more difficult to capture task-specific patterns on multi-mode tasks—the MAE is larger when more modes are considered. This makes sense considering the randomness in choosing functions and function parameters for all models. The results also show that incorporating task identities can significantly improve the performance of multi-mode learning. MAML has the highest error in all settings, which suggests that MAML does not perform as well on multi-mode tasks as on uni-mode tasks. Our model shows more stable performance in all settings when compared with LEO and MMAML. Performance comparison of our model with one, two, and five gradient boosting modules (corresponding to MGB-1, MGB-2, and MGB-5, respectively) suggests that the performance improves when more gradient boosting modules are used; however, the effect decreases as the number gradient boosting modules increases." 
}, { "heading": "4.2 CLASSIFICATION TASKS", "text": "Setups We evaluate the proposed framework (Finn et al., 2017) on n-way few-shot image classification tasks. We use four datasets to constitute multi-mode tasks: Omniglot Lake et al. (2011), miniImageNet Ravi & Larochelle (2016), FC100 Oreshkin et al. (2018), and CUB Wah et al. (2011). Details about those datasets can be found in the supplementary material A.2. We follow train-test splitting methods as described in Finn et al. (2017); Vuorio et al. (2019) and train models on tasks with two modes (Omniglot and miniImageNet), three modes (Omniglot, miniImageNet, and FC100), and four modes (all four datasets) tasks. The weak-learner uses CNN modules for learning the image embedding and fully-connected layers for classification. Similar to previous work Finn et al. (2017); Vuorio et al. (2019), we configure each CNN module with 3 × 3 convolutions, followed by 2 × 2 max-pooling and batch normalization (Ioffe & Szegedy, 2015). Since the embedding module can significantly affect classification results (Sun et al., 2019), to ensure a fair comparison, we use the same embedding module for all compared methods. The detailed settings are described in A.3.\nResults We consider 1-shot and 5-shot learning for 5-way classification and 1-shot learning for 20-way classification. We evaluate the performance of our MGB framework with one (MGB-1), two (MGB-2), or five (MGB-5) gradient boosting modules. The resulting performance is shown in table 2. Overall, the proposed MGB performs well on multi-mode tasks. Compared with MMAML, our method works better on most scenarios except on 1-shot 20-way classifications, where MMAML can store more parameters. Similar to the regression tasks, MGB with more gradient boosting modules shows better performance while MGB-1 can make only a slight improvement over MAML because images contain more complex information than real numbers. More modes of tasks increase the performance gap between MAML and the other methods, which suggest the other methods (which consider multiple conditions) can handle multi-mode tasks better than MAML. Under the same experimental settings, i.e., with the same image embedding modules, LEO does not perform well on tasks with more modes, partially because it is largely impacted by the quality of the learned image embedding—first, LEO’s learning strategy (Rusu et al., 2019) pretrains the dataset-specific image embedding (Qiao et al., 2018) before meta-learning; then, LEO uses an encoder-decoder structure to generate parameters for the classifier from the learned embedding." }, { "heading": "5 DISCUSSION", "text": "Our experimental results on both regression and classification tasks suggest our method can adapt to the optimal results with few gradient boosting modules. In this section, we take a further step to discuss i) the configuration of gradient boosting modules and ii) the sharing ability of the base learner during the back-propagation through meta-gradient boosting modules. The results are presented for 5-way 1-shot image classification with 4-mode tasks." }, { "heading": "5.1 CONFIGURATION OF GRADIENT BOOSTING MODULES", "text": "Settings for the weak learner Our MGB framework consists of a series of weak learners, where the first or the first few weak learners serve as the base learner to be shared among tasks. Type and dimension of the weak learner are two key factors that may affect the final results. Vuorio et al. 
(2019) find that LSTM models perform better than linear models in regression tasks. Sun et al. (2019) show that the choice of feature extractor for images strongly correlates with the final results, and that using a pre-trained network or a stronger network structure can improve the results significantly. For example, using ResNet-12 as the feature extractor improves the accuracy by about 6% (Mishra et al., 2018; Oreshkin et al., 2018). Here, we use four convolutional layers for learning from images to ensure a fair comparison. The dimension of the weak learner includes the number of hidden layers and the number of neurons in each layer. Figure 3 (a) shows the performance of the model trained on a 4-mode classification dataset under different settings of the two parameters above. The results show that a deeper model or a larger layer width gives better performance, though the network may need more time to learn with larger widths.
Settings for gradient boosting modules The number of gradient boosting modules and the updating strategy for the gradient boosting modules are two vital factors that may affect the final results. Since the prediction is based on the summation of a series of weak learners, more gradient boosting modules will reduce the contribution of each weak learner. According to the results shown in Figure 3 (b), the model with only one gradient boost module cannot handle multi-mode tasks well; more gradient boost modules help improve the results while taking more time to learn the task information. Traditional gradient boosting greedily updates the framework once at each step, but we can let the newly added weak learner grasp more information from the inputs by updating several times in each step. We can further adjust the contributions of the base learner and the gradient boosting modules to the final results by allowing them to update a different number of times in each step. Intuitively, if we fine-tune the base learner (i.e., update it multiple times), the model may get stuck in a local optimum, which decreases the impact of the gradient boosting modules on the model performance; conversely, if we conduct more updates on the gradient boosting modules, the model may need more training epochs over all training tasks to grasp the general shared information that enables fast adaptation. The results under different settings of the number of updates for the base learner and the gradient boosting modules are shown in Figure 3 (c). We can see that the updating strategy significantly affects model performance, and that performing more update steps on the gradient boosting modules improves the model's robustness. The performance tends to become unstable if we conduct more updates on the base learner (i.e., the yellow line in Figure 3 (c)), an observation that aligns with our analysis above." }, { "heading": "5.2 SHARING ABILITY OF THE BASE LEARNER", "text": "The base learner of our MGB framework is shared across tasks for capturing the general shared knowledge. Three factors may affect the sharing ability of the base learner: 1) which part to share; 2) how to share; and 3) how the shared base learner contributes to MGB. We discuss these three components as follows.
Single weak learner vs. multiple weak learners Instead of choosing a single weak learner as the base learner, we can choose the first few weak learners as the base learner. We present the results of MGB with one, two, or three weak learners as the base learner and one gradient boosting module in Figure 4 (a).
Generally, when more weak learners are used as the base learner (i.e., more weak learners than gradient boost modules), MGB faces difficulties in capturing multi-mode patterns and thus achieves degraded generalization performance.
Static base learner vs. dynamic base learner The base learner is initialized using the global sharing parameter φ. It can be either static (if we keep its parameters unchanged) or dynamic (if we update the base learner during the training of the gradient boost modules). We compare the versions of MGB that use a static base learner and a dynamic base learner, respectively. In both versions, we append one gradient boost module to the base learner and update it multiple times during each step. The results (shown in Figure 4 (b)) reveal that keeping the shared information fixed (i.e., using a static base learner) can improve the stability of the model.
Boosting rate α The boosting rate α is probably the most vital component of the MGB framework because it directly determines the contribution of each weak learner to the final prediction. We test the performance of MGB under various settings of the boosting rate α, where the rate is either decayed (i.e., α_k = α_{k−1}/c, where c is a constant), automatically learned, or equally contributed (i.e., an identical α∗ for all weak learners). The results suggest that using an automatically learned α or an equally contributed α leads to more stable performance, while a decayed α requires more time for task learning. This supports our analysis that our gradient boosting modules help learn task information." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose a novel direction for solving the problem faced by previous meta-optimization approaches, i.e., using the same initialization for diverse tasks. We present a meta gradient boosting framework that contains a series of weak learners to make predictions, using a base learner to grasp shared information across all tasks. We have conducted extensive experiments to evaluate the model's ability to capture meta-information and task-specific information on regression and classification tasks. Our experimental results show the effectiveness of our framework in learning multi-mode tasks. Our results also reveal the necessity of selecting the weak learner carefully according to the task type. For example, CNNs outperform simple fully connected (FC) layers on image classification problems, while FC layers can perform better on regression tasks. In future work, we will extend our framework by considering multi-modal problems, e.g., learning from images, text, and numerical values, and study how to choose appropriate weak learners for specific applications." } ]
2,020
null
SP:2969ff98eb93abe37242a962df458541311090ff
[ "The paper explores adversarial robustness in a new setting of test-time adaptation. It shows this new problem of “test-time-adapted adversarial robustness” is strictly weaker than the “traditional adversarial robustness” when assuming the training data is available for the “test-time-adapted adversarial robustness”. The gap between the two problems is demonstrated by the simple DANN solution which has good “test-time-adapted adversarial robustness” but bad “traditional adversarial robustness”. The paper also explores the subcase of “test-time-adapted adversarial robustness” when assuming the training data is not available and provide some initial result. " ]
This paper studies test-time adaptation in the context of adversarial robustness. We formulate an adversarial threat model for test-time adaptation, where the defender may have a unique advantage as the adversarial game becomes a maximin game, instead of a minimax game as in the classic adversarial robustness threat model. We then study whether the maximin threat model admits more “good solutions” than the minimax threat model, and is thus strictly weaker. For this purpose, we first present a provable separation between the two threat models in a natural Gaussian data model. For deep learning, while we do not have a proof, we propose a candidate, Domain Adversarial Neural Networks (DANN), an algorithm designed for unsupervised domain adaptation, by showing that it provides nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive attacks. This is somewhat surprising since DANN is not designed specifically for adversarial robustness (e.g., against norm-based attacks), and provides no robustness in the minimax model. Complementing these results, we show that recent data-oblivious test-time adaptations can be easily attacked even with simple transfer attacks. We conclude the paper with various future directions of studying adversarially robust test-time adaptation.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David A. Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman" ], "title": "Learning bounds for domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "John C. Duchi", "Percy Liang" ], "title": "Unlabeled data improves adversarial robustness", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "John C Duchi", "Percy S Liang" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nicolas Courty", "Rémi Flamary", "Amaury Habrard", "Alain Rakotomamonjy" ], "title": "Joint distribution optimal transportation for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Constantinos Daskalakis", "Stratis Skoulakis", "Manolis Zampetakis" ], "title": "The complexity of constrained min-max optimization, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor S. Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "In Gabriela Csurka (ed.), Domain Adaptation in Computer Vision Applications, Advances in Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Shafi Goldwasser", "Adam Tauman Kalai", "Yael Tauman Kalai", "Omar Montasser" ], "title": "Beyond perturbations: Learning guarantees with arbitrary adversarial test examples", "venue": "CoRR, abs/2007.05145,", "year": 2020 }, { "authors": [ "Ian J. Goodfellow" ], "title": "Defense against the dark arts: An overview of adversarial example security research and future research directions", "venue": "CoRR, abs/1806.04169,", "year": 2018 }, { "authors": [ "Ian J. Goodfellow" ], "title": "A research agenda: Dynamic models to defend against correlated attacks", "venue": "CoRR, abs/1903.06293,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Thorsten Joachims" ], "title": "Transductive inference for text classification using support vector machines", "venue": "Proceedings of the Sixteenth International Conference on Machine Learning (ICML", "year": 1999 }, { "authors": [ "Guy Katz", "Clark W. Barrett", "David L. Dill", "Kyle Julian", "Mykel J. 
Kochenderfer" ], "title": "Reluplex: An efficient SMT solver for verifying deep neural networks", "venue": "Computer Aided Verification - 29th International Conference,", "year": 2017 }, { "authors": [ "Daniel Kifer", "Shai Ben-David", "Johannes Gehrke" ], "title": "Detecting change in data streams", "venue": "In VLDB,", "year": 2004 }, { "authors": [ "Mingsheng Long", "Jianmin Wang", "Guiguang Ding", "Jiaguang Sun", "Philip S Yu" ], "title": "Transfer joint matching for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Mingsheng Long", "Yue Cao", "Jianmin Wang", "Michael I Jordan" ], "title": "Learning transferable features with deep adaptation networks", "venue": "arXiv preprint arXiv:1502.02791,", "year": 2015 }, { "authors": [ "Jonathan Lorraine", "David Duvenaud" ], "title": "Stochastic hyperparameter optimization through hypernetworks", "venue": "CoRR, abs/1802.09419,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Multiple source adaptation and the rényi divergence", "venue": "In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence,", "year": 2009 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: A regularization method for supervised and semi-supervised learning", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2019 }, { "authors": [ "Zachary Nado", "Shreyas Padhy", "D Sculley", "Alexander D’Amour", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Evaluating prediction-time batch normalization for robustness under covariate shift", "venue": "arXiv preprint arXiv:2006.10963,", "year": 2020 }, { "authors": [ "Zhongyi Pei", "Zhangjie Cao", "Mingsheng Long", "Jianmin Wang" ], "title": "Multi-adversarial domain adaptation", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ludwig Schmidt", "Shibani Santurkar", "Dimitris Tsipras", "Kunal Talwar", "Aleksander Madry" ], "title": "Adversarially robust generalization requires more data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jian Shen", "Yanru Qu", "Weinan Zhang", "Yong Yu" ], "title": "Wasserstein distance guided representation learning for domain adaptation", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Rui Shu", "Hung H. Bui", "Hirokazu Narui", "Stefano Ermon" ], "title": "A DIRT-T approach to unsupervised domain adaptation", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John C. Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yu Sun", "Xiaolong Wang", "Liu Zhuang", "John Miller", "Moritz Hardt", "Alexei A. 
Efros" ], "title": "Test-time training with self-supervision for generalization under distribution shifts", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Jonathan Uesato", "Jean-Baptiste Alayrac", "Po-Sen Huang", "Alhussein Fawzi", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Are labels required for improving adversarial robustness", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Vladimir Vapnik" ], "title": "Statistical learning theory", "venue": "ISBN", "year": 1998 }, { "authors": [ "Dequan Wang", "Evan Shelhamer", "Shaoteng Liu", "Bruno A. Olshausen", "Trevor Darrell" ], "title": "Fully test-time adaptation by entropy minimization", "venue": null, "year": 2006 }, { "authors": [ "Han Zhao", "Shanghang Zhang", "Guanhang Wu", "José MF Moura", "Joao P Costeira", "Geoffrey J Gordon" ], "title": "Adversarial multiple source domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Sinha" ], "title": "By far a standard approach is to do adversarial training directly for the attack type Madry et al", "venue": "Carmon et al. (2019a); Uesato et al", "year": 2018 }, { "authors": [ "Sun" ], "title": "The recent proposals of test-time", "venue": null, "year": 2020 }, { "authors": [ "H-divergence Kifer" ], "title": "That theoretical framework is the basis for a line of methods that uses adversarial training with neural networks to learn representations that are indistinguishable between source and target domain, in particular domain adversarial neural network (DANN", "venue": "Ajakan et al", "year": 2010 }, { "authors": [ "Long" ], "title": "2018), and Rényi divergence Mansour et al", "venue": null, "year": 2014 }, { "authors": [ "Ganin" ], "title": "2017) for more details. We remark that this objective with DANN gives evidence that: (1) Defenses using test-time adaptation are significantly different from test-time defenses discussed in Athalye et al. (2018)", "venue": null, "year": 2018 }, { "authors": [ "return Uk" ], "title": "Solving the bilevel optimization", "venue": null, "year": 2018 }, { "authors": [ "Ganin" ], "title": "2017) with slight modifications (e.g. adding batch normalization and dropout layers). For CIFAR10, we use preactivation ResNets from He et al. (2016) for the prediction branch of DANN, and for the domain prediction branch we use the architecture from Sun et al. (2020) (convolution layer)", "venue": "Models. For MNIST,", "year": 2020 }, { "authors": [ "Schmidt" ], "title": "The only difference of our setting from their setting is that we additionally have unlabeled data U for the algorithm", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "There is a surge of interest to study test-time adaptation to help generalization to unseen domains (e.g., recent work by Sun et al. (2020); Wang et al. (2020); Nado et al. (2020)). At the high level, a generic test-time adaptation can be modeled as an algorithm Γ which accepts an (optional) labeled training dataset D, an (optional) model F trained on D (usually used as a starting point), and an unlabeled test feature set U , outputs a model F̃ = Γ(F,D,U), in order to achieve high test accuracy on U . For large test set U , test-time adaptation can be viewed as a form of transductive learning (Joachims (1999); Vapnik (1998)) (i.e., using D,U to train a model to predict specific instances in U ), which is argued to be easier than more traditional inductive learning.\nThis paper studies test-time adaptation in the context of adversarial robustness (i.e., there is an active agent who tries to fool the test-time adaptation by perturbing the input so that F̃ gives wrong predictions). There are several motivations in pursuing this direction. First, this question is of practical interest: Many practical ML pipelines run in a batch mode1, where they first collect a set of unlabelled data points, and then send them to a model (e.g. Nado et al. (2020)). In such cases, data in the batch may have been adversarially perturbed, and it is a natural question whether we can leverage the large batch size and test-time adaptation to enhance adversarial robustness. Second, from a purely theoretical point of view, since test-time adaptation is a form of transductive learning, it is intriguing to ask whether transductive adversarial learning can be easier, given that traditional adversarial robustness is formulated in the inductive learning setting (e.g. Madry et al. (2018)). To this end, a recent work by Goldwasser et al. (2020) shows that, with transductive learning, one can achieve nontrivial guarantees for classes of bounded VC dimension with arbitrary train and test distributions. The current work complements their paper in the setting of deep learning.\nTo study these questions, we formalize a threat model, which we call (test-time) maximin threat model, for the adversarial robustness of test-time adaptation. Recall that the classic adversarial\n1For example,Instagram collects a large batch of photos before sending them to a model to tag people.\nrobustness game is a minimax game minF EV [maxṼ L(F, Ṽ )], where V is random sampled data, Ṽ is the perturbed data generated from V by the adversary, and L(F, Ṽ ) is the loss of the model F on Ṽ . By contrast, in the maximin threat model, we allow V to be sampled from a different domain, and the game is maximin: EV [maxU minF̃ L(F̃ , Ṽ )] (where U is the perturbed features of V , subject to the attack type, and Ṽ is the labeled perturbed data, see Definition 2). By the maximin inequality, it follows that this threat model is no harder than the minimax model (to allow source and target domains to be different, we need to generalize the classic minimax model, see Definition 3).\nWe then move on to the focus of this work: Whether the maximin threat model is “strictly weaker” than the minimax threat model. 
We note that any good defender solution (a robust model) in the minimax game induces a good defender solution in the maximin game (an adaptation algorithm that outputs that robust model); thus, intuitively, the good defender solutions of the minimax model are a subset of the good defender solutions of the maximin threat model. We ask whether such a containment is proper: That is, whether there exists a defender solution that is good in the maximin threat model, but is bad in the minimax threat model. The existence of such a defender would demonstrate that the maximin threat model admits more good solutions. Besides theoretical interest, this question is also of practical importance since these “new” solutions may possess desirable properties that good solutions in the minimax threat model may lack. For example, one such property is that the defender solution is attack agnostic (Goodfellow (2018) (pp.30)): That is, the solution does not directly optimize the performance measure for a particular type of perturbation2.
To this end, we first present a provable separation between the maximin and minimax threat models in a natural Gaussian data model. In fact, the separation holds even when U only contains a single point, indicating the power of transductive learning. We then move to deep learning. While we do not have provable guarantees, we empirically examine Domain Adversarial Neural Networks (DANN) (Ganin et al. (2017)), an algorithm designed for unsupervised domain adaptation (UDA), as a candidate for the separation. Specifically, we demonstrate that DANN provides nontrivial test-time adversarial robustness against both transfer attacks and adaptive attacks, in both homogeneous and inhomogeneous cases. This is somewhat surprising, as DANN is attack agnostic, as we mentioned above, and has not been considered for adversarial robustness. Not surprisingly, as we hypothesized for a separation, the accuracy becomes very low when evaluating F̃ in the minimax model.
Complementing the above result, we explore the maximin robustness of recent data-oblivious adaptation algorithms (namely, adaptation algorithms that do not use D, but just the pretrained model F and the unlabeled test set U). Specifically, we consider Test-Time Training (TTT) by Sun et al. (2020)3. We show that TTT can be easily attacked using simple transfer attacks. While this is not surprising, as the authors of Sun et al. (2020) have cautioned that TTT is not designed for adversarial robustness, the situation is in sharp contrast to our results with DANN.
The rest of the paper is organized as follows: Section 2 presents the setup. In Section 3 we define threat models. In Section 4 we present theoretical results about the separation, and examine DANN as a candidate separation in deep learning. Finally, Section 5 explores the maximin robustness of oblivious test-time adaptation, and concludes the paper with future directions." }, { "heading": "2 PRELIMINARIES", "text": "Let F be a model. For a data point (x, y) ∈ X × Y, a loss function ℓ(F; x, y) gives the loss of F on x given the true label y. Let V be a set of labeled data points. We use the notation L(F, V) = (1/|V|) Σ_{(x,y)∈V} ℓ(F; x, y) to denote the empirical loss of F on V. For example, if we use the binary loss ℓ_{0,1}(F; x, y) = 1[F(x) ≠ y], this gives the test error of F on V. We use the notation V|_X to denote the projection of V onto its features, that is {(x_i, y_i)}_{i=1}^m ↦ {x_1, . . . , x_m}. Threat model for classic adversarial robustness.
To formulate the threat model for test-time adaptation, we first present a threat model for the classic adversarial robustness. Although the classic adversarial robustness can be written down succinctly as a minimax objective, namely minF E_{(x,y)∼(X,Y)} [max_{x′∈N(x)} ℓ(F; x′, y)] (N(x) is a neighborhood function of x, determined by the attack type), a threat model formulation will help us develop more nuanced models.
2Another consideration, which is beyond the scope of this paper, is the computational feasibility of finding a good solution, given the hardness of minimax optimization Katz et al. (2017); Daskalakis et al. (2020).
3While TTT does not use training data D at the test time, it has a special self-training component, and the joint architecture is a Y-structure. A more domain agnostic approach is discussed in Wang et al. (2020).
Definition 1 (Threat model for classic adversarial robustness). Attacker and defender agree on a particular attack type. Attacker is an algorithm A, and defender is a supervised learning algorithm T ." }, { "heading": "Before game starts", "text": "• A (labeled) training set D is sampled i.i.d. from (X, Y)." }, { "heading": "Training time", "text": "• (Defender) Train a model F on D as F = T(D)." }, { "heading": "Test time", "text": "• A (labeled) natural test set V is sampled i.i.d. from (X, Y). • (Attacker) On input F, D, and V, A perturbs each point (x, y) ∈ V to (x′, y) (subject to the agreed attack type), giving Ṽ = A(F, D, V)." }, { "heading": "Evaluation:", "text": "Evaluate the test loss of F on Ṽ, L(F, Ṽ). Attacker's goal is to maximize the test loss, while the defender's goal is to minimize the test loss.
We stress that the i.i.d. sampling of V is important (which is also present in the expectation in the minimax objective): Otherwise an attacker can pick any point that fools F and repeat it arbitrarily many times. (We refer readers to Goodfellow (2019) for more discussions along this line.)
Notations for models and attacks. In this paper we mainly use PGD attacks (Projected Gradient Descent attacks) with norm-based perturbations Madry et al. (2018). For example, given a model F, we use the notation PGD(F, V) to denote PGD attacks against F, on data V (the attack type is specified in the context). We adopt the following notations:
• T : a target model trained on the labeled target data V.
• AdvT : an adversarially trained target model using the labeled target data V.
• S : a source model trained on the labeled source data D.
• AdvS : an adversarially trained source model using the labeled source data D.
• PGD(·, ·) : PGD attacks on a model and data; for example, PGD(AdvT, V) means to use PGD attacks on the model AdvT and data V.
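For concreteness, a minimal sketch of an ℓ∞ PGD attack, in the sense of the PGD(·, ·) notation above, is given next. This is an illustrative PyTorch implementation, not the exact code used in our experiments; the radius eps, step size alpha, and step count are placeholder values.

import torch

def pgd_linf(F, x, y, eps=8/255, alpha=2/255, steps=20):
    # Illustrative l_inf PGD: ascend the loss of model F around x,
    # projecting back into the eps-ball after every step.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(F(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project to the l_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                 # keep a valid input range
    return x_adv.detach()

In the notation above, PGD(F, V) corresponds to applying pgd_linf to every (x, y) in V.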
Test-time defenses and BPDA. Various previous works have investigated test-time defenses where a pretrained model is fixed and there is a “preprocessing procedure” to preprocess an input before sending it to the model. Several such defenses were described and attacked in Athalye et al. (2018), by the BPDA technique (Backward Pass Differentiable Approximation). While syntactically one can fit these defenses into our framework, they only form some very special cases of our framework, which reuse a fixed pretrained model and focus on input sanitization. As we will show later in the paper, for both our provable separation and our deep learning results, the adaptation algorithms train new models (beyond sanitizing inputs), and theoretically attacking these adaptations becomes a bilevel optimization. In these cases, it is unclear how to apply BPDA, and indeed it is an intriguing direction to further study attacking unsupervised domain adaptation algorithms, such as DANN." }, { "heading": "3 THREAT MODELS", "text": "" }, { "heading": "3.1 TEST-TIME MAXIMIN THREAT MODEL", "text": "The intuition behind the test-time maximin threat model is as follows: After we receive the adversarially perturbed data U to classify (at test time), the defender trains a model based on U, and we evaluate the test accuracy only for U (i.e., for different test sets U we may have different models and different test accuracies). This perspective of training a model using labeled data D and unlabeled data U, and only testing on U, is essentially the transductive learning of Vapnik (1998) (however, we consider it in an adversarial setting). Formally, we have the following definition:
Definition 2 (Maximin threat model for test-time adaptation). Fix an adversarial perturbation type. We have a source domain (Xs, Ys), and a target domain (Xt, Yt). Attacker is an algorithm A, and defender is a pair of algorithms (T, Γ), where T is a supervised learning algorithm, and Γ is a test-time adaptation algorithm." }, { "heading": "Before game starts", "text": "• A (labeled) training set D is sampled i.i.d. from (Xs, Ys)." }, { "heading": "Training time", "text": "• (Defender, optional) Train a model F on D as F = T(D)." }, { "heading": "Test time", "text": "• A (labeled) natural test set V is sampled i.i.d. from (Xt, Yt); V is sent to the attacker. • (Attacker) Produce an unlabeled dataset U as follows:
1. On input Γ, F, D, and V, A perturbs each point (x, y) ∈ V to (x′, y) (subject to the agreed attack type), giving Ṽ = A(Γ, F, D, V).
2. Send U = Ṽ|X (that is, the feature vectors of Ṽ) to the defender.
• (Defender) Produce an adapted model as F̃ = Γ(F, D, U).
Evaluation: Evaluate the test loss of F̃ on Ṽ, L(F̃, Ṽ).
On the adversary: While the adversary knows the adaptation algorithm, Γ may use some private randomness, such as random initialization. Since the defender proceeds after the attacker to apply Γ, this private randomness is assumed not to be known by the adversary.
On notations: If there is no need for a pretrained model, we use the notation Γ(⊥, ·, ·).
Modeling differences. (1) The first obvious distinction is that the source domain may differ from the target domain. (2) This is called a test-time threat model since F̃ is trained based on U = Ṽ|X, and the test error is only evaluated on Ṽ (the perturbed test set). This is in sharp contrast with the classic minimax threat model, where the model F is trained on D, which is independent of V and hence of Ṽ. (3) This threat model enables us to study whether a large test set (i.e. large |U|) can benefit a defender for adversarial robustness. This is in sharp contrast with the classic minimax threat model, where the attacker is granted much power, and can pick points in a one-by-one, or “online”, fashion (i.e., sample a point from nature, perturb it, and send it to the defender). (4) This is called a maximin threat model since the adversary must move first to submit the challenge set U, and the defender then moves based on it. In fact, this can be written as a maximin game EV [maxU minF̃ test err(F̃, Ṽ)], different from the minimax game in the classic threat model.
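One round of this game can be summarized in code. The following schematic Python sketch of Definition 2 uses train, attack, and adapt as stand-ins for T, A, and Γ; all three are placeholders rather than concrete algorithms.

def maximin_round(train, attack, adapt, D, V):
    # Schematic of one round of the maximin game of Definition 2.
    # train  : supervised learner T, maps D to a pretrained model F (optional)
    # attack : adversary A, maps (adapt, F, D, V) to a perturbed labeled set
    # adapt  : test-time adaptation Gamma, maps (F, D, U) to an adapted model
    F = train(D)                         # training time (optional pretraining)
    V_tilde = attack(adapt, F, D, V)     # attacker moves first, per attack type
    U = [x for (x, y) in V_tilde]        # defender only sees the features
    F_tilde = adapt(F, D, U)             # defender adapts after the attacker
    # evaluate the 0-1 test loss of the adapted model on the perturbed set
    return sum(F_tilde(x) != y for (x, y) in V_tilde) / len(V_tilde)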
Same security goal. However, in the case where these two domains coincide, the maximin threat model is still nontrivial. In fact, from the security perspective, both threat models are about the same end-to-end security goal: Correct predictions on adversarially perturbed data.
Attacking a model vs. Attacking an algorithm. In the classic threat model, a model F is fixed after training, and at test time the adversary attacks the model. However, in the maximin threat model, the adversary must attack an adaptation algorithm Γ.
Restrictions of the threat model. Astute readers may realize that we have intentionally left the definition of the test-time adaptation Γ with a lot of freedom. For example, Γ can leverage the labeled training data D, which may be practically unavailable or computationally infeasible for test-time computations. We consider the following possible restrictions:
Homogeneity. If the source and target domains are equal, we call it the homogeneous maximin model. This setting is directly related to the classic threat model (Definition 1), where we have a single domain (X, Y), but an adversary can perturb the test instances. From a security perspective, the maximin threat model gives a relaxed threat model, but the same end-to-end security guarantees (i.e., correct predictions against adversarially perturbed input).
Data-obliviousness. In this case, the adaptation algorithm Γ cannot leverage the labeled source data D. In other words, the agent can only leverage the pretrained model F and the unlabeled data set U." }, { "heading": "3.2 ADVERSARIAL SEMI-SUPERVISED MINIMAX THREAT MODEL", "text": "An unsatisfying aspect of the theory thus far is that the classic minimax threat model is only defined in the homogeneous case, but the maximin threat model can be inhomogeneous, which makes the direct comparison difficult. To bridge the gap, we provide a generalization of the classic minimax threat model that lies between the classic minimax threat model (Definition 1) and the test-time maximin threat model (Definition 2).
Definition 3 (Adversarial semi-supervised minimax threat model). Fix an adversarial perturbation type. We have a source domain (Xs, Ys), and a target domain (Xt, Yt). The attacker is a pair of algorithms (A0, A1), and the defender is a pair of algorithms (T, Γ), where T is a supervised learning algorithm, and Γ is a semi-supervised learning algorithm." }, { "heading": "Before game starts", "text": "• A training set D is sampled i.i.d. from (Xs, Ys). • A semi-supervision set V is sampled i.i.d. from (Xt, Yt); V is sent to the attacker." }, { "heading": "Training time", "text": "• (Defender, optional) Train a model F on D as F = T(D). • (Attacker) Produce an unlabeled dataset U as follows:
1. On input Γ, F, D, and V, A0 perturbs each point (x, y) ∈ V to (x′, y) (subject to the agreed attack type), giving Ṽ = A0(Γ, F, D, V). 2. Send U = Ṽ|X (that is, the feature vectors of Ṽ) to the defender.
• (Defender) Produce a final model F̃ = Γ(F, D, U)." }, { "heading": "Test time", "text": "• A (labeled) natural test set V′ is sampled i.i.d. from (Xt, Yt). • (Attacker) On input F̃, D, and V′, A1 perturbs each point (x, y) ∈ V′ to (x′, y) (subject to the agreed attack type), giving Ṽ′ = A1(F̃, D, V′).
Evaluation: Evaluate the test loss of F̃ on Ṽ′, L(F̃, Ṽ′).
The goal of the adversary (A0, A1) is to (jointly) maximize L(F̃, Ṽ′).
Modeling differences. (1) This threat model is semi-supervised, because the defender (learner) receives an unlabeled data set U from the target domain. Note that the procedure to produce F̃ at the training time of this threat model is exactly the same as the procedure to produce F̃ in the test-time maximin threat model. The key difference is that, once F̃ is trained, we evaluate it on Ṽ′, which is the adversarially perturbed data on an independently sampled V′. This is thus inductive learning, and a minimax game. (2) This threat model is adversarial, because the attacker can adversarially perturb the clean semi-supervision set V to produce U.
Classic minimax model as a special case. The classic minimax threat model is a special case of this threat model, obtained by setting the source and target domains equal, and choosing a trivial Γ: Γ(F, D, U) = F = T(D). We list several other specializations of this threat model in Appendix B.3. Therefore, without loss of generality, for the rest of the paper, by minimax model we mean the adversarial semi-supervised minimax threat model.
Γ: One algorithm, two interpretations. We note that for an algorithm Γ : (F, D, U) ↦ F̃, one can now have two interpretations: In the maximin threat model, we interpret it as an adaptation algorithm, because we are only going to evaluate F̃ on Ṽ; in the minimax threat model, we interpret it as a semi-supervised learning algorithm, because we are going to evaluate F̃ on unseen points Ṽ′." }, { "heading": "4 SEPARATING MAXIMIN AND MINIMAX", "text": "We now move on to the focus of this work: Is the maximin threat model “strictly weaker” than the minimax threat model?" }, { "heading": "4.1 VALUATION OF THE GAMES", "text": "Proposition 1 (Homogeneous maximin vs. Classic minimax threat model). Let k ≥ 1 be a natural number, and F be the hypothesis class. For a given V, the domain of Ṽ is a well-defined function of V (e.g., an ℓ∞ ball around V). We have that: E_{V∼(X,Y)^k} [maxU min_{F̃∈F} {L(F̃, Ṽ)}] ≤ min_{F̃∈F} E_{V∼(X,Y)^k} [maxṼ {L(F̃, Ṽ)}].
The proof (A.1) does not rely on the homogeneity condition, and holds verbatim for the more general semi-supervised threat model.
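The inequality in Proposition 1 is an instance of the standard maximin inequality; a toy numerical check on a two-action game illustrates it (the payoff matrix below is hypothetical, not data from our experiments).

import numpy as np

# Toy payoff L[u, f]: attacker picks a row u, defender picks a column f.
L = np.array([[0.9, 0.1],
              [0.2, 0.8]])
maximin = max(L.min(axis=1))   # attacker commits first, defender responds
minimax = min(L.max(axis=0))   # defender commits first, attacker responds
assert maximin <= minimax      # here 0.2 <= 0.8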
We also note that, in fact, if the concept class has unbounded VC dimension, then good models always exist that can fit both D and V perfectly. So the valuation of the maximin game is actually always 0:
Proposition 2 (Good models exist with large capacity). Consider binary classification tasks and suppose that the hypothesis class F has infinite VC dimension. Then the valuation of the maximin game E_{V∼(X,Y)^k} [maxU min_{F̃∈F} {L(F̃, Ṽ)}] is 0. That is, perfect models always exist to fit U.
This thus gives a first piece of evidence that transductive adversarial learning is strictly easier. We remark that transductive learning here is essential (different models are allowed for different U). We conclude this section by noting the following:
Proposition 3 (Good minimax solution is also a good maximin solution). Suppose T∗ is a supervised learning algorithm which trains a model F∗ = T∗(D), where its adversarial gain in the adversarial semi-supervised minimax model is bounded by κ (i.e. E_{V′} [maxṼ′ L(F∗, Ṽ′)] ≤ κ). Then in the maximin threat model, the adversarial gain of the strategy (T∗, Γ∗), where Γ∗(F∗, D, U) = F∗ = T∗(D), is also upper bounded by κ.
However, clearly, the valuation of the game does not answer a key question: whether there is a “real” adaptation algorithm, which can only leverage unlabeled data U, that separates the two threat models. In other words:
Is there a Γ such that: • As a test-time adaptation algorithm in the maximin model, it provides robustness, but • As a learning algorithm in the minimax model, the model it produces has no robustness.
The existence of such a Γ would provide a separation between the minimax and the maximin threat models, and indicate that the maximin threat model admits more good solutions. Besides its theoretical interest, this question is of practical importance since these “new” solutions may possess desirable properties that good solutions in the minimax threat model may lack. For the rest of this section, we consider one such desirable property: that the defender solution is attack agnostic (Goodfellow (2018) (pp.30)). That is, the defender strategy is not to directly optimize the performance measure (e.g., knowing that the attack is an ℓ∞-norm attack and directly training for it)." }, { "heading": "4.2 PROVABLE SEPARATION OF THE MINIMAX AND MAXIMIN MODELS", "text": "In this section we provide a problem instance (i.e., data distributions and number of data points) and prove that the maximin threat model is strictly easier than the minimax threat model for the problem: In the minimax model no algorithm can achieve a nontrivial error, while in the maximin model there are algorithms achieving small errors. Since the maximin model is no harder than the minimax model for all problem instances and there is a problem instance where the former is strictly easier, we thus formally establish a separation between the two models. Furthermore, the problem instance we considered is on Gaussian data. The fact that maximin is already strictly easier than minimax in this simple problem provides positive support for potentially the same phenomenon on more complicated data.
Data distributions and the learning task. We consider the homogeneous case (the source and target are the same distribution) and the ℓ∞ attack. We consider the Gaussian data model from Schmidt et al. (2018); Carmon et al. (2019b): a binary classification task where X = R^d and Y = {+1, −1}, y uniform on Y and x|y ∼ N(yµ, σ²I) for a vector µ ∈ R^d and coordinate noise variance σ² > 0. In words, this is a mixture of two Gaussians, one with label +1, and one with label −1. We will consider the following parameters. First, fix an integer n0 > 0 and an ε ∈ (0, 1/2), then set the following parameter values:
‖µ‖₂² = d, σ² = √(d·n0). (1)
For both threat models, the datasets are D = {(x_i, y_i)}_{i=1}^n and V = {(x, y)}. In particular, V only has one data point. In the maximin threat model, we let x′ denote the perturbed input obtained from x by the ℓ∞ attack with bounded norm ε > 0, i.e., x′ = x + ν with ‖ν‖∞ ≤ ε. Put Ṽ = {(x′, y)} and U = {x′}.
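For intuition, the data model and one feasible (non-adaptive) perturbation can be sampled as follows. This NumPy sketch is purely illustrative: the theorem's adversary may choose ν adaptively, and the choice mu = all-ones is just an arbitrary vector satisfying ‖µ‖₂² = d.

import numpy as np

def sample_gaussian_model(n, d, n0, seed=0):
    # y uniform on {+1, -1}; x | y ~ N(y*mu, sigma^2 I), with
    # ||mu||_2^2 = d and sigma^2 = sqrt(d * n0) as in parameter setting (1).
    rng = np.random.default_rng(seed)
    mu = np.ones(d)                      # any vector with squared norm d
    sigma = (d * n0) ** 0.25             # so that sigma^2 = sqrt(d * n0)
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu[None, :] + sigma * rng.standard_normal((n, d))
    return x, y

def linf_perturb(x, eps, seed=1):
    # One feasible perturbation nu with ||nu||_inf <= eps (not the worst case).
    rng = np.random.default_rng(seed)
    return x + eps * np.sign(rng.standard_normal(x.shape))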
We prove the following:
Theorem 1 (Separation of Maximin and Minimax). Let K be a (sufficiently large) constant (used to control the classification error). Consider the Gaussian data model above with n0 ≥ K, √(d/n0) ≥ 32K log d / ε², and 2Kn0 ≤ n ≤ n0 · 2^{√(d/n0)} / (16 log d) (the second condition holds for all sufficiently large d, which then determines all n that fall into the range). Then the following are true.
(1) In the semi-supervised minimax threat model, the learned model F̃ = Γ(T(D), D, U) by any algorithms T and Γ must have a large error: E{L(F̃, Ṽ′)} ≥ (1/2)(1 − d⁻¹), where the expectation is over the randomness of D, V and possible algorithm randomness. (2) In the maximin threat model, there exist attack-agnostic algorithms T and Γ such that for some absolute constant c > 0, the adapted model F̃ = Γ(T(D), D, U) has a small error: E{L(F̃, Ṽ)} ≤ e^{−cK}." }, { "heading": "4.3 SEPARATION IN DEEP LEARNING: A CANDIDATE ALGORITHM OF DANN", "text": "We now consider deep learning. While we are not able to prove the existence of such a defender solution in this setting, we present a somewhat surprising connection between transductive adversarial robustness and unsupervised domain adaptation. In particular, we propose Domain Adversarial Neural Networks (DANN) as a candidate algorithm for the separation, and provide empirical evidence that it provides the desired separation. The experiment design is as follows:
(1) We use the DANN algorithm with random initialization (RI), that is, DANN(RI, ·, ·), as a test-time adaptation. There are several motivations for choosing DANN: (A) DANN fits our framework as it is designed for unsupervised domain adaptation. (B) DANN is, however, not designed for adversarial robustness. Thus it is a very interesting question whether DANN can provide test-time adversarial robustness against attacks (e.g. norm-based attacks) that it is not specifically designed for. (C) The DANN algorithm leverages source data D, which could benefit the maximin robustness.
(2) We generate adversarially perturbed Ṽ, and check whether DANN can provide robustness.
(3) Note that DANN(RI, ·, ·) can be viewed as a semi-supervised learning algorithm in the adversarial semi-supervised minimax threat model. Therefore, we check whether the adapted model F̃ = DANN(RI, D, U) is robust in the minimax model.
(4) If (2) and (3) show that DANN(RI, ·, ·) is significantly more robust in the test-time maximin threat model than in the minimax model, then the experiment goal is achieved.
Attacks. We use ℓ∞ attacks in this section (note that DANN is not designed for ℓ∞ attacks). We report more results about ℓ2 attacks in Appendix E.3. We consider two categories of attacks:
Transfer attacks. Transfer attacks are a common class of attacks, where we transfer attacks on one model to the target model (in our context, produced by a maximin defender strategy). In this paper we will mainly apply PGD attacks on adversarially trained models to generate transfer attacks.
Adaptive attacks. We also consider adaptive attacks4, where an adversary can leverage the knowledge of the adaptation algorithm. To this end, we notice that test-time adaptation is typically an optimization objective Ltta(F̃, F, D, U), which gives rise to the following bilevel optimization objective for the attacker:
maximize_{Ṽ∈N(V)} L(F̃∗, F, D, Ṽ) subject to: F̃∗ ∈ argmin_{F̃} Ltta(F̃, F, D, U = Ṽ|X) (2)
To solve this bilevel optimization, we generalize the work of Lorraine & Duvenaud (2018) to an algorithmic framework, called the Fixed Point Alternating Method (Algorithms 1 and 2). We consider two instantiations in this section: (1) FPAM^L_LDANN, which is a standard instantiation, where the outer objective is L(F̃, Ṽ), and (2) J-FPAM^LDANN_LDANN, where the outer objective is the exact same DANN objective LDANN.
This is also a natural instantiation where the adversary sets out to fail the DANN optimization. Note that in this case one can naturally apply the more traditional alternating optimization J-FPAM, because the objective becomes a more traditional minimax objective.
4Please refer to Section D for derivations.
Datasets. For the homogeneous case, we consider MNIST and CIFAR10 (i.e., for both source and target domains). The homogeneous case represents the security problem considered in the classic threat model. For the inhomogeneous case, we consider MNIST→MNIST-M (Ganin et al., 2017), and CIFAR10→CIFAR10c-fog (Hendrycks & Dietterich, 2019). MNIST-M is a standard dataset in unsupervised domain adaptation, and CIFAR10c is a recent benchmark for evaluating neural network robustness against common corruptions and perturbations. For CIFAR10c-fog, all 5 levels are combined and we perform an 80-20 train-test split, which gives a training set of size 40000 and a test set of size 10000. (For space reasons, we combine all corruption levels for experiments in this section; results where we separately study different levels are reported in Section E.2.)
Models. For MNIST, we use the original DANN architecture from Ganin et al. (2017) with slight modifications (e.g. adding batch normalization and dropout layers). For CIFAR10, we use preactivation ResNets from He et al. (2016) for the prediction branch of DANN, and for the domain prediction branch we use the architecture from Sun et al. (2020) (convolution layer). This architecture is slightly different from the typical DANN architecture used for MNIST, which assumes vectorized features and a fully connected domain prediction branch.
Experiment results. (A) As a sanity check first, we evaluate the accuracy of the adapted DANN models in the minimax threat model. Not surprisingly, the accuracy becomes very low (close to 0%), which shows that DANN provides no robustness in the minimax threat model. (B) Then, in the maximin threat model, Table 1 summarizes the results under transfer attacks. In the homogeneous case, the adversarial accuracy DANN provided in the maximin model is comparable to the adversarial accuracy an adversarially trained model provided in the minimax model. And the adversarial accuracy becomes significantly higher in the inhomogeneous case (compared to using an adversarially trained source model). (C) In the maximin threat model, Table 2 summarizes the results under FPAM^L_LDANN attacks. Similar to the transfer attack case, DANN provides noticeable robustness. Note that since the defender moves after the attacker, he or she always applies adaptation “one more time”. (D) In the maximin threat model, Table 3 summarizes the results under J-FPAM^LDANN_LDANN attacks. This is by far our most effective attack against DANN, which decreases the robustness to ∼40% in the homogeneous case on CIFAR10, and to ∼30% in the inhomogeneous case on CIFAR10→CIFAR10c-fog. Nevertheless, DANN still provides nontrivial robustness, and thus provides positive evidence for our hypothesis that DANN separates the maximin and minimax threat models. (E) Finally, Figure 1 gives the robustness results under different target sizes. As we can see, the robustness of DANN degrades as the target size decreases, confirming our intuition that a large target size benefits test-time robustness.
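To make the adaptation step concrete, the following is a minimal, illustrative PyTorch sketch of one DANN update as used for test-time adaptation: a classification loss on labeled source data plus a domain confusion loss, via gradient reversal, between source features and the (possibly perturbed) unlabeled target features U. The modules phi (feature extractor), f (classifier), and disc (domain discriminator) are placeholders; this sketches the objective and is not our exact experimental code.

import torch
import torch.nn.functional as Fn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, z):
        return z.view_as(z)
    @staticmethod
    def backward(ctx, grad):
        return grad.neg()  # reversed gradient makes phi confuse the discriminator

def dann_step(phi, f, disc, opt, xs, ys, xu):
    # One step on L(f o phi, D) + d(phi(D), phi(U)) from the DANN objective.
    opt.zero_grad()
    cls_loss = Fn.cross_entropy(f(phi(xs)), ys)          # supervised source loss
    feats = torch.cat([phi(xs), phi(xu)], dim=0)
    dom_y = torch.cat([torch.zeros(len(xs)), torch.ones(len(xu))]).long()
    dom_loss = Fn.cross_entropy(disc(GradReverse.apply(feats)), dom_y.to(feats.device))
    (cls_loss + dom_loss).backward()
    opt.step()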
" }, { "heading": "5 ROBUSTNESS OF OBLIVIOUS ADAPTATION AND FUTURE DIRECTIONS", "text": "We briefly explore the adversarial robustness of the recent data-oblivious test-time adaptation algorithms. Specifically, we focus on the TTT algorithm by Sun et al. (2020). Recall that a test-time adaptation algorithm is data-oblivious if it does not use the labeled training data D at test time. The TTT algorithm uses a specially trained (with self-training) pretrained model, which we denote as PrTTT. Similar to our DANN experiment, we conduct experiments on CIFAR10→CIFAR10c-fog: (1) We found that transfer attacks against PrTTT can already break the TTT algorithm (close to 0% accuracy). While this is not surprising as the authors of Sun et al. (2020) have cautioned that TTT is not designed for adversarial robustness, this is in sharp contrast to our results with DANN. (2) We then use an adversarially trained model as the pretrained model for TTT. We found that this indeed increases the maximin robustness of TTT. However, the robustness is roughly the same (about 1% difference) as directly using the pretrained model. This indicates that the robustness mainly comes from the adversarially trained model, and the TTT algorithm provides no robustness. This is again in sharp contrast to DANN. We list several implications as future directions:
• Is transductive adversarial deep learning easier? Both the current work and the work of Goldwasser et al. (2020) have given indications that adversarial robustness may be easier in the transductive learning setting. To this end, we ask: Can we devise more effective algorithms against DANN, or can we prove its robustness in the maximin threat model? Either confirming (theoretically, for example) or refuting the maximin robustness of DANN is intriguing. To make this concrete, we propose the following (informal, qualitative) conjecture: Conjecture 1 (Transductive robustness of DANN (informal, qualitative)). If DANN can successfully adapt from a source domain (Xs, Ys) to (Xt, Yt) (no adversary), then DANN can provide test-time maximin robustness for bounded norm attacks on (Xt, Yt).
Specifically, for a successful refutation, one must provide two (natural) domains where DANN can successfully adapt from the source to the target, but DANN fails to provide nontrivial transductive robustness for bounded norm attacks on the target domain.
• The surprising power of distribution matching for robustness. It is somewhat surprising that a distribution matching algorithm, such as DANN, can provide adversarial robustness (albeit in a weaker threat model), even when the target domain is already a corrupted domain such as CIFAR10c-fog. This means that for practical applications where we run an ML model in a batch mode on some collected unlabeled data, one can consider first applying UDA to produce an adapted model and then classifying the batch, and this may provide adversarial robustness.
• Does robust test-time adaptation always need to access source data? We note that while DANN provides maximin robustness, it needs to access the labeled training data D at test time, and while TTT does not need D, it provides no maximin robustness. Can we achieve the best of both worlds and get adversarially robust oblivious test-time adaptation algorithms?" }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PROOF OF PROPOSITION 1", "text": "Let F be the family of models F̃ we can choose from. From the maximin inequality, we have that
max_U min_{F̃∈F} {L(F̃, Ṽ)} ≤ min_{F̃∈F} max_{Ṽ} {L(F̃, Ṽ)}.
Note that for the minimax, the max over Ṽ is also constrained to perturb features (as we want to find adversarial examples).
If we take the expectation over V, we then have
E_V [max_U min_{F̃∈F} {L(F̃, Ṽ)}] ≤ E_V [min_{F̃∈F} max_{Ṽ} {L(F̃, Ṽ)}].
Note that
E_V [min_{F̃∈F} max_{Ṽ} {L(F̃, Ṽ)}] ≤ min_{F̃∈F} E_V [max_{Ṽ} {L(F̃, Ṽ)}],
which completes the proof." }, { "heading": "A.2 PROOF OF PROPOSITION 3", "text": "In the maximin threat model, let us take the strategy (T∗, Γ∗) as defined. Therefore the maximin adversarial gain is
E_V [max_{Ṽ} L(Γ∗(T∗(D), D, Ṽ|X), Ṽ)] = E_V [max_{Ṽ} L(T∗(D), Ṽ)] = E_V [max_{Ṽ} L(F∗, Ṽ)],
which, by assumption, is bounded by κ." }, { "heading": "B MORE ON THREAT MODELS", "text": "" }, { "heading": "B.1 ON THE MODELING", "text": "Why is the adversary constrained to attack each point in Definition 2 and Definition 3? This is set up as the rule of the threat model because, if not, an adversary can pick any point that fails the model and repeat it an arbitrary number of times, which trivializes the model. In fact, such a one-by-one attack is implicit in the original minimax objective, and is also the common practice of evaluating adversarial robustness (i.e., a typical evaluation is to take a test set, attack the points one by one, and then evaluate adversarial accuracy). Our threat model formulation makes this hidden assumption explicit. We refer readers to Goodfellow (2019) for more discussion if one wants to break the i.i.d. sampling assumption.
Why, in the minimax modeling, do we not assume that A in Definition 1, or A1 in Definition 3, knows the training algorithms? For example, T is not in the input of A in Definition 1. This is the case because the innermost expression L(F, Ṽ) only depends on F, and F is fixed after training. Therefore, bringing in the training algorithm gives no additional information (note that this is also the typical practice of evaluating adversarial robustness in the classic setting, where we just attack the model). By contrast, this is not the case for the maximin threat model, because the attacker moves first and the model is not fixed yet; so in that case, a strong white-box attacker may need to take into consideration the adaptation algorithm, not just the pretrained model.
Why do we need to have two algorithms A0, A1 in Definition 3? A0 attacks the semi-supervised learning algorithm, and thus has Γ as input. On the other hand, A1 attacks a model, so neither Γ nor T is in the input. Therefore, these two algorithms are different and are separated in the definition. Note that the joint goal of A0, A1 is to fail the final model on Ṽ′, so A0 can be totally different from A1 in order to fail the training. In this respect, A0 is like data poisoning the unlabeled data set in semi-supervised learning. Unlike data poisoning, though, we have the constraint that it must attack each point in V.
There are two algorithms A0, A1 in Definition 3; wouldn't A0 help the defender because it leaks information about the target distribution? We have shown that the defender can choose to ignore the unlabeled data set, and this degenerates to the classic threat model in the homogeneous case. Otherwise, the semi-supervised learning setting may indeed help the defender, because information about the target domain (or more unlabeled data if we are in the homogeneous case) is leaked to the defender. This is, however, exactly why in these cases this threat model lies between the classic threat model and the maximin threat model. In the maximin threat model, the attacker leaks U to the defender, and the defender is only required to predict U.
In the semi-supervised threat model, things become more challenging, because we finally want to evaluate the model on independent samples.
In the experiments, we essentially instantiated the semi-supervised threat model, for example, as (A0 = FPA, A1 = PGD). Is that too weak? A strong adversary of course may want to use a strong attack A0 to fail the training, and thus fail the final model, and choosing FPA may indeed be a weak choice. However, we note that our goal in Step (3) of the experiment design is to show that we can break the adapted model, and (FPA, PGD) already fulfills this purpose. As another note, in the maximin threat model, currently FPA is the most natural adaptive attack we can think of. Exploring stronger attacks is an intriguing future direction." }, { "heading": "B.2 MORE EXTENSIONS AND RESTRICTIONS FOR MAXIMIN THREAT MODEL", "text": "Singleton. In the standard model, we allow U to be a batch which consists of many points. In the singleton restriction, U is restricted to have only one point (for example, the test-time training algorithm by Sun et al. (2020) works in this setting).
Definition 4 (Online version). In the online maximin model, we assume that we will sequentially receive batches of test samples, one after another, and we allow Γ to leverage the previous samples received. We assume, for simplicity in this work however, that the batches are sampled independently from each other.
For the online maximin model, data obliviousness means that the agent cannot directly leverage the previous unlabeled samples he or she received. However, the agent can save the previous model parameters and use those." }, { "heading": "B.3 SPECIALIZATION OF THE ADVERSARIAL SEMI-SUPERVISED MINIMAX THREAT MODEL", "text": "Adversarial training considered in Miyato et al. (2019); Shu et al. (2018). The setting becomes a special case of our general minimax threat model by:
• Instantiating A0 as a trivial adversary that does not do any perturbation. That is, the defender receives a clean set U of unlabeled features.
• The defender adaptation strategy does adversarial training on the unlabeled data, given by
d(p(y|x), p(y|x + r∗, θ)) where r∗ = argmax_{r:‖r‖≤ε} d(p(y|x), p(y|x + r, θ)).
Unsupervised Adversarial Training (Uesato et al. (2019)). This is similar to the above except that the source and target domains are equal." }, { "heading": "C RELATED WORK", "text": "Adversarial robustness. Adversarial robustness of deep learning has received significant attention in recent years. By far, the standard approach is to do adversarial training directly for the attack type Madry et al. (2018); Sinha et al. (2018). Several recent works have studied adversarial training in a semi-supervised setting Carmon et al. (2019a); Uesato et al. (2019). As we have discussed, these works can be viewed as instantiations of the adversarial semi-supervised minimax threat model. Our work can be viewed as going one step further, studying adversarial robustness in the transductive setting for deep learning.
Test-time adaptation and transductive learning. Transductive learning is a long line of research (for example, see the classic work of the Transductive Support Vector Machine Joachims (1999)), where one attempts to leverage unlabeled data U to predict on specific instances of U Vapnik (1998). On the other hand, transductive learning has not received much attention in the deep learning setting. The recent proposals of test-time adaptation Sun et al. (2020); Wang et al. (2020); Nado et al. (2020) seem to be a reincarnation of this idea, with the additional twist of leveraging a pretrained model.
Our work considers test-time adaptation in an adversarial deep learning setting, which, to the best of our knowledge, is new.
Unsupervised Domain Adaptation. A classic approach for analyzing domain adaptation is based on H-divergence Kifer et al. (2004); Blitzer et al. (2008); Ben-David et al. (2010). That theoretical framework is the basis for a line of methods that uses adversarial training with neural networks to learn representations that are indistinguishable between the source and target domain, in particular the domain adversarial neural network (DANN) Ajakan et al. (2014); Ganin et al. (2017) and related techniques Pei et al. (2018); Zhao et al. (2018). Other approaches used different divergence notions, such as MMD Long et al. (2014; 2015), Wasserstein distance Courty et al. (2017); Shen et al. (2018), and Rényi divergence Mansour et al. (2009)." }, { "heading": "D ADAPTIVE ATTACKS AND BILEVEL OPTIMIZATION", "text": "In this section we present details of our considerations of adaptive attacks in the test-time adaptation setting. To start with, in the maximin threat model, for a fixed D (labeled training set) and V (labeled test set), the game is maxU minF̃ L(F̃, Ṽ) (the adversary can only modify features, namely U, without modifying the labels). We focus on norm-bounded attacks, even though the test-time defense strategy can be agnostic to the attack type (i.e., the adaptation algorithm does not explicitly leverage the attack type information). More specifically, the constraint on the adversary is that he can only perturb the feature of each point in V using a norm-based perturbation: Namely, for each (x, y) the adversary can generate (x′, y) where ‖x − x′‖ ≤ ε. We use the notation N(V) to denote the neighborhood of V which includes all such legitimate perturbed datasets.
In our settings, the adaptation algorithm Γ is typically an optimization objective (which revises the model; we will demonstrate an instantiation using DANN below). We assume that this objective is Ltta(F̃, F, D, U), where F̃ is the model we want to solve for by optimization, F is a (fixed) pretrained model (which can be ⊥ if a pretrained model is not needed, see the DANN instantiation below), D is the labeled data set from the source domain, and U = Ṽ|X is the test features (of Ṽ) from the target domain. Under these conditions, the maximin game can then be written as a bilevel optimization as follows:
maximize_{Ṽ∈N(V)} L(F̃∗, Ṽ) subject to: F̃∗ ∈ argmin_{F̃} Ltta(F̃, F, D, U = Ṽ|X) (3)
To incorporate more objectives, such as that the adversary can attack the inner minimization as a special case, we generalize the loss function of the outer optimization to the form L(F̃, F, D, V) (note that, compared to Ltta, we allow labeled data V for the attacker). This gives rise to the following objective (we have specifically generalized the outer maximization objective):
maximize_{Ṽ∈N(V)} L(F̃∗, F, D, Ṽ) subject to: F̃∗ ∈ argmin_{F̃} Ltta(F̃, F, D, U = Ṽ|X) (4)
In words, the outer optimization is the adversarial game where the adversary tries to find Ṽ to fool the defense, while the inner minimization solves the test-time adaptation objective Ltta, leveraging the information of U = Ṽ|X, in order to derive a revised model F̃∗ for predictions.
Example: an objective of the maximin game with DANN as a defender.
If the inner test-time defense is DANN, then we know that F̃ can be written as f ∘ φ, and we can write the following bilevel optimization objective, where the inner minimization is the DANN objective Ganin et al. (2017):
maximize_{Ṽ∈N(V)} L(F̃, Ṽ) subject to: F̃ ∈ argmin_{φ,f} { L(f ∘ φ, D) + d(φ(D), φ(Ṽ|X)) },
where d is a distance function, which is realized as a domain discriminator (network). We refer readers to Ganin et al. (2017) for more details. We remark that this objective with DANN gives evidence that: (1) Defenses using test-time adaptation are significantly different from the test-time defenses discussed in Athalye et al. (2018). (2) The game is harder for the adversary because he or she needs to solve a harder optimization problem. To this end, we note that the test-time defenses discussed in Athalye et al. (2018) do not amount to bilevel optimizations. This is because those defenses are merely about sanitizing an input x given a fixed pretrained model, which is known to the adversary at attack time and can thus be differentiated. On the other hand, the use of DANN as a test-time adaptation algorithm makes bilevel optimization essential to the adversarial game.
Algorithm 1 FIXED POINT ALTERNATING METHOD FPAM^L_Ltta [k, F]
Require: A training dataset D, a natural test set V, an (optional) pretrained model F for test-time adaptation, and an integer parameter k ≥ 0 (the number of rounds).
1: If the pretrained model F equals ⊥, set U0 = V|X. Otherwise, attack the pretrained model F on V, by fixing F and solving the objective max_{Ṽ∈N(V)} L(F, Ṽ) (i.e., the standard test loss of F), to get adversarially perturbed examples V0. Set U0 = V0|X.
2: for i = 1, 2, . . . , k do
3: Solve the inner minimization objective Fi = argmin_{F̃} Ltta(F̃, F, D, Ui−1).
4: Solve the outer maximization objective Vi = argmax_{Ṽ∈N(Vi−1)} L(Fi, F, D, Ṽ). Set Ui = Vi|X.
5: end for
6: return Uk.
Solving the bilevel optimization (4). To solve the bilevel optimization, we propose two algorithmic frameworks: the Fixed Point Alternating Method (FPAM, Algorithm 1) and the Joint Fixed Point Alternating Method (J-FPAM, Algorithm 2). These two algorithms generalize the work of Lorraine & Duvenaud (2018) (specifically, their Algorithm 2, “optimization of hypernetwork, then hyperparameters”) to solve (4). Specifically, the joint optimization can be effective in the case where the outer and inner objectives are similar to each other, in which case the objective degenerates to a more traditional minimax objective for optimization.
Algorithm 2 JOINT FIXED POINT ALTERNATING METHOD J-FPAM^L_Ltta [k, F]
Require: A training dataset D, a natural test set V, an (optional) pretrained model F for test-time adaptation, and an integer parameter k ≥ 0 (the number of rounds).
1: If the pretrained model F equals ⊥, set U0 = V|X. Otherwise, attack the pretrained model F on V, by fixing F and solving the objective max_{Ṽ∈N(V)} L(F, Ṽ) (i.e., the standard test loss of F), to get adversarially perturbed examples V0. Set U0 = V0|X.
2: for i = 1, 2, . . . , k do
3: for minibatch VB ⊂ V do
4: Perform PGD on the outer maximization objective: Ṽ = argmax_{Ṽ∈N(Vi−1)} L(F̃, F, D, Ṽ).
5: Set Ũ = Ṽ|X.
6: Perform an SGD step on the inner minimization objective: F̃ = F̃ − α∇Ltta(F̃, F, D, Ũ).
7: end for
8: end for
9: return Uk.
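The alternating structure of Algorithm 1 can be summarized by the following Python sketch. Here inner_solve and outer_attack are placeholders: inner_solve approximately solves argmin_F̃ Ltta(F̃, F, D, U) (line 3), and outer_attack approximately solves the outer maximization within the neighborhood N(·) (line 4), e.g. by PGD.

def fpam(D, V, F, inner_solve, outer_attack, k):
    # Schematic of Algorithm 1 (FPAM); not the exact experimental code.
    if F is None:                         # F = "bottom": no pretrained model
        V_i = V
    else:                                 # line 1: attack the pretrained model
        V_i = outer_attack(F, V)
    U_i = [x for (x, y) in V_i]
    for _ in range(k):
        F_i = inner_solve(F, D, U_i)      # line 3: fit the adaptation objective
        V_i = outer_attack(F_i, V_i)      # line 4: perturb within N(V_{i-1})
        U_i = [x for (x, y) in V_i]
    return U_i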
Instantiations of FPAM^L_Ltta with DANN. We now instantiate this framework by considering Ltta as the DANN objective (with random initialization). In this case, there is no pretrained model F, so the inner test-time adaptation objective simplifies to LDANN(F̃, D, U). We consider two instantiations of the outer maximization:
FPAM^L_LDANN: Outer objective is L(F̃, Ṽ). This is the most standard instantiation, where for the outer maximization, the adversary directly searches for Ṽ to maximize the test loss of F̃ on Ṽ.
J-FPAM^LDANN_LDANN: Outer objective is LDANN, the same DANN objective. This is a natural and standard instantiation where the adversary uses the same DANN objective for both the inner and outer objectives. In this case, J-FPAM can be naturally applied as an alternating optimization based solver." }, { "heading": "E MORE EXPERIMENTS", "text": "" }, { "heading": "E.1 BASELINES: ACCURACY OF ADVERSARIALLY TRAINED MODELS.", "text": "See Table 4 for the results." }, { "heading": "E.2 TRANSFER ATTACKS ON DIFFERENT CORRUPTION LEVELS", "text": "See Table 5 for the results." }, { "heading": "E.3 TRANSFER ATTACKS WITH ℓ2 ATTACKS", "text": "See Table 6 for the results.
Plots of fixed point attacks. See Figure 2 for the results." }, { "heading": "E.4 DETAILS ON EXPERIMENT", "text": "Models. For MNIST, we use the original DANN architecture from Ganin et al. (2017) with slight modifications (e.g. adding batch normalization and dropout layers). For CIFAR10, we use preactivation ResNets from He et al. (2016) for the prediction branch of DANN, and for the domain prediction branch we use the architecture from Sun et al. (2020) (convolution layer). This architecture is slightly different from the typical DANN architecture used for MNIST, which assumes vectorized features and a fully connected domain prediction branch.
Training Details. For our CIFAR10 and CIFAR10c experiments, we obtain both the baseline standard pretrained model and the adversarial pretrained model via the following optimization scheme: SGD with 150 epochs, a multi-step learning rate [0.1, 0.01, 0.001] with milestones at epochs [75, 125], momentum 0.9, weight decay 0.0005, and batch size 128. The baseline adversarial pretrained model is trained with 7-step PGD. In evaluating the test-time robustness for transfer attacks, we generate the adversarial samples with 20-step PGD. In evaluating the test-time robustness for adaptive attacks, we evaluate with 7-step PGD.
For our MNIST and MNIST-M experiments, we obtain both the baseline standard pretrained model and the adversarial pretrained model via the following optimization scheme: ADAM with 100 epochs, learning rate 3 × 10⁻⁴ and batch size 128. The baseline adversarial pretrained model is trained with 40-step PGD. In evaluating the test-time robustness for transfer attacks, we generate the adversarial samples with 100-step PGD. In evaluating the test-time robustness for adaptive attacks, we evaluate with 40-step PGD.
The optimization scheme for DANN during test-time adaptation (both transfer and adaptive attack experiments) is as follows: ADAM with learning rate 0.004, batch size 128.
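As an illustration, the CIFAR10 pretraining schedule described above corresponds to the following PyTorch configuration. This is an illustrative translation of the stated hyperparameters, not our exact experimental script.

import torch

def make_cifar_optimizer(model):
    # SGD, lr 0.1 -> 0.01 -> 0.001 at epochs 75 and 125, momentum 0.9,
    # weight decay 0.0005, as described in the training details above.
    opt = torch.optim.SGD(model.parameters(), lr=0.1,
                          momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[75, 125], gamma=0.1)
    return opt, sched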
" }, { "heading": "F PROOF OF THEOREM 1", "text": "We prove this theorem by a series of lemmas.
Lemma 1 (Part (1)). In the semi-supervised minimax threat model, the learned model F̃ = Γ(T(D), D, U), by any algorithms T and Γ, must have a large error:
E{L(F̃, Ṽ′)} ≥ (1/2)(1 − d⁻¹), (5)
where the expectation is over the randomness of D, V and possible algorithm randomness.
Proof. This follows from Theorem 1 in Carmon et al. (2019b) (or equivalently Theorem 6 in Schmidt et al. (2018)). The only difference of our setting from their setting is that we additionally have unlabeled data U for the algorithm. Since the attacker can provide x′ = x, the problem reduces to a problem with at most n + 1 data points in their setting, and thus the statement (1) follows.
Transductive learning algorithms (T, Γ): To prove the statement (2), we give concrete learning algorithms that achieve a small test error on x′.
High-level structure of the learning algorithms. At the high level, the learning algorithms work as follows: At the training time we use part of the training data (denoted D2) to train a pretrained model θ̄, and part of the training data (denoted D1) is reserved for test-time adaptation. Then, at the test time, upon receiving U, we use U to tune θ̄, and get two large-margin classifiers, θ̄+ and θ̄−, which classify x′ as +1 and −1, respectively. Finally, we check these two large-margin classifiers on D1 (that's where D1 is used), and the one that generates the smaller error wins and we classify x′ into the winner class.
Detailed description. More specifically, the learning algorithms (T, Γ) work as follows:
1. Before game starts. Let m′ = Kn0, m = 10n0. We split the training set D into two subsets: D1 := {(x_i, y_i)}_{i=1}^{m′} and D2 := {(x_{m′+i}, y_{m′+i})}_{i=1}^m. D2 will be used to train a pretrained model at the training time, and D1 will be used at the test time for adaptation.
2. Training time. T uses the second part D2 to compute a pretrained model, that is, a parameter vector:
θ̂m = (1/m) Σ_{i=1}^m y_{m′+i} x_{m′+i}, θ̄ = θ̂m / ‖θ̂m‖₂. (6)
3. Test time. On input U, Γ uses D1 and U to perform adaptation. At the high level, it adapts the pre-trained θ̄ along the direction of x′, such that it also has a large margin on x′, and also makes correct predictions on D1 with large margins. More specifically:
(a) First, Γ constructs two classifiers, θ+ and θ−, such that θ+ classifies x′ to be +1 with a large margin, and θ− classifies x′ to be −1 with a large margin. Specifically:
x̄′ := x′ / ‖x′‖₂, γ := ‖x′‖₂ / 2, (7)
η+ := (γ − θ̄⊤x′) / ‖x′‖₂, θ+ = θ̄ + η+x̄′, θ̄+ = θ+ / ‖θ+‖₂, (8)
η− := (−γ − θ̄⊤x′) / ‖x′‖₂, θ− = θ̄ + η−x̄′, θ̄− = θ− / ‖θ−‖₂, (9)
where θ+ and θ− are viewed as the parameter vectors of linear classifiers. Note that θ+ is constructed such that θ+⊤x′ / ‖x′‖₂ = γ / ‖x′‖₂ = 1/2, and θ− is such that θ−⊤x′ / ‖x′‖₂ = −γ / ‖x′‖₂ = −1/2.
(b) Finally, Γ checks their large-margin errors on D1. Formally, let
t := σ (√(n0/d) + n0/m)^{−1/2}, (10)
errt(θ) := E_{(x,y)} I[yθ⊤x ≤ t], (11)
êrrt(θ) := (1/m′) Σ_{i=1}^{m′} I[y_i θ⊤x_i ≤ t]. (12)
If êrrt(θ̄+) ≤ êrrt(θ̄−), then Γ sets F̃(x) := sgn(θ̄+⊤x) and classifies x′ to +1; otherwise, it sets F̃(x) := sgn(θ̄−⊤x) and classifies x′ to −1.
Lemma 2 (Part (2)). In the maximin threat model, there is an absolute constant c > 0, such that for the T and Γ described above, the adapted model F̃ = Γ(T(D), D, U) has a small error:
E{L(F̃, Ṽ)} ≤ e^{−cK}. (13)
Proof. Now, we have specified the algorithms and are ready to prove that w.h.p. F̃(x′) is the correct label y. By Lemma 3, y(errt(θ̄−) − errt(θ̄+)) ≥ c4/√n0 with probability ≥ 1 − e^{−c4K}. Then by Hoeffding's inequality, D1 is sufficiently large to ensure y(êrrt(θ̄−) − êrrt(θ̄+)) > 0 with probability ≥ 1 − 2e^{−c4²K/2}. This proves the statement (2).
Lemma 3. There exists an absolute constant c4 > 0 such that with probability ≥ 1 − e^{−c4K},
y(errt(θ̄−) − errt(θ̄+)) ≥ c4/√n0. (14)" }, { "heading": "F.1 PROOF OF LEMMA 3", "text": "Without loss of generality, assume y = +1.
The proof for y = −1 follows the same argument. Note that\nerrt(θ) = E(x,y)I[yθ>x ≤ t] (15) = P ( N (µ>θ, σ2‖θ‖22) ≤ t ) (16)\n= Q ( µ>θ − t σ‖θ‖2 ) , (17)\nwhere\nQ(x) := 1√ 2π ∫ +∞ x e−t 2/2dt. (18)\nFirst, consider θ̄.\nerrt(θ̄) = Q ( µ>θ̄ − t σ‖θ̄‖2 ) = Q (s) , where s := µ>θ̄ − t σ . (19)\nBy Lemma 4, we have with probability ≥ 1− e−c2(d/n0)1/4 min{m,(d/n0)1/4},\nµ>θ̄ σ‖θ̄‖2 ≤ (√ n0 d + n0 m )−1/2( 1 + c1 (n0 d )1/8) , (20)\nµ>θ̄ σ‖θ̄‖2 ≥ (√ n0 d + n0 m )−1/2( 1− c1 (n0 d )1/8) , (21)\nwhich gives\ns = µ>θ̄ − t σ‖θ̄‖2 ≤ c1 (n0 d )1/8(√n0 d + n0 m )−1/2 , (22)\ns = µ>θ̄ − t σ‖θ̄‖2 ≥ −c1 (n0 d )1/8(√n0 d + n0 m )−1/2 . (23)\nSince m = 10n0 and d n0, we have |s| = ∣∣∣∣µ>θ̄ − tσ‖θ̄‖2 ∣∣∣∣ ≤ 1. (24) Next, we have\nerrt(θ+) = Q ( µ>θ̄+ − t σ‖θ̄+‖2 ) = Q (s+) , where s+ := µ>θ̄+ − t σ , (25)\nerrt(θ−) = Q ( µ>θ̄− − t σ‖θ̄−‖2 ) = Q (s−) , where s− := µ>θ̄− − t σ . (26)\nWe now check the sizes of s+ and s−.\ns+ − s = µ>θ̄+ − t\nσ − µ >θ̄ − t σ\n(27)\n= µ>θ̄+ − µ>θ̄\nσ (28)\n= 1 σ‖θ+‖2 ( (1− ‖θ+‖2)µ>θ̄ + η+µ>x̄′ ) . (29)\nThen by definition and bounds in Claim 1,\n|s+ − s| ≤ 2\nn0 + 40 ≤ 42. (30)\nSince |s| is bounded by 1, we know |s+| is also bounded by 43. Similarly, |s− − s| and thus |s−| are also bounded by some constants. Furthermore,\ns+ − s− = 1\nσ\n( µ>θ̄+ − µ>θ̄− ) (31)\n= 1\nσ\n( µ>θ̄ + η+µ >x̄′\n‖θ+‖2 − µ\n>θ̄ + η−µ >x̄′\n‖θ−‖2\n) . (32)\nBy Claim 2, we have ‖θ−‖2 = ‖θ+‖2. So\ns+ − s− = 1 σ‖θ+‖2 ( η+µ >x̄′ − η−µ>x̄′ )\n(33)\n= 1\nσ‖θ+‖2 (η+ − η−)µ>x̄′ (34)\n= 1\nσ‖θ+‖2 µ>x̄′ (35)\n≥ √ d\n4σ2 (36)\n= 1\n4 √ n0 . (37)\nNow we are ready to bound the error difference:\nerrt(θ̄−)− errt(θ̄+) = Q(s−)−Q(s+) (38)\n= 1√ 2π ∫ s+ s− e−t 2/2dt (39)\n≥ 1√ 2π (s− − s+)×min{e−s 2 −/2, e−s 2 +/2} (40) ≥ c4√ n0 (41)\nfor some absolute constant c4 > 0." }, { "heading": "F.2 TOOLS", "text": "Claim 1. There exists a absolute constant c3 > 0, such that with probability ≥ 1− e−c3K ,\nσ √ d/4 ≤ ‖x′‖2 ≤ 2σ √ d, (42)\n1 2 σ ≤ 1 4 σ\n√ m\nn0 ≤ θ̄>µ ≤ 2σ\n√ m\nn0 ≤ 10σ, (43)\n− √ d/2 ≤ θ̄>x′ ≤ 2 √ d, (44)\nd/2 ≤ µ>x′ ≤ 3d/2, (45) 1\n2 − 8 σ ≤ η+ ≤ 1 2 + 8 σ , (46)\n−1 2 − 8 σ ≤ η− ≤ − 1 2 + 8 σ . (47)\nProof. First, since x′ = µ+ σζ + ν for ζ ∼ N (0, I), with probability ≥ 1− e−c′d for an absolute constant c′ > 0, we have:\n√ d/2 ≤ ‖ζ‖2 ≤ 3 √ d/2, (48) ‖x′‖2 ≥ σ √ d/2− ‖µ‖2 − ‖ν‖2 ≥ σ √ d/4, (49) ‖x′‖2 ≤ σ3 √ d/2 + ‖µ‖2 + ‖ν‖2 ≤ 2σ √ d. (50)\nBy Lemma 4, with probability ≥ 1− e−c2K ,\nθ̄>µ ≤ 2σ (√\nn0 d + n0 m\n)−1/2 ≤ 2σ √ m\nn0 , (51)\nθ̄>µ ≥ 1 2 σ (√ n0 d + n0 m )−1/2 ≥ σ 4 √ m n0 . (52)\nAlso, with probability 1− e−c′K ,\n|θ̄>ζ| ≤ 2Kσ. (53)\nFinally,\n|θ̄>ν| ≤ ‖θ̄‖1‖ν‖∞ ≤ √ d. (54)\nThen\nθ̄>x′ = θ̄>(µ+ σζ + ν) (55)\n≤ |θ̄>µ|+ σ|θ̄>ζ|+ |θ̄>ν| (56) ≤ 2σ √ m\nn0 + 2Kσ +\n√ d (57)\n≤ 2 √ d. (58)\nand\nθ̄>x′ = θ̄>(µ+ σζ + ν) (59)\n≥ σ/2−Kσ − √ d (60) ≥ − √ d/2. (61)\nFor µ>x′, we have with probability ≥ 1− e−c′K ,\nµ>x′ = µ>(µ+ σζ + ν) (62) µ>x′ ≤ ‖µ‖22 + 2Kσ‖µ‖2 + ‖µ‖2 √ d ≤ 3d/2, (63) µ>x′ ≥ ‖µ‖22 − 2Kσ‖µ‖2 − ‖µ‖2 √ d ≥ d/2. (64)\nBy definition:\nη+ = 1\n2 − θ̄>x′/‖x′‖2, (65)\nso 1\n2 − 8 /σ ≤ η+ ≤\n1 2 + 8 /σ. (66)\nSimilarly,\n−1 2 − 8 /σ ≤ η− ≤ − 1 2 + 8 /σ. (67)\nClaim 2.\n‖θ+‖2 = ‖θ−‖2. (68)\nProof. We have by definition:\n‖θ−‖22 = ‖θ̄ + η−x̄′‖22 (69) = 1 + η2− + 2η−θ̄ >x̄′, (70)\n‖θ+‖22 = ‖θ̄ + η+x̄′‖22 (71) = 1 + η2+ + 2η+θ̄ >x̄′. (72)\nThen\n‖θ−‖22 − ‖θ+‖22 = η2− + 2η−θ̄>x̄′ − η2+ − 2η+θ̄>x̄′ (73) = (η− − η+)(η− + η+) + 2θ̄>x̄′(η− − η+) (74) = (η− − η+)[(η− + η+) + 2θ̄>x̄′] (75) = (η− − η+)[−2θ̄>x′/‖x′‖2 + 2θ̄>x̄′] (76) = 0. (77)\nThis completes the proof. 
Lemma 4 (Paraphrase of Lemma 1 in Carmon et al. (2019b)). Let θ̂m = (1/m) Σ_{i=1}^m y_i x_i. There exist absolute constants c0, c1, c2 such that under parameter setting (1) and d/n0 > c0,
σ²‖θ̂m‖₂² / (µ⊤θ̂m)² ≥ (√(n0/d) + n0/m)(1 − c1 (n0/d)^{1/8}), (78)
σ²‖θ̂m‖₂² / (µ⊤θ̂m)² ≤ (√(n0/d) + n0/m)(1 + c1 (n0/d)^{1/8}), (79)
with probability ≥ 1 − e^{−c2 (d/n0)^{1/4} min{m, (d/n0)^{1/4}}}." } ]
2020
null
SP:b7532fd6e281d88fff5a0a89c73ae3e6651f8827
[ "This paper presents an approach to deep subspace clustering based on minimizing the correntropy induced metric (CIM), with the goal of establishing when training should be stopped and generalizing to unseen data. The main contribution over the existing S2ConfSCN method is a change from squared error loss to CIM when optimizing over the affinity matrix. A key benefit of CIM as a loss is that it does not decrease arbitrarily with training epochs, so it provides a means of estimating when training should cease without needing ground truth labels. The authors argue that CIM \"ensures a smooth decrease of the loss function that enables the use of label-free stopping criterion.\" However, this claim is only justified through a minimal empirical evaluation. The authors also include a means of enforcing block diagonal structure in the learned affinity matrix." ]
Deep subspace clustering (SC) algorithms recently gained attention due to their ability to successfully handle nonlinearities in data. However, the insufficient capability of existing SC methods to deal with data corruption of unknown (arbitrary) origin hinders their generalization ability and capability to address real-world data clustering problems. This paper proposes the robust formulation of the self-supervised convolutional subspace clustering network (SConvSCN) that incorporates the fully connected (FC) layer and, with an additional spectral clustering module, is capable of estimating the clustering error without using the ground truth. Robustness to data corruptions is achieved by using the correntropy induced metric (CIM) of the error, which also enhances the generalization capability of the network. The experimental findings showed that CIM reduces sensitivity to overfitting during the learning process and yields better clustering results. In a truly unsupervised training environment, Robust SConvSCN outperforms its baseline version by a significant amount for both seen and unseen data on four well-known datasets.
[]
[ { "authors": [ "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Mahdi Abavisani", "Vishal M Patel" ], "title": "Deep multimodal subspace clustering networks", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2018 }, { "authors": [ "Maria Brbić", "Ivica Kopriva" ], "title": "Multi-view low-rank sparse subspace clustering", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "Liangjun Chen", "Hua Qu", "Jihong Zhao", "Badong Chen", "Jose C Principe" ], "title": "Efficient and robust deep learning with correntropy-induced loss function", "venue": "Neural Computing and Applications,", "year": 2016 }, { "authors": [ "Nat Dilokthanakul", "Pedro AM Mediano", "Marta Garnelo", "Matthew CH Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Deep unsupervised clustering with gaussian mixture variational autoencoders", "venue": "arXiv preprint arXiv:1611.02648,", "year": 2016 }, { "authors": [ "Ehsan Elhamifar", "Rene Vidal" ], "title": "Sparse subspace clustering: Algorithm, theory, and applications", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Paolo Favaro", "René Vidal", "Avinash Ravichandran" ], "title": "A closed form solution to robust subspace estimation and clustering", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2011 }, { "authors": [ "Kamran Ghasedi Dizaji", "Amirhossein Herandi", "Cheng Deng", "Weidong Cai", "Heng Huang" ], "title": "Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Benjamin D. 
Haeffele", "Chong You", "René Vidal" ], "title": "A critique of self-expressive deep subspace clustering, 2020", "venue": null, "year": 2020 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Pan Ji", "Tong Zhang", "Hongdong Li", "Mathieu Salzmann", "Ian Reid" ], "title": "Deep subspace clustering networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Pan Ji", "Tong Zhang", "Hongdong Li", "Mathieu Salzmann", "Ian Reid" ], "title": "Deep-subspace-clusteringnetworks, commit: 396c62334c1857b0e6b5bf4c4176d83fe1f806cf", "venue": "https://github.com/ panji1990/Deep-subspace-clustering-networks,", "year": 2019 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Mohsen Kheirandishfard", "Fariba Zohrizadeh", "Farhad Kamangar" ], "title": "Multi-level representation learning for deep subspace clustering", "venue": "In The IEEE Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Kuang-Chih Lee", "Jeffrey Ho", "David J Kriegman" ], "title": "Acquiring linear subspaces for face recognition under variable lighting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2005 }, { "authors": [ "José Lezama", "Qiang Qiu", "Pablo Musé", "Guillermo Sapiro. Ole" ], "title": "Orthogonal low-rank embeddinga plug and play geometric loss for deep learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Chun-Guang Li", "Rene Vidal" ], "title": "A structured sparse plus structured low-rank framework for subspace clustering and completion", "venue": "IEEE Transactions on Signal Processing,", "year": 2016 }, { "authors": [ "Mingchen Li", "Mahdi Soltanolkotabi", "Samet Oymak" ], "title": "Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "John Lipor", "Laura Balzano" ], "title": "Clustering quality metrics for subspace clustering", "venue": "Pattern Recognition,", "year": 2020 }, { "authors": [ "Guangcan Liu", "Zhouchen Lin", "Shuicheng Yan", "Ju Sun", "Yong Yu", "Yi Ma" ], "title": "Robust recovery of subspace structures by low-rank representation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2012 }, { "authors": [ "Weifeng Liu", "Puskal P Pokharel", "José C" ], "title": "Prı́ncipe. 
Correntropy: Properties and applications in non-gaussian signal processing", "venue": "IEEE Transactions on Signal Processing,", "year": 2007 }, { "authors": [ "Muhammad Asad Lodhi", "Waheed U Bajwa" ], "title": "Detection theory for union of subspaces", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 }, { "authors": [ "Can-Yi Lu", "Hai Min", "Zhong-Qiu Zhao", "Lin Zhu", "De-Shuang Huang", "Shuicheng Yan" ], "title": "Robust and efficient subspace segmentation via least squares regression", "venue": "In European Conference on Computer Vision,", "year": 2012 }, { "authors": [ "Canyi Lu", "Jiashi Feng", "Zhouchen Lin", "Tao Mei", "Shuicheng Yan" ], "title": "Subspace clustering by block diagonal representation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Yue M Lu", "Minh N Do" ], "title": "A theory for sampling signals from a union of subspaces", "venue": "IEEE transactions on signal processing,", "year": 2008 }, { "authors": [ "S Nene", "Shree K Nayar", "Hiroshi Murase" ], "title": "Columbia object image library (coil-100)(tech", "venue": "rep.). Columbia University,", "year": 1996 }, { "authors": [ "Sameer A Nene", "Shree K Nayar", "Hiroshi Murase" ], "title": "Columbia object image library (coil-20)", "venue": null, "year": 1996 }, { "authors": [ "Andrew Y Ng", "Michael I Jordan", "Yair Weiss" ], "title": "On spectral clustering: Analysis and an algorithm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2002 }, { "authors": [ "Vishal M Patel", "René Vidal" ], "title": "Kernel sparse subspace clustering", "venue": "In IEEE International Conference on Image Processing,", "year": 2014 }, { "authors": [ "Vishal M Patel", "Hien Van Nguyen", "René Vidal" ], "title": "Latent space sparse subspace clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2013 }, { "authors": [ "Xi Peng", "Shijie Xiao", "Jiashi Feng", "Wei-Yun Yau", "Zhang Yi" ], "title": "Deep subspace clustering with sparsity prior", "venue": "In International Joint Conferences on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Xi Peng", "Zhiding Yu", "Zhang Yi", "Huajin Tang" ], "title": "Constructing the l2-graph for robust subspace learning and subspace clustering", "venue": "IEEE Transactions on Cybernetics,", "year": 2016 }, { "authors": [ "Xi Peng", "Jiashi Feng", "Jiwen Lu", "Wei-Yun Yau", "Zhang Yi" ], "title": "Cascade subspace clustering", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Xi Peng", "Jiashi Feng", "Shijie Xiao", "Wei-Yun Yau", "Joey Tianyi Zhou", "Songfan Yang" ], "title": "Structured autoencoders for subspace clustering", "venue": "IEEE Transactions on Image Processing,", "year": 2018 }, { "authors": [ "Fei Tian", "Bin Gao", "Qing Cui", "Enhong Chen", "Tie-Yan Liu" ], "title": "Learning deep representations for graph clustering", "venue": "In Twenty-Eighth AAAI Conference on Artificial Intelligence,", "year": 2014 }, { "authors": [ "René Vidal" ], "title": "Subspace clustering", "venue": "IEEE Signal Processing Magazine,", "year": 2011 }, { "authors": [ "Ulrike Von Luxburg" ], "title": "A tutorial on spectral clustering", "venue": "Statistics and Computing,", "year": 2007 }, { "authors": [ "Tong Wu", "Waheed U Bajwa" ], "title": "Revisiting robustness of the union-of-subspaces model for dataadaptive learning of nonlinear signal models", "venue": "In 2014 IEEE International 
Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2014 }, { "authors": [ "Tong Wu", "Waheed U Bajwa" ], "title": "Metric-constrained kernel union of subspaces", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Shijie Xiao", "Mingkui Tan", "Dong Xu", "Zhao Yang Dong" ], "title": "Robust kernel low-rank representation", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2015 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Xu Yang", "Cheng Deng", "Feng Zheng", "Junchi Yan", "Wei Liu" ], "title": "Deep spectral clustering using dual autoencoder network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Junjian Zhang", "Chun-Guang Li", "Chong You", "Xianbiao Qi", "Honggang Zhang", "Jun Guo", "Zhouchen Lin" ], "title": "Self-supervised convolutional subspace clustering network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Junjian Zhang", "Chun-Guang Li", "Tianming Du", "Honggang Zhang", "Jun Guo" ], "title": "Convolutional subspace clustering network with block diagonal prior", "venue": "IEEE Access,", "year": 2020 }, { "authors": [ "Tong Zhang", "Pan Ji", "Mehrtash Harandi", "Wenbing Huang", "Hongdong Li" ], "title": "Neural collaborative subspace clustering", "venue": "arXiv preprint arXiv:1904.10596,", "year": 2019 }, { "authors": [ "Lei Zhou", "Bai Xiao", "Xianglong Liu", "Jun Zhou", "Edwin R Hancock" ], "title": "Latent distribution preserving deep subspace clustering", "venue": "In 28th International Joint Conference on Artificial", "year": 2019 }, { "authors": [ "Pan Zhou", "Yunqing Hou", "Jiashi Feng" ], "title": "Deep adversarial subspace clustering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Subspace clustering approaches have achieved encouraging performance when compared with the clustering algorithms that rely on proximity measures between data points. The main idea behind the subspace model is that the data can be drawn from low-dimensional subspaces which are embedded in a high-dimensional ambient space (Lodhi & Bajwa, 2018). Grouping such data associated with respective subspaces is known as the subspace clustering (Vidal, 2011). That is, each low-dimensional subspace corresponds to a class or category. Up to now, two main approaches for recovering lowdimensional subspaces are developed: models that are based on the self-representation property, and non-linear generalization of subspace clustering called union of subspaces (UoS) (Lodhi & Bajwa, 2018; Lu & Do, 2008; Wu & Bajwa, 2014; 2015). UoS algorithms are out of the scope of this work. Self-representation subspace clustering is achieved in two steps: (i) learning representation matrix C from data X and building corresponding affinity matrix A = |C|+ |CT |; (ii) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix that correspond with the leading k eigenvalues. This second step is known as spectral clustering (Ng et al., 2002; Von Luxburg, 2007). Owning to the presumed subspace structure, the data points obey the self-expressiveness or self-representation property (Elhamifar & Vidal, 2013; Peng et al., 2016b; Liu et al., 2012; Li & Vidal, 2016; Favaro et al., 2011). In other words, each data point can be represented as a linear combination of other points in a dataset: X=XC.\nThe self-representation approach is facing serious limitations regarding real-world datasets. One limitation relates to the linearity assumption because in a wide range of applications samples lie in nonlinear subspaces, e.g. face images acquired under non-uniform illumination and different poses (Ji et al., 2017). Standard practice for handling data from nonlinear manifolds is to use the kernel trick on samples mapped implicitly into high dimensional space. Therein, samples better conform to linear subspaces (Patel et al., 2013; Patel & Vidal, 2014; Xiao et al., 2015; Brbić & Kopriva, 2018). However, identifying an appropriate kernel function for a given data set is quite a difficult task (Zhang et al., 2019b). The second limitation of existing deep SC methods relates to their assumption that the origin of data corruption is known, in which case the proper error model can be employed. In real-word applications origin of data corruption is unknown. That can severely harm the algorithm’s learning process if the non-robust loss function is used. Furthermore, validation\n(i.e. stopping of the learning process) in most of the deep SC methods often requires access to the ground-truth labels. That stands for violation of the basic principle of unsupervised machine learning and yields the overly-optimistic results. Dataset size is also a limitation when it comes to memory requirements. Since the self-representation subspace clustering is based on building the affinity matrix, memory complexity increases as the square of the dataset size. 
However, this last limitation is not the main focus of this work.
Motivated by the exceptional ability of deep neural networks to capture complex underlying structures of data and learn discriminative features for clustering (Hinton & Salakhutdinov, 2006; Dilokthanakul et al., 2016; Ghasedi Dizaji et al., 2017; Tian et al., 2014; Xie et al., 2016), deep subspace clustering approaches have emerged recently (Ji et al., 2017; Abavisani & Patel, 2018; Peng et al., 2016a; Yang et al., 2019; Zhou et al., 2018; Ji et al., 2019b; Peng et al., 2018; 2017; Zhou et al., 2019; Zhang et al., 2019a; Kheirandishfard et al., 2020). In particular, it has been shown that convolutional neural networks (CNNs), when applied to images of different classes, can learn features that lie in a UoS (Lezama et al., 2018). Most recently developed deep subspace clustering networks are based on a convolutional autoencoder: an end-to-end fully convolutional network trained to minimize the reconstruction error. Together, the autoencoder and an additional self-expression (SE) module form the Deep Subspace Clustering network (DSCNet) (Ji et al., 2017). Hence, the total loss function of DSCNet is composed of the reconstruction loss and the SE model loss; that is, the clustering quality is not taken into account during the learning process. The Self-supervised convolutional SC network (S2ConvSCN) (Zhang et al., 2019a) addressed this issue through the addition of a fully connected (FC) module and a spectral clustering module that, respectively, generate soft and pseudo-labels. Dual self-supervision is achieved by forcing these two modules to converge towards consensus. The resulting accumulated loss, therefore, helps to enhance the self-representation matrix and the quality of the features extracted by the encoder. The S2ConvSCN architecture allows direct classification once the learning process is completed: a trained encoder and the FC module can form a new network that directly classifies unseen data, also known as the out-of-sample problem. However, while this network can be validated and compared with other algorithms on a separate dataset, such an ablation study was never completed. Furthermore, the main disadvantage of the DSCNet architecture, and indirectly of S2ConvSCN, is that network training is stopped when the accuracy is highest (Ji et al., 2019a). First, this is a direct violation of the unsupervised learning principle, as the ground-truth labels are exposed. Second, the reported performance (Zhang et al., 2019a; Ji et al., 2017) is overly-optimistic and cannot be compared with other algorithms. Also, as mentioned in (Haeffele et al., 2020), most self-expressive deep subspace clustering models suffer from the need to post-process the self-representation matrix. 
Compared to the baseline model, we significantly reduce the post-processing while maintaining a noise-free representation matrix.
The research problems above lead to the three main contributions of the proposed Robust S2ConvSCN:
• robustness to errors of unknown (arbitrary) origin is achieved by using the correntropy-induced metric (CIM) in the self-expression loss,
• the network is trained using the early-stopping method while monitoring only the accumulated loss,
• thanks to the correntropy-based loss function, the training process is less sensitive to data corruption, which enables the network to generalize better.
This study also has three side-contributions:
• the performance of the models is estimated on unseen (out-of-sample) data,
• block-diagonal regularization of the self-representation matrix is integrated into the gradient-descent learning process,
• post-processing of the self-representation matrix is reduced to a significant extent.
A complete head-to-head comparison of the baseline S2ConvSCN model and our robust approach can be seen in Figure 1." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 MAIN NOTATIONS AND DEFINITIONS", "text": "Throughout this paper, matrices are represented with bold capital symbols and vectors with bold lower-case symbols. X ∈ R^{d×N} represents the data matrix comprised of N data samples of dimensionality d. {H_i^{(l)}}_{i=1}^{m(l)} represent the feature maps produced at the output of layer l−1; thus, H^{(0)} = X and H^{(L)} = X̂, where X̂ is the output of the decoder and L is the number of convolutional layers in the autoencoder. {w_i^{(l)}}_{i=1}^{m(l)} stand for the set of filters with associated biases {b_i^{(l)}}_{i=1}^{m(l)} that form convolutional layer l = 1, ..., L. z_n = [h_1^{(L/2)}(:) ... h_{m(L/2)}^{(L/2)}(:)]^T ∈ R^{d̂×1} stands for the feature vector comprised of vectorized and concatenated feature maps, with d̂ extracted features, in the top layer L/2 (the encoder output), representing input sample x_n, n = 1, ..., N. C ∈ R^{N×N} stands for the representation matrix in the self-expressive model Z = ZC. A = |C| + |C^T| is the affinity matrix and L = D^{−1/2} A D^{−1/2} is the corresponding normalized graph Laplacian matrix, where D is the diagonal degree matrix with D_ii = Σ_{j=1}^N A_ij. ‖X‖_F = (Σ_{i,j=1}^N x_ij^2)^{1/2} is the Frobenius norm of matrix X. ℓ_p(x) = ‖x‖_p = (Σ_{i=1}^d |x_i|^p)^{1/p}, 0 < p ≤ 1, is the ℓ_p norm of x, and ℓ_0(x) = ‖x‖_0 = #{x_i ≠ 0, i = 1, ..., d}, where # denotes the cardinality function, is the ℓ_0 quasi-norm of x. The Schatten norms S_p, 0 < p ≤ 1, of a matrix X are defined as the corresponding ℓ_p norms of the vector of singular values of X, i.e. S_p(X) = ‖σ(X)‖_p, where σ(X) stands for the vector of singular values of X. Depending on the context, 0 represents a matrix/vector of all zeros and 1 a matrix/vector of all ones.
Grouping the data according to the linear subspaces they are drawn from is known as subspace clustering (Vidal, 2011). The problem is formally defined as follows:
Definition 1. Let X = [X_1, ..., X_k] be a set of sample vectors drawn from a union of k subspaces in R^d, ∪_{i=1}^k {S_i}, of dimensions d_i ≪ min{d, N}, for i = 1, ..., k. Let X_i be a collection of N_i samples drawn from subspace S_i, with N = Σ_{i=1}^k N_i. The problem of subspace clustering is to segment the samples into the subspaces they are drawn from. Throughout this paper, as is the case in the majority of other papers, we assume that the number of clusters k is known a priori." 
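To make the two-step self-expressive pipeline of Section 2.1 concrete, the following is a minimal NumPy sketch of steps (i)-(ii), assuming a representation matrix C is already given. The function name and the tiny Lloyd-style k-means loop are our own illustration for self-containment, not the authors' implementation (which uses Keras/TensorFlow and standard library routines).

    import numpy as np

    def spectral_clustering_from_C(C, k, seed=0):
        # Step (i): symmetric affinity from the representation matrix.
        A = np.abs(C) + np.abs(C.T)
        # Normalized graph Laplacian L = D^{-1/2} A D^{-1/2} (Section 2.1).
        d = A.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        L = D_inv_sqrt @ A @ D_inv_sqrt
        # Step (ii): embed samples with the k leading eigenvectors of L.
        eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
        V = eigvecs[:, -k:]                    # columns for the k leading eigenvalues
        V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
        # Minimal Lloyd iteration in place of a library k-means.
        rng = np.random.default_rng(seed)
        centers = V[rng.choice(len(V), size=k, replace=False)]
        for _ in range(100):
            dists = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = V[labels == j].mean(axis=0)
        return labels

In practice the k-means step would be replaced by a library routine; the sketch only shows how the affinity matrix, Laplacian, and spectral embedding fit together.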
}, { "heading": "2.2 APPROACHES TO SUBSPACE CLUSTERING", "text": "Usually, processes that operate in different modes generate data in real-world scenarios. Each mode models such data as lying on a subspace, while the whole process, thus, generates data lying on a union of subspaces (UoS) (Lodhi & Bajwa, 2018). The alternative to the UoS model is the selfrepresentation based subspace model. It implies that every sample from the dataset can be represented as a linear combination of other samples from the same cluster. While shallow models\ndirectly optimize such a self-representation matrix, their deep counterparts train the whole network to better extract features from the raw data and achieve representation linearity.\nMany approaches to deep subspace clustering are based on the introduction of the self-representation in the feature space (Abavisani & Patel, 2018; Ji et al., 2017; Peng et al., 2016a; Zhou et al., 2018; 2019; Zhang et al., 2019a; Kheirandishfard et al., 2020; Zhang et al., 2020). However, one weakness of self-expressive deep subspace clustering models is that their perfomance mainly depends on the self-representation matrix. Thus, elimination of the noise is done by post-processing (Haeffele et al., 2020). It appears in many cases that from the final performance point of view the post-processing matters more than depth of the network. By the virtue of self-representation property, improvements of the shallow subspace clustering methods are of direct relevance to their deep counterparts. The subspace clustering task is accomplished through (i) learning the representation matrix C from data X, and (ii) clustering the data into k clusters by grouping the eigenvectors of the graph Laplacian matrix L that correspond with the k leading eigenvalues. This second step is known as spectral clustering (Ng et al., 2002; Von Luxburg, 2007). Low-rank (Liu et al., 2012; Favaro et al., 2011) and sparse models (Elhamifar & Vidal, 2013) are one of the commonly used algorithms to solve SC clustering problem. They aim to learn the low-rank and sparse representation matrix by solving the following optimization problem (Li & Vidal, 2016):\nmin C\nλ ‖C‖pSp + τ ‖C‖ p p s.t. Z = ZC, diag(C) = 0 (1)\nwhere λ and τ are nonnegative regularization constants. If number of layers L = 0 problem (1) is related to shallow subspace clustering. Constraint diag(C) = 0 is necessary to prevent sparseness regularized optimization algorithms to converge towards trivial solution where each data point represents itself. This constraint is not necessary for problem constrained only by low-rank. When data samples are contaminated with additive white Gaussian noise (AWGN) problem (1) becomes:\nmin C ‖E‖2F + λ ‖C‖ p Sp + τ ‖C‖pp s.t. diag(C) = 0 (2)\nwhere E stands for the modelling error (noise):\nE = Z− ZC. (3)\nAlternatively, square of the Frobenius norm of C is used for regularization (Lu et al., 2012):\nmin C ‖E‖2F + λ ‖C‖ 2 F (4)\nObjective (4) is used also in the self-expression module of the S2ConvSCN in (Zhang et al., 2019a). As seen from (2) and (4), the MSE measure for discrepancy between Z and its self-representation ZC is justified only for the contamination by the AWGN. For sample-specific corruptions (outliers) the proper norm is ‖E‖2,1 while for large random corruptions the proper choice is ‖E‖1 (Liu et al., 2012). However, errors in real world data have different origins and magnitude and may not follow specific probabilistic model. 
Sometimes, it is hard to know the true origin of the corruptions present in data. Thus, to obtain a method robust to arbitrary corruptions, we propose to use the CIM of the error. The rationale behind introducing any regularization on C is to reflect its structural property of block-diagonality. Even though ‖C‖_{S_p} and ‖C‖_p, 0 ≤ p ≤ 1, in principle satisfy the enforced block-diagonality condition, their approximation of the BD structure of C is indirect (Lu et al., 2018). Hence, for comparison, this study proposes the introduction of a loss function with gradient-based BD regularization on the representation matrix C." }, { "heading": "2.3 BASELINE MODEL DESCRIPTION", "text": "The base of DSCNet (Ji et al., 2017) is a fully convolutional autoencoder. An additional self-expression module, following the flattened latent code of the autoencoder, completes the full DSCNet architecture. While it produces the representation matrix C, the clustering algorithm can only cluster in-sample data. Thus, the clustering error is not part of the learning process and, therefore, does not influence the quality of the learned features and representation matrix. According to the training procedure given in (Ji et al., 2019a), the performance of DSCNet strongly depends on observing the true labels every epoch. Although presented as an unsupervised method, DSCNet uses the minimum of the clustering error as an early-stopping criterion, which contradicts the principles of unsupervised learning and arguably leads to overfitting and overly-optimistic performance estimates. As can be seen in Figure 2, a much better representation matrix C can be learned by ignoring the loss and stopping when the accuracy is highest. Another problematic aspect is that DSCNet post-processes the matrix C with a hyper-parameter optimized using the ground-truth labels. Post-processing of C, as done in DSCNet (Ji et al., 2019a), has three steps. First, C is thresholded by keeping only the δ largest elements (in terms of magnitude) in each row; the accepted number of largest elements depends on their sum S_δ. If the sum S_δ exceeds λ Σ_{i=1}^N c_ji, i ∈ {1, ..., N}, where λ stands for an empirically set thresholding constant and N represents the number of elements in a row, the remaining elements (not included in the sum S_δ) are set to zero. This first step is skipped in the robust model. Second, knowing the dimensionality d of the dataset, after an SVD of the thresholded representation matrix C only the d largest singular values and the related singular vectors are kept; the remaining ones are assumed to span the noise subspace. Third, the resulting values of the Laplacian matrix are further suppressed by exponentiation. The reconstructed representation matrix serves as input to the spectral clustering algorithm, which produces pseudo-labels.
Furthermore, the DSCNet architecture is the base for the S2ConvSCN model (Zhang et al., 2019a), which is why S2ConvSCN inherits the majority of DSCNet's problems.
One novelty presented in S2ConvSCN is an FC layer, attached to the latent code, that upgrades the DSCNet model. The FC output passes through a softmax, which can directly classify the input data. Upon finishing the learning procedure, the encoder and the FC layer form a new model that can be used for evaluating the learning procedure on an independent unseen set. While the authors of the S2ConvSCN model (Zhang et al., 2019a) provide a tool for dealing with out-of-sample data, the actual performance on unseen data has not been tested. 
The second novelty of S2ConvSCN is training the FC layer from pseudo-labels. When training the network, the spectral clustering module clusters the data according to the affinity matrix every E epochs and updates the pseudo-labels. It is important to note that the mentioned affinity matrix is constructed from the matrix C learned in the self-expression layer and changes every epoch during training. The pseudo-labels assigned to the data are used in two ways: the first is to train the FC layer for soft classification, and the second is to suppress the values of the C matrix for samples that do not belong to the same cluster. Both are known as self-supervision. Keeping in mind that (i) S2ConvSCN is an upgraded DSCNet model, (ii) it uses the pretrained DSCNet parts of the network, and (iii) it has not been tested on an independent dataset, it is reasonable to conclude that label leakage also appears in S2ConvSCN. Regardless of the self-supervision modules, the first pseudo-labels generated from DSCNet contain leaked knowledge about the group affiliations of the data.
The authors of both (Ji et al., 2017) and (Zhang et al., 2019a) suggest that, for convergence, parts of the model should be pretrained before training the whole model. However, there is no guarantee that the model will keep converging after attaching the final layers or modules. For example, if a pretrained DSCNet yields reasonably good pseudo-labels, it is possible that after attaching the FC layer and performing self-supervision in S2ConvSCN, the matrix C will get worse and the whole model will diverge from the optimal solution. Thus, it is important to tune the constants associated with the loss functions of the mentioned modules. As S2ConvSCN has many loss functions for different layers and modules, a smaller learning rate could be beneficial in the sense that the overall loss can reach a better minimum in the error space. However, decreasing the learning rate on a plateau impacts the less-contributing losses, which can spoil the goodness of the representation matrix C while the training process tries to reach the loss minimum." }, { "heading": "3 ROBUST SELF-SUPERVISED CONVOLUTIONAL SUBSPACE CLUSTERING NETWORK", "text": "Motivated by the discussion in the previous section, we propose two new objective functions L_CIM for the self-expression module of the S2ConvSCN architecture:
min_C CIM^2(E) + γ ‖C‖_[k], (5)
min_C CIM^2(E) + γ ‖C‖_2. (6)
where E is defined in Eq. (3), ‖C‖_[k] denotes the BD regularization and ‖C‖_2 the ℓ2 regularization of the representation matrix C, and γ represents a trade-off constant. The CIM loss is defined in Appendix A.2. Objectives (5) and (6) ensure a smooth decrease of the loss function, which enables the use of a label-free stopping criterion. For the sake of completeness, the other loss functions of Robust S2ConvSCN are stated. The autoencoder reconstruction loss, where X represents the input data and X̂ the output of the decoder, is defined as:
L_REC = (1/2N) Σ_{j=1}^N ‖x_j − x̂_j‖_2^2 = (1/2N) ‖X − X̂‖_F^2. (7)
The pseudo-labels form a matrix Q, where element q_ij is set to 1 if samples i and j have the same pseudo-label and to 0 otherwise. When Q is calculated, the ‖C‖_Q loss is defined as:
L_CQ = Σ_{i,j} |c_ij| ‖q_i − q_j‖_2^2 / 2 := ‖C‖_Q (8)
where q_i and q_j represent the one-hot encoded pseudo-labels for the i-th and j-th samples, and |c_ij| stands for the absolute representation value for the corresponding samples in the representation matrix C. A small numerical sketch of the robust self-expression objective (5) is given below. 
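The following is a minimal sketch of objective (5), combining the empirical CIM of the residual (Appendix A.2) with the BD regularizer of Eq. (13). It assumes the unnormalised Gaussian kernel (so κ(0,0) = 1) and uses the symmetric normalized Laplacian I − D^{−1/2} A D^{−1/2} so that Proposition 1 applies; both are our assumptions for illustration, not the authors' Keras/TensorFlow implementation, and the function names are hypothetical.

    import numpy as np

    def cim_squared(E, sigma=1.0):
        # Empirical CIM^2 over residual columns: kappa(0,0) - mean kernel value.
        kvals = np.exp(-np.sum(E ** 2, axis=0) / (2.0 * sigma ** 2))
        return 1.0 - np.mean(kvals)

    def bd_regularizer(C, k):
        # ||C||_[k]: sum of the k smallest eigenvalues of the Laplacian of A = |C| + |C^T|.
        A = np.abs(C) + np.abs(C.T)
        d = A.sum(axis=1)
        Dis = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        L = np.eye(len(A)) - Dis @ A @ Dis
        return np.linalg.eigvalsh(L)[:k].sum()   # eigvalsh returns ascending order

    def robust_self_expression_loss(Z, C, gamma, k, sigma=1.0):
        # Objective (5): CIM^2(E) + gamma * ||C||_[k], with E = Z - ZC.
        E = Z - Z @ C
        return cim_squared(E, sigma) + gamma * bd_regularizer(C, k)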
The cross-entropy and center losses of the FC layer are:
L_CE = (1/N) Σ_{j=1}^N ln(1 + e^{ŷ_j^T q_j}), L_CNT = (1/N) Σ_{j=1}^N ‖y_j − µ_{π(y_j)}‖_2^2 (9)
where ŷ_j stands for the softmax-normalized output and y_j represents the logits of the FC layer for the j-th sample. µ represents the centroid of a cluster π taken from the spectral clustering output for a given sample j (Zhang et al., 2019a). Additionally, in order to yield a stable and unique solution, the representation matrix C is forced to be symmetric (Lu et al., 2018). A novelty introduced in this robust version of the network is a symmetry loss, defined as:
L_SYM = (1/2) Σ_{j=1}^N ‖c_j − a_j‖_2^2 = (1/2) ‖C − A‖_F^2 (10)
where A represents the symmetric affinity matrix. Every loss function has an assigned trade-off regularization constant λ1 to λ5 that regulates its importance. Thus, the total loss function, with the novelties in bold, is defined as follows:
L_T = L_REC + λ1 L_CIM + λ2 L_CQ + λ3 L_CE + λ4 L_CNT + λ5 L_SYM. (11)
Furthermore, to address the overfitting issue, we present results both on the train sets and on independent test sets; the latter give better insight into the capability of the evaluated algorithms. Also, the post-processing of the C matrix is reduced by skipping the first of the three steps described in Subsection 2.3. A more informative illustration of the complete architecture (shared by the baseline S2ConvSCN and the robust model) is shown in Figure 3." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "In this section, we compare the clustering performance of Robust S2ConvSCN with the state-of-the-art self-supervised S2ConvSCN model on four well-known datasets. Because such a comparison would be unfair to the Robust S2ConvSCN model, its performance is not compared with the performances of shallow and recent deep subspace clustering methods reported in the literature: shallow models use labels to optimize hyperparameters and then evaluate the performance based on these labels, whereas our approach uses only the in-sample labels for hyperparameter tuning, while the out-of-sample labels are kept for evaluation. A noteworthy alternative approach to hyperparameter optimization that does not require labels was presented in (Lipor & Balzano, 2020); however, their approach is not based on raw data but rather on features extracted using a scattering network. Regarding the label-leakage issue from pretrained parts of the network (see Figure 2), this study avoids the problem by learning from scratch within the defined standards. The performance is evaluated in terms of accuracy (Acc):
Acc(r̂, r) = max_{π∈Π_k} (1/N) Σ_{i=1}^N 1{π(r̂_i) = r_i} (12)
where Π_k stands for the permutation space of [k], defined as all possible orderings of the k-element set {1, 2, ..., k}. We compare the performance of Robust S2ConvSCN with the state-of-the-art self-supervised deep subspace clustering algorithm (Zhang et al., 2019a). The ADAM optimizer (Kingma & Ba, 2014), an extension of stochastic gradient descent, is used in the proposed learning procedure. For full implementation details, see Appendix B. Hyperparameter settings and regularization constants are shown in Tables 3 and 4 in Appendix B." }, { "heading": "4.1 COMPARISON ON FULL DATASETS", "text": "As previously discussed, learning and evaluating on the same data yields an overly optimistic estimate of the model's performance. For a fair comparison, we implemented and compared our model with the different versions of the S2ConvSCN model in the same manner as reported in (Zhang et al., 2019a). However, we did not use accuracy as a stopping criterion; accuracy (Eq. (12)) is computed only for evaluation, as in the sketch below. 
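A minimal sketch of the accuracy metric of Eq. (12): instead of enumerating all k! orderings in Π_k, the maximizing permutation can be found with Hungarian matching on the confusion-count matrix. The function name is hypothetical and both label arrays are assumed to contain integers in {0, ..., k−1}.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def clustering_accuracy(pred, true, k):
        # Count matrix M[p, t]: how often predicted label p co-occurs with true label t.
        M = np.zeros((k, k), dtype=np.int64)
        for p, t in zip(pred, true):
            M[p, t] += 1
        # Maximising the matched counts over permutations pi (Eq. (12)) is a
        # linear assignment problem, solved here by the Hungarian algorithm.
        rows, cols = linear_sum_assignment(-M)
        return M[rows, cols].sum() / len(pred)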
Instead, a fixed number of epochs, learning-rate decay, and early stopping based only on the loss decrease were used. The same settings are used for the baseline and the robust version (see Table 3 in Appendix B); thus, the obtained results differ significantly from those reported in (Zhang et al., 2019a). For S2ConvSCN, the optimal regularization constants were transferred from (Zhang et al., 2019a). Table 1 compares S2ConvSCN and Robust S2ConvSCN with different C-matrix regularization strategies. Comparing Figure 2 and Figure 4 shows that CIM prevents overtraining of the network and that the loss can be used as an early-stopping criterion.
Table 1: Accuracy comparison of S2ConvSCN and Robust S2ConvSCN trained and tested on the full COIL20, COIL100, and Extended YaleB datasets. BD and L2 stand for block-diagonal and ℓ2 regularization of the representation matrix, respectively.
                                          COIL20    COIL100   EYaleB
    S2ConvSCN + BD                        0.81111   0.30375   0.56867
    S2ConvSCN + L2 (Zhang et al., 2019a)  0.63333   0.55319   0.75247
    RS2ConvSCN + BD                       0.76250   0.50528   0.41159
    RS2ConvSCN + L2                       0.88403   0.68805   0.81661
Figure 4: The figure illustrates the change of accuracy and loss over epochs for the Robust DSCNet model with CIM in the self-expressive module. In comparison with Figure 2, it can be seen that Robust DSCNet can use the change in loss for early stopping, as CIM prevents overfitting." }, { "heading": "4.2 CLASSIFICATION PERFORMANCE ON UNSEEN DATA", "text": "Having several independent observations is crucial for validating a model's performance. Thus, according to Table 3 in Appendix B, the datasets were split into stratified folds. For MNIST, as there is enough data, each fold was additionally divided into a train and a test set (70% and 30%, respectively). For the other datasets, the remaining folds were all used for testing, as these datasets have a very limited number of samples per label, i.e. additional splitting as for MNIST was not possible. In the context of (Li et al., 2020), the results in Table 2 support the claim that the network is robust to noise when early-stopping methods are applied." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "As can be seen in Table 2, Robust S2ConvSCN outperforms S2ConvSCN on all datasets, regardless of the regularization imposed on the representation matrix C. Moreover, in the case of the MNIST dataset, Robust S2ConvSCN with BD regularization leads to the best results. The CIM loss indeed handles corruptions in the data better than MSE, especially the possible amplification of corruptions as they propagate through deep layers. The noise robustness can be explained by Chen et al. (2016), where it is shown that learning with a correntropy loss function generalizes better. However, training deep learning algorithms is an especially time-consuming task; thus, the settings in Appendix Table 3 and the regularization constants could possibly be improved in further testing. As discussed in earlier sections, we assume that the approach of (Zhang et al., 2019a) leads to an overly optimistic estimate of the model's performance. Nevertheless, Robust S2ConvSCN outperformed S2ConvSCN, showing its superiority.
Using datasets for which the dimensionality d is unknown could be a challenging task: without that information, it is difficult to post-process the representation matrix to eliminate the noise (Haeffele et al., 2020). There is also the problem of the pseudo-label refinement frequency: if refinement occurs too often, it could lead to divergence of the whole model. 
If the refinement occurs too rarely, the model could get stuck in a poor minimum. Finding a more robust pseudo-label refinement strategy and eliminating the need for prior knowledge of d are still open questions. Also, more research towards identifying which loss has higher significance during training is needed, as Robust S2ConvSCN contains many non-equally important loss functions. In terms of memory constraints, minibatch training of deep subspace clustering models could cross the barrier of memory complexity and offer more approachable learning.
To conclude, besides finding a more robust model with a better early-stopping criterion, this ablation study aimed to set up a new, more transparent way to evaluate deep subspace clustering models. As the baseline algorithms experience the early-commitment problem, i.e. they depend on weight transfer from pretrained models, true labels are leaked during the training process. For that reason, we propose evaluating the algorithm on independent data to obtain a proper estimate of model performance. As can be seen from Table 2, the measured performances of S2ConvSCN differ significantly from the optimistic ones presented in (Zhang et al., 2019a). The combination of gradient-based learning with the CIM loss and an early-stopping strategy (Chen et al., 2016; Li et al., 2020) did indeed improve the robustness to unknown errors in the data. Additionally, the presented model, which incorporates label-free learning and the robust correntropy loss, can easily be extended to multi-modal and multi-view data. We aim to address that in our further research." }, { "heading": "A APPENDIX - PRELIMINARIES", "text": "" }, { "heading": "A.1 BLOCK DIAGONAL REGULARIZATION", "text": "To introduce the BD regularization, we state Proposition 4 from (Von Luxburg, 2007) for the graph Laplacian matrix L:
Proposition 1. (Proposition 4 in (Von Luxburg, 2007): Number of connected components and spectra of L). Let G be an undirected graph with nonnegative weights. Then the multiplicity k of the eigenvalue 0 of L equals the number of connected components in the graph.
Thus, based on Proposition 1, we define the BD regularization of C as the sum of the k smallest eigenvalues of L:
‖C‖_[k] = Σ_{i=N−k+1}^N λ_i(L) (13)
where λ_i(L), with λ_1 ≥ λ_2 ≥ ... ≥ λ_{N−k+1} ≥ ... ≥ λ_N, stands for the i-th eigenvalue of L." }, { "heading": "A.2 CORRENTROPY", "text": "Here we briefly introduce correntropy and the properties that qualify it as a loss function robust to data corruption. Let S = [s_1, ..., s_N] ∈ R^{d×N} and T = [t_1, ..., t_N] ∈ R^{d×N} be two realizations of the corresponding random variables. The empirical correntropy is estimated from data as:
V̂(S, T) = (1/N) Σ_{i=1}^N κ_σ(s_i, t_i), κ_σ(s_i, t_i) = exp(−‖s_i − t_i‖^2 / (2σ^2)) (14)
where κ_σ(s_i, t_i) is the Gaussian kernel. Herein, we present from (Liu et al., 2007) two (out of ten) properties of correntropy that justify its use as a robust error measure.
Property 1. (Property 3 in (Liu et al., 2007)). Correntropy involves all the even moments of the random variable ε = T − S:
V_σ(S, T) = (1/(√(2π)σ)) Σ_{n=0}^∞ ((−1)^n / (2^n n!)) E[(S − T)^{2n} / σ^{2n}] (15)
where E denotes mathematical expectation. As σ increases, the high-order moments decay faster, so the second-order moment tends to dominate and correntropy approaches correlation.
While the MSE involves only second-order moments and is thus the optimal measure for normally distributed errors, correntropy is the optimal measure for errors with an arbitrary (non-Gaussian) distribution. 
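This robustness can be checked numerically. The following toy sketch (our own example, using the unnormalised Gaussian kernel so that κ(0,0) = 1) contrasts the MSE with the CIM of Eqs. (14) and (16) on data contaminated by a few gross outliers.

    import numpy as np

    def mse(S, T):
        return np.mean(np.sum((S - T) ** 2, axis=0))

    def cim(S, T, sigma=1.0):
        # Empirical CIM from Eqs. (14) and (16), with kappa(0,0) = 1.
        v = np.mean(np.exp(-np.sum((S - T) ** 2, axis=0) / (2.0 * sigma ** 2)))
        return np.sqrt(1.0 - v)

    rng = np.random.default_rng(0)
    S = rng.normal(size=(8, 1000))
    T = S + 0.1 * rng.normal(size=S.shape)
    T[:, :10] += 100.0                 # 1% gross outliers
    print(mse(S, T))                   # dominated by the outliers
    print(cim(S, T))                   # bounded, barely affected by the outliers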
Furthermore, as can be seen from Equation (14), correntropy is data-driven.
Property 2 (Property 8 in (Liu et al., 2007)). The function:
CIM(S, T) = (κ(0, 0) − V̂(S, T))^{1/2} (16)
defines a metric (the CIM) in the sample space. Since its limit is finite, the CIM function is robust to large random errors." }, { "heading": "B APPENDIX - EXPERIMENTAL SETUP", "text": "The Robust S2ConvSCN model is implemented in Keras (Chollet et al., 2015) and TensorFlow (Abadi et al., 2015). First, the random seed is fixed for reproducibility. To obtain a reliable performance estimate on the test set and multiple independent train and test folds for each dataset, stratified k-fold splitting was performed. Depending on the dataset constraints, a train-test split was performed on every fold, or one fold served as the train set while the remaining k − 1 folds served as the test set. Hence, k independent observations of the algorithms' performance have been produced for each dataset.
As discussed in Section 3, instead of monitoring accuracy every epoch, and consequently supervising the learning using ground-truth labels, we use only the loss for reducing the learning rate and for early stopping. The training is stopped either after reaching the early-stopping criterion or after reaching the maximum number of epochs. As in (Zhang et al., 2019a), pretraining was also applied. First, the autoencoder is pretrained to replicate the input. Second, the self-expression layer together with the pretrained autoencoder (DSCNet (Ji et al., 2017)) is trained until a limited number of epochs or the early-stopping criterion is reached. After that, the FC layer and the self-supervision modules are added to the pretrained DSCNet. This procedure forms the starting point for both S2ConvSCN and Robust S2ConvSCN. The values of the hyper-parameters λ1 ... λ5 found in Eq. (11) are shown in Table 4.
During the warming-up phase, the pseudo-labels were not refined, because the newly introduced layers and modules could affect the convergence process (see Section 3 for the discussion). For the sake of a fair comparison, the different versions of S2ConvSCN were trained using the same settings as Robust S2ConvSCN. Table 3 shows the settings for MNIST (LeCun et al., 2010), COIL-20 (Nene et al., 1996b), COIL-100 (Nene et al., 1996a), and Extended Yale B (Lee et al., 2005). Detailed settings for the baseline and the robust model can be seen in Table 3." }, { "heading": "C APPENDIX - COMPLEXITY AND CONVERGENCE", "text": "Dataset size and the choice of representation matrix regularization have a direct impact on time and memory complexity. As the model must learn in batch mode, the number of samples N in the dataset determines the number of parameters in the self-expression layer, N^2. Thus, the dataset size impacts both memory and time complexity. Also, the regularization imposed on the representation matrix affects the computational time of the algorithm. As presented in Appendix A, the BD regularization is defined as the sum of the k smallest eigenvalues of the Laplacian matrix L; thus, if BD regularization is chosen, the model will have higher time complexity due to the use of a singular value decomposition algorithm. Although it is not directly associated with the time complexity per epoch, the spectral clustering module refines the pseudo-labels every T0 epochs, which adds to the overall training time.
To properly emphasize the individual loss functions, all regularization constants in the overall loss function must be tuned (discussed in Section 3). 
As a good starting point, we used the regularization constants from (Zhang et al., 2019a) and refined only those associated with the error-measure term in the self-expression layer and with the representation matrix C. Depending on the fold size and the regularization used, training one fold on a single Nvidia Quadro P6000 GPU takes 2-6 hours; testing on one fold takes only a few seconds." } ]
2020
null
SP:f0e0d909df518f25eb9243837939225d7db1196e
[ "The authors present a new Algorithm for performing unsupervised anomaly detection in diverse applications such as visual, audio and text data. They propose a two-step method in which first they utilise contrastive learning in order to find a semantically dense map of the data onto the unit-hypersphere. Then, they classify neighbouring pairs of test examples as in- or out-of- distribution based on the amount of the shared semantic information. Finally, they show that in several anomaly detection problems in the field of visual data their proposed method outperforms several existing methods." ]
In this paper we present SemSAD, a simple and generic framework for detecting examples that lie out-of-distribution (OOD) for a given training set. The approach is based on learning a semantic similarity measure to find for a given test example the semantically closest example in the training set and then using a discriminator to classify whether the two examples show sufficient semantic dissimilarity such that the test example can be rejected as OOD. We are able to outperform previous approaches for anomaly, novelty, or out-of-distribution detection in the visual domain by a large margin. In particular we obtain AUROC values close to one for the challenging task of detecting examples from CIFAR-10 as out-of-distribution given CIFAR-100 as in-distribution, without making use of label information.
[]
[ { "authors": [ "Faruk Ahmed", "Aaron C. Courville" ], "title": "Detecting semantic anomalies", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Samaneh Azadi", "Catherine Olsson", "Trevor Darrell", "Ian J. Goodfellow", "Augustus Odena" ], "title": "Discriminator rejection sampling", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Liron Bergman", "Yedid Hoshen" ], "title": "Classification-based anomaly detection for general data", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Raghavendra Chalapathy", "Sanjay Chawla" ], "title": "Deep learning for anomaly detection: A survey", "venue": "CoRR, abs/1901.03407,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey E. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": null, "year": 2002 }, { "authors": [ "Xi Chen", "Nikhil Mishra", "Mostafa Rohaninejad", "Pieter Abbeel" ], "title": "Pixelsnail: An improved autoregressive generative model", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sung-Ik Choi", "Sae-Young Chung" ], "title": "Novelty detection via blurring", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "J. Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "Terrance Devries", "Graham W. Taylor" ], "title": "Learning confidence for out-of-distribution detection", "venue": "in neural networks. ArXiv,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Izhak Golan", "Ran El-Yaniv" ], "title": "Deep anomaly detection using geometric transformations", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas G. Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R. Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Marc Masana", "Idoia Ruiz", "Joan Serrat", "Joost van de Weijer", "Antonio M. López" ], "title": "Metric learning for novelty and anomaly detection", "venue": "In British Machine Vision Conference,", "year": 2018 }, { "authors": [ "Alexander Meinke", "Matthias Hein" ], "title": "Towards neural networks that provably know when they don’t know", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sina Mohseni", "Mandar Pitale", "Jbs Yadawa", "Zhangyang Wang" ], "title": "Self-supervised learning for generalizable out-of-distribution detection", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Eric T. Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Görür", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Chandramouli Shama Sastry", "Sageev Oore" ], "title": "Detecting out-of-distribution examples with gram matrices", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Robin Tibor Schirrmeister", "Yuxuan Zhou", "Tonio Ball", "Dan Zhang" ], "title": "Understanding anomaly detection with deep invertible networks through hierarchies of distributions and features", "venue": "CoRR, abs/2006.10848,", "year": 2020 }, { "authors": [ "Gabi Shalev", "Yossi Adi", "Joseph Keshet" ], "title": "Out-of-distribution detection using multiple semantic label representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aäron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": null, "year": 2018 }, { "authors": [ "Apoorv Vyas", "Nataraj Jammalamadaka", "Xia Zhu", "Dipankar Das", "Bharat Kaul", "Theodore L. Willke" ], "title": "Out-of-distribution detection using an ensemble of self supervised leave-out classifiers", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Tongzhou Wang", "Phillip Isola" ], "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "venue": null, "year": 2005 }, { "authors": [ "J. Winkens", "R. Bunel", "Abhijit Guha Roy", "Robert Stanforth", "V. Natarajan", "Joseph R. 
Ledsam", "Patricia MacWilliams", "Pushmeet Kohli", "Alan Karthikesalingam", "Simon Kohl", "T. cemgil", "S. Eslami", "O. Ronneberger" ], "title": "Contrastive training for improved out-of-distribution detection", "venue": "ArXiv, abs/2007.05566,", "year": 2020 }, { "authors": [ "Zhisheng Xiao", "Q. Yan", "Y. Amit" ], "title": "Likelihood regret: An out-of-distribution detection score for variational auto-encoder", "venue": "ArXiv, abs/2003.02977,", "year": 2020 }, { "authors": [ "Hongjie Zhang", "Ang Li", "Jie Guo", "Yanwen Guo" ], "title": "Hybrid models for open set recognition", "venue": null, "year": 2003 }, { "authors": [ "Ev Zisselman", "Aviv Tamar" ], "title": "Deep residual flow for novelty detection", "venue": "CoRR, abs/2001.05419,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Anomaly detection or novelty detection aims at identifying patterns in data that are significantly different to what is expected. This problem is inherently a binary classification problem that classifies examples either as in-distribution or out-of-distribution, given a sufficiently large sample from the in-distribution (training set). A natural approach to OOD detection is to learn a density model from the training data and compute the likelihood ratio of OOD examples. However, in practice this approach frequently fails for high-dimensional data (Nalisnick et al. (2019)), where it has been shown that deep generative models can assign higher likelihood to OOD examples than to in-distribution examples. This surprising result is likely the consequence of how existing deep generative models generalise. For example, Variational Autoencoders (Kingma & Welling (2014)) generalise by superposition of examples, which is a consequence of the stochastic nature of the posterior that can map different examples to the same point in latent space. As superposition is an averaging process that reduces the information content it can be expected that examples of lower complexity than the training examples can map to high likelihood regions in latent space. Note that it is possible for a datapoint to have high likelihood under a distribution yet be nearly impossible to be sampled, a property known as asymptotic equipartition property in information theory Cover & Thomas (2001). For autoregressive generative models, such as PixelCNN (van den Oord et al. (2016)), it has been shown that the pixel-by-pixel generation process is strongly determined by the local surrounding of pixels (Chen et al. (2018)), where the fact that nearby pixels of training examples frequently share the same color can explain why mono-chromatic images are assigned a high likelihood (Nalisnick et al. (2019)). Local pixel correlations also seem to be responsible for the failure of generative models based on Normalising Flows to assign correct likelihood values to OOD examples Schirrmeister et al. (2020).\nAs a consequence, most of the current OOD detection approaches make use of a score function s(x) to classify test examples as in-distribution or OOD. In case that the examples of the training set are labelled, a simple score can be given by s(x) = maxy p(y|x), with p(y|x) the softmax probability for predicting class labels, y ∈ {1, ..,K} (Hendrycks & Gimpel (2017)). If s(x) is below a threshold the test example is classified as OOD. Labelled data allows to learn representations that are associated with the semantic information shared by the examples in the training set, which can be used for OOD detection. However, the approach suffers from the problem that the scores for in-distribution examples can be widely distributed across the interval of possible score values, s(x) ∈ [1/K, 1], especially if the number of labels are low and the classification task is hard, which strongly increases the false-positive rate. Consequently, better performance was found for approaches that use labeled data for learning a higher dimensional representation that encodes for\nsemantic information (Lee et al. (2018b)). In this representation space the in-distribution occupies just a small volume and a random feature vector would be most likely classified as OOD. Another simplification arises if the OOD detection problem is supervised, with some OOD examples labelled as such and contribute to the training set. 
In this case the OOD detection problem boils down to an unbalanced classification problem (Chalapathy & Chawla (2019)). In general, OOD detection benefits from separating the factors of variation of the in-distribution into either relevant (e.g. object identity) or irrelevant (e.g. compression artefacts) using prior knowledge, where the relevant factors are typically those that carry salient semantic information. In line with the arguments put forward by Ahmed & Courville (2020), this separation helps an OOD model to systematically generalise, e.g. whether we are allowed to re-colour or add noise to images for data augmentation. Generalisation over the training set is necessary, as learning under insufficient inductive bias would result in misclassification of examples from an in-distribution test set as OOD. Labeled data provide this additional information, as relevant factors can be defined as those that help the classification task, with the limitation that there might be more factors involved in characterising the in-distribution than those needed to predict the labels.
In this work, we introduce a general framework for OOD detection problems that does not require label information. Our framework can be widely applied to OOD detection tasks, including visual, audio, and textual data, with the only limitation that transformations that conserve the semantics of training examples must be known a priori, such as geometric transformations for images, proximity of time intervals for audio recordings (van den Oord et al. (2018)), or randomly masking a small fraction of words in a sentence or paragraph (Devlin et al. (2019)). For visual data we show new state-of-the-art OOD classification accuracies for standard benchmark datasets, surpassing even the accuracies of methods that include labels as additional information. The key contributions of this work are:
• We propose a new OOD detection framework that is applicable in the absence of labeled in-distribution data or OOD examples that are labeled as such.
• We show that our approach strongly improves OOD detection for challenging tasks in the visual domain.
• We find that identifying semantically close examples in the training set is central for reliable OOD detection." }, { "heading": "2 RELATED WORK", "text": "Unsupervised Methods using in-distribution labels. Many OOD detection methods make use of labels to generate scores that are either based on class prediction probabilities or on intermediate representations for an in-distribution classification task. For example, Hendrycks & Gimpel (2017) used the maximum of the softmax probabilities (MSP) to discriminate between OOD and in-distribution. More recent approaches (Lee et al. (2018a); Winkens et al. (2020); Zhang et al. (2020)) use labels to learn an intermediate representation on which a density model (e.g. a multivariate normal distribution or a deep generative network) can be fitted, which can then be used to compute the likelihood of OOD examples. As labels implicitly provide information about the semantic relation of examples in the training set, approaches using label information typically show higher accuracy than unsupervised methods. These approaches can be improved by introducing additional parameters or training strategies. For example, MSP was improved by introducing a temperature parameter (Liang et al. (2018)), alternative losses (Lee et al. (2018a); Vyas et al. (2018)), auxiliary objectives (Devries & Taylor (2018); Hendrycks et al. (2019b); Mohseni et al. 
(2020)), or outlier exposure (Hendrycks et al. (2019a)). Intermediate representations were improved using a multi-head network architecture (Shalev et al. (2018)), contrastive learning (Winkens et al. (2020)), and metric learning (Masana et al. (2018)).
General Unsupervised Methods. If label information is absent, other means must be found to impose an inductive bias on the OOD detection model so that it generalises over the training set. Existing approaches learn generalisable features based on self-supervised learning tasks (Golan & El-Yaniv (2018)), transformations that destroy semantics (Choi & Chung (2020)), matched encoder-decoder architectures (Xiao et al. (2020)), or a semantically related auxiliary outlier distribution (Schirrmeister et al. (2020)). The work most related to ours is Geometric-Transformation Classification (GEOM), proposed by Golan & El-Yaniv (2018) and improved by Bergman & Hoshen (2020), which belongs to the class of self-supervised learning approaches (Hendrycks et al. (2019b)). The central idea of GEOM is to construct an auxiliary in-distribution classification task by transforming each image of the training set by one of 72 different combinations of geometric transformations with fixed strength, such as rotation, reflection, and translation. The task is to predict which of the 72 transformations has been applied, given a transformed image. GEOM assigns a high OOD score to examples that show high prediction uncertainty. The relevant features learned by this task are salient geometrical features, such as the typical orientation of an object. Our approach differs from GEOM in that we define the relevant features as those that are invariant under geometric and other transformations, such as cropping and color jitter, which are chosen of moderate strength so as not to change the semantics of the images in the training set." }, { "heading": "3 METHOD", "text": "An intuitive approach to OOD detection is to learn a representation that densely maps the in-distribution to a small region within a lower-dimensional space (latent space), with the consequence that OOD examples will be found outside this region with high probability. The representation should include the salient semantic information of the training set, to ensure that test examples from the in-distribution are not misclassified as OOD, but disregard irrelevant factors of variation that would prevent a dense mapping. As learning this mapping with Autoencoders is difficult, we split the OOD detection task into finding a semantically dense mapping of the in-distribution onto a d-dimensional unit hypersphere by contrastive learning, followed by classifying neighbouring examples on the unit hypersphere as semantically close or distant." }, { "heading": "3.1 LEARNING SEMANTIC SIMILARITY", "text": "A contrastive objective can be used to align feature vectors h(x) ∈ R^d that are semantically similar and at the same time distribute the examples of the training set almost uniformly over the unit hypersphere (Wang & Isola (2020); Chen et al. (2020)). This representation allows us to identify, for any test example, the semantically closest example from the training set. The mapping h(x) = f(x)/‖f(x)‖ can be learned by training a deep neural network f(x) to minimise the contrastive loss
L[h] = −E_{(x,x′)∼T_h(x,x′)} [ log ( e^{h(x)^T h(x′)/τ} / E_{x_neg∼T_h(x)} [ e^{h(x)^T h(x_neg)/τ} ] ) ] (1)
where τ denotes a temperature parameter; a small numerical sketch of this loss is given below. 
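A minimal sketch of the loss of Eq. (1) in the batched form commonly used in practice (Chen et al. (2020)): each row of H pairs with the corresponding row of Hp as a positive, and all other rows act as negatives. Including the positive pair in the denominator is a standard implementation simplification rather than the exact expectation over negatives written in Eq. (1); the function name is ours.

    import numpy as np

    def contrastive_loss(H, Hp, tau=0.1):
        # H, Hp: (n, d) L2-normalised features h(x), h(x') for n positive pairs.
        logits = (H @ Hp.T) / tau                            # pairwise similarities
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))                   # positives on the diagonal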
Here, each positive pair (x, x′) is the result of sampling from a distribution of transformations Th(x, x′) that conserve semantics between x and x′, with Th(x′) the marginal of Th(x, x′). For datasets used to benchmark object recognition tasks, samples (x, x′) ∼ Th(x, x′) can be generated by picking a single example from the training set and independently apply random transformations, such as geometric transformations, colour distortions, or cropping (Appendix D). The negative pairs can be generated by applying random transformations to different training examples. We emphasise that the types of transformations and their strengths essentially define the semantics we want to encode and thus determine if, for example, the image of a black swan is classified as OOD for an in-distribution that contains only white swans. The design of transformations that capture the underlying semantics of the training dataset requires either higher level understanding of the data or extensive sampling of different combinations of transformations with evaluation on an in-distribution validation set." }, { "heading": "3.2 LEARNING SEMANTIC DIFFERENCES", "text": "As the encoder h(x) maps any example x on the unit-hypersphere, including OOD examples, we have the situation that OOD examples can be close to training examples without sharing semantic information. The obvious reason is that the contrastive objective homogeneously distributes training examples on unit-hypersphere, giving no preferred direction for OOD examples to cluster. As a consequence, the representation h(x) cannot be directly used for OOD detection. The idea is now to make use of the fact that nearby examples on unit-hypersphere share semantic information if both come from the in-distribution but don’t share semantic information if one of the two examples is\nOOD. We therefore train a score function s(x, x′) to detect semantic differences between nearby examples on the unit-hypersphere, which are determined by the readily trained encoder h(x). A statistically meaningful score can be given by the likelihood ratio between the probability Ppos of a test example xtest and its next nearest neighbour in the training set xnext = argmaxx hT (x)h(xtest) to be semantically close in relation to the likelihood Pneg that the same two examples are semantically distant\ns∗(xnext, xtest) = log Ppos(xnext, xtest) Pneg(xnext, xtest)\n(2)\nFor the distribution of positive examples we use Ppos(x, x′) = (1− z)Tp(x, x′)+ z1x′∈S(x)/|S(x)|, with z a Bernoulli distributed random variable of mean µ. We introduced with S(x) the semantic neighbourhood of x, which is defined by the k-nearest neighbours of x, using cosine similarity hT (x)h(x′) as semantic similarity measure (Fig. 2). The type of transformations are similar to Th but with reduced strength to ensure that the relevant factors of variation are conserved (Fig. 3 and Appendix D). For negative examples we takePneg(x, x′) = Tn(x′)Tn(x), with Tn(x) the marginals, which implies that x and x′ are almost always derived from two different images of the training set. The negative transformations, Tn, are allowed to include stronger and more diverse transformations than Tp (Fig. 3 and Appendix D). In principle negative examples can be constructed that are harder to classify, such as augmenting Pneg by pairs that are independent transformations of the same example (Choi & Chung (2020)). However, we found that this reduces the performance (Table 2). 
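To make the semantic neighbourhood S(x) concrete: since h lies on the unit-hypersphere, the k-nearest neighbours under cosine similarity reduce to a top-k over dot products. A minimal sketch follows; the function and variable names are ours, and for simplicity it searches the full training set, whereas Appendix D.2 restricts the search to 10k random examples per query.

import torch

def semantic_neighbourhood(features, k=4):
    # features: (N, d) L2-normalised encoder outputs h(x) for the training set.
    sim = features @ features.t()            # cosine similarity of unit vectors
    sim.fill_diagonal_(float("-inf"))        # exclude each example itself
    return sim.topk(k, dim=1).indices        # (N, k) neighbour indices

# Replacing a fraction mu of transformed positive pairs with pairs drawn
# from S(x): pick one of the k neighbours uniformly at random.
features = torch.nn.functional.normalize(torch.randn(1000, 128), dim=1)
neigh = semantic_neighbourhood(features, k=4)
partner = neigh[torch.arange(1000), torch.randint(0, 4, (1000,))]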
As shown in Appendix A, the score s∗(x, x′) maximises the training objective J [s; γ], given by\nJ [s; γ] = E(x,x′)∼Ppos [ log σ(a) ] + γE(x,x′)∼Pneg [ log ( 1− σ(a) )] (3)\nHere, we defined σ(a) = 1/(1 + e−a) and a = s(x, x′) − log(γ) and realise the score function s(x, x′) by a deep neural network. It can be shown that the optimal solution s∗(x, x′) is invariant\nto any variation in γ > 0 (Appendix A). We introduced γ as it is notoriously hard to learn ratios of probability densities in high dimensional spaces, which is a central problem of generative adversarial networks (Azadi et al. (2019)). In general, s(x, x′) learned by the objective Eq. 3 can deviate significantly from the optimal generalising likelihood ratio s∗(x, x′). This deviation is most apparent if Ppos(x, x′) is close to zero where Pneg(x, x′) is non-zero and vice versa, as shown in Fig. 4. In this case the objective can be maximised by any decision boundary that lies in the region between the distributions Ppos(x, x′) and Pneg(x, x′). To smoothen the score function s(x, x′) we sample γ at each iteration of the learning process and thereby effectively sample over an ensemble of gradients (Appendix B). Inspired by the lottery ticket hypothesis that training a deep neural network under constant objective mainly affects the weights of a small subnetwork (Frankle & Carbin (2019)), we can reason that sampling over γ affects the weights for an ensemble of overlapping subnetworks. As a consequence, s(x, x′) is the prediction from an ensemble of models, which typically results in higher prediction accuracies, less variance with respect to weight initialisation, and higher robustness to overfitting. The effect of uniform sampling of γ on stabilising the decision boundary and thus observing the train/test sets from the in-distribution within the positive score range is shown in Appendix C. Although only the difference in score values between a test example and the in-distribution test set is relevant for OOD detection, examples from the in-distribution should be sufficiently distant from Pneg(x, x′) for optimal performance. It can be further shown that for the extreme case γ → ∞ the score function learns the optimal weights to realise importance sampling for Ppos by sampling from Pneg (Appendix B)." }, { "heading": "4 TRAINING AND EVALUATION PROTOCOLS", "text": "" }, { "heading": "4.1 TRAINING", "text": "Experiments were carried out using either ResNet18 or ResNet34 neural network architectures, for both the encoder f(x) and the discriminator s(x, x′). We substituted ReLU activations for the discriminator by ELU units to reduce the overconfidence induced by ReLU (Meinke & Hein (2020)), which resulted in a strong reduction of unusually large spikes in the training loss curve for the discriminator. For contrastive learning a MLP head was attached to the last hidden layer that projects to a d = 128 dimensional feature vector, h(x), whereas for the discriminator the MLP head projects\nto a scalar output, s(x, x′). Note that the ResNets for encoder and discriminator don’t share parameters. We train the contrastive loss at batch size 2048 and the discriminator at batch size 128, using ADAM(AMSGrad) optimiser. We applied random transformations to each example in the training set before presenting them to encoder and discriminator. The transformations consist of combinations of random cropping followed by resizing to the original size, random horizontal flipping, random color jitter, grey scaling, and gaussian blurring (Appendix D). 
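The discriminator update of Eq. 3, with gamma redrawn at every iteration, can be sketched as follows; this is an illustrative paraphrase under our own naming, with a = s(x, x') - log(gamma) as defined above.

import math
import random
import torch.nn.functional as F

def discriminator_loss(s_pos, s_neg, gamma):
    # s_pos / s_neg: scores s(x, x') of positive / negative pairs in the batch.
    # Returns the negative of J[s; gamma] in Eq. 3, to be minimised.
    a_pos = s_pos - math.log(gamma)
    a_neg = s_neg - math.log(gamma)
    # log(sigma(a)) = logsigmoid(a); log(1 - sigma(a)) = logsigmoid(-a)
    return -F.logsigmoid(a_pos).mean() - gamma * F.logsigmoid(-a_neg).mean()

# gamma ~ U(1, 10) is resampled at every iteration (Sec. 4.1), which
# effectively averages over an ensemble of gradients (Appendix B).
gamma = random.uniform(1.0, 10.0)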
For training the encoder, h(x), we used the same transformations with the same strength as reported in Chen et al. (2020)). We set the temperature parameter to τ = 1, which is the value reported in Winkens et al. (2020)) for the same datasets used in this work. Unless otherwise specified, positive pairs for training the discriminator are two independent transformations of a single image from the training set, where the transformation strength is bounded by strong transformations (Fig. 3 and Appendix D), to make sure that we don’t transform out of the in-distribution. As pairs generated from independent transformations of the same image are typically semantically closer than any two semantically close images of the in-distribution test set (Fig. 2), the latter would be erroneously classified as OOD. To avoid mis-classification we augment the transformed positive pairs with a fraction of semantic similar pairs, with pairing partners randomly selected from the semantic neighbourhood. The strength of augmentation is determined such that train/test sets from the in-distribution reside on the positive OOD-score side yet remain inside the sensitive range of the logistic sigmoid function (Fig. 4). Unless otherwise specified, we take a semantic neighbourhood size of 4 and substitute a fraction µ = 1/32 of transformed pairs in a minibatch with semantically similar pairs from the training set. For regularisation, we use weight decay of 10−6 and uniformly sample γ ∼ U(1, 10) at each iteration of the learning process. Negative pairs are constructed by transforming two different examples from the training set, including also ’extreme’ transformations and gaussian blur." }, { "heading": "4.2 EVALUATION", "text": "We evaluate the results using Area Under the Receiver Operating Characteristic curve (AUROC), which has the advantage to be scale-invariant – measures how well predictions are ranked, rather than their absolute values – and classification-threshold-invariant – it measures how well OOD samples are separated from the training set. However, for any practical setting of OOD detection a classification threshold is needed and can be chosen such that the false positive rate of an in-distribution test set is close to some threshold, e.g. α = 0.05. We do not report values for Area Under the Precision-Recall curve (AUPR) as in this work we have no class imbalance between the OOD test set and the in-distribution test-set. As we observed significant shifts of OOD-scores for in-distribution\ntrain/test sets between training runs (Appendix C), we suggest for any practical applications to carry out a majority vote over 5 independent training runs, where after each run an example is classified as OOD if the OOD-score is significantly lower than the OOD-scores of the in-distribution test set." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "In our experiments, we focus on difficult OOD detection problems in the visual domain (Nalisnick et al. (2019)), in particular CIFAR-10/SVHN, CIFAR-10/CIFAR-100, and CIFAR-100/CIFAR-10. The main results are summarised in Table 1, where we used an identical setup (e.g. same hyperparameters, transformation strengths, and network size) for all datasets and averaged AUROC values over 5 subsequent runs using ResNet18 and 5 subsequent runs using ResNet34. We compare our unsupervised method (SemSAD) with supervised methods that use label information and/or OOD examples that are labeled as such. 
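For reference, the test-time scoring of Eq. 2 and the AUROC computation behind these comparisons can be sketched as follows; encoder, discriminator, and train_features are placeholders for the trained h(x), s(x, x'), and the encoded training set, and all names are ours.

import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def ood_scores(x_test, train_images, train_features, encoder, discriminator):
    # Nearest training example on the unit-hypersphere (Eq. 2), then the
    # likelihood-ratio score s(x_next, x_test) from the discriminator.
    h = F.normalize(encoder(x_test), dim=1)
    nn_idx = (h @ train_features.t()).argmax(dim=1)
    return discriminator(train_images[nn_idx], x_test)

# AUROC over an in-distribution test set (label 1, high score expected)
# and an OOD test set (label 0):
#   labels = [1] * len(s_in) + [0] * len(s_ood)
#   auroc = roc_auc_score(labels, torch.cat([s_in, s_ood]).cpu().numpy())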
Surprisingly, we find that SemSAD outperforms not only all unsupervised methods on these problems but also all supervised methods we are aware of. This result is especially striking as supervised methods typically outperform unsupervised methods, as semantic representations learned on label information are typically of advantage. Our interpretation is that learning a representation for the semantic information shared between pairs of examples allows to identify a larger set of relevant features that characterise the in-distribution than from the large number of examples that share the same label. The more of the relevant features can be identified the tighter the in-distribution can be characterised, which helps OOD detection. The performance gain of our method is strong for all OOD detection problems considered in this work but most apparent for CIFAR-100 as in-distribution and OOD examples from CIFAR-10, with increase in state-of-the-art AUROC for unsupervised methods not using label information from 0.625 to 0.999. Note that the classes of CIFAR-10 and CIFAR-100 are mutually exclusive, and thus CIFAR-10 can be used as OOD test set.\nWe carried out further experiments to see the effects of hyperparameter values and the transformations used for training the discriminator (Table 2). As expected, if we destroy semantic information by using extreme transformations for generating positive (in-distribution) pairs the performance is significantly reduced, whereas Gaussian blur on negative examples has a positive effect. The experiments further show that the semantic neighbourhood size should be taken small enough to make sure that the pairs generated from semantic neighbourhoods and used for training s(x, x′) are semantically close. In general, we observed better performance if we broaden the distribution of negative pairs, e.g. by augmenting examples with gaussian blur and use extreme transformations. The per-\nformance of our method is tightly connected to the ability of the encoder h(x) and discriminator s(x) to extract or be sensitive to features that allow to generalise over the training-set and are thus specific to the in-distribution. These generalisable features are orthogonal to the features that change under the transformations we use in training. The type of transformations we use in this work, e.g. cropping, horizontal flip, and colour jitter, are generic in the sense that they are designed to conserve the semantics for images that are related to physical objects. In general, the type of transformations used must match the generalisable features of the in-distribution. For example, if all examples of in-distribution have a specific horizontal orientation, then horizontal flip must be excluded from the transformations." }, { "heading": "6 CONCLUSION", "text": "In this work we proposed SemSAD – a new OOD detection approach that is based on scoring semantically similar pairs of examples from the in-distribution. We showed that our method outperforms supervised and unsupervised methods on challenging OOD detection tasks in the visual domain. The definition of semantic similarity within our approach requires to identify transformations that are applicable to individual examples and are orthogonal to the salient semantic factors of the indistribution. Although semantic similarity can be broadly defined as ”everything that is not noise”, high predictive power can be expected if the semantic similarity score catches the higher order features that are specific to the in-distribution. 
In practice, there are problems where the definition of semantic similarity is challenging. For example, if genome sequence data is the input and the effect on the phenotype of the organism is the true underlying semantics, it is unclear what transformations of the genome sequence that preserve the phenotype should look like. In contrast, for the important problem of protein folding, such transformations can be inferred from multiple sequence alignments of sequences that likely conserved the function of a protein.\nAUTHOR CONTRIBUTIONS\nACKNOWLEDGMENTS" }, { "heading": "A APPENDIX", "text": "The objective\nJ[s; γ] = γpos E_{(x,x′)∼Ppos}[log σ(a)] + γneg E_{(x,x′)∼Pneg}[log(1 − σ(a))]   (4)\nwith σ(a) = 1/(1 + e^{−a}), a = s(x, x′) + log(γpos/γneg), and γ = (γpos, γneg), has the upper bound J[s∗; γ] ≥ J[s; γ] for all γpos, γneg > 0, where s∗ is given by\ns∗(x, x′) = log Ppos(x, x′)/Pneg(x, x′)   (5)\nunder the condition that Ppos and Pneg have the same support – that is, where Ppos is non-zero also Pneg is non-zero and vice versa.\nTo prove this assertion we make use of variational calculus (see e.g. C. Bishop, Pattern Recognition and Machine Learning, Appendix D) to compute the functional derivative δJ/δs, which is defined by the integral over an arbitrary test function η(x, x′),\n(d/dε) J[s + εη; γ]|_{ε=0} = ∫ (δJ[s; γ]/δs(y, y′)) η(y, y′) dy dy′   (6)\n= ∫ [γpos Ppos(x, x′)(1 − σ(a)) − γneg Pneg(x, x′) σ(a)] η(x, x′) dx dx′   (7)\nwhere we have used that dσ(a)/ds = σ(a)(1 − σ(a)). The optimum can be computed from δJ/δs|_{s=s∗} = 0, which results in\nγpos Ppos(x, x′)(1 − σ(a)) − γneg Pneg(x, x′) σ(a) = 0   (8)\n⇒ γpos Ppos(x, x′) / (γpos Ppos(x, x′) + γneg Pneg(x, x′)) = 1 / (1 + e^{−s∗(x,x′) − log(γpos/γneg)})   (9)\n⇒ s∗(x, x′) = log Ppos(x, x′)/Pneg(x, x′)   ∀ γpos, γneg > 0   (10)\nNote that J[s; γ] is not bounded from below, so the optimum is a maximum." }, { "heading": "B APPENDIX", "text": "We show that optimising the objective J[s; γpos, γneg] by gradient ascent, with γpos, γneg > 0 randomly sampled in each optimisation step, leads to averaging over an ensemble of gradients. A gradient-based optimisation method in its simplest form updates the parameters θ, which determine the function s(x, x′), by the rule\nθ ← θ + α ∇θJ[s; γ]   (11)\nwith α the learning rate and\n∇θJ[s; γ] = γpos E_{(x,x′)∼Ppos}[(1 − σ(a)) ∇θs(x, x′)] − γneg E_{(x,x′)∼Pneg}[σ(a) ∇θs(x, x′)]\n= E_{(x,x′)∼Ppos}[γpos / (1 + (γpos/γneg) e^{s(x,x′)}) · ∇θs(x, x′)] − E_{(x,x′)∼Pneg}[γneg / (1 + (γneg/γpos) e^{−s(x,x′)}) · ∇θs(x, x′)]   (12)\nThis result shows that for given s(x, x′), random values of γpos, γneg > 0 weight the expected gradients for the positive examples and for the negative examples differently. As a consequence, ∇θJ[s; γ] takes different directions for each parameter update, given a fixed (mini-)batch and fixed initial conditions.\nIf we consider the following limiting cases for s ≈ s∗,\nlim_{γpos→1, γneg→∞} ∇θJ[s; γ] = E_{(x,x′)∼Ppos}[∇θs(x, x′)] − E_{(x,x′)∼Pneg}[e^{s(x,x′)} ∇θs(x, x′)]   (13)\n≈ E_{(x,x′)∼Ppos}[∇θs(x, x′)] − E_{(x,x′)∼Pneg}[(Ppos(x, x′)/Pneg(x, x′)) ∇θs(x, x′)]   (14)\nand\nlim_{γpos→∞, γneg→1} ∇θJ[s; γ] = E_{(x,x′)∼Ppos}[e^{−s(x,x′)} ∇θs(x, x′)] − E_{(x,x′)∼Pneg}[∇θs(x, x′)]   (15)\n≈ E_{(x,x′)∼Ppos}[(Pneg(x, x′)/Ppos(x, x′)) ∇θs(x, x′)] − E_{(x,x′)∼Pneg}[∇θs(x, x′)]   (16)\nwe see that the objective learns the optimal importance weights of importance sampling.
As in this work we have the situation that Ppos(x, x′) = 0 for some negative examples, the case γpos → ∞, γneg → 1 should not be applied, which is why we set γpos = 1 and sample uniformly from γneg ∈ [1, N ], with N > 1." }, { "heading": "C APPENDIX", "text": "" }, { "heading": "D APPENDIX", "text": "D.1 TRAINING CONTRASTIVE ENCODER\nIn order to train the contrastive encoder, we use a modified version of resnet18 with 128-dimensional projection head to make it suitable for cifar10 and cifar100 datasets with relatively small image size. In particular we remove the Maxpooling layer and subsitute the first 7 × 7 convolutional layer of stride 2 with a convolutional layer of kernel size 3 × 3 with padding and stride of 1. For the optimisation task, we use the Adam optimiser with learning rate of 3 · 10−4 and weight decay of 10−6. The network is trained for 1500 epochs at batch size 2048.\nD.2 TRAINING DISCRIMINATOR\nTo train the discriminator, we use ResNet18/34 and apply the same modifications as for the contrastive encoder. In addition, all the ReLU activation functions are replaced with ELU and the projecting head maps to a scalar value. Note that encoder and discriminator don’t share parameters. The discriminator is trained with initial learning rate of 5 · 10−5 using AMSGrad optimiser with weight decay of 10−6 on batch size of 128 samples in each iteration. The learning rate is multiplied by 0.2 and 0.1 after 200 and 500 epochs, respectively. To generate the positive pairs for training, we first find the 4 examples with the highest cosine similarity score among 10k random examples for each example in training set, from which one is randomly selected with equal chance. During the training procedure µ = 1/32 = 3.125% of each batch includes semantically similar pairs. For the γ, in each iteration a value is uniformly chosen from the range U(1, 10). The hyperparameters and their default values are shown in Table 3\nD.3 GEOMETRIC TRANSFORMATIONS\nAs data augmentation to train the contrastive encoder, we use the same transformations as in Chen et al. (2020). including, randomly chosen geometric transformation from the set {Cropping, Horizontal Flip, Color Jitter, GrayScale, Gaussian Blurring}. Pytorch snippets for encoder transformations can be found in Table 4. To train the discriminator we make two sets of transformations, one for positive samples and one for negative ones. The main intuition to shape a set of transformation for positive samples is to keep them in-distribution according to the original training samples. For the special case of cropping we make three different categories as weak, strong and extreme cases. Table 5 shows their cropping scale according to pytorch standard. To make positive pairs we randomly apply both weak and strong ranges for cropping, random horizontal flipping and color jittering on the same image from the training set and for negative pairs we apply all ranges of weak, strong and extreme cropping, horizontal flipping, color jittering, and Gaussian blurring on two randomly selected images. The details and pytorch snippet for positive and negative transformations can be found in Table 7 and 6 respectively. Note that for augmentation with semantically similar pairs from the training set there is no transformation applied on positive pairs." } ]
2020
null
SP:7c44bf5a4a8d5e5ee1e86ee4582c42186e2df72c
[ "The paper proposes a generative method for 3D objects (voxels representation). Given an initial voxels configuration (e.g. partial shape, or even a single voxel), the method learns a local transition kernel for a Markov chain to decide how to evolve the configuration; sampling iteratively from these probabilities leads to a final model. The paper shows results on shape completion and generation, obtaining fairly good results." ]
We present a probabilistic 3D generative model, named Generative Cellular Automata, which is able to produce diverse and high quality shapes. We formulate the shape generation process as sampling from the transition kernel of a Markov chain, where the sampling chain eventually evolves to the full shape of the learned distribution. The transition kernel employs the local update rules of cellular automata, effectively reducing the search space in a high-resolution 3D grid space by exploiting the connectivity and sparsity of 3D shapes. Our progressive generation only focuses on the sparse set of occupied voxels and their neighborhood, thus enabling the utilization of an expressive sparse convolutional network. We propose an effective training scheme to obtain the local homogeneous rule of generative cellular automata with sequences that are slightly different from the sampling chain but converge to the full shapes in the training data. Extensive experiments on probabilistic shape completion and shape generation demonstrate that our method achieves competitive performance against recent methods.
[ { "affiliations": [], "name": "Dongsu Zhang" }, { "affiliations": [], "name": "Changwoon Choi" }, { "affiliations": [], "name": "Jeonghwan Kim" }, { "affiliations": [], "name": "Young Min Kim" } ]
[ { "authors": [ "Panos Achlioptas", "Olga Diamanti", "Ioannis Mitliagkas", "Leonidas Guibas" ], "title": "Learning representations and generative models for 3D point clouds", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Anirudh", "Nan Rosemary Ke", "Surya Ganguli", "Yoshua Bengio" ], "title": "Variational walkback: Learning a transition operator as a stochastic recurrent net", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Samy Bengio", "Oriol Vinyals", "Navdeep Jaitly", "Noam Shazeer" ], "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Florian Bordes", "Sina Honari", "Pascal Vincent" ], "title": "Learning to generate samples from noise through infusion training", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Ruojin Cai", "Guandao Yang", "Hadar Averbuch-Elor", "Zekun Hao", "Serge Belongie", "Noah Snavely", "Bharath Hariharan" ], "title": "Learning gradient fields for shape generation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Christopher Choy", "JunYoung Gwak", "Silvio Savarese" ], "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Angela Dai", "Charles Ruizhongtai Qi", "Matthias Nießner" ], "title": "Shape completion using 3d-encoderpredictor cnns and shape synthesis", "venue": "In Proc. Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Benjamin Graham", "Martin Engelcke", "Laurens van der Maaten" ], "title": "3d semantic segmentation with submanifold sparse convolutional networks", "venue": null, "year": 2018 }, { "authors": [ "K. He", "G. Gkioxari", "P. Dollár", "R. Girshick" ], "title": "Mask r-cnn", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Ferenc Huszár" ], "title": "How (not) to train your generative model: Scheduled sampling, likelihood, adversary", "venue": null, "year": 2015 }, { "authors": [ "Long ji Lin" ], "title": "Self-improving reactive agents based on reinforcement learning, planning and teaching", "venue": "In Machine Learning,", "year": 1992 }, { "authors": [ "Diederick P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "David Lopez-Paz", "Maxime Oquab" ], "title": "Revisiting Classifier Two-Sample Tests", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Mario Markus", "Benno Hess" ], "title": "Isotropic cellular automaton for modelling excitable media", "venue": "Natue,", "year": 1990 }, { "authors": [ "Kaichun Mo", "Shilin Zhu", "Angel X. Chang", "Li Yi", "Subarna Tripathi", "Leonidas J. 
Guibas", "Hao Su" ], "title": "PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Alexander Mordvintsev", "Ettore Randazzo", "Eyvind Niklasson", "Michael Levin", "Sam Greydanus" ], "title": "Thread: Differentiable self-organizing systems. Distill, 2020", "venue": "doi: 10.23915/distill.00027", "year": 2020 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "arXiv preprint arXiv:1612.00593,", "year": 2016 }, { "authors": [ "O. Ronneberger", "P.Fischer", "T. Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "URL http://lmb.informatik.unifreiburg.de/Publications/2015/RFB15a. (available on arXiv:1505.04597 [cs.CV])", "year": 2015 }, { "authors": [ "Dong Wook Shu", "Sung Woo Park", "Junseok Kwon" ], "title": "3d point cloud generative adversarial network based on tree structured graph convolutions", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Edward J. Smith", "David Meger" ], "title": "Improved adversarial systems for 3d object generation and reconstruction", "venue": "Proceedings of Machine Learning Research, pp. 87–96. PMLR,", "year": 2017 }, { "authors": [ "Jascha Sohl-Dickstein", "Eric A. Weiss", "Niru Maheswaranathan", "Surya Ganguli" ], "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "venue": "In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "Yongbin Sun", "Yue Wang", "Ziwei Liu", "Joshua Siegel", "Sanjay Sarma" ], "title": "Pointgrow: Autoregressively learned point cloud generation with self-attention", "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV),", "year": 2020 }, { "authors": [ "Diego Valsesia", "Giulia Fracastoro", "Enrico Magli" ], "title": "Learning localized representations of point clouds with graph-convolutional generative adversarial networks", "venue": "IEEE Transactions on Multimedia,", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "koray kavukcuoglu", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals", "Koray Kavukcuoglu" ], "title": "Neural discrete representation learning", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Aäron van den Oord", "Nal Kalchbrenner" ], "title": "Pixel rnn", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Stephen Wolfram" ], "title": "Cellular automata as simple self-organizing systems", "venue": null, "year": 1982 }, { "authors": [ "Jiajun Wu", "Chengkai Zhang", "Tianfan Xue", "William T. Freeman", "Joshua B. 
Tenenbaum" ], "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "venue": "In Proceedings of the 30th International Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Rundi Wu", "Xuelin Chen", "Yixin Zhuang", "Baoquan Chen" ], "title": "Multimodal shape completion via conditional generative adversarial networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "N.H. Wulff", "J.A. Hertz" ], "title": "Learning cellular automaton dynamics with neural networks", "venue": "In Proceedings of the 5th International Conference on Neural Information Processing Systems,", "year": 1992 }, { "authors": [ "Guandao Yang", "Xun Huang", "Zekun Hao", "Ming-Yu Liu", "Serge Belongie", "Bharath Hariharan" ], "title": "Pointflow: 3d point cloud generation with continuous normalizing flows", "venue": null, "year": 2019 }, { "authors": [ "Anirudh" ], "title": "GCA framework is successfully trained to generate a wide range of shapes even though our transition kernel is confined within its local neighborhood. Over 99% of the training data in generation and completion experiments satisfies the stopping criterion, except for lamp in PartNet-Scan dataset on shape completion, in which 97% of data satisfies the stopping criterion", "venue": null, "year": 2017 }, { "authors": [ "Sun" ], "title": "2020)) and the key to handle high-resolution 3D shapes", "venue": null, "year": 2020 }, { "authors": [ "Cai" ], "title": "BASELINES All experiments in Table 1 and Table 2 are excerpted from Wu et al", "venue": null, "year": 2017 }, { "authors": [ "Wu" ], "title": "Probabilistic Shape Completion. We employ the use of minimal matching distance (MMD), total mutual difference (TMD), and unidirectional Hausdorff distance (UHD)", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Probabilistic 3D shape generation aims to learn and sample from the distribution of diverse 3D shapes and has applications including 3D contents generation or robot interaction. Specifically, learning the distribution of shapes or scenes can automate the process of generating diverse and realistic virtual environments or new object designs. Likewise, modeling the conditional distribution of the whole scene given partial raw 3D scans can help the decision process of a robot, by informing various possible outputs of occluded space.\nThe distribution of plausible shapes in 3D space is diverse and complex, and we seek a scalable formulation of the shape generation process. Pioneering works on 3D shape generation try to regress the entire shape (Dai et al. (2017)) which often fail to recover fine details. We propose a more modular approach that progressively generates shape by a sequence of local updates. Our work takes inspiration from prior works on autoregressive models in the image domains, such as the variants of pixelCNN (van den Oord & Kalchbrenner (2016); van den Oord et al. (2016; 2017)), which have been successful in image generation. The key idea of pixelCNN (van den Oord et al. (2016)) is to order the pixels, and then learn the conditional distribution of the next pixel given all of the previous pixels. Thus generating an image becomes the task of sampling pixel-by-pixel in the predefined order. Recently, PointGrow (Sun et al. (2020)) proposes a similar approach in the field of 3D generation, replacing the RGB values of pixels with the coordinates of points and sampling point-by-point in a sequential manner. While the work proposes a promising interpretable generation process by sequentially growing a shape, the required number of sampling procedures expands linearly with the number of points, making the model hard to scale to high-resolution data.\nWe believe that a more scalable solution in 3D is to employ the local update rules of cellular automata (CA). CA, a mathematical model operating on a grid, defines a state to be a collection of cells that carries values in the grid (Wolfram (1982)). The CA repeatedly mutates its states based on the predefined homogeneous update rules only determined by the spatial neighborhood of the current cell. In contrast to the conventional CA where the rules are predefined, we employ a neural network to infer the stochastic sequential transition rule of individual cells based on Markov chain. The obtained homogeneous local rule for the individual cells constitutes the 3D generative model, named\nGenerative Cellular Automata (GCA). When the rule is distributed into the group of occupied cells of an arbitrary starting shape, the sequence of local transitions eventually evolves into an instance among the diverse shapes from the multi-modal distribution. The local update rules of CA greatly reduce the search space of voxel occupancy, exploiting the sparsity and connectivity of 3D shapes.\nWe suggest a simple, progressive training procedure to learn the distribution of local transitions of which repeated application generates the shape of the data distribution. We represent the shape in terms of surface points and store it within a 3D grid, and the transition rule is trained only on the occupied cells by employing a sparse CNN (Graham et al. (2018)). 
The sparse representation can capture the high-resolution context information, and yet learn the effective rule enjoying the expressive power of deep CNN as demonstrated in various computer vision tasks (Krizhevsky et al. (2012); He et al. (2017)). Inspired by Bordes et al. (2017), our model learns sequences that are slightly different from the sampling chain but converge to the full shapes in the training data. The network successfully learns the update rules of CA, such that a single inference samples from the distribution of diverse modes along the surface.\nThe contributions of the paper are highlighted as follows: (1) We propose Generative Cellular Automata (GCA), a Markov chain based 3D generative model that iteratively mends the shape to a learned distribution, generating diverse and high-fidelity shapes. (2) Our work is the first to learn the local update rules of cellular automata for 3D shape generation in voxel representation. This enables the use of an expressive sparse CNN and reduces the search space of voxel occupancy by fully exploiting sparsity and connectivity of 3D shapes. (3) Extensive experiments show that our method has competitive performance against the state-of-the-art models in probabilistic shape completion and shape generation." }, { "heading": "2 3D SHAPE GENERATION WITH GENERATIVE CELLULAR AUTOMATA", "text": "Let Zn be an n-dimensional uniform grid space, where n = 3 for a 3D voxel space. A 3D shape is represented as a state s ⊂ Z3, which is an ordered set of occupied cells c ∈ Z3 in a binary occupancy grid based on the location of the surface. Note that our voxel representation is different from the conventional occupancy grid, where 1 represents that the cell is inside the surface. Instead, we only store the cells lying on the surface. This representation can better exploit the sparsity of 3D shape than the full occupancy grid.\nThe shape generation process is presented as a sequence of state variables s0:T that is drawn from the following Markov Chain: s0 ∼ p0 st+1 ∼ pθ(st+1|st) (1) where p0 is the initial distribution and pθ is the homogeneous transition kernel parameterized by θ. We denote the sampled sequence s0 → s1 → · · · → sT as a sampling chain. Given the data set D containing 3D shapes x ∈ D, our objective is to learn the parameters θ of transition kernel pθ, such that the marginal distribution of final generated sample p(sT ) = ∑ s0:T−1 p 0(s0) ∏ 0≤t<T pθ(s t+1|st)\nis close to the data distribution. The initial state s0 can be defined differently depending on the task to solve. For the task of probabilistic shape completion, s0 is given as the partial input shape. For shape generation, we set the initial state s0 to be the most simple state we can think of, a single cell {c}. Figure 1 presents examples of sampling chains of shape generation, where the starting shape s0 is merely a single cell.\nThe GCA further splits the transition kernel pθ(st+1|st) to be the combination of local update rules on individual occupied cells ci ∈ st, as depicted in Figure 2. The cellular transition is implemented with the sparse convolution, which is translation invariant if implemented with a fully convolutional network, and outputs the distribution of local occupied cells. Then individual predictions are aggregated by cell-wise averaging, resulting in the binary probability distribution for occupancy of each cell that follows the Bernoulli distribution:\npθ(s t+1|st) = ∏ c∈Zn pθ(c|st). 
(2)\nThe next state st+1 is sampled independently for individual cells from the obtained distribution and the sampling chain continues to the next time step.\nFor each transition pθ(st+1|st), we limit the search space of the occupied cells by confining our predictions within the neighborhood of the occupied cells. The underlying assumption is that the occupied cells of a valid shape are connected and a successful generation is possible by progressive growing into the immediate neighborhood of the given state. Specifically, the output of the sparse convolution is the occupancy probability of neighborhood cells pi = pθ(N (ci)|st), where the neighborhood cells are those that fall within a radius r ball centered at the cell, N (ci) = {c′ ∈ Zn| d(ci, c′) ≤ r} given a distance metric d. Other cells are ignored assuming they have probability 0 of occupancy. If the input state has M occupied cells st = {c1, · · · , cM}, the sparse convolution predicts the occupancy probability of individual cells with N -dimension vectors P = {p1, · · · , pM}, where N is the number of neighborhood cells fixed by the distance threshold r within the uniform grid Zn. After the cell-wise averaging step, the aggregated probability is nonzero for coordinates in N (st) = ⋃ c∈st N (c). Then the cell-wise sampling in Eq. (2) is performed only within N (st), instead of the full grid Zn, leading to an efficient sampling procedure.\nThe stochastic local transition rule pθ(N (ci)|st) changes the state of a cell’s immediate neighborhood N(ci), but the inference is determined from a larger perception neighborhood. In contrast, classical cellular automata updates a state of a cell determined by a fixed rule given the observation of the cell’s immediate neighborhood. The large perception neighborhood of GCA is effectively handled by deep sparse convolutional network, and results in convergence to a single consistent global shape out of diverse possible output shapes as further discussed in the appendix F." }, { "heading": "3 TRAINING GENERATIVE CELLULAR AUTOMATA", "text": "We aim to learn the parameters for the local transition probability pθ(N (ci)|st), whose repetitive application generates shape that follows the complex distribution of the data set. If we have sequences of sampling chains, then all current and next states can serve as the training data. Because we only have the set of complete shapes D, we emulate the sequence of sampling chain and obtain the intermediate states.\nThe emulated sequence of a sampling chain may start from an arbitrary state, and needs to converge to the full shape x ∈ D after local transitions to the neighbor of the previous state. A naive method would be to sample the next state st from the sampling chain and maximize pθ(x ∩N (st)|st), the probability of the shape that is closest to x among reachable forms from the current state1, similar to scheduled sampling (Bengio et al. (2015)). While this approach clearly emulates the inference procedure, it leads to learning a biased estimator as pointed out in Huszár (2015). More importantly, the emulated sequence cannot consistently learn the full shape x as it is not guaranteed to visit the state s such that x ⊂ N (s). We instead employ the use of infusion training procedure suggested by Bordes et al. (2017). 
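For concreteness, one transition st → st+1 of the sampling chain (Eqs. 1–2) can be sketched end-to-end as below. This is an illustrative paraphrase, not the released implementation: model stands for the sparse convolutional network that returns, for every occupied cell, occupancy probabilities over the neighbourhood offsets, and neighbourhood_offsets and sampling_step are our own helper names; the L1 metric shown is the one used for the generation experiments (Sec. 4.2).

import torch

def neighbourhood_offsets(radius):
    # All integer offsets within L1 distance `radius` of a cell.
    r = torch.arange(-radius, radius + 1)
    grid = torch.stack(torch.meshgrid(r, r, r, indexing="ij"), dim=-1).reshape(-1, 3)
    return grid[grid.abs().sum(dim=1) <= radius]

def sampling_step(state, model, radius):
    # state: (M, 3) coordinates of occupied cells; model(state): (M, N) probabilities.
    offsets = neighbourhood_offsets(radius)                      # (N, 3)
    probs = model(state)                                         # (M, N)
    coords = (state[:, None, :] + offsets[None, :, :]).reshape(-1, 3)
    uniq, inv = torch.unique(coords, dim=0, return_inverse=True)
    # Cell-wise averaging of predictions made by several neighbouring cells.
    num = torch.zeros(len(uniq)).index_add_(0, inv, probs.reshape(-1))
    den = torch.zeros(len(uniq)).index_add_(0, inv, torch.ones(coords.shape[0]))
    # Independent Bernoulli sampling per candidate cell (Eq. 2).
    keep = torch.bernoulli(num / den).bool()
    return uniq[keep]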
Specifically, the infusion chain, denoted as s̃0 → s̃1 → · · · → s̃T, is obtained as follows:\ns̃0 ∼ q0(s̃0|x), qt(s̃t+1|s̃t, x) = ∏_{c̃∈N(s̃t)} [(1 − αt) pθ(c̃|s̃t) + αt δx(c̃)]   (3)\nwhere q0 indicates the initial distribution of the infusion chain, and qt is the infusion transition kernel at time step t. For probabilistic shape completion s̃0 is sampled as a subset of x, while for shape generation s̃0 is a single cell c ∈ x. The transition kernel qt is defined for cells in the neighborhood c̃ ∈ N(s̃t) as the mixture of pθ(c̃|s̃t) and δx(c̃), which are the transition kernel of the sampling chain and the infusion of the ground truth shape x, respectively. δx(c̃) is crucial to guarantee that the sequence ultimately reaches the ground truth shape, and is formulated as the Bernoulli distribution with probability 1 if c̃ ∈ x, else 0. We set the infusion rate to increase linearly with respect to the time step, i.e., αt = min(wt, 1), where w > 0 is the speed of the infusion rate as in Bordes et al. (2017).\nWe can prove that the training procedure converges to the shape x in the training data if x is composed of weakly connected cells. We first define the connectivity of two states.\nDefinition 1. We call a state s̃′ partially connected to state x if for any cell b ∈ x there is a cell c0 ∈ s̃′ ∩ x and a finite sequence of coordinates c0:Tb in x that starts with c0 and ends with cTb = b, where each subsequent element is closer than the given threshold distance r, i.e., for any b ∈ x, ∃c0:Tb such that ci ∈ x, d(ci, ci+1) ≤ r, 0 ≤ i ≤ Tb, and c0 ∈ s̃′, cTb = b.\nThe shape x is partially connected to any s̃′ ≠ ∅ if we can create a sequence of coordinates between any pair of cells in x that is composed of local transitions bounded by the distance r. This is a very weak connectivity condition, and any set that overlaps with x is partially connected to x for shapes with continuous surfaces, which include the shapes in conventional datasets.\nNow, assuming that the state s̃t′ is partially connected to the state x, we recursively create a sequence by defining s̃t+1 = N(s̃t) ∩ x, which is the next state of the infusion chain with infusion rate 1. Since we use a linear scheduler for the infusion rate, we can assume that there exists a state s̃t′ such that the infusion rate αt′ = 1. The following proposition proves that the sequence (s̃t)t≥t′ converges to x with local transitions.\nProposition 1. Let state s̃t′ be partially connected to state x, where x has a finite number of occupied cells. We denote a sequence of states s̃t′:∞, recursively defined as s̃t+1 = N(s̃t) ∩ x. Then, there exists an integer T′ such that s̃t = x for all t ≥ T′.\nProof. The proof is found in the appendix A.\nThe proposition states that the samples of the infusion chain eventually converge to the shape x, and thus we can compute the nonzero likelihood p(x|s̃T) during training if T is large enough. One can also interpret the training procedure as learning the sequence of states that converges to x and is close to the sampling chain. We empirically observe that most of the training samples converge to the shape x before the infusion rate becomes 1.\n1Since we defined a state as a set of occupied cells, union (∪), intersection (∩), and subset (⊂) operations can be defined as for regular sets.\nFigure 3: Qualitative comparison of probabilistic shape completion.\nFigure 4: Samples from shape generation.\nThe training procedure can be summarized as follows:\n1. Sample infusion chain s̃0:T from s̃0 ∼ q0(s̃0|x), s̃t+1 ∼ qt(s̃t+1|s̃t, x). 2.
For each state s̃t, maximize the log-likelihood that has the closest form to x via gradient\ndescent, i.e., θ ← θ + η ∂ log pθ(x∩N (s̃ t)|s̃t)\n∂θ .\nThe full training algorithm utilizes a data buffer to de-correlate the gradients of subsequent time steps, and accelerates the process by controlling the time steps based on the current state. More details regarding the full training algorithm can be found in the appendix B." }, { "heading": "4 EXPERIMENTS", "text": "We demonstrate the ability of GCA to generate high-fidelity shapes in two tasks: probabilistic shape completion (Sec. 4.1) and shape generation (Sec. 4.2).\nState-of-the-art works on both tasks generate shapes using point cloud whereas ours uses grid structure. For comparison, we extract high-resolution points from the CAD model and map them into 643 voxel grid. Individual shapes in the dataset are centered and uniformly scaled to fit within the cube [−1, 1]3. We also present additional analysis on how the sparse representation and local transition of GCA can successfully learn to generate fine resolution shapes with continuous geometry (Sec. 4.3). Details regarding the training settings and baselines are reported in the appendix G." }, { "heading": "4.1 PROBABILISTIC SHAPE COMPLETION", "text": "Dataset and implementation details. The probabilistic shape completion is tested with PartNet (Mo et al. (2019)) and PartNet-Scan (Wu et al. (2020)) dataset, where objects in ShapeNet(Chang et al. (2015)) are annotated with instance-level parts. For each instance, we identify the input partial object by randomly choosing parts from the complete shape, and generating a diverse set of complete shapes. PartNet-Scan dataset simulates the real-world situation where partial scans suffer from part-level incompleteness by randomly selecting parts from the PartNet and virtually scanning the remaining parts. We validate our approach on chair, lamp, and table categories following Wu et al. (2020). We use neighborhood radius r = 3, T = 70 with infusion speed w = 0.005 for all datasets. Following the work of Wu et al. (2020), for each partial shape in the test set, we generate ten completion results and sample 2,048 points from occupied cells.\nWe compare the performance of probabilistic shape completion of GCA against cGAN (Wu et al. (2020)), cGAN and its variations (cGAN-im-l2z and cGAN-im-pc2z) 2 , with 3D-IWGAN (Smith\n2We would like to acknowledge that, when training, cGAN only requires the partial set and complete shape set without any explicit pairing between each of the instances in the set. However, we report cGAN as the baseline\n& Meger (2017)) The qualitative results in Figure 3 and the appendix H clearly show that our highresolution completion exhibits sophisticated geometric details compared to the state-of-the-art. Our superior performance is further verified in the quantitative results reported in Table 1 with three metrics: minimal matching distance (MMD), total mutual difference (TMD), and unidirectional Hausdorff distance (UHD). MMD and TMD are reported after generated samples are centered due to translation invariant nature of our model. The detailed description regarding the evaluation metrics are found in the appendix G.3.\nWe outperform all other methods in MMD and TMD on average, and achieve comparable or the best results in UHD. The results indicate that we can generate high fidelity (MMD) yet diverse (TMD) shapes while being loyal to the given partial geometry (UHD). 
We observe an extremely large variation of completion modes in the lamp dataset, which is the dataset with the most diverse structure while having a high level of incompleteness in the input partial data. This is because the lamp dataset contains fragmented small parts that could be selected as an highly incomplete input. We include further discussion about the diversity in the lamp dataset in the appendix E.\nMultiple category completion. While previous works on shape completion are trained and tested individual categories separately, we also provide the results on training with the entire category, denoted as GCA (joint) in Table 1. The performance of jointly trained transition kernel does not significantly differ to those of independently trained, demonstrating the expressiveness and high capacity of GCA. In addition, interesting behavior arises with the jointly trained GCA. As shown in the left image of Figure 5, given ambiguous input shape, the completed shape can sometimes exhibit features from categories other than that of the given shape.\nFurthermore, GCA is capable of completing a scene composed of multiple objects although it is trained to complete a single object (Figure 5, right). We believe that this is due to the translation invariance and the local transition of GCA. As discussed in Section 2, the deep network observes relatively large neighborhood, but GCA updates the local neighborhood based on multiple layers of sparse convolution which is less affected by distant voxels. The information propagation between layers of sparse convolutional network (Graham et al. (2018)) is mediated by occupied voxels, which effectively constrains the perception neighborhood to the connected voxels. As a result, the effect of separated objects are successfully controlled. We also include additional examples in the appendix D on how the GCA trained in one category completes the unseen category." }, { "heading": "4.2 SHAPE GENERATION", "text": "Dataset and implementation details. We test the performance of shape generation using the ShapeNet (Chang et al. (2015)) dataset, focusing on the categories with the most number of shapes:\nto our method, since this is the most recent probabilistic shape completion. cGAN-im-l2z and cGAN-im-pc2z are variants of cGAN that implicitly model the multi-modality for probabilistic shape completion.\nairplane, car, and chair, as presented in Yang et al. (2019). We use the same experimental setup as Cai et al. (2020), and sample 2,048 points from the occupied cells of the generate shape and center the shape, due to translation invariant aspect of our model, for a fair comparison. We use neighborhood size r = 2 with L1 distance metric, T = 100 inferences, infusion speed w = 0.005 for airplane and car dataset, and r = 3 for chair category, which tend to converge better.\nThe quantitative results of shape generation are reported in Table 2. We compare the performance of our approach against recent point cloud based methods: r-GAN (Achlioptas et al. (2018)), GCN-GAN (Valsesia et al. (2019)), Tree-GAN (Shu et al. (2019)), Pointflow (Yang et al. (2019)), ShapeGF (Cai et al. (2020)), a voxel-based method: 3D-IWGAN (Smith & Meger (2017)), and training set. The scores assigned to training set can be regarded as an optimal value for each metric. 
We evaluate our model on three metrics, 1-nearest-neighbor-accuracy (1-NNA), coverage (COV), and minimal matching distance (MMD) as defined in the appendix G.3.\nOur approach achieves the state-of-the-art results on 1-NNA of car category. As remarked by Yang et al. (2019), 1-NNA is superior on measuring the distributional similarity between generated set and test set from the perspective of both diversity and quality. Our method also achieves state-of-the-art results on COV of car, implying that our method is able to generate diverse results. Note we are using a homogeneous transition kernel confined within the local neighborhood of occupied cells, but can still achieve noticeable performance on generating a rich distribution of shape. MMD is a conventional metric to represent the shape fidelity, but some results achieve scores better than the training set which might be questionable (Yang et al. (2019)). We visualize our shape generation results in Figure 4 and in the appendix I." }, { "heading": "4.3 ANALYSIS ON NEIGHBORHOOD AND CONNECTIVITY", "text": "GCA makes high-resolution predictions by only handling the occupied voxels and their neighborhood, thus overcoming the limitation of memory requirement of voxel-based representation of 3D shape. In Figure 6, we empirically analyze the required search space during the shape generation process that starts from a single cell and evolves to the full shape. The occupied voxels take approximately 2% of\nthe total volume (solid lines), and their neighborhood adds up to 2-8% (dashed lines). After a certain amount of time steps, the growing speed of the search space gradually decreases, implying that our method tends to converge and maintain the generated shape.\nThe large structural variation observed in the lamp dataset leads to interesting observation on the neighborhood size of GCA and the connectivity of the shape. When r = 1, nearly 10% of the infusion chain data fails to meet the stopping condition (cover more than 95% of x, see the training detail in the appendix B) with time step T = 100. This is because GCA trained on a small neighborhood size, while enabling the transition kernel to be trained quickly, lacks the flexibility to model disconnected shapes even after large time steps. On the other hand, increased neighborhood size can capture disjoint parts at the expense of larger search space and instability. When r = 10, the trained transition kernel generates output that is noisy or unrelated to the input partial shape. We believe that the transition kernel can not be trained to cover the entire search space as the neighborhood size increases cubic to r. A few bad sampling steps can lead to a state that is far from the previous state or unobserved during training. The effects of hyperparameters are further discussed in the appendix C.\nWe also clarify the notion of connectivity in relation to the neighborhood size. While we mostly demonstrate the performance of GCA generating 3D shapes with continuous surface, GCA can surely generate disjoint parts as long as the distance between parts is within the radius r. Besides, GCA is flexible enough to learn to generate parts that are farther than r. r is merely the distance of cells between adjacent steps, and it is possible to learn the sequence of states that generates a temporary bridge to reach disconnected parts after a few steps. 
Figure 7 shows an observed sequence of a sampling chain that generates a temporary bridge from the shade to the body of the lamp further than the neighborhood size. After the sampling chain reaches the main body, GCA removes the bridge and successfully generates the full shape." }, { "heading": "5 RELATED WORKS", "text": "" }, { "heading": "5.1 GENERATIVE MODELS", "text": "Autoregressive models. PixelCNN and its variants (van den Oord & Kalchbrenner (2016); van den Oord et al. (2016)) sequentially sample a single pixel value at a time, conditioned on the previously generated pixels. PointGrow (Sun et al. (2020)) extends the idea to 3D by sampling the coordinates of points in a sequential manner. While these approaches lead to tractable probabilistic density estimation, inferring one pixel/point with a single inference is not scalable to high resolution 3D voxel space. GCA, on the other hand, can grow to the shape occupying any number of voxels in the neighborhood at each inference.\nMarkov chain models. Previous works learn the transition kernel of Markov chain operating on the input space which eventually produces samples matching the data distribution (Sohl-Dickstein et al. (2015); Anirudh et al. (2017); Bordes et al. (2017)). The diffusion-based models (Sohl-Dickstein et al. (2015); Anirudh et al. (2017)) start from a random noise distribution and incrementally denoise the input to generate the data. They learn the reverse of a diffusion process, which begins from the data and gradually adds noise to become a factorial noise distribution. On the other hand, the work of infusion training (Bordes et al. (2017)) slightly infuses data to input dimension during training, and learns the chain biased towards the data. This removes the process of inverting the diffusion chain and allows the chain to start from any distribution. The training of GCA utilizes the technique of infusion, enabling to train on both shape generation and completion.\n3D generative models. Recent works on 3D generative models (Yang et al. (2019); Cai et al. (2020); Wu et al. (2020)) achieve noticeable success with point cloud based representation. They are based on PointNet (Qi et al. (2016)), one of the most widely-used neural network architecture consuming point cloud. PointNet condenses the 3D shape into a single global feature vector to achieve permutation invariance, but in turn lacks the representation power of local context observed in deep neural architecture. On the other hand, voxel-based generation methods (Wu et al. (2016); Smith & Meger (2017)) can capture local features by directly adapting a CNN-based architecture, but suffer from immense memory consumption in high-resolution. Recent works of sparse CNN (Graham et al. (2018); Choy et al. (2019)) solves the problem of voxel representation by exploiting sparsity, and outperforms other methods by a significant margin in the field of 3D semantic segmentation. To the best of our knowledge, we are the first to utilize the expressive power of sparse CNN to learn a probabilistic 3D generative model in a high-resolution 3D grid." }, { "heading": "5.2 CELLULAR AUTOMATA", "text": "Cellular automata (CA) is introduced to simulate biological or chemical systems (Wolfram (1982); Markus & Hess (1990)). CA consists of a grid of cells where individual cells are repeatedly updated with the rule depending only on the state of neighborhood cells. 
However, classical CA require a fixed rule to be given. Wulff & Hertz (1992) learn simple rules with a neural network and model the underlying dynamics of CA. Neural Cellular Automata (Mordvintsev et al. (2020)) is an interesting extension of CA that incorporates a neural network to learn the iterative update rule and successfully generates a pre-specified image. While the benefits of CA are not clear in the image domain, we show that the local update rule of CA can excel at generating 3D shape, which is sparse and connected, achieving near state-of-the-art performance.

There is one major difference between GCA and classical CA. The transition kernel employs a deep sparse CNN, so the effective perception neighborhood is much larger than the update neighborhood. With this extension, GCA has the capacity to perceive complex context while generating high-fidelity local shape, as demonstrated by the multi-category and multi-object generation in Figure 5." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this work, we present a probabilistic 3D generative model, named GCA, capable of producing diverse and high-fidelity shapes. Our model learns the transition kernel of a Markov chain operating only on the spatial vicinity of an input voxel shape, analogous to the local update rules of cellular automata. We effectively reduce the search space in a high-resolution voxel space, with the advantage of employing a highly expressive sparse convolutional neural network, which leads to state-of-the-art performance in probabilistic completion and yields competitive results against recent methods in shape generation.

Although our experiments are mainly focused on shape generation, we believe our framework is capable of modeling a wide range of sequential data that tends to be sparse and connected. For instance, particle-based fluid simulation models the continuous movement of fluid particles, where the local update rules of our approach could be applied." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Youngjae Lee for helpful discussion and advice. This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1C1C1008195) and the National Convergence Research of Scientific Challenges through the National Research Foundation of Korea (NRF) funded by Ministry of Science and ICT (NRF2020M3F7A1094300)." }, { "heading": "A PROOF OF PROPOSITION 1", "text": "We present the proposition in Sec. 3 and its proof.

Proposition 1. Let state s̃^{t′} be partially connected to state x, where x has a finite number of occupied cells. Denote by s̃^{t′:∞} the sequence of states recursively defined as s̃^{t+1} = N(s̃^t) ∩ x. Then there exists an integer T′ such that s̃^t = x for all t ≥ T′.

Proof. If s̃^{T′} = x, then s̃^{T′+1} = N(s̃^{T′}) ∩ x = x, and hence s̃^t = x for all t ≥ T′. It therefore suffices to show that there exists an integer T′ with s̃^{T′} = x, which we do by showing s̃^{T′} ⊂ x and s̃^{T′} ⊃ x. We first prove the former by showing that the sequence of states s̃^{t′:∞} is increasing and bounded by x: for all t > t′,

s̃^t = s̃^t ∩ x ⊂ N(s̃^t) ∩ x = s̃^{t+1} ⊂ x.

Now we show s̃^{T′} ⊃ x to complete the proof. Recall the definition of partial connectivity.

Definition 1.
We call a state s̃′ partially connected to state x if for any cell b ∈ x there exist a cell c_0 ∈ s̃′ ∩ x and a finite sequence of coordinates c_{0:T_b} in x that starts at c_0 and ends at c_{T_b} = b, in which each subsequent element is closer than the threshold distance r; i.e., for any b ∈ x there exists c_{0:T_b} such that c_i ∈ x and d(c_i, c_{i+1}) ≤ r for 0 ≤ i < T_b, with c_0 ∈ s̃′ and c_{T_b} = b.

By definition, for each coordinate b ∈ x there exists a sequence of coordinates c^b_{0:T_b}. We now show c^b_t ∈ s̃^{t′+t} by induction. First, c^b_0 ∈ s̃^{t′}. Assuming c^b_t ∈ s̃^{t′+t} for t ≥ 0,

c^b_{t+1} ∈ N(c^b_t) ∩ x ⊂ N(s̃^{t′+t}) ∩ x = s̃^{t′+t+1}.

So c^b_{T_b} = b ∈ s̃^{t′+T_b}. If we set T′ = t′ + max_{b∈x} T_b, then x ⊂ ⋃_{b∈x} s̃^{t′+T_b} ⊂ s̃^{T′} holds, where the second inclusion follows from the increasing property of s̃^{t′:∞} proven above, i.e., s̃^{t_1} ⊂ s̃^{t_2} for t_1 < t_2. Thus we conclude that x ⊂ s̃^{T′}." }, { "heading": "B FULL DESCRIPTION OF TRAINING PROCEDURE", "text": "The training procedure described in Sec. 3 introduces the infusion chain and gives a high-level description of training. When training data is sampled from the infusion chain, we utilize a data buffer B as in experience replay (Lin (1992)). Sequential state transitions of the same data point, including those of our infusion chain, are highly correlated and would increase the variance of updates. With the data buffer B, we can train the network on a batch of state transitions from different data points and decorrelate the gradients used for back-propagation.

The overall training procedure is described in Algorithm 1. The buffer B carries tuples consisting of the current state s, the whole shape x, and the time step t. Given the maximum budget |B|, the buffer is initialized with tuples (s̃^0, x, 0), where s̃^0 ∼ q_0(s̃^0|x), for a subset of data x ∈ D. For probabilistic shape completion, q_0(s̃^0|x) yields a subset of x; for shape generation we sample just one cell {c} ⊂ x. Each training step then uses a mini-batch of M tuples popped from the buffer B. We update the parameters of the neural network by maximizing the log-likelihood of the state closest to x among the neighborhood of the current state, i.e., argmax_θ log p_θ(x ∩ N(s̃^t) | s̃^t). Then we sample the next state from the infusion chain as defined in Eq. (3). If the next state does not meet the stopping criterion, the tuple with the next state (s̃^{t_i+1}_i, x_i, t_i + 1) is pushed back to the buffer; otherwise, a new data point is sampled from the dataset and its initial tuple is pushed instead.

While the stopping criterion (line 9) should reflect convergence to the whole shape x, the number of time steps needed to generate the complete shape varies significantly depending on the incompleteness of the initial shape, the efficiency of the stochastic transitions, and the complexity of the full shape. In addition, we need to learn to stay at the complete state once it is reached.

Algorithm 1 Training GCA
1: Given dataset D, neural network parameters θ
2: Initialize buffer B with maximum budget |B| from D
3: repeat
4:   Pop mini-batch (s̃^{t_i}_i, x_i, t_i)_{i=1:M} from buffer B
5:   L ← 0
6:   for each index i in the mini-batch do
7:     L ← L + log p_θ(x_i ∩ N(s̃^{t_i}_i) | s̃^{t_i}_i)
8:     s̃^{t_i+1}_i ∼ q_{t_i}(s̃^{t_i+1}_i | s̃^{t_i}_i, x_i)
9:     if s̃^{t_i+1}_i does not meet the stopping criterion then
10:      Push (s̃^{t_i+1}_i, x_i, t_i + 1) into buffer B
11:    else
12:      Sample x_i ∈ D
13:      Sample s̃^0_i ∼ q_0(s̃^0_i | x_i)
14:      Push (s̃^0_i, x_i, 0) into buffer B
15:    end if
16:  end for
17:  Update parameters θ ← θ + η ∂L/∂θ
18: until convergence of training or early stopping
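To make the data flow concrete, here is a simplified sketch of this buffer-based loop (our own illustration: the model, the infusion sampler, and the stopping test are injected as stubs, `dataset` is a sequence of cell sets, and it reuses the `neighborhood` helper sketched earlier; all names are hypothetical):

```python
import random

def train_gca(dataset, model, sample_infusion, stop_reached,
              buffer_size=32, batch_size=8, n_iters=1000, r=1):
    """Simplified version of Algorithm 1: buffer-based GCA training.

    Each buffer entry is a (state, x, t) tuple; states and shapes x are sets
    of integer cell coordinates. `model.update(target, state)` is assumed to
    take a gradient step on log p_theta(target | state); `sample_infusion`
    draws the next infusion-chain state; `stop_reached` tests the criterion.
    """
    def seed(x):                       # shape generation: one seed cell
        return ({random.choice(sorted(x))}, x, 0)

    buffer = [seed(x) for x in random.sample(dataset, buffer_size)]
    for _ in range(n_iters):
        batch, buffer = buffer[:batch_size], buffer[batch_size:]
        for state, x, t in batch:
            # Maximize likelihood of the closest reachable subset of x.
            model.update(target=x & neighborhood(state, r), state=state)
            nxt = sample_infusion(state, x, t)
            if stop_reached(nxt, x):
                buffer.append(seed(random.choice(dataset)))
            else:
                buffer.append((nxt, x, t + 1))
    return model
```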
To address this, we propose a simple solution with an adaptive number of training steps for each data point: when s̃^t contains 95% of x for the first time, we train for only a constant T̂ additional time steps. The additional T̂ steps lead the model to learn a transition kernel close to the identity, implicitly learning to stay still after generation is complete, as in Anirudh et al. (2017).

The GCA framework is successfully trained to generate a wide range of shapes even though the transition kernel is confined to its local neighborhood. Over 99% of the training data in the generation and completion experiments satisfies the stopping criterion, except for the lamp category of the PartNet-Scan dataset in shape completion, for which 97% of the data satisfies the stopping criterion." }, { "heading": "C ABLATION STUDY ON HYPERPARAMETERS", "text": "In this section, we further investigate the effects of the hyperparameters: the neighborhood radius r, the infusion speed w, and the number of time steps T, using probabilistic shape completion on the lamp dataset. The default hyperparameters are r = 3, w = 0.005, T = 70, and in each ablation only the corresponding hyperparameter is changed. For the studies on neighborhood radius and infusion speed, the networks were retrained with the matching hyperparameters, while the ablation on time steps uses the model trained with the default hyperparameters but tested with varying time step T. Figure 8 shows the effects of r, w, and T on MMD (quality), TMD (diversity), and UHD (fidelity).

The effect of the neighborhood radius r. The neighborhood radius r is the size of the update neighborhood of the probabilistic distribution for individual cells, N(c_i) = {c′ ∈ ℤ^n | d(c_i, c′) ≤ r}. When the radius is too small (r = 1, 2), MMD (quality) is considerably higher than when r = 3, 4. Since the size of each update is small, the model requires more time steps to reconstruct the whole shape and is not able to generate the disconnected lamps shown in Figure 7. When the neighborhood size is adequate (r = 3, 4), GCA produces finely reconstructed shapes with reasonable scores on all metrics. However, when the radius is too large (r = 5, 10), the model yields shapes that are largely irrelevant to the given initial state, resulting in high TMD and UHD scores. Due to the large number of possible transition states, a few bad samplings lead to states irrelevant to the initial state, degrading performance.

The effect of the infusion speed w. The infusion speed controls how strongly the ground-truth shape is injected (α_t = min(wt, 1) in Eq. (3)) as the time step increases. When the infusion speed is too low, the infusion chain is close to the sampling chain, but it is less likely to visit a state that observes the whole complete shape. Also, as noted by Huszár (2015), the model learns a biased estimator, which implies that GCA may not converge to the correct model. The results in Figure 8 show that MMD and UHD are higher for w = 0, 0.001 than for w = 0.005, indicating that small w generates shapes of worse quality and fidelity. On the other hand, when the infusion speed is set too high (w = 0.03, 0.05), the discrepancy between the infusion chain and the sampling chain increases; we observed diverging behavior of GCA in this regime. Bordes et al. (2017) state that the optimal value of the infusion speed depends on the time step T. We empirically found that w = 0.005 produced the best results for both shape generation and completion, with T = 100 and T = 70 respectively. A sketch of one infusion step under our reading of Eq. (3) follows.
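In this sketch (our own reading of Eq. (3), not the authors' code), each cell in the search space is drawn from a mixture of the model's occupancy probability and the ground truth, with the ground truth weighted by α_t = min(wt, 1); `model_prob` is a hypothetical stand-in for the network's per-cell output, and `neighborhood` is the helper sketched earlier:

```python
import random

def infusion_step(state, x, t, model_prob, w=0.005, r=1):
    """Sample the next infusion-chain state s~^{t+1}.

    For each cell c in the search space, occupancy mixes the model's
    probability p_theta(c | state) with the ground truth x, where the
    ground truth is weighted by alpha_t = min(w * t, 1).
    """
    alpha = min(w * t, 1.0)
    nxt = set()
    for c in neighborhood(state, r):
        p = (1 - alpha) * model_prob(c, state) + alpha * float(c in x)
        if random.random() < p:
            nxt.add(c)
    return nxt
```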
The effect of the number of time steps T. The time step T indicates the number of state transitions performed to generate the shape, s^0 → s^1 → … → s^T. The right plot of Figure 8 shows the performance of GCA for different numbers of time steps, with the model trained using the default hyperparameters. Even though MMD is lowest at T = 30, some lamps fail to complete within the given time steps. As the number of time steps increases, all scores tend to increase as the initial state is morphed toward a complete state. However, we emphasize that the scale of the plot is relatively small and the model keeps MMD (quality) below 0.002 even when run for 200 time steps. This is substantially low considering that the MMD of cGAN (Wu et al. (2020)) is 0.00197. This demonstrates that GCA maintains the quality of shapes and is robust to increasing time steps." }, { "heading": "D COMPLETION RESULTS ON UNSEEN INITIAL STATES", "text": "We investigate the behavior of GCA when an unseen input is given as the initial state. Figure 9 shows the result of GCA trained on chair completion but given a partial lamp as input. The behavior falls largely into three classes. First, the model respects the initial state and generalizes to a novel shape, as in Figure 9 (a). Second, as in Figure 9 (b) and (c), which are the most common cases, the model deforms the initial state into a trained state and generates results learned during training: the final state contains a part similar to the initial one, but deformed into a shape that belongs to the trained category. Lastly, as in Figure 9 (d), if the initial state is dissimilar to any visited state, the model fails to generalize and starts diverging." }, { "heading": "E DIVERSITY ANALYSIS ON LAMP DATASET", "text": "Among the results presented in Sec. 4, our diversity score (TMD) for shape completion is significantly higher than the state of the art on the lamp dataset (Table 1). Investigating instances with high TMD scores in the lamp dataset, we observe that their input parts are small and highly incomplete. The input partial data is created by randomly selecting a part from an instance, and the part labels in the lamp dataset are fragmented and diverse, with less regular structure and many small parts. A typical example with a high TMD score is shown in Figure 10. The input is a small part of a lamp, and our completion results cover a wide variety of lamps, including table lights and ceiling lights, while still exhibiting fine geometry. In contrast, cGAN (Wu et al. (2020)) handles the ambiguity with consistently blurry output. The TMD score for this specific example is 20.45 for GCA, while cGAN achieves 6.592." }, { "heading": "F CONVERGING TO A SINGLE MODE", "text": "Our scalable formulation can generate multiple voxels in a single inference step. This is a unique characteristic of our formulation compared to other autoregressive models (van den Oord et al. (2016); Sun et al. (2020)) and the key to handling high-resolution 3D shapes. Specifically, the multiple voxels of the next state are sampled from conditionally independent occupancies for each cell, i.e., p_θ(s^{t+1}|s^t) = ∏_{c∈N(s^t)} p_θ(c|s^t); a minimal sampling sketch is given below. While we gain efficiency by making multiple independent decisions in a single inference, there is no central algorithm that explicitly controls the global shape.
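A minimal sketch of this factorized sampling step (one network evaluation scores every cell in the search space; `cell_probs` is a hypothetical stand-in for the sparse CNN's per-cell outputs, and `neighborhood` is the helper sketched earlier):

```python
import random

def sample_next_state(state, cell_probs, r=1):
    """Draw s^{t+1} by independent Bernoulli draws over the search space.

    `cell_probs(state)` maps each cell in the neighborhood of `state` to its
    occupancy probability p_theta(c | s^t); all cells are decided in parallel
    under the factorized distribution prod_c p_theta(c | s^t).
    """
    probs = cell_probs(state)                      # {cell: probability}
    return {c for c in neighborhood(state, r)
            if random.random() < probs.get(c, 0.0)}
```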
In other words, one can view the generation process of GCA as the morphogenesis of an organism, but there is no unified 'DNA' that eventually steers it toward a single realistic shape. We indeed encounter rare failure cases with multiple conflicting decisions on the global structure, as presented in Figure 11.

However, in most cases, our model is capable of generating globally consistent shapes, converging to a single mode of a multi-modal data distribution. For example, when a chair seat is given as the partial input shape, as in Figure 12, there are multiple possible modes of shape completion, including four-leg chairs and swivel chairs. The ambiguity is present in s^{2:6}, where both the center and the corners have high probabilities of chair legs. GCA eventually picks a mode, deciding to erase the legs at the corners and grow only towards the center, creating a swivel chair. As this example shows, the transition kernel of GCA observes the changing context and eventually converges to a globally consistent mode, even though individual cells are sampled independently." }, { "heading": "G EXPERIMENTAL DETAILS", "text": "G.1 NEURAL NETWORK ARCHITECTURE AND IMPLEMENTATION DETAILS

All experiments use a variant of the U-Net (Ronneberger et al. (2015)) architecture, implemented with the sparse convolution library MinkowskiEngine (Choy et al. (2019)) and depicted in Figure 13. We train our method using the Adam (Kingma & Ba (2015)) optimizer with an initial learning rate of 5e-4 and a batch size of 32. The learning rate decays by 0.5 every 100k steps. All experiments are run on an RTX 2080 Ti 11GB GPU, except the analysis on neighborhood size with r = 10, which ran on a Titan RTX 24GB GPU.

G.2 BASELINES

All results in Table 1 and Table 2 are excerpted from Wu et al. (2020) and Cai et al. (2020), except those of 3D-IWGAN (Smith & Meger (2017)). 3D-IWGAN was trained on the same dataset as ours for 4000 epochs using the authors' official code (https://github.com/EdwardSmith1884/3D-IWGAN) with the default hyperparameters. Since the original code only supports 32 x 32 x 32 voxel resolution, we added an extra convolution/deconvolution layer to the GAN's discriminator, generator, and VAE to generate 64 x 64 x 64 voxel resolution.

G.3 EVALUATION METRICS

We provide a more detailed explanation of the quantitative metrics used to evaluate our approach. Since we compare our method against recent point cloud based methods, we convert voxels into point clouds simply by treating the coordinates of occupied voxels as points.

For all evaluations, we use the Chamfer distance (CD) to measure the similarity between shapes represented as point clouds. The Chamfer distance between two point clouds X and Y is formally defined as

d_CD(X, Y) = ∑_{x∈X} min_{y∈Y} ‖x − y‖²₂ + ∑_{y∈Y} min_{x∈X} ‖x − y‖²₂. (4)

Probabilistic Shape Completion. We employ the minimal matching distance (MMD), total mutual difference (TMD), and unidirectional Hausdorff distance (UHD), as in Wu et al. (2020).

Let S_p be the set of input partial shapes and S_c the set of complete shapes in the dataset. For each partial shape P ∈ S_p, we generate k = 10 complete shapes C^P_{1:k}. Let G = {C^P_{1:k}} be the collection of all generated shapes over all elements of S_p.

• MMD measures the quality of the generated set. For each complete shape in the test set, we compute the distance to the nearest neighbor in the generated set G and average, which is formally defined by

MMD = (1/|S_c|) ∑_{Y∈S_c} min_{X∈G} d_CD(X, Y). (5)
• TMD measures the diversity of the generated set. We average the Chamfer distance over all pairs of generated shapes for the same partial input, formally defined by

TMD = (1/|S_p|) ∑_{P∈S_p} [2/(k(k−1))] ∑_{1≤i<k} ∑_{i<j≤k} d_CD(C^P_i, C^P_j). (6)

• UHD measures the fidelity of the completed results against the partial inputs. We average the unidirectional Hausdorff distance d_HD from the partial input to each of the k completed results, formally defined as

UHD = (1/|S_p|) ∑_{P∈S_p} (1/k) ∑_{1≤i≤k} d_HD(P, C^P_i). (7)

Shape Generation. We employ the 1-nearest-neighbor accuracy (1-NNA), coverage (COV), and minimal matching distance (MMD), as in Cai et al. (2020). Since MMD was introduced above, we state the definitions of 1-NNA and COV. Let S_g be the set of generated shapes and S_r the set of reference shapes.

• 1-NNA, proposed by Lopez-Paz & Oquab (2016), evaluates whether two distributions are identical. For a shape X, we denote the nearest neighbor by N_X = argmin_{Y∈S_{−X}} d_CD(X, Y), where S_{−X} = S_r ∪ S_g − X is the set of all generated and reference shapes excluding X itself. 1-NNA is defined as

1-NNA(S_g, S_r) = (∑_{X∈S_g} 1_{N_X∈S_g} + ∑_{Y∈S_r} 1_{N_Y∈S_r}) / (|S_g| + |S_r|), (8)

where 1 is the indicator function. The optimal value of 1-NNA is 50%, attained when the two distributions are identical and the two sets cannot be distinguished.

• COV measures the proportion of shapes in the reference set that are matched to at least one shape in the generated set, formally defined by

COV(S_g, S_r) = |{argmin_{Y∈S_r} d_CD(X, Y) | X ∈ S_g}| / |S_r|. (9)" }, { "heading": "H ADDITIONAL SAMPLES ON PROBABILISTIC COMPLETION", "text": "" }, { "heading": "I ADDITIONAL SAMPLES ON SHAPE GENERATION", "text": "" } ]
2021
GENERATIVE CELLULAR AUTOMATA
SP:9326f169cc5e8d2f4268dcf39af31590ee004d98
[ "This paper extends the results for actor-critic with stochastic policies of [Zhang, ICML 2018] to deterministic policies and offers the proof of convergence under some specific assumptions. The authors consider both the on-policy setting and the off-policy setting and offers some convincing derivation. It provides a valuable idea and a promising direction in MARL, but the current version has several problems that need to be fixed. Specifically, some parts of equations, algorithms, and expressions are ambiguous and unintelligible. Besides, problems with the format in the formula and citations also exist, which degrade the paper’s quality and clarity." ]
[Zhang, ICML 2018] provided the first decentralized actor-critic algorithm for multi-agent reinforcement learning (MARL) that offers convergence guarantees. In that work, policies are stochastic and are defined on finite action spaces. We extend those results to offer a provably-convergent decentralized actor-critic algorithm for learning deterministic policies on continuous action spaces. Deterministic policies are important in real-world settings. To handle the lack of exploration inherent in deterministic policies, we consider both off-policy and on-policy settings. We provide the expression of a local deterministic policy gradient, decentralized deterministic actor-critic algorithms, and convergence guarantees for linearly-approximated value functions. This work will help enable decentralized MARL in high-dimensional action spaces and pave the way for more widespread use of MARL.
[]
[ { "authors": [ "Zhang" ], "title": "The desired result holds since Step 1 and Step 2 of the proof of Theorem", "venue": null, "year": 2018 }, { "authors": [ "Benveniste" ], "title": "492 proof is now similar to the proof of Lemma 2 on page", "venue": null, "year": 1990 }, { "authors": [ "Zhang" ], "title": "To prove this lemma we verify the conditions for Theorem", "venue": null, "year": 2018 }, { "authors": [ "Zhang" ], "title": "Using the same notation as in Assumption", "venue": null, "year": 2018 }, { "authors": [ "Zhang" ], "title": "So similar conclusions to the ones", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "[Zhang, ICML 2018] provided the first decentralized actor-critic algorithm for1 multi-agent reinforcement learning (MARL) that offers convergence guarantees. In2 that work, policies are stochastic and are defined on finite action spaces. We extend3 those results to offer a provably-convergent decentralized actor-critic algorithm for4 learning deterministic policies on continuous action spaces. Deterministic policies5 are important in real-world settings. To handle the lack of exploration inherent in de-6 terministic policies, we consider both off-policy and on-policy settings. We provide7 the expression of a local deterministic policy gradient, decentralized deterministic8 actor-critic algorithms and convergence guarantees for linearly-approximated value9 functions. This work will help enable decentralized MARL in high-dimensional10 action spaces and pave the way for more widespread use of MARL.11\n1 Introduction12\nCooperative multi-agent reinforcement learning (MARL) has seen considerably less use than its13 single-agent analog, in part because often no central agent exists to coordinate the cooperative agents.14 As a result, decentralized architectures have been advocated for MARL. Recently, decentralized15 architectures have been shown to admit convergence guarantees comparable to their centralized16 counterparts under mild network-specific assumptions (see Zhang et al. [2018], Suttle et al. [2019]).17 In this work, we develop a decentralized actor-critic algorithm with deterministic policies for multi-18 agent reinforcement learning. Specifically, we extend results for actor-critic with stochastic policies19 (Bhatnagar et al. [2009], Degris et al. [2012], Maei [2018], Suttle et al. [2019]) to handle deterministic20 policies. Indeed, theoretical and empirical work has shown that deterministic algorithms outperform21 their stochastic counterparts in high-dimensional continuous action settings (Silver et al. [January22 2014b], Lillicrap et al. [2015], Fujimoto et al. [2018]). Deterministic policies further avoid estimating23 the complex integral over the action space. Empirically this allows for lower variance of the critic24 estimates and faster convergence. On the other hand, deterministic policy gradient methods suffer25 from reduced exploration. For this reason, we provide both off-policy and on-policy versions of our26 results, the off-policy version allowing for significant improvements in exploration. The contributions27 of this paper are three-fold: (1) we derive the expression of the gradient in terms of the long-term28 average reward, which is needed in the undiscounted multi-agent setting with deterministic policies;29 (2) we show that the deterministic policy gradient is the limiting case, as policy variance tends to30 zero, of the stochastic policy gradient; and (3) we provide a decentralized deterministic multi-agent31 actor critic algorithm and prove its convergence under linear function approximation.32\nSubmitted to 34th Conference on Neural Information Processing Systems (NeurIPS 2020). Do not distribute.\n2 Background33\nConsider a system of N agents denoted by N = [N ] in a decentralized setting. Agents determine34 their decisions independently based on observations of their own rewards. Agents may however com-35 municate via a possibly time-varying communication network, characterized by an undirected graph36 Gt = (N , Et), where Et is the set of communication links connecting the agents at time t ∈ N. 
The networked multi-agent MDP is thus characterized by a tuple (S, {A^i}_{i∈N}, P, {R^i}_{i∈N}, {G_t}_{t≥0}), where S is a finite global state space shared by all agents in N, A^i is the action space of agent i, and {G_t}_{t≥0} is a time-varying communication network. In addition, let A = ∏_{i∈N} A^i denote the joint action space of all agents. Then, P : S × A × S → [0, 1] is the state transition probability of the MDP, and R^i : S × A → ℝ is the local reward function of agent i. States and actions are assumed globally observable, whereas rewards are only locally observable. At time t, each agent i chooses its action a^i_t ∈ A^i given state s_t ∈ S, according to a local parameterized policy π^i_{θ^i} : S × A^i → [0, 1], where π^i_{θ^i}(s, a^i) is the probability of agent i choosing action a^i at state s, and θ^i ∈ Θ^i ⊆ ℝ^{m_i} is the policy parameter. We pack the parameters together as θ = [(θ^1)^⊤, …, (θ^N)^⊤]^⊤ ∈ Θ, where Θ = ∏_{i∈N} Θ^i. We denote the joint policy by π_θ : S × A → [0, 1], where π_θ(s, a) = ∏_{i∈N} π^i_{θ^i}(s, a^i). Note that decisions are decentralized in that rewards are observed locally, policies are evaluated locally, and actions are executed locally. We assume that for any i ∈ N, s ∈ S, a^i ∈ A^i, the policy function satisfies π^i_{θ^i}(s, a^i) > 0 for any θ^i ∈ Θ^i and that π^i_{θ^i}(s, a^i) is continuously differentiable with respect to the parameters θ^i over Θ^i. In addition, for any θ ∈ Θ, let P^θ : S × S → [0, 1] denote the transition matrix of the Markov chain {s_t}_{t≥0} induced by policy π_θ, that is, for any s, s′ ∈ S, P^θ(s′|s) = ∑_{a∈A} π_θ(s, a) · P(s′|s, a). We make the standard assumption that the Markov chain {s_t}_{t≥0} is irreducible and aperiodic under any π_θ and denote its stationary distribution by d_θ.

Our objective is to find a policy π_θ that maximizes the long-term average reward over the network. Let r^i_{t+1} denote the reward received by agent i as a result of taking action a^i_t. Then, we wish to solve

max_θ J(π_θ) = lim_{T→∞} (1/T) E[∑_{t=0}^{T−1} (1/N) ∑_{i∈N} r^i_{t+1}] = ∑_{s∈S, a∈A} d_θ(s) π_θ(s, a) R̄(s, a),

where R̄(s, a) = (1/N) · ∑_{i∈N} R^i(s, a) is the globally averaged reward function. Let r̄_t = (1/N) · ∑_{i∈N} r^i_t, so that R̄(s, a) = E[r̄_{t+1} | s_t = s, a_t = a]. The global relative action-value function is Q_θ(s, a) = ∑_{t≥0} E[r̄_{t+1} − J(θ) | s_0 = s, a_0 = a, π_θ], and the global relative state-value function is V_θ(s) = ∑_{a∈A} π_θ(s, a) Q_θ(s, a). For simplicity, we refer to V_θ and Q_θ as simply the state-value function and action-value function. We define the advantage function as A_θ(s, a) = Q_θ(s, a) − V_θ(s).

Zhang et al. [2018] provided the first provably convergent MARL algorithm in the context of the above model. The fundamental result underlying their algorithm is a local policy gradient theorem:

∇_{θ^i} J(π_θ) = E_{s∼d_θ, a∼π_θ}[∇_{θ^i} log π^i_{θ^i}(s, a^i) · A^i_θ(s, a)],

where A^i_θ(s, a) = Q_θ(s, a) − Ṽ^i_θ(s, a^{−i}) is a local advantage function and Ṽ^i_θ(s, a^{−i}) = ∑_{a^i∈A^i} π^i_{θ^i}(s, a^i) Q_θ(s, a^i, a^{−i}). This theorem has important practical value, as it shows that the policy gradient with respect to each local parameter θ^i can be obtained locally using the corresponding score function ∇_{θ^i} log π^i_{θ^i}, provided that agent i has an unbiased estimate of the advantage function A^i_θ or A_θ. With only local information, the advantage functions A^i_θ and A_θ cannot be well estimated, since the estimation requires the rewards {r^i_t}_{i∈N} of all agents.
Therefore, they proposed a consensus-based actor-critic that leverages the communication network to share information between agents by placing a weight c_t(i, j) on the message transmitted from agent j to agent i at time t. Their action-value function Q_θ is approximated by a parameterized function Q̂_ω : S × A → ℝ, and each agent i maintains its own parameter ω^i, which it uses to form a local estimate Q̂_{ω^i} of the global Q_θ. At each time step t, each agent i shares its local parameter ω^i_t with its neighbors on the network, and the shared parameters are used to arrive at a consensual estimate of Q_θ over time.

3 Local Gradients of Deterministic Policies

While the use of a stochastic policy facilitates the derivation of convergence proofs, most real-world control tasks require a deterministic policy to be implementable. In addition, the quantities estimated in the deterministic critic do not involve the complex integral over the action space found in the stochastic version. This yields lower variance of the critic estimates and faster convergence. To address the lack of exploration that comes with deterministic policies, we provide both off-policy and on-policy versions of our results. Our first requirement is a local deterministic policy gradient theorem.

We assume that A^i = ℝ^{n_i}. We make standard regularity assumptions on our MDP; that is, we assume that for any s, s′ ∈ S, P(s′|s, a) and R^i(s, a) are bounded and have bounded first and second derivatives. We consider local deterministic policies µ^i_{θ^i} : S → A^i with parameter vector θ^i ∈ Θ^i, and denote the joint policy by µ_θ : S → A, where µ_θ(s) = (µ^1_{θ^1}(s), …, µ^N_{θ^N}(s)) and θ = [(θ^1)^⊤, …, (θ^N)^⊤]^⊤. We assume that for any s ∈ S, the deterministic policy function µ^i_{θ^i}(s) is twice continuously differentiable with respect to the parameter θ^i over Θ^i. Let P^θ denote the transition matrix of the Markov chain {s_t}_{t≥0} induced by policy µ_θ, that is, for any s, s′ ∈ S, P^θ(s′|s) = P(s′|s, µ_θ(s)). We assume that the Markov chain {s_t}_{t≥0} is irreducible and aperiodic under any µ_θ and denote its stationary distribution by d_{µ_θ}.

Our objective is to find a policy µ_θ that maximizes the long-run average reward:

max_θ J(µ_θ) = E_{s∼d_{µ_θ}}[R̄(s, µ_θ(s))] = ∑_{s∈S} d_{µ_θ}(s) R̄(s, µ_θ(s)).

Analogous to the stochastic policy case, we denote the action-value function by Q_θ(s, a) = ∑_{t≥0} E[r̄_{t+1} − J(µ_θ) | s_0 = s, a_0 = a, µ_θ], and the state-value function by V_θ(s) = Q_θ(s, µ_θ(s)). When there is no ambiguity, we denote J(µ_θ) and d_{µ_θ} simply by J(θ) and d_θ, respectively. We present three results for the long-run average reward: (1) an expression for the local deterministic policy gradient in the on-policy setting, ∇_{θ^i} J(µ_θ); (2) an expression for the gradient in the off-policy setting; and (3) a proof that the deterministic policy gradient is the limit of the stochastic one.

On-Policy Setting

Theorem 1 (Local Deterministic Policy Gradient Theorem - On Policy). For any θ ∈ Θ, i ∈ N, ∇_{θ^i} J(µ_θ) exists and is given by

∇_{θ^i} J(µ_θ) = E_{s∼d_{µ_θ}}[∇_{θ^i} µ^i_{θ^i}(s) ∇_{a^i} Q_θ(s, µ^{−i}_{θ^{−i}}(s), a^i)|_{a^i=µ^i_{θ^i}(s)}].

The first step of the proof consists in showing that ∇_θ J(µ_θ) = E_{s∼d_θ}[∇_θ µ_θ(s) ∇_a Q_θ(s, a)|_{a=µ_θ(s)}]. This is an extension of the well-known stochastic case, in which ∇_θ J(π_θ) = E_{s∼d_θ}[∇_θ log(π_θ(a|s)) Q_θ(s, a)] holds for the long-term average return with a stochastic policy (e.g., Theorem 1 of Sutton et al. [2000a]). See the Appendix for the details. A small numerical illustration of the deterministic form follows.
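For intuition only (not the paper's algorithm), the following sketch checks the chain-rule form ∇_θ J = E_s[∇_θ µ_θ(s) ∇_a Q(s, a)|_{a=µ_θ(s)}] on a toy single-agent problem with a known quadratic Q; all numeric choices are our own:

```python
import numpy as np

# Toy setup: states s in {1, 2, 3} (uniform), policy mu_theta(s) = theta * s,
# and a known action-value Q(s, a) = -(a - 2*s)**2 with optimum a*(s) = 2*s.
states = np.array([1.0, 2.0, 3.0])

def J(theta):
    a = theta * states                      # mu_theta(s)
    return np.mean(-(a - 2 * states) ** 2)  # E_s[Q(s, mu_theta(s))]

def grad_J(theta):
    # nabla_theta mu_theta(s) = s; nabla_a Q at a = mu_theta(s) is -2*(a - 2s).
    a = theta * states
    return np.mean(states * (-2.0) * (a - 2 * states))

theta, eps = 0.5, 1e-6
fd = (J(theta + eps) - J(theta - eps)) / (2 * eps)   # finite-difference check
print(grad_J(theta), fd)                             # the two values agree
```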
Off-Policy Setting. In the off-policy setting, we are given a behavior policy π : S → P(A), and our goal is to maximize the long-run average reward under the state distribution d_π:

J_π(µ_θ) = E_{s∼d_π}[R̄(s, µ_θ(s))] = ∑_{s∈S} d_π(s) R̄(s, µ_θ(s)). (1)

Note that we consider here an excursion objective (Sutton et al. [2009], Silver et al. [January 2014a], Sutton et al. [2016]), since we average, over the state distribution of the behavior policy π, the state-action reward obtained when selecting the action given by the target policy µ_θ. We thus have:

Theorem 2 (Local Deterministic Policy Gradient Theorem - Off Policy). For any θ ∈ Θ, i ∈ N, and π : S → P(A) a fixed stochastic policy, ∇_{θ^i} J_π(µ_θ) exists and is given by

∇_{θ^i} J_π(µ_θ) = E_{s∼d_π}[∇_{θ^i} µ^i_{θ^i}(s) ∇_{a^i} R̄(s, µ^{−i}_{θ^{−i}}(s), a^i)|_{a^i=µ^i_{θ^i}(s)}].

Proof. Since d_π is independent of θ, we can take the gradient on both sides of (1):

∇_θ J_π(µ_θ) = E_{s∼d_π}[∇_θ µ_θ(s) ∇_a R̄(s, a)|_{a=µ_θ(s)}].

Given that ∇_{θ^i} µ^j_θ(s) = 0 if i ≠ j, we have ∇_θ µ_θ(s) = Diag(∇_{θ^1} µ^1_{θ^1}(s), …, ∇_{θ^N} µ^N_{θ^N}(s)), and the result follows.

This result implies that, off-policy, each agent needs access to µ^{−i}_{θ^{−i}_t}(s_t) for every t.

Limit Theorem. As noted by Silver et al. [January 2014b], the fact that the deterministic gradient is a limit case of the stochastic gradient enables the standard machinery of policy gradients, such as compatible-function approximation (Sutton et al. [2000b]), natural gradients (Kakade [2001]), on-line feature adaptation (Prabuchandran et al. [2016]) and actor-critic (Konda [2002]), to be used with deterministic policies. We show that it holds in our setting. The proof can be found in the Appendix.

Theorem 3 (Limit of the Stochastic Policy Gradient for MARL). Let π_{θ,σ} be a stochastic policy such that π_{θ,σ}(a|s) = ν_σ(µ_θ(s), a), where σ is a parameter controlling the variance and ν_σ satisfies Condition 1 in the Appendix. Then,

lim_{σ↓0} ∇_θ J_{π_{θ,σ}}(π_{θ,σ}) = ∇_θ J_{µ_θ}(µ_θ),

where on the l.h.s. the gradient is the standard stochastic policy gradient and on the r.h.s. the gradient is the deterministic policy gradient.

4 Algorithms

We provide two decentralized deterministic actor-critic algorithms, one on-policy and the other off-policy, and demonstrate their convergence in the next section; assumptions and proofs are provided in the Appendix.

On-Policy Deterministic Actor-Critic

Algorithm 1 Networked deterministic on-policy actor-critic
Initialize: step t = 0; parameters Ĵ^i_0, ω^i_0, ω̃^i_0, θ^i_0 for all i ∈ N; state s_0; stepsizes {β_{ω,t}}_{t≥0}, {β_{θ,t}}_{t≥0}
Draw a^i_0 = µ^i_{θ^i_0}(s_0) and compute ã^i_0 = ∇_{θ^i} µ^i_{θ^i_0}(s_0)
Observe joint action a_0 = (a^1_0, …, a^N_0) and ã_0 = (ã^1_0, …, ã^N_0)
repeat
  for i ∈ N do
    Observe s_{t+1} and reward r^i_{t+1} = r^i(s_t, a_t)
    Update Ĵ^i_{t+1} ← (1 − β_{ω,t}) · Ĵ^i_t + β_{ω,t} · r^i_{t+1}
    Draw action a^i_{t+1} = µ^i_{θ^i_t}(s_{t+1}) and compute ã^i_{t+1} = ∇_{θ^i} µ^i_{θ^i_t}(s_{t+1})
  end for
  Observe joint action a_{t+1} = (a^1_{t+1}, …, a^N_{t+1}) and ã_{t+1} = (ã^1_{t+1}, …, ã^N_{t+1})
  for i ∈ N do
    Update: δ^i_t ← r^i_{t+1} − Ĵ^i_t + Q̂_{ω^i_t}(s_{t+1}, a_{t+1}) − Q̂_{ω^i_t}(s_t, a_t)
    Critic step: ω̃^i_t ← ω^i_t + β_{ω,t} · δ^i_t · ∇_ω Q̂_{ω^i}(s_t, a_t)|_{ω=ω^i_t}
    Actor step: θ^i_{t+1} ← θ^i_t + β_{θ,t} · ∇_{θ^i} µ^i_{θ^i_t}(s_t) ∇_{a^i} Q̂_{ω^i_t}(s_t, a^{−i}_t, a^i)|_{a^i=a^i_t}
    Send ω̃^i_t to the neighbors {j ∈ N : (i, j) ∈ E_t} over G_t
    Consensus step: ω^i_{t+1} ← ∑_{j∈N} c^{ij}_t · ω̃^j_t
  end for
  Update t ← t + 1
until end

Consider the following on-policy algorithm. The actor step is based on an expression for ∇_{θ^i} J(µ_θ) in terms of ∇_{a^i} Q_θ (see Equation (15) in the Appendix).
We approximate the action-value function Q_θ using a family of functions Q̂_ω : S × A → ℝ parameterized by ω, a column vector in ℝ^K. Each agent i maintains its own parameter ω^i and uses Q̂_{ω^i} as its local estimate of Q_θ. The parameters ω^i are updated in the critic step using consensus updates through a weight matrix C_t = (c^{ij}_t)_{i,j} ∈ ℝ^{N×N}, where c^{ij}_t is the weight on the message transmitted from i to j at time t, namely:

Ĵ^i_{t+1} = (1 − β_{ω,t}) · Ĵ^i_t + β_{ω,t} · r^i_{t+1} (2)
ω̃^i_t = ω^i_t + β_{ω,t} · δ^i_t · ∇_ω Q̂_{ω^i}(s_t, a_t)|_{ω=ω^i_t} (3)
ω^i_{t+1} = ∑_{j∈N} c^{ij}_t · ω̃^j_t (4)

with δ^i_t = r^i_{t+1} − Ĵ^i_t + Q̂_{ω^i_t}(s_{t+1}, a_{t+1}) − Q̂_{ω^i_t}(s_t, a_t).

For the actor step, each agent i improves its policy via

θ^i_{t+1} = θ^i_t + β_{θ,t} · ∇_{θ^i} µ^i_{θ^i_t}(s_t) · ∇_{a^i} Q̂_{ω^i_t}(s_t, a^{−i}_t, a^i)|_{a^i=a^i_t}. (5)

Since Algorithm 1 is on-policy, each agent updates the critic using only (s_t, a_t, s_{t+1}) at time t, knowing that a_{t+1} = µ_{θ_t}(s_{t+1}). The terms in blue are additional terms that need to be shared when using compatible features (this is explained further in the next section).

Off-Policy Deterministic Actor-Critic. We further propose an off-policy actor-critic algorithm, defined in Algorithm 2, to enable better exploration. Here, the goal is to maximize J_π(µ_θ), where π is the behavior policy. To do so, the globally averaged reward function R̄(s, a) is approximated using a family of functions ˆ̄R_λ : S × A → ℝ parameterized by λ, a column vector in ℝ^K. Each agent i maintains its own parameter λ^i and uses ˆ̄R_{λ^i} as its local estimate of R̄. Based on (1), the actor update is

θ^i_{t+1} = θ^i_t + β_{θ,t} · ∇_{θ^i} µ^i_{θ^i_t}(s_t) · ∇_{a^i} ˆ̄R_{λ^i_t}(s_t, µ^{−i}_{θ^{−i}_t}(s_t), a^i)|_{a^i=µ^i_{θ^i_t}(s_t)}, (6)

which requires each agent i to have access to µ^j_{θ^j_t}(s_t) for j ∈ N.

The critic update is

λ̃^i_t = λ^i_t + β_{λ,t} · δ^i_t · ∇_λ ˆ̄R_{λ^i}(s_t, a_t)|_{λ=λ^i_t} (7)
λ^i_{t+1} = ∑_{j∈N} c^{ij}_t λ̃^j_t, (8)

with

δ^i_t = r^i(s_t, a_t) − ˆ̄R_{λ^i_t}(s_t, a_t). (9)

In this case, δ^i_t is motivated by distributed optimization results and is not related to a local TD-error (there is no "temporal" relationship for R̄); rather, it is simply the difference between the sampled reward and the bootstrap estimate. The terms in blue are additional terms that need to be shared when using compatible features (this is explained further in the next section).

5 Convergence

To show convergence, we use a two-timescale technique in which the actor updates of the deterministic policy parameter θ^i occur more slowly than those of ω^i and Ĵ^i in the critic. We study the asymptotic behaviour of the critic by freezing the joint policy µ_θ, and then study the behaviour of θ_t under convergence of the critic. To ensure stability, projection is often assumed, since it is not clear how boundedness of {θ^i_t} can otherwise be ensured (see Bhatnagar et al. [2009]). However, in practice, convergence is typically observed even without the projection step (see Bhatnagar et al. [2009], Degris et al. [2012], Prabuchandran et al. [2016], Zhang et al. [2018], Suttle et al. [2019]). We also introduce the following technical assumptions, which are needed to state the convergence results.

Algorithm 2 Networked deterministic off-policy actor-critic
Initialize: step t = 0; parameters λ^i_0, λ̃^i_0, θ^i_0 for all i ∈ N; state s_0; stepsizes {β_{λ,t}}_{t≥0}, {β_{θ,t}}_{t≥0}
Draw a^i_0 ∼ π^i(s_0), compute ȧ^i_0 = µ^i_{θ^i_0}(s_0) and ã^i_0 = ∇_{θ^i} µ^i_{θ^i_0}(s_0)
Observe joint action a_0 = (a^1_0, …, a^N_0), ȧ_0 = (ȧ^1_0, …, ȧ^N_0) and ã_0 = (ã^1_0, …, ã^N_0)
repeat
  for i ∈ N do
    Observe s_{t+1} and reward r^i_{t+1} = r^i(s_t, a_t)
  end for
  for i ∈ N do
    Update: δ^i_t ← r^i_{t+1} − ˆ̄R_{λ^i_t}(s_t, a_t)
    Critic step: λ̃^i_t ← λ^i_t + β_{λ,t} · δ^i_t · ∇_λ ˆ̄R_{λ^i}(s_t, a_t)|_{λ=λ^i_t}
    Actor step: θ^i_{t+1} ← θ^i_t + β_{θ,t} · ∇_{θ^i} µ^i_{θ^i_t}(s_t) · ∇_{a^i} ˆ̄R_{λ^i_t}(s_t, µ^{−i}_{θ^{−i}_t}(s_t), a^i)|_{a^i=µ^i_{θ^i_t}(s_t)}
    Send λ̃^i_t to the neighbors {j ∈ N : (i, j) ∈ E_t} over G_t
  end for
  for i ∈ N do
    Consensus step: λ^i_{t+1} ← ∑_{j∈N} c^{ij}_t · λ̃^j_t
    Draw action a^i_{t+1} ∼ π^i(s_{t+1}), compute ȧ^i_{t+1} = µ^i_{θ^i_{t+1}}(s_{t+1}) and ã^i_{t+1} = ∇_{θ^i} µ^i_{θ^i_{t+1}}(s_{t+1})
  end for
  Observe joint action a_{t+1} = (a^1_{t+1}, …, a^N_{t+1}), ȧ_{t+1} = (ȧ^1_{t+1}, …, ȧ^N_{t+1}) and ã_{t+1} = (ã^1_{t+1}, …, ã^N_{t+1})
  Update t ← t + 1
until end

Assumption 1 (Linear approximation, average-reward). For each agent i, the average-reward function R̄ is parameterized by the class of linear functions, i.e., ˆ̄R_{λ^i,θ}(s, a) = w_θ(s, a) · λ^i, where w_θ(s, a) = [w_{θ,1}(s, a), …, w_{θ,K}(s, a)] ∈ ℝ^K is the feature associated with the state-action pair (s, a). The feature vectors w_θ(s, a), as well as ∇_a w_{θ,k}(s, a), are uniformly bounded for any s ∈ S, a ∈ A, k ∈ {1, …, K}. Furthermore, we assume that the feature matrix W_{π,θ} ∈ ℝ^{|S|×K} has full column rank, where the k-th column of W_{π,θ} is [∫_A π(a|s) w_{θ,k}(s, a) da, s ∈ S] for any k ∈ {1, …, K}.

Assumption 2 (Linear approximation, action-value). For each agent i, the action-value function is parameterized by the class of linear functions, i.e., Q̂_{ω^i}(s, a) = φ(s, a) · ω^i, where φ(s, a) = [φ_1(s, a), …, φ_K(s, a)] ∈ ℝ^K is the feature associated with the state-action pair (s, a). The feature vectors φ(s, a), as well as ∇_a φ_k(s, a), are uniformly bounded for any s ∈ S, a ∈ A, k ∈ {1, …, K}. Furthermore, we assume that for any θ ∈ Θ, the feature matrix Φ_θ ∈ ℝ^{|S|×K} has full column rank, where the k-th column of Φ_θ is [φ_k(s, µ_θ(s)), s ∈ S] for any k ∈ {1, …, K}. Also, for any u ∈ ℝ^K, Φ_θ u ≠ 1.

Assumption 3 (Bounding θ). The update of the policy parameter θ^i includes a local projection Γ^i : ℝ^{m_i} → Θ^i that projects any θ^i_t onto a compact set Θ^i that can be expressed as {θ^i | q^i_j(θ^i) ≤ 0, j = 1, …, s_i} ⊂ ℝ^{m_i} for some real-valued, continuously differentiable functions {q^i_j}_{1≤j≤s_i} defined on ℝ^{m_i}. We also assume that Θ = ∏_{i=1}^N Θ^i is large enough to include at least one local minimum of J(θ).

We use {F_t} to denote the filtration with F_t = σ(s_τ, C_{τ−1}, a_{τ−1}, r_{τ−1}, τ ≤ t).

Assumption 4 (Random matrices). The sequence of non-negative random matrices {C_t = (c^{ij}_t)_{ij}} satisfies:

1. C_t is row stochastic and E(C_t|F_t) is a.s. column stochastic for each t, i.e., C_t 1 = 1 and 1^⊤ E(C_t|F_t) = 1^⊤ a.s. Furthermore, there exists a constant η ∈ (0, 1) such that, for any c^{ij}_t > 0, we have c^{ij}_t ≥ η.
2. C_t respects the communication graph G_t, i.e., c^{ij}_t = 0 if (i, j) ∉ E_t.
3. The spectral norm of E[C_t^⊤ · (I − 11^⊤/N) · C_t] is smaller than one.
4. Given the σ-algebra generated by the random variables before time t, C_t is conditionally independent of s_t, a_t and r^i_{t+1} for any i ∈ N.
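For intuition about Assumption 4, a minimal sketch of one consensus step (4)/(8) using Metropolis weights, a standard construction yielding a doubly stochastic C_t on an undirected graph (the graph and parameter shapes below are our own toy choices):

```python
import numpy as np

def metropolis_weights(edges, n):
    """Doubly stochastic consensus matrix for an undirected graph.

    c_ij = 1 / (1 + max(deg_i, deg_j)) for each edge (i, j); the diagonal
    absorbs the remaining mass so that every row sums to one.
    """
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    C = np.zeros((n, n))
    for i, j in edges:
        C[i, j] = C[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
    C += np.diag(1.0 - C.sum(axis=1))
    return C

# Consensus step: each agent averages its neighbors' critic parameters.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # a ring of N = 4 agents
C = metropolis_weights(edges, 4)
omega_tilde = np.random.randn(4, 5)          # one K = 5 vector per agent
omega_next = C @ omega_tilde                 # omega^i_{t+1} = sum_j c_ij omega~^j_t
```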
Assumption 5 (Step size rules, on-policy). The stepsizes β_{ω,t}, β_{θ,t} satisfy:

∑_t β_{ω,t} = ∑_t β_{θ,t} = ∞, ∑_t (β²_{ω,t} + β²_{θ,t}) < ∞, ∑_t |β_{θ,t+1} − β_{θ,t}| < ∞.

In addition, β_{θ,t} = o(β_{ω,t}) and lim_{t→∞} β_{ω,t+1}/β_{ω,t} = 1.

Assumption 6 (Step size rules, off-policy). The step-sizes β_{λ,t}, β_{θ,t} satisfy:

∑_t β_{λ,t} = ∑_t β_{θ,t} = ∞, ∑_t (β²_{λ,t} + β²_{θ,t}) < ∞, β_{θ,t} = o(β_{λ,t}), lim_{t→∞} β_{λ,t+1}/β_{λ,t} = 1.

On-Policy Convergence. To state convergence of the critic step, we define D^s_θ = Diag[d_θ(s), s ∈ S], R̄_θ = [R̄(s, µ_θ(s)), s ∈ S]^⊤ ∈ ℝ^{|S|}, and the operator T^Q_θ : ℝ^{|S|} → ℝ^{|S|}, for any action-value vector Q′ ∈ ℝ^{|S|} (and not ℝ^{|S|·|A|}, since the policy associates one action to each state), as

T^Q_θ(Q′) = R̄_θ − J(µ_θ) · 1 + P^θ Q′.

Theorem 4. Under Assumptions 3, 4, and 5, for any given deterministic policy µ_θ, with {Ĵ_t} and {ω_t} generated from (2)-(4), we have lim_{t→∞} (1/N) ∑_{i∈N} Ĵ^i_t = J(µ_θ) and lim_{t→∞} ω^i_t = ω_θ a.s. for any i ∈ N, where J(µ_θ) = ∑_{s∈S} d_θ(s) R̄(s, µ_θ(s)) is the long-term average return under µ_θ, and ω_θ is the unique solution to

Φ_θ^⊤ D^s_θ [T^Q_θ(Φ_θ ω_θ) − Φ_θ ω_θ] = 0. (10)

Moreover, ω_θ is the minimizer of the mean square projected Bellman error (MSPBE), i.e., the solution to

minimize_ω ‖Φ_θ ω − Π T^Q_θ(Φ_θ ω)‖²_{D^s_θ},

where Π is the operator that projects a vector onto the space spanned by the columns of Φ_θ, and ‖·‖²_{D^s_θ} denotes the Euclidean norm weighted by the matrix D^s_θ.

To state convergence of the actor step, we define the quantities ψ^i_{t,θ}, ξ^i_t and ξ^i_{t,θ} as

ψ^i_{t,θ} = ∇_{θ^i} µ^i_{θ^i}(s_t) and ψ^i_t = ψ^i_{t,θ_t} = ∇_{θ^i} µ^i_{θ^i_t}(s_t),
ξ^i_{t,θ} = ∇_{a^i} Q̂_{ω_θ}(s_t, a^{−i}_t, a^i)|_{a^i=µ^i_{θ^i}(s_t)} = ∇_{a^i} φ(s_t, a^{−i}_t, a^i)|_{a^i=µ^i_{θ^i}(s_t)} ω_θ,
ξ^i_t = ∇_{a^i} Q̂_{ω^i_t}(s_t, a^{−i}_t, a^i)|_{a^i=µ^i_{θ^i_t}(s_t)} = ∇_{a^i} φ(s_t, a^{−i}_t, a^i)|_{a^i=µ^i_{θ^i_t}(s_t)} ω^i_t.

Additionally, we introduce the operator Γ̂(·) as

Γ̂^i[g(θ)] = lim_{0<η→0} (Γ^i[θ^i + η · g(θ)] − θ^i) / η (11)

for any θ ∈ Θ and continuous g : Θ → ℝ^{m_i}. In case the limit above is not unique, we take Γ̂^i[g(θ)] to be the set of all possible limit points of (11).

Theorem 5. Under Assumptions 2, 3, 4, and 5, the policy parameter θ^i_t obtained from (5) converges a.s. to a point in the set of asymptotically stable equilibria of

θ̇^i = Γ̂^i[E_{s_t∼d_θ, µ_θ}[ψ^i_{t,θ} · ξ^i_{t,θ}]], for any i ∈ N. (12)

In the case of multiple limit points, the above is treated as a differential inclusion rather than an ODE.

The convergence of the critic step can be proved by taking steps similar to those in Zhang et al. [2018]. For the convergence of the actor step, difficulties arise from the projection (which is handled using the Kushner-Clark Lemma, Kushner and Clark [1978]) and the state-dependent noise (which is handled by "natural" timescale averaging, Crowder [2009]). Details are provided in the Appendix.

Remark. Note that with a linear function approximator of Q_θ, ψ_{t,θ} · ξ_{t,θ} = ∇_θ µ_θ(s_t) ∇_a Q̂_{ω_θ}(s_t, a)|_{a=µ_θ(s_t)} may not be an unbiased estimate of ∇_θ J(θ):

E_{s∼d_θ}[ψ_{t,θ} · ξ_{t,θ}] = ∇_θ J(θ) + E_{s∼d_θ}[∇_θ µ_θ(s) · (∇_a Q̂_{ω_θ}(s, a)|_{a=µ_θ(s)} − ∇_a Q_θ(s, a)|_{a=µ_θ(s)})].

A standard approach to overcoming this approximation issue is via compatible features (see, for example, Silver et al. [January 2014a] and Zhang and Zavlanos [2019]), i.e.
φ(s, a) = a · ∇_θ µ_θ(s)^⊤, giving, for ω ∈ ℝ^m,

Q̂_ω(s, a) = a · ∇_θ µ_θ(s)^⊤ ω = (a − µ_θ(s)) · ∇_θ µ_θ(s)^⊤ ω + V̂_ω(s),

with V̂_ω(s) = Q̂_ω(s, µ_θ(s)) and ∇_a Q̂_ω(s, a)|_{a=µ_θ(s)} = ∇_θ µ_θ(s)^⊤ ω.

We thus expect the convergent point of (5) to correspond to a small neighborhood of a local optimum of J(µ_θ), i.e., ∇_{θ^i} J(µ_θ) = 0, provided that the error in the gradient of the action-value function, ∇_a Q̂_ω(s, a)|_{a=µ_θ(s)} − ∇_a Q_θ(s, a)|_{a=µ_θ(s)}, is small. However, note that using compatible features requires computing, at each step t, φ(s_t, a_t) = a_t · ∇_θ µ_θ(s_t)^⊤. Thus, in Algorithm 1, each agent observes not only the joint action a_{t+1} = (a^1_{t+1}, …, a^N_{t+1}) but also (∇_{θ^1} µ^1_{θ^1_t}(s_{t+1}), …, ∇_{θ^N} µ^N_{θ^N_t}(s_{t+1})) (see the parts in blue in Algorithm 1).

Off-Policy Convergence.

Theorem 6. Under Assumptions 1, 4, and 6, for any given behavior policy π and any θ ∈ Θ, with {λ^i_t} generated from (7)-(8), we have lim_{t→∞} λ^i_t = λ_θ a.s. for any i ∈ N, where λ_θ is the unique solution to

B_{π,θ} · λ_θ = A_{π,θ} · d^s_π, (13)

where d^s_π = [d_π(s), s ∈ S]^⊤, A_{π,θ} = [∫_A π(a|s) R̄(s, a) w(s, a)^⊤ da, s ∈ S] ∈ ℝ^{K×|S|}, and B_{π,θ} = [∑_{s∈S} d_π(s) ∫_A π(a|s) w_i(s, a) · w(s, a)^⊤ da, 1 ≤ i ≤ K] ∈ ℝ^{K×K}.

From here on we let

ξ^i_{t,θ} = ∇_{a^i} ˆ̄R_{λ_θ}(s_t, µ^{−i}_{θ^{−i}_t}(s_t), a^i)|_{a^i=µ^i_{θ^i_t}(s_t)} = ∇_{a^i} w(s_t, µ^{−i}_{θ^{−i}_t}(s_t), a^i)|_{a^i=µ^i_{θ^i_t}(s_t)} λ_θ,
ξ^i_t = ∇_{a^i} ˆ̄R_{λ^i_t}(s_t, µ^{−i}_{θ^{−i}_t}(s_t), a^i)|_{a^i=µ^i_{θ^i_t}(s_t)} = ∇_{a^i} w(s_t, µ^{−i}_{θ^{−i}_t}(s_t), a^i)|_{a^i=µ^i_{θ^i_t}(s_t)} λ^i_t,

and we keep ψ^i_{t,θ} = ∇_{θ^i} µ^i_{θ^i}(s_t) and ψ^i_t = ψ^i_{t,θ_t} = ∇_{θ^i} µ^i_{θ^i_t}(s_t).

Theorem 7. Under Assumptions 1, 3, 4, and 6, the policy parameter θ^i_t obtained from (6) converges a.s. to a point in the set of asymptotically stable equilibria of

θ̇^i = Γ̂^i[E_{s∼d_π}[ψ^i_{t,θ} · ξ^i_{t,θ}]]. (14)

We define compatible features for the action-value and the average-reward function analogously: w_θ(s, a) = (a − µ_θ(s)) · ∇_θ µ_θ(s)^⊤. For λ ∈ ℝ^m,

ˆ̄R_{λ,θ}(s, a) = (a − µ_θ(s)) · ∇_θ µ_θ(s)^⊤ · λ, ∇_a ˆ̄R_{λ,θ}(s, a) = ∇_θ µ_θ(s)^⊤ · λ,

and we have that, for λ* = argmin_λ E_{s∼d_π}[‖∇_a ˆ̄R_{λ,θ}(s, µ_θ(s)) − ∇_a R̄(s, µ_θ(s))‖²]:

∇_θ J_π(µ_θ) = E_{s∼d_π}[∇_θ µ_θ(s) · ∇_a R̄(s, a)|_{a=µ_θ(s)}] = E_{s∼d_π}[∇_θ µ_θ(s) · ∇_a ˆ̄R_{λ*,θ}(s, a)|_{a=µ_θ(s)}].

The use of compatible features requires each agent to observe not only the joint action taken, a_{t+1} = (a^1_{t+1}, …, a^N_{t+1}), and the "on-policy action" ȧ_{t+1} = (ȧ^1_{t+1}, …, ȧ^N_{t+1}), but also ã_{t+1} = (∇_{θ^1} µ^1_{θ^1_t}(s_{t+1}), …, ∇_{θ^N} µ^N_{θ^N_t}(s_{t+1})) (see the parts in blue in Algorithm 2).

We illustrate algorithm convergence on a multi-agent extension of a continuous bandit problem from Sec. 5.1 of Silver et al. [January 2014b]. Details are in the Appendix. Figure 2 shows the convergence of Algorithms 1 and 2 averaged over 5 runs. In all cases, the system converges and the agents are able to coordinate their actions to minimize the system cost.

6 Conclusion

We have provided the tools needed to implement decentralized, deterministic actor-critic algorithms for cooperative multi-agent reinforcement learning. We provide the expressions for the policy gradients, the algorithms themselves, and proofs of their convergence in on-policy and off-policy settings. We also provide numerical results for a continuous multi-agent bandit problem that demonstrate the convergence of our algorithms. Our work differs from Zhang and Zavlanos [2019] in that the latter is based on policy consensus, whereas ours is based on critic consensus.
Our approach represents agreement between agents on every participant's contribution to the global reward and, as such, provides a consensus scoring function with which to evaluate agents. Our approach may thus be used in compensation schemes to incentivize participation. An interesting extension of this work would be to prove convergence of our actor-critic algorithms for continuous state spaces, which may hold under assumptions on the geometric ergodicity of the stationary state distribution induced by the deterministic policies (see Crowder [2009]). The expected policy gradient (EPG) of Ciosek and Whiteson [2018], a hybrid between stochastic and deterministic policy gradients, would also be interesting to leverage. The Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm of Lowe et al. [2017] assumes partial observability for each agent and would be a useful extension, but it is likely difficult to extend our convergence guarantees to the partially observed setting.

References

Albert Benveniste, Pierre Priouret, and Michel Métivier. Adaptive Algorithms and Stochastic Approximations. Springer-Verlag, Berlin, Heidelberg, 1990. ISBN 0-387-52894-6.

Shalabh Bhatnagar, Richard S. Sutton, Mohammad Ghavamzadeh, and Mark Lee. Natural actor-critic algorithms. Automatica, 45(11):2471-2482, November 2009. URL http://dx.doi.org/10.1016/j.automatica.2009.07.008.

Kamil Ciosek and Shimon Whiteson. Expected policy gradients for reinforcement learning. arXiv e-prints, arXiv:1801.03326, January 2018.

Martin Crowder. Stochastic approximation: A dynamical systems viewpoint by Vivek S. Borkar. International Statistical Review, 77(2):306-306, 2009.

Thomas Degris, Martha White, and Richard S. Sutton. Off-policy actor-critic. CoRR, abs/1205.4839, 2012. URL http://arxiv.org/abs/1205.4839.

Scott Fujimoto, Herke van Hoof, and Dave Meger. Addressing function approximation error in actor-critic methods. CoRR, abs/1802.09477, 2018. URL http://arxiv.org/abs/1802.09477.

Sham Kakade. A natural policy gradient. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, NIPS'01, pages 1531-1538, Cambridge, MA, USA, 2001. MIT Press.

Vijaymohan Konda. Actor-critic Algorithms. PhD thesis, Cambridge, MA, USA, 2002. AAI0804543.

Harold J. Kushner and Dean S. Clark. Stochastic Approximation Methods for Constrained and Unconstrained Systems. Springer-Verlag, New York, 1978. ISBN 0387903410.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.

Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Neural Information Processing Systems (NIPS), 2017.

Hamid Reza Maei. Convergent actor-critic algorithms under off-policy training and function approximation. CoRR, abs/1802.07842, 2018. URL http://arxiv.org/abs/1802.07842.

P. Marbach and J. N. Tsitsiklis. Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control, 46(2):191-209, February 2001.
K. J. Prabuchandran, Shalabh Bhatnagar, and Vivek S. Borkar. Actor-critic algorithms with online feature adaptation. ACM Trans. Model. Comput. Simul., 26(4):24:1-24:26, February 2016. URL http://doi.acm.org/10.1145/2868723.

Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. ISBN 0471619779.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. International Conference on Machine Learning, pages 387-395, January 2014a.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. International Conference on Machine Learning, pages 387-395, January 2014b.

Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Zhaoran Wang, Tamer Basar, and Ji Liu. A multi-agent off-policy actor-critic algorithm for distributed reinforcement learning. CoRR, abs/1903.06372, 2019. URL http://arxiv.org/abs/1903.06372.

Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pages 1057-1063. MIT Press, 2000a.

Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pages 1057-1063. MIT Press, 2000b.

Richard S. Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 993-1000, New York, NY, USA, 2009. ACM.

Richard S. Sutton, A. Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. J. Mach. Learn. Res., 17(1):2603-2631, January 2016.

Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, and Tamer Basar. Fully decentralized multi-agent reinforcement learning with networked agents. 80:5872-5881, 10-15 Jul 2018.

Yan Zhang and Michael M. Zavlanos. Distributed off-policy actor-critic reinforcement learning with policy consensus. CoRR, abs/1903.09255, 2019.

Numerical experiment details

We demonstrate the convergence of our algorithms on a continuous bandit problem that is a multi-agent extension of the experiment in Section 5.1 of Silver et al. [January 2014b]. Each agent chooses an action a^i ∈ ℝ^m. We assume all agents have the same reward function, given by R^i(a) = −(∑_i a^i − a*)^⊤ C (∑_i a^i − a*). The matrix C is positive definite with eigenvalues chosen from {0.1, 1}, and a* = [4, …, 4]^⊤. We consider 10 agents and action dimensions m = 10, 20, 50. Note that there are multiple possible solutions to this problem, requiring the agents to coordinate their actions so that they sum to a*. We assume a target policy of the form µ_{θ^i} = θ^i for each agent i and a Gaussian behaviour policy β(·) ∼ N(θ^i, σ²_β), where σ_β = 0.1. We use the Gaussian behaviour policy for both Algorithms 1 and 2. A rough code sketch of this setup follows.
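For concreteness, a rough sketch of this bandit setup with the deterministic actor update (a simplification of our experiment: we differentiate the known reward exactly instead of fitting the compatible-feature critic, and all names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 10, 10
C = np.diag(rng.choice([0.1, 1.0], size=m))      # positive definite cost matrix
a_star = np.full(m, 4.0)

def reward(actions):                             # shared by all agents
    d = actions.sum(axis=0) - a_star
    return -d @ C @ d

theta = rng.normal(size=(N, m))                  # target policy mu_i = theta_i
lr = 0.01
for step in range(2000):
    # grad_{a_i} R at a = mu_theta(s) equals -2 C (sum_i theta_i - a*)
    # for every agent, since the reward depends only on the sum of actions.
    g = -2.0 * C @ (theta.sum(axis=0) - a_star)
    theta += lr * g                              # each agent ascends locally
print(reward(theta))                             # approaches 0: sum_i theta_i -> a*
```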
Strictly speaking, Algorithm 1 is on-policy, but in this simplified setting, where the target policy is constant in the state, the on-policy version would be degenerate in that the Q estimate does not affect the TD-error. Therefore, we add a Gaussian behaviour policy to Algorithm 1 as well. Each agent maintains an estimate Q̂_{ω^i}(a) of the critic using a linear function of the compatible features a − θ and a bias feature. The critic is recomputed from each successive batch of 2m steps, and the actor is updated once per batch. The critic step size is 0.1 and the actor step size is 0.01. Performance is evaluated by measuring the cost of the target policy (without exploration). Figure 2 shows the convergence of Algorithms 1 and 2 averaged over 5 runs. In all cases, the system converges and the agents are able to coordinate their actions to minimize the system cost. The Jupyter notebook will be made available for others to use. In fact, in this simple experiment, we also observe convergence under discounted rewards.

Proof of Theorem 1

The proof follows the same scheme as Sutton et al. [2000a], naturally extending their results to a deterministic policy µ_θ and a continuous action space A.

Note that our regularity assumptions ensure that, for any s ∈ S, V_θ(s), ∇_θ V_θ(s), J(θ), ∇_θ J(θ), and d_θ(s) are Lipschitz-continuous functions of θ (since µ_θ is twice continuously differentiable and Θ is compact), and that Q_θ(s, a) and ∇_a Q_θ(s, a) are Lipschitz-continuous functions of a (Marbach and Tsitsiklis [2001]).

We first show that ∇_θ J(θ) = E_{s∼d_θ}[∇_θ µ_θ(s) ∇_a Q_θ(s, a)|_{a=µ_θ(s)}].

The Poisson equation under policy µ_θ is given by (Puterman [1994])

Q_θ(s, a) = R̄(s, a) − J(θ) + ∑_{s′∈S} P(s′|s, a) V_θ(s′).

So,

∇_θ V_θ(s) = ∇_θ Q_θ(s, µ_θ(s))
= ∇_θ [R̄(s, µ_θ(s)) − J(θ) + ∑_{s′∈S} P(s′|s, µ_θ(s)) V_θ(s′)]
= ∇_θ µ_θ(s) ∇_a R̄(s, a)|_{a=µ_θ(s)} − ∇_θ J(θ) + ∇_θ ∑_{s′∈S} P(s′|s, µ_θ(s)) V_θ(s′)
= ∇_θ µ_θ(s) ∇_a R̄(s, a)|_{a=µ_θ(s)} − ∇_θ J(θ) + ∑_{s′∈S} ∇_θ µ_θ(s) ∇_a P(s′|s, a)|_{a=µ_θ(s)} V_θ(s′) + ∑_{s′∈S} P(s′|s, µ_θ(s)) ∇_θ V_θ(s′)
= ∇_θ µ_θ(s) ∇_a [R̄(s, a) + ∑_{s′∈S} P(s′|s, a) V_θ(s′)]|_{a=µ_θ(s)} − ∇_θ J(θ) + ∑_{s′∈S} P(s′|s, µ_θ(s)) ∇_θ V_θ(s′)
= ∇_θ µ_θ(s) ∇_a Q_θ(s, a)|_{a=µ_θ(s)} + ∑_{s′∈S} P(s′|s, µ_θ(s)) ∇_θ V_θ(s′) − ∇_θ J(θ).

Hence,

∇_θ J(θ) = ∇_θ µ_θ(s) ∇_a Q_θ(s, a)|_{a=µ_θ(s)} + ∑_{s′∈S} P(s′|s, µ_θ(s)) ∇_θ V_θ(s′) − ∇_θ V_θ(s),

and, summing against d_θ,

∑_{s∈S} d_θ(s) ∇_θ J(θ) = ∑_{s∈S} d_θ(s) ∇_θ µ_θ(s) ∇_a Q_θ(s, a)|_{a=µ_θ(s)} + ∑_{s∈S} d_θ(s) ∑_{s′∈S} P(s′|s, µ_θ(s)) ∇_θ V_θ(s′) − ∑_{s∈S} d_θ(s) ∇_θ V_θ(s).

Using the stationarity property of d_θ, we get

∑_{s∈S} ∑_{s′∈S} d_θ(s) P(s′|s, µ_θ(s)) ∇_θ V_θ(s′) = ∑_{s′∈S} d_θ(s′) ∇_θ V_θ(s′).

Therefore, we get

∇_θ J(θ) = ∑_{s∈S} d_θ(s) ∇_θ µ_θ(s) ∇_a Q_θ(s, a)|_{a=µ_θ(s)} = E_{s∼d_θ}[∇_θ µ_θ(s) ∇_a Q_θ(s, a)|_{a=µ_θ(s)}].

Given that ∇_{θ^i} µ^j_θ(s) = 0 if i ≠ j, we have ∇_θ µ_θ(s) = Diag(∇_{θ^1} µ^1_{θ^1}(s), …, ∇_{θ^N} µ^N_{θ^N}(s)), which implies

∇_{θ^i} J(θ) = E_{s∼d_θ}[∇_{θ^i} µ^i_{θ^i}(s) ∇_{a^i} Q_θ(s, µ^{−i}_{θ^{−i}}(s), a^i)|_{a^i=µ^i_{θ^i}(s)}]. (15)

Proof of Theorem 3

We extend the notation for the off-policy reward function to stochastic policies as follows. Let β be a behavior policy under which {s_t}_{t≥0} is irreducible and aperiodic, with stationary distribution d_β. For a stochastic policy π : S → P(A), we define

J_β(π) = ∑_{s∈S} d_β(s) ∫_A π(a|s) R̄(s, a) da.

Recall that for a deterministic policy µ : S → A, we have

J_β(µ) = ∑_{s∈S} d_β(s) R̄(s, µ(s)).

We introduce the following conditions, which are identical to Conditions B1 of Silver et al. [January 2014a].

Conditions 1.
Functions $\nu_\sigma$ parametrized by $\sigma$ are said to be a regular delta-approximation on $\mathcal{R} \subseteq \mathcal{A}$ if they satisfy the following conditions:

1. The distributions $\nu_\sigma$ converge to a delta distribution: $\lim_{\sigma \downarrow 0} \int_{\mathcal{A}} \nu_\sigma(a', a) f(a)\, da = f(a')$ for $a' \in \mathcal{R}$ and suitably smooth $f$. Specifically, we require that this convergence is uniform in $a'$ and over any class $\mathcal{F}$ of $L$-Lipschitz and bounded functions, $\|\nabla_a f(a)\| < L < \infty$, $\sup_a f(a) < b < \infty$, i.e.,
$$\lim_{\sigma \downarrow 0}\ \sup_{f \in \mathcal{F},\, a' \in \mathcal{R}} \left| \int_{\mathcal{A}} \nu_\sigma(a', a) f(a)\, da - f(a') \right| = 0.$$

2. For each $a' \in \mathcal{R}$, $\nu_\sigma(a', \cdot)$ is supported on some compact $C_{a'} \subseteq \mathcal{A}$ with Lipschitz boundary $\mathrm{bd}(C_{a'})$, vanishes on the boundary, and is continuously differentiable on $C_{a'}$.

3. For each $a' \in \mathcal{R}$ and each $a \in \mathcal{A}$, the gradient $\nabla_{a'} \nu_\sigma(a', a)$ exists.

4. Translation invariance: for all $a \in \mathcal{A}$, $a' \in \mathcal{R}$, and any $\delta \in \mathbb{R}^n$ such that $a + \delta \in \mathcal{A}$ and $a' + \delta \in \mathcal{A}$, $\nu_\sigma(a', a) = \nu_\sigma(a' + \delta, a + \delta)$.

The following lemma is an immediate corollary of Lemma 1 from Silver et al. [2014].

Lemma 1. Let $\nu_\sigma$ be a regular delta-approximation on $\mathcal{R} \subseteq \mathcal{A}$. Then, wherever the gradients exist,
$$\nabla_{a'} \nu(a', a) = -\nabla_a \nu(a', a).$$

Theorem 3 is a less technical restatement of the following result.

Theorem 8. Let $\mu_\theta : \mathcal{S} \to \mathcal{A}$. Denote the range of $\mu_\theta$ by $\mathcal{R}_\theta \subseteq \mathcal{A}$, and let $\mathcal{R} = \cup_\theta \mathcal{R}_\theta$. For each $\theta$, consider a stochastic policy $\pi_{\theta,\sigma}$ such that $\pi_{\theta,\sigma}(a|s) = \nu_\sigma(\mu_\theta(s), a)$, where $\nu_\sigma$ satisfies Conditions 1 on $\mathcal{R}$. Then there exists $r > 0$ such that, for each $\theta \in \Theta$, the maps $\sigma \mapsto J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma})$, $\sigma \mapsto J_{\pi_{\theta,\sigma}}(\mu_\theta)$, $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma})$, and $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\mu_\theta)$ are properly defined on $[0, r]$ (with $J_{\pi_{\theta,0}}(\pi_{\theta,0}) = J_{\pi_{\theta,0}}(\mu_\theta) = J_{\mu_\theta}(\mu_\theta)$ and $\nabla_\theta J_{\pi_{\theta,0}}(\pi_{\theta,0}) = \nabla_\theta J_{\pi_{\theta,0}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta)$), and we have
$$\lim_{\sigma \downarrow 0} \nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma}) = \lim_{\sigma \downarrow 0} \nabla_\theta J_{\pi_{\theta,\sigma}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta).$$

To prove this result, we first state and prove the following lemma.

Lemma 2. There exists $r > 0$ such that, for all $\theta \in \Theta$ and $\sigma \in [0, r]$, the stationary distribution $d^{\pi_{\theta,\sigma}}$ exists and is unique. Moreover, for each $\theta \in \Theta$, $\sigma \mapsto d^{\pi_{\theta,\sigma}}$ and $\sigma \mapsto \nabla_\theta d^{\pi_{\theta,\sigma}}$ are properly defined on $[0, r]$ and both are continuous at 0.

Proof of Lemma 2. For any policy $\beta$, we let $(P^\beta_{s,s'})_{s,s' \in \mathcal{S}}$ be the transition matrix associated with the Markov chain $\{s_t\}_{t \ge 0}$ induced by $\beta$. In particular, for each $\theta \in \Theta$, $\sigma > 0$, $s, s' \in \mathcal{S}$, we have
$$P^{\mu_\theta}_{s,s'} = P(s'|s, \mu_\theta(s)), \qquad P^{\pi_{\theta,\sigma}}_{s,s'} = \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s)\, P(s'|s,a)\, da = \int_{\mathcal{A}} \nu_\sigma(\mu_\theta(s), a)\, P(s'|s,a)\, da.$$

Let $\theta \in \Theta$, $s, s' \in \mathcal{S}$, $(\theta_n) \in \Theta^{\mathbb{N}}$ such that $\theta_n \to \theta$, and $(\sigma_n)_{n \in \mathbb{N}} \in \mathbb{R}_+^{\mathbb{N}}$ with $\sigma_n \downarrow 0$:
$$\left| P^{\pi_{\theta_n,\sigma_n}}_{s,s'} - P^{\mu_\theta}_{s,s'} \right| \le \left| P^{\pi_{\theta_n,\sigma_n}}_{s,s'} - P^{\mu_{\theta_n}}_{s,s'} \right| + \left| P^{\mu_{\theta_n}}_{s,s'} - P^{\mu_\theta}_{s,s'} \right|.$$

Applying the first condition of Conditions 1 with $f : a \mapsto P(s'|s,a)$ belonging to $\mathcal{F}$:
$$\left| P^{\pi_{\theta_n,\sigma_n}}_{s,s'} - P^{\mu_{\theta_n}}_{s,s'} \right| = \left| \int_{\mathcal{A}} \nu_{\sigma_n}(\mu_{\theta_n}(s), a)\, P(s'|s,a)\, da - P(s'|s, \mu_{\theta_n}(s)) \right| \le \sup_{f \in \mathcal{F},\, a' \in \mathcal{R}} \left| \int_{\mathcal{A}} \nu_{\sigma_n}(a', a) f(a)\, da - f(a') \right| \xrightarrow[n \to \infty]{} 0.$$

By the regularity assumptions on $\theta \mapsto \mu_\theta(s)$ and $P(s'|s, \cdot)$, we have
$$\left| P^{\mu_{\theta_n}}_{s,s'} - P^{\mu_\theta}_{s,s'} \right| = \left| P(s'|s, \mu_{\theta_n}(s)) - P(s'|s, \mu_\theta(s)) \right| \xrightarrow[n \to \infty]{} 0.$$

Hence, $\left| P^{\pi_{\theta_n,\sigma_n}}_{s,s'} - P^{\mu_\theta}_{s,s'} \right| \to 0$ as $n \to \infty$. Therefore, for each $s, s' \in \mathcal{S}$, $(\theta, \sigma) \mapsto P^{\pi_{\theta,\sigma}}_{s,s'}$, with $P^{\pi_{\theta,0}}_{s,s'} = P^{\mu_\theta}_{s,s'}$, is continuous on $\Theta \times \{0\}$. Note that, for each $n \in \mathbb{N}$, $P \mapsto \prod_{s,s'} (P^n)_{s,s'}$ is a polynomial function of the entries of $P$. Thus, for each $n \in \mathbb{N}$, $f_n : (\theta, \sigma) \mapsto \prod_{s,s'} \big((P^{\pi_{\theta,\sigma}})^n\big)_{s,s'}$, with $f_n(\theta, 0) = \prod_{s,s'} \big((P^{\mu_\theta})^n\big)_{s,s'}$, is continuous on $\Theta \times \{0\}$.
Moreover, for each $\theta \in \Theta$ and $\sigma \ge 0$, from the structure of $P^{\pi_{\theta,\sigma}}$, if there is some $n^* \in \mathbb{N}$ such that $f_{n^*}(\theta, \sigma) > 0$, then, for all $n \ge n^*$, $f_n(\theta, \sigma) > 0$.

Now suppose that there exists $(\theta_n) \in \Theta^{\mathbb{N}^*}$ such that, for each $n > 0$, there is a $\sigma_n \le n^{-1}$ such that $f_n(\theta_n, \sigma_n) = 0$. By compactness of $\Theta$, we can take $(\theta_n)$ converging to some $\theta \in \Theta$. For each $n^* \in \mathbb{N}$, by continuity we have $f_{n^*}(\theta, 0) = \lim_{n \to \infty} f_{n^*}(\theta_n, \sigma_n) = 0$. Since $P^{\mu_\theta}$ is irreducible and aperiodic, there is some $n \in \mathbb{N}$ such that for all $s, s' \in \mathcal{S}$ and all $n^* \ge n$, $\big((P^{\mu_\theta})^{n^*}\big)_{s,s'} > 0$, i.e., $f_{n^*}(\theta, 0) > 0$. This leads to a contradiction.

Hence, there exists $n^* > 0$ such that for all $\theta \in \Theta$ and $\sigma \le 1/n^*$, $f_{n^*}(\theta, \sigma) > 0$. We let $r = 1/n^*$. It follows that, for all $\theta \in \Theta$ and $\sigma \in [0, r]$, $P^{\pi_{\theta,\sigma}}$ is the transition matrix of an irreducible and aperiodic Markov chain, so $d^{\pi_{\theta,\sigma}}$ is well defined as the unique stationary probability distribution associated with $P^{\pi_{\theta,\sigma}}$. We fix $\theta \in \Theta$ for the remainder of the proof.

Let $\beta$ be a policy for which the Markov chain corresponding to $P^\beta$ is irreducible and aperiodic, and let $s^* \in \mathcal{S}$. As asserted in Marbach and Tsitsiklis [2001], considering the stationary distribution $d^\beta$ as a vector $(d^\beta_s)_{s \in \mathcal{S}} \in \mathbb{R}^{|\mathcal{S}|}$, $d^\beta$ is the unique solution of the balance equations:
$$\sum_{s \in \mathcal{S}} d^\beta_s P^\beta_{s,s'} = d^\beta_{s'} \quad \text{for } s' \in \mathcal{S} \setminus \{s^*\}, \qquad \sum_{s \in \mathcal{S}} d^\beta_s = 1.$$

Hence, there exist an $|\mathcal{S}| \times |\mathcal{S}|$ matrix $A^\beta$ and a constant vector $a \ne 0$ in $\mathbb{R}^{|\mathcal{S}|}$ such that the balance equations take the form
$$A^\beta d^\beta = a \tag{16}$$
with $A^\beta_{s,s'}$ depending on $P^\beta_{s',s}$ in an affine way, for each $s, s' \in \mathcal{S}$. Moreover, $A^\beta$ is invertible, so $d^\beta$ is given by
$$d^\beta = \frac{1}{\det(A^\beta)} \operatorname{adj}(A^\beta)^\top a.$$
The entries of $\operatorname{adj}(A^\beta)$ and $\det(A^\beta)$ are polynomial functions of the entries of $P^\beta$.

Thus, $\sigma \mapsto d^{\pi_{\theta,\sigma}} = \frac{1}{\det(A^{\pi_{\theta,\sigma}})} \operatorname{adj}(A^{\pi_{\theta,\sigma}})^\top a$ is defined on $[0, r]$ and is continuous at 0.

Lemma 1 and integration by parts imply that, for $s, s' \in \mathcal{S}$ and $\sigma \in [0, r]$:
$$\int_{\mathcal{A}} \nabla_{a'} \nu_\sigma(a', a)\big|_{a'=\mu_\theta(s)}\, P(s'|s,a)\, da = -\int_{\mathcal{A}} \nabla_a \nu_\sigma(\mu_\theta(s), a)\, P(s'|s,a)\, da = \int_{C_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a)\, \nabla_a P(s'|s,a)\, da + \text{boundary terms} = \int_{C_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a)\, \nabla_a P(s'|s,a)\, da,$$
where the boundary terms are zero since $\nu_\sigma$ vanishes on the boundary, by Conditions 1.

Thus, for $s, s' \in \mathcal{S}$, $\sigma \in [0, r]$:
$$\nabla_\theta P^{\pi_{\theta,\sigma}}_{s,s'} = \nabla_\theta \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s)\, P(s'|s,a)\, da = \int_{\mathcal{A}} \nabla_\theta \pi_{\theta,\sigma}(a|s)\, P(s'|s,a)\, da \tag{17}$$
$$= \int_{\mathcal{A}} \nabla_\theta \mu_\theta(s)\, \nabla_{a'} \nu_\sigma(a', a)\big|_{a'=\mu_\theta(s)}\, P(s'|s,a)\, da = \nabla_\theta \mu_\theta(s) \int_{C_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a)\, \nabla_a P(s'|s,a)\, da,$$
where the exchange of differentiation and integration in (17) follows from the Leibniz rule with:

• For all $a \in \mathcal{A}$, $\theta \mapsto \pi_{\theta,\sigma}(a|s)\, P(s'|s,a)$ is differentiable, and $\nabla_\theta \pi_{\theta,\sigma}(a|s)\, P(s'|s,a) = \nabla_\theta \mu_\theta(s)\, \nabla_{a'} \nu_\sigma(a', a)\big|_{a'=\mu_\theta(s)}\, P(s'|s,a)$.

• Let $a^* \in \mathcal{R}$. For all $\theta \in \Theta$,
$$\begin{aligned}
\|\nabla_\theta \pi_{\theta,\sigma}(a|s)\, P(s'|s,a)\| &\le \big\|\nabla_\theta \mu_\theta(s)\, \nabla_{a'} \nu_\sigma(a', a)\big|_{a'=\mu_\theta(s)}\big\| \qquad (\text{since } P(s'|s,a) \le 1) \\
&\le \|\nabla_\theta \mu_\theta(s)\|_{op}\, \big\|\nabla_{a'} \nu_\sigma(a', a)\big|_{a'=\mu_\theta(s)}\big\| \\
&\le \sup_{\theta \in \Theta} \|\nabla_\theta \mu_\theta(s)\|_{op}\, \|\nabla_a \nu_\sigma(\mu_\theta(s), a)\| \\
&= \sup_{\theta \in \Theta} \|\nabla_\theta \mu_\theta(s)\|_{op}\, \|\nabla_a \nu_\sigma(a^*, a - \mu_\theta(s) + a^*)\| \tag{18} \\
&\le \sup_{\theta \in \Theta} \|\nabla_\theta \mu_\theta(s)\|_{op} \sup_{a \in C_{a^*}} \|\nabla_a \nu_\sigma(a^*, a)\|\, \mathbb{1}_{a \in C_{a^*}},
\end{aligned}$$
where $\|\cdot\|_{op}$ denotes the operator norm, and (18) comes from translation invariance (we take $\nabla_a \nu_\sigma(a^*, a) = 0$ for $a \in \mathbb{R}^n \setminus C_{a^*}$).
The function $a \mapsto \sup_{\theta \in \Theta} \|\nabla_\theta \mu_\theta(s)\|_{op} \sup_{a \in C_{a^*}} \|\nabla_a \nu_\sigma(a^*, a)\|\, \mathbb{1}_{a \in C_{a^*}}$ is measurable, bounded, and supported on $C_{a^*}$, so it is integrable on $\mathcal{A}$.

• Dominated convergence ensures that, for each $k \in \{1, \ldots, m\}$, the partial derivative $g_k(\theta) = \partial_{\theta_k} \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s)\, P(s'|s,a)\, da$ is continuous: let $\theta_n \to \theta$; then
$$g_k(\theta_n) = \partial_{\theta_k} \mu_{\theta_n}(s) \int_{C_{a^*}} \nu_\sigma(a^*, a - \mu_{\theta_n}(s) + a^*)\, \nabla_a P(s'|s,a)\, da \xrightarrow[n \to \infty]{} \partial_{\theta_k} \mu_\theta(s) \int_{C_{a^*}} \nu_\sigma(a^*, a - \mu_\theta(s) + a^*)\, \nabla_a P(s'|s,a)\, da = g_k(\theta),$$
with dominating function $a \mapsto \sup_{a \in C_{a^*}} |\nu_\sigma(a^*, a)|\, \sup_{a \in \mathcal{A}} \|\nabla_a P(s'|s,a)\|\, \mathbb{1}_{a \in C_{a^*}}$.

Thus $\sigma \mapsto \nabla_\theta P^{\pi_{\theta,\sigma}}_{s,s'}$ is defined for $\sigma \in [0, r]$ and is continuous at 0, with $\nabla_\theta P^{\pi_{\theta,0}}_{s,s'} = \nabla_\theta \mu_\theta(s)\, \nabla_a P(s'|s,a)\big|_{a=\mu_\theta(s)}$. Indeed, let $(\sigma_n)_{n \in \mathbb{N}} \in [0, r]^{\mathbb{N}}$ with $\sigma_n \downarrow 0$; then, applying the first condition of Conditions 1 with $f : a \mapsto \nabla_a P(s'|s,a)$ belonging to $\mathcal{F}$, we get
$$\big\|\nabla_\theta P^{\pi_{\theta,\sigma_n}}_{s,s'} - \nabla_\theta P^{\mu_\theta}_{s,s'}\big\| \le \|\nabla_\theta \mu_\theta(s)\|_{op} \left\| \int_{C_{\mu_\theta(s)}} \nu_{\sigma_n}(\mu_\theta(s), a)\, \nabla_a P(s'|s,a)\, da - \nabla_a P(s'|s,a)\big|_{a=\mu_\theta(s)} \right\| \xrightarrow[n \to \infty]{} 0.$$
Since $d^{\pi_{\theta,\sigma}} = \frac{1}{\det(A^{\pi_{\theta,\sigma}})} \operatorname{adj}(A^{\pi_{\theta,\sigma}})^\top a$ with $|\det(A^{\pi_{\theta,\sigma}})| > 0$ for all $\sigma \in [0, r]$, and since the entries of $\operatorname{adj}(A^{\pi_{\theta,\sigma}})$ and $\det(A^{\pi_{\theta,\sigma}})$ are polynomial functions of the entries of $P^{\pi_{\theta,\sigma}}$, it follows that $\sigma \mapsto \nabla_\theta d^{\pi_{\theta,\sigma}}$ is properly defined on $[0, r]$ and is continuous at 0, which concludes the proof of Lemma 2.

We now proceed to prove Theorem 8. Let $\theta \in \Theta$, $\pi_\theta$ as in Theorem 3, and $r > 0$ such that $\sigma \mapsto d^{\pi_{\theta,\sigma}}$ and $\sigma \mapsto \nabla_\theta d^{\pi_{\theta,\sigma}}$ are well defined on $[0, r]$ and continuous at 0. Then the following two functions
$$\sigma \mapsto J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma}) = \sum_{s \in \mathcal{S}} d^{\pi_{\theta,\sigma}}(s) \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s)\, \bar{R}(s,a)\, da, \qquad \sigma \mapsto J_{\pi_{\theta,\sigma}}(\mu_\theta) = \sum_{s \in \mathcal{S}} d^{\pi_{\theta,\sigma}}(s)\, \bar{R}(s, \mu_\theta(s))$$
are properly defined on $[0, r]$ (with $J_{\pi_{\theta,0}}(\pi_{\theta,0}) = J_{\pi_{\theta,0}}(\mu_\theta) = J_{\mu_\theta}(\mu_\theta)$). Let $s \in \mathcal{S}$; by arguments similar to those in the proof of Lemma 2, we have
$$\nabla_\theta \int_{\mathcal{A}} \pi_{\theta,\sigma}(a|s)\, \bar{R}(s,a)\, da = \int_{\mathcal{A}} \nabla_\theta \pi_{\theta,\sigma}(a|s)\, \bar{R}(s,a)\, da = \nabla_\theta \mu_\theta(s) \int_{C_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a)\, \nabla_a \bar{R}(s,a)\, da.$$
Thus, $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma})$ is properly defined on $[0, r]$ and
$$\nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma}) = \sum_{s \in \mathcal{S}} \nabla_\theta d^{\pi_{\theta,\sigma}}(s) \int_{\mathcal{A}} \nu_\sigma(\mu_\theta(s), a)\, \bar{R}(s,a)\, da + \sum_{s \in \mathcal{S}} d^{\pi_{\theta,\sigma}}(s)\, \nabla_\theta \mu_\theta(s) \int_{C_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a)\, \nabla_a \bar{R}(s,a)\, da.$$
Similarly, $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\mu_\theta)$ is properly defined on $[0, r]$ and
$$\nabla_\theta J_{\pi_{\theta,\sigma}}(\mu_\theta) = \sum_{s \in \mathcal{S}} \nabla_\theta d^{\pi_{\theta,\sigma}}(s)\, \bar{R}(s, \mu_\theta(s)) + \sum_{s \in \mathcal{S}} d^{\pi_{\theta,\sigma}}(s)\, \nabla_\theta \mu_\theta(s)\, \nabla_a \bar{R}(s,a)\big|_{a=\mu_\theta(s)}.$$
To prove continuity at 0 of both $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma})$ and $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\mu_\theta)$ (with $\nabla_\theta J_{\pi_{\theta,0}}(\pi_{\theta,0}) = \nabla_\theta J_{\pi_{\theta,0}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta)$), let $(\sigma_n)_{n \ge 0} \downarrow 0$:
$$\big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\pi_{\theta,\sigma_n}) - \nabla_\theta J_{\pi_{\theta,0}}(\pi_{\theta,0})\big\| \le \big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\pi_{\theta,\sigma_n}) - \nabla_\theta J_{\pi_{\theta,\sigma_n}}(\mu_\theta)\big\| + \big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\mu_\theta) - \nabla_\theta J_{\mu_\theta}(\mu_\theta)\big\|. \tag{19}$$
For the first term of the r.h.s. we have
$$\big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\pi_{\theta,\sigma_n}) - \nabla_\theta J_{\pi_{\theta,\sigma_n}}(\mu_\theta)\big\| \le \sum_{s \in \mathcal{S}} \|\nabla_\theta d^{\pi_{\theta,\sigma_n}}(s)\| \left| \int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s), a)\, \bar{R}(s,a)\, da - \bar{R}(s, \mu_\theta(s)) \right| + \sum_{s \in \mathcal{S}} d^{\pi_{\theta,\sigma_n}}(s)\, \|\nabla_\theta \mu_\theta(s)\|_{op} \left\| \int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s), a)\, \nabla_a \bar{R}(s,a)\, da - \nabla_a \bar{R}(s,a)\big|_{a=\mu_\theta(s)} \right\|.$$
Applying the first assumption of Conditions 1 with $f : a \mapsto \bar{R}(s,a)$ and $f : a \mapsto \nabla_a \bar{R}(s,a)$ belonging to $\mathcal{F}$, we have, for each $s \in \mathcal{S}$:
$$\left| \int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s), a)\, \bar{R}(s,a)\, da - \bar{R}(s, \mu_\theta(s)) \right| \xrightarrow[n \to \infty]{} 0 \quad \text{and} \quad \left\| \int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s), a)\, \nabla_a \bar{R}(s,a)\, da - \nabla_a \bar{R}(s,a)\big|_{a=\mu_\theta(s)} \right\| \xrightarrow[n \to \infty]{} 0.$$
Moreover, for each $s \in \mathcal{S}$, $d^{\pi_{\theta,\sigma_n}}(s) \to d^{\mu_\theta}(s)$ and $\nabla_\theta d^{\pi_{\theta,\sigma_n}}(s) \to \nabla_\theta d^{\mu_\theta}(s)$ (by Lemma 2), and $\|\nabla_\theta \mu_\theta(s)\|_{op} < \infty$, so
$$\big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\pi_{\theta,\sigma_n}) - \nabla_\theta J_{\pi_{\theta,\sigma_n}}(\mu_\theta)\big\| \xrightarrow[n \to \infty]{} 0.$$
For the second term of the r.h.s. of (19), we have
$$\big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\mu_\theta) - \nabla_\theta J_{\mu_\theta}(\mu_\theta)\big\| \le \sum_{s \in \mathcal{S}} \|\nabla_\theta d^{\pi_{\theta,\sigma_n}}(s) - \nabla_\theta d^{\mu_\theta}(s)\|\, |\bar{R}(s, \mu_\theta(s))| + \sum_{s \in \mathcal{S}} |d^{\pi_{\theta,\sigma_n}}(s) - d^{\mu_\theta}(s)|\, \|\nabla_\theta \mu_\theta(s)\|_{op}\, \big\|\nabla_a \bar{R}(s,a)\big|_{a=\mu_\theta(s)}\big\|.$$
Continuity at 0 of $\sigma \mapsto d^{\pi_{\theta,\sigma}}(s)$ and $\sigma \mapsto \nabla_\theta d^{\pi_{\theta,\sigma}}(s)$ for each $s \in \mathcal{S}$, together with the boundedness of $\bar{R}(s, \cdot)$, $\nabla_a \bar{R}(s, \cdot)$, and $\nabla_\theta \mu_\theta(s)$, implies that
$$\big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\mu_\theta) - \nabla_\theta J_{\mu_\theta}(\mu_\theta)\big\| \xrightarrow[n \to \infty]{} 0.$$
Hence,
$$\big\|\nabla_\theta J_{\pi_{\theta,\sigma_n}}(\pi_{\theta,\sigma_n}) - \nabla_\theta J_{\pi_{\theta,0}}(\pi_{\theta,0})\big\| \xrightarrow[n \to \infty]{} 0.$$
So $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma})$ and $\sigma \mapsto \nabla_\theta J_{\pi_{\theta,\sigma}}(\mu_\theta)$ are continuous at 0:
$$\lim_{\sigma \downarrow 0} \nabla_\theta J_{\pi_{\theta,\sigma}}(\pi_{\theta,\sigma}) = \lim_{\sigma \downarrow 0} \nabla_\theta J_{\pi_{\theta,\sigma}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta).$$

Proof of Theorem 4

We use a two-time-scale stochastic approximation analysis. We keep the policy parameter fixed at $\theta_t \equiv \theta$ when analysing the convergence of the critic step. This allows us to show the convergence of $\omega_t$ towards an $\omega_\theta$ depending on $\theta$, which is then used to prove convergence on the slow time-scale.

Lemma 3. Under Assumptions 3–5, the sequence $\omega^i_t$ generated from (2) is bounded a.s., i.e., $\sup_t \|\omega^i_t\| < \infty$ a.s., for any $i \in \mathcal{N}$.

The proof follows the same steps as that of Lemma B.1 in the PMLR version of Zhang et al. [2018].

Lemma 4. Under Assumption 5, the sequence $\{\hat{J}^i_t\}$ generated as in (2) is bounded a.s., i.e., $\sup_t |\hat{J}^i_t| < \infty$ a.s., for any $i \in \mathcal{N}$.

The proof follows the same steps as that of Lemma B.2 in the PMLR version of Zhang et al. [2018].

The desired result holds since Steps 1 and 2 of the proof of Theorem 4.6 in Zhang et al. [2018] can both be repeated in the setting of deterministic policies.

Proof of Theorem 5

Let $\mathcal{F}_{t,2} = \sigma(\theta_\tau, s_\tau,\ \tau \le t)$ be a filtration. In addition, we define
$$H(\theta, s, \omega) = \nabla_\theta \mu_\theta(s) \cdot \nabla_a Q_\omega(s,a)\big|_{a=\mu_\theta(s)}, \qquad H(\theta, s) = H(\theta, s, \omega_\theta), \qquad h(\theta) = \mathbb{E}_{s \sim d^\theta}[H(\theta, s)].$$
Then, for each $\theta \in \Theta$, we can introduce $\nu_\theta : \mathcal{S} \to \mathbb{R}^n$, the solution to the Poisson equation
$$(I - P^\theta)\, \nu_\theta(\cdot) = H(\theta, \cdot) - h(\theta),$$
which is given by $\nu_\theta(s) = \sum_{k \ge 0} \mathbb{E}_{s_{k+1} \sim P^\theta(\cdot|s_k)}[H(\theta, s_k) - h(\theta) \mid s_0 = s]$ and is properly defined (similarly to the differential value function $V$).

With projection, the actor update (5) becomes
$$\begin{aligned}
\theta_{t+1} &= \Gamma\big[\theta_t + \beta_{\theta,t} H(\theta_t, s_t, \omega_t)\big] \qquad (20) \\
&= \Gamma\big[\theta_t + \beta_{\theta,t} h(\theta_t) - \beta_{\theta,t}\big(h(\theta_t) - H(\theta_t, s_t)\big) - \beta_{\theta,t}\big(H(\theta_t, s_t) - H(\theta_t, s_t, \omega_t)\big)\big] \\
&= \Gamma\big[\theta_t + \beta_{\theta,t} h(\theta_t) + \beta_{\theta,t}\big((I - P^{\theta_t})\nu_{\theta_t}(s_t)\big) + \beta_{\theta,t} A^1_t\big] \\
&= \Gamma\big[\theta_t + \beta_{\theta,t} h(\theta_t) + \beta_{\theta,t}\big(\nu_{\theta_t}(s_t) - \nu_{\theta_t}(s_{t+1})\big) + \beta_{\theta,t}\big(\nu_{\theta_t}(s_{t+1}) - P^{\theta_t}\nu_{\theta_t}(s_t)\big) + \beta_{\theta,t} A^1_t\big] \\
&= \Gamma\big[\theta_t + \beta_{\theta,t}\big(h(\theta_t) + A^1_t + A^2_t + A^3_t\big)\big],
\end{aligned}$$
where
$$A^1_t = H(\theta_t, s_t, \omega_t) - H(\theta_t, s_t), \qquad A^2_t = \nu_{\theta_t}(s_t) - \nu_{\theta_t}(s_{t+1}), \qquad A^3_t = \nu_{\theta_t}(s_{t+1}) - P^{\theta_t}\nu_{\theta_t}(s_t).$$
For $r < t$ we have
$$\begin{aligned}
\sum_{k=r}^{t-1} \beta_{\theta,k} A^2_k &= \sum_{k=r}^{t-1} \beta_{\theta,k}\big(\nu_{\theta_k}(s_k) - \nu_{\theta_k}(s_{k+1})\big) \\
&= \sum_{k=r}^{t-1} \beta_{\theta,k}\big(\nu_{\theta_k}(s_k) - \nu_{\theta_{k+1}}(s_{k+1})\big) + \sum_{k=r}^{t-1} \beta_{\theta,k}\big(\nu_{\theta_{k+1}}(s_{k+1}) - \nu_{\theta_k}(s_{k+1})\big) \\
&= \sum_{k=r}^{t-1} (\beta_{\theta,k+1} - \beta_{\theta,k})\, \nu_{\theta_{k+1}}(s_{k+1}) + \beta_{\theta,r}\nu_{\theta_r}(s_r) - \beta_{\theta,t}\nu_{\theta_t}(s_t) + \sum_{k=r}^{t-1} \epsilon^{(2)}_k \\
&= \sum_{k=r}^{t-1} \epsilon^{(1)}_k + \sum_{k=r}^{t-1} \epsilon^{(2)}_k + \eta_{r,t},
\end{aligned}$$
where
$$\epsilon^{(1)}_k = (\beta_{\theta,k+1} - \beta_{\theta,k})\, \nu_{\theta_{k+1}}(s_{k+1}), \qquad \epsilon^{(2)}_k = \beta_{\theta,k}\big(\nu_{\theta_{k+1}}(s_{k+1}) - \nu_{\theta_k}(s_{k+1})\big), \qquad \eta_{r,t} = \beta_{\theta,r}\nu_{\theta_r}(s_r) - \beta_{\theta,t}\nu_{\theta_t}(s_t).$$

Lemma 5. $\sum_{k=0}^{t-1} \beta_{\theta,k} A^2_k$ converges a.s. as $t \to \infty$.

Proof of Lemma 5. Since $\nu_\theta(s)$ is uniformly bounded for $\theta \in \Theta$, $s \in \mathcal{S}$, we have, for some $K > 0$,
$$\sum_{k=0}^{t-1} \big\|\epsilon^{(1)}_k\big\| \le K \sum_{k=0}^{t-1} |\beta_{\theta,k+1} - \beta_{\theta,k}|,$$
which converges given Assumption 5.

Moreover, since $\mu_\theta(s)$ is twice continuously differentiable, $\theta \mapsto \nu_\theta(s)$ is Lipschitz for each $s$, and so we have
$$\sum_{k=0}^{t-1} \big\|\epsilon^{(2)}_k\big\| \le \sum_{k=0}^{t-1} \beta_{\theta,k} \big\|\nu_{\theta_k}(s_{k+1}) - \nu_{\theta_{k+1}}(s_{k+1})\big\| \le K_2 \sum_{k=0}^{t-1} \beta_{\theta,k} \|\theta_k - \theta_{k+1}\| \le K_3 \sum_{k=0}^{t-1} \beta^2_{\theta,k}.$$
Finally, $\lim_{t \to \infty} \|\eta_{0,t}\| = \beta_{\theta,0}\|\nu_{\theta_0}(s_0)\| < \infty$ a.s.

Thus, $\sum_{k=0}^{t-1} \|\beta_{\theta,k} A^2_k\| \le \sum_{k=0}^{t-1} \|\epsilon^{(1)}_k\| + \sum_{k=0}^{t-1} \|\epsilon^{(2)}_k\| + \|\eta_{0,t}\|$ converges a.s.

Lemma 6. $\sum_{k=0}^{t-1} \beta_{\theta,k} A^3_k$ converges a.s. as $t \to \infty$.

Proof of Lemma 6.
We set
$$Z_t = \sum_{k=0}^{t-1} \beta_{\theta,k} A^3_k = \sum_{k=0}^{t-1} \beta_{\theta,k}\big(\nu_{\theta_k}(s_{k+1}) - P^{\theta_k}\nu_{\theta_k}(s_k)\big).$$
Since $Z_t$ is $\mathcal{F}_t$-adapted and $\mathbb{E}[\nu_{\theta_t}(s_{t+1}) | \mathcal{F}_t] = P^{\theta_t}\nu_{\theta_t}(s_t)$, $Z_t$ is a martingale. The remainder of the proof is similar to the proof of Lemma 2 on page 224 of Benveniste et al. [1990].

Let $g^i(\theta_t) = \mathbb{E}_{s_t \sim d^{\theta_t}}\big[\psi^i_t \cdot \xi^i_{t,\theta_t} | \mathcal{F}_{t,2}\big]$ and $g(\theta) = [g^1(\theta), \ldots, g^N(\theta)]$. We have
$$g^i(\theta_t) = \sum_{s_t \in \mathcal{S}} d^{\theta_t}(s_t) \cdot \psi^i_t \cdot \xi^i_{t,\theta_t}.$$
Given (10), $\theta \mapsto \omega_\theta$ is continuously differentiable and $\theta \mapsto \nabla_\theta \omega_\theta$ is bounded, so $\theta \mapsto \omega_\theta$ is Lipschitz-continuous. Thus $\theta \mapsto \xi^i_{t,\theta}$ is Lipschitz-continuous for each $s_t \in \mathcal{S}$. Due to our regularity assumptions, $\theta \mapsto \psi^i_{t,\theta_t}$ is also continuous for each $i \in \mathcal{N}$, $s_t \in \mathcal{S}$. Moreover, $\theta \mapsto d^\theta(s)$ is also Lipschitz-continuous for each $s \in \mathcal{S}$. Hence, $\theta \mapsto g(\theta)$ is Lipschitz-continuous in $\theta$ and the ODE (12) is well posed. This holds even when using compatible features.

By the faster convergence of the critic, we have $\lim_{t \to \infty} \|\xi^i_t - \xi^i_{t,\theta_t}\| = 0$, so $\lim_{t \to \infty} A^1_t = 0$.

Hence, by the Kushner-Clark lemma (Kushner and Clark [1978], pp. 191-196), the update in (20) converges a.s. to the set of asymptotically stable equilibria of the ODE (12).

Proof of Theorem 6

We use the two-time-scale technique: since the critic updates at a faster rate than the actor, we let the policy parameter $\theta_t$ be fixed at $\theta$ when analysing the convergence of the critic update.

Lemma 7. Under Assumptions 4, 1, and 6, for any $i \in \mathcal{N}$, the sequence $\{\lambda^i_t\}$ generated from (7) is bounded almost surely.

To prove this lemma we verify the conditions for Theorem A.2 of Zhang et al. [2018] to hold. We use $\{\mathcal{F}_{t,1}\}$ to denote the filtration with $\mathcal{F}_{t,1} = \sigma(s_\tau, C_{\tau-1}, a_{\tau-1}, r_\tau, \lambda_\tau,\ \tau \le t)$. With $\lambda_t = [(\lambda^1_t)^\top, \ldots, (\lambda^N_t)^\top]^\top$, the critic step (7) has the form
$$\lambda_{t+1} = (C_t \otimes I)(\lambda_t + \beta_{\lambda,t} \cdot y_{t+1}) \tag{21}$$
with $y_{t+1} = \big(\delta^1_t w(s_t,a_t)^\top, \ldots, \delta^N_t w(s_t,a_t)^\top\big)^\top \in \mathbb{R}^{KN}$, where $\otimes$ denotes the Kronecker product and $I$ is the identity matrix. Using the same notation as in Assumption A.1 from Zhang et al. [2018], we have:
$$h^i(\lambda^i_t, s_t) = \mathbb{E}_{a \sim \pi}\big[\delta^i_t w(s_t,a)^\top | \mathcal{F}_{t,1}\big] = \int_{\mathcal{A}} \pi(a|s_t)\big(R^i(s_t,a) - w(s_t,a) \cdot \lambda^i_t\big) w(s_t,a)^\top da,$$
$$M^i_{t+1} = \delta^i_t w(s_t,a_t)^\top - \mathbb{E}_{a \sim \pi}\big[\delta^i_t w(s_t,a)^\top | \mathcal{F}_{t,1}\big],$$
$$\bar{h}^i(\lambda_t) = A^i_{\pi,\theta} \cdot d_\pi - B_{\pi,\theta} \cdot \lambda_t, \quad \text{where } A^i_{\pi,\theta} = \Big[\int_{\mathcal{A}} \pi(a|s)\, R^i(s,a)\, w(s,a)^\top da,\ s \in \mathcal{S}\Big].$$
Since the feature vectors are uniformly bounded for any $s \in \mathcal{S}$ and $a \in \mathcal{A}$, $h^i$ is Lipschitz-continuous in its first argument. Since, for $i \in \mathcal{N}$, the $r^i$ are also uniformly bounded, $\mathbb{E}\big[\|M_{t+1}\|^2 | \mathcal{F}_{t,1}\big] \le K \cdot (1 + \|\lambda_t\|^2)$ for some $K > 0$. Furthermore, finiteness of $|\mathcal{S}|$ ensures that, a.s., $\|\bar{h}(\lambda_t) - h(\lambda_t, s_t)\|^2 \le K' \cdot (1 + \|\lambda_t\|^2)$. Finally, $h_\infty(y)$ exists and has the form
$$h_\infty(y) = -B_{\pi,\theta} \cdot y.$$
From Assumption 1, $-B_{\pi,\theta}$ is a Hurwitz matrix, so the origin is a globally asymptotically stable attractor of the ODE $\dot{y} = h_\infty(y)$. Hence Theorem A.2 of Zhang et al. [2018] applies, which concludes the proof of Lemma 7.

We introduce the following operators, as in Zhang et al. [2018]:

• $\langle\cdot\rangle : \mathbb{R}^{KN} \to \mathbb{R}^K$, with $\langle\lambda\rangle = \frac{1}{N}(\mathbf{1}^\top \otimes I)\lambda = \frac{1}{N}\sum_{i \in \mathcal{N}} \lambda^i$.

• $\mathcal{J} = \big(\frac{1}{N}\mathbf{1}\mathbf{1}^\top \otimes I\big) : \mathbb{R}^{KN} \to \mathbb{R}^{KN}$, so that $\mathcal{J}\lambda = \mathbf{1} \otimes \langle\lambda\rangle$.

• $\mathcal{J}_\perp = I - \mathcal{J} : \mathbb{R}^{KN} \to \mathbb{R}^{KN}$, and we write $\lambda_\perp = \mathcal{J}_\perp\lambda = \lambda - \mathbf{1} \otimes \langle\lambda\rangle$.

We then proceed in two steps, as in Zhang et al. [2018]: first, we show a.s. convergence of the disagreement vector sequence $\{\lambda_{\perp,t}\}$ to zero; second, we show that the consensus vector sequence $\{\langle\lambda_t\rangle\}$ converges to the equilibrium such that $\langle\lambda_t\rangle$ solves (13).

Lemma 8.
Under Assumptions 4, 1, and 6, for any $M > 0$, we have
$$\sup_t \mathbb{E}\Big[\|\beta^{-1}_{\lambda,t}\lambda_{\perp,t}\|^2\, \mathbb{1}_{\{\sup_t \|\lambda_t\| \le M\}}\Big] < \infty.$$
Since the dynamics of $\{\lambda_t\}$ described by (21) are similar to (5.2) in Zhang et al. [2018], we have
$$\mathbb{E}\Big[\|\beta^{-1}_{\lambda,t+1}\lambda_{\perp,t+1}\|^2 | \mathcal{F}_{t,1}\Big] \le \frac{\beta^2_{\lambda,t}}{\beta^2_{\lambda,t+1}}\, \rho\, \Big(\|\beta^{-1}_{\lambda,t}\lambda_{\perp,t}\|^2 + 2\, \|\beta^{-1}_{\lambda,t}\lambda_{\perp,t}\| \cdot \mathbb{E}(\|y_{t+1}\|^2 | \mathcal{F}_{t,1})^{\frac{1}{2}} + \mathbb{E}(\|y_{t+1}\|^2 | \mathcal{F}_{t,1})\Big) \tag{22}$$
where $\rho$ denotes the spectral norm of $\mathbb{E}\big[C^\top_t \cdot (I - \mathbf{1}\mathbf{1}^\top/N) \cdot C_t\big]$, with $\rho \in [0, 1)$ by Assumption 4. Since $y^i_{t+1} = \delta^i_t \cdot w(s_t,a_t)^\top$, we have
$$\mathbb{E}\big[\|y_{t+1}\|^2 | \mathcal{F}_{t,1}\big] = \mathbb{E}\Big[\sum_{i \in \mathcal{N}} \big\|(r^i(s_t,a_t) - w(s_t,a_t)\lambda^i_t) \cdot w(s_t,a_t)^\top\big\|^2 \Big| \mathcal{F}_{t,1}\Big] \le 2 \cdot \mathbb{E}\Big[\sum_{i \in \mathcal{N}} \|r^i(s_t,a_t)w(s_t,a_t)^\top\|^2 + \|w(s_t,a_t)^\top\|^4 \cdot \|\lambda^i_t\|^2 \Big| \mathcal{F}_{t,1}\Big].$$
By uniform boundedness of $r(s, \cdot)$ and $w(s, \cdot)$ (Assumption 1) and finiteness of $\mathcal{S}$, there exists $K_1 > 0$ such that
$$\mathbb{E}\big[\|y_{t+1}\|^2 | \mathcal{F}_{t,1}\big] \le K_1(1 + \|\lambda_t\|^2).$$
Thus, for any $M > 0$ there exists $K_2 > 0$ such that, on the set $\{\sup_{\tau \le t} \|\lambda_\tau\| < M\}$,
$$\mathbb{E}\big[\|y_{t+1}\|^2\, \mathbb{1}_{\{\sup_{\tau \le t}\|\lambda_\tau\| < M\}} | \mathcal{F}_{t,1}\big] \le K_2. \tag{23}$$
We let $v_t = \|\beta^{-1}_{\lambda,t}\lambda_{\perp,t}\|^2\, \mathbb{1}_{\{\sup_{\tau \le t}\|\lambda_\tau\| < M\}}$. Taking expectations over (22) and noting that $\mathbb{1}_{\{\sup_{\tau \le t+1}\|\lambda_\tau\| < M\}} \le \mathbb{1}_{\{\sup_{\tau \le t}\|\lambda_\tau\| < M\}}$, we get
$$\mathbb{E}(v_{t+1}) \le \frac{\beta^2_{\lambda,t}}{\beta^2_{\lambda,t+1}}\, \rho\, \Big(\mathbb{E}(v_t) + 2\sqrt{\mathbb{E}(v_t)}\sqrt{K_2} + K_2\Big),$$
which is the same expression as (5.10) in Zhang et al. [2018]. So conclusions similar to those of Step 1 of Zhang et al. [2018] hold:
$$\sup_t \mathbb{E}\Big[\|\beta^{-1}_{\lambda,t}\lambda_{\perp,t}\|^2\, \mathbb{1}_{\{\sup_t\|\lambda_t\| \le M\}}\Big] < \infty \tag{24}$$
and
$$\lim_t \lambda_{\perp,t} = 0 \quad \text{a.s.} \tag{25}$$
We now show convergence of the consensus vector $\mathbf{1} \otimes \langle\lambda_t\rangle$. Based on (21), we have
$$\langle\lambda_{t+1}\rangle = \big\langle(C_t \otimes I)(\mathbf{1} \otimes \langle\lambda_t\rangle + \lambda_{\perp,t} + \beta_{\lambda,t}y_{t+1})\big\rangle = \langle\lambda_t\rangle + \langle\lambda_{\perp,t}\rangle + \beta_{\lambda,t}\big\langle(C_t \otimes I)(y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t})\big\rangle = \langle\lambda_t\rangle + \beta_{\lambda,t}\big(h(\lambda_t, s_t) + M_{t+1}\big),$$
where $h(\lambda_t, s_t) = \mathbb{E}_{a_t \sim \pi}\big[\langle y_{t+1}\rangle | \mathcal{F}_t\big]$ and $M_{t+1} = \big\langle(C_t \otimes I)(y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t})\big\rangle - \mathbb{E}_{a_t \sim \pi}\big[\langle y_{t+1}\rangle | \mathcal{F}_t\big]$. Since $\langle\delta_t\rangle = \bar{r}(s_t,a_t) - w(s_t,a_t)\langle\lambda_t\rangle$, we have
$$h(\lambda_t, s_t) = \mathbb{E}_{a_t \sim \pi}\big(\bar{r}(s_t,a_t)w(s_t,a_t)^\top | \mathcal{F}_t\big) - \mathbb{E}_{a_t \sim \pi}\big(w(s_t,a_t)\langle\lambda_t\rangle \cdot w(s_t,a_t)^\top | \mathcal{F}_{t,1}\big),$$
so $h$ is Lipschitz-continuous in its first argument. Moreover, since $\langle\lambda_{\perp,t}\rangle = 0$ and $\mathbf{1}^\top\mathbb{E}(C_t | \mathcal{F}_{t,1}) = \mathbf{1}^\top$ a.s.:
$$\begin{aligned}
\mathbb{E}_{a_t \sim \pi}\big[\langle(C_t \otimes I)(y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t})\rangle | \mathcal{F}_{t,1}\big] &= \mathbb{E}_{a_t \sim \pi}\Big[\tfrac{1}{N}(\mathbf{1}^\top \otimes I)(C_t \otimes I)(y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t}) \Big| \mathcal{F}_{t,1}\Big] \\
&= \tfrac{1}{N}(\mathbf{1}^\top \otimes I)\big(\mathbb{E}(C_t | \mathcal{F}_{t,1}) \otimes I\big)\, \mathbb{E}_{a_t \sim \pi}\big[y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t} | \mathcal{F}_{t,1}\big] \\
&= \tfrac{1}{N}\big(\mathbf{1}^\top\mathbb{E}(C_t | \mathcal{F}_{t,1}) \otimes I\big)\, \mathbb{E}_{a_t \sim \pi}\big[y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t} | \mathcal{F}_{t,1}\big] \\
&= \mathbb{E}_{a_t \sim \pi}\big[\langle y_{t+1}\rangle | \mathcal{F}_{t,1}\big] \quad \text{a.s.}
\end{aligned}$$
So $\{M_t\}$ is a martingale difference sequence. Additionally, we have
$$\mathbb{E}\big[\|M_{t+1}\|^2 | \mathcal{F}_{t,1}\big] \le 2 \cdot \mathbb{E}\big[\|y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t}\|^2_{G_t} | \mathcal{F}_{t,1}\big] + 2 \cdot \big\|\mathbb{E}\big[\langle y_{t+1}\rangle | \mathcal{F}_{t,1}\big]\big\|^2$$
with $G_t = N^{-2} \cdot C^\top_t\mathbf{1}\mathbf{1}^\top C_t \otimes I$, whose spectral norm is bounded since $C_t$ is stochastic. From (23) and (24), for any $M > 0$, over the set $\{\sup_t\|\lambda_t\| \le M\}$, there exist $K_3, K_4 < \infty$ such that
$$\mathbb{E}\big[\|y_{t+1} + \beta^{-1}_{\lambda,t}\lambda_{\perp,t}\|^2_{G_t} | \mathcal{F}_{t,1}\big]\, \mathbb{1}_{\{\sup_t\|\lambda_t\| \le M\}} \le K_3 \cdot \mathbb{E}\big[\|y_{t+1}\|^2 + \|\beta^{-1}_{\lambda,t}\lambda_{\perp,t}\|^2 | \mathcal{F}_{t,1}\big]\, \mathbb{1}_{\{\sup_t\|\lambda_t\| \le M\}} \le K_4.$$
Besides, since $r^i_{t+1}$ and $w$ are uniformly bounded, there exists $K_5 < \infty$ such that $\|\mathbb{E}[\langle y_{t+1}\rangle | \mathcal{F}_{t,1}]\|^2 \le K_5 \cdot (1 + \|\langle\lambda_t\rangle\|^2)$. Thus, for any $M > 0$, there exists some $K_6 < \infty$ such that, over the set $\{\sup_t\|\lambda_t\| \le M\}$,
$$\mathbb{E}\big[\|M_{t+1}\|^2 | \mathcal{F}_{t,1}\big] \le K_6 \cdot (1 + \|\langle\lambda_t\rangle\|^2).$$
Hence, for any $M > 0$, assumptions (a.1)-(a.5) of B.1 from Zhang et al. [2018] are verified on the set $\{\sup_t\|\lambda_t\| \le M\}$. Finally, we consider the ODE asymptotically followed by $\langle\lambda_t\rangle$:
$$\dot{\langle\lambda_t\rangle} = -B_{\pi,\theta} \cdot \langle\lambda_t\rangle + A_{\pi,\theta} \cdot d_\pi,$$
which has a single globally asymptotically stable equilibrium $\lambda^* \in \mathbb{R}^K$, since $B_{\pi,\theta}$ is positive definite: $\lambda^* = B^{-1}_{\pi,\theta} \cdot A_{\pi,\theta} \cdot d_\pi$. By Lemma 7, $\sup_t\|\langle\lambda_t\rangle\| < \infty$ a.s., so all conditions to apply Theorem B.2 of Zhang et al. [2018] hold a.s., which means that $\langle\lambda_t\rangle \to \lambda^*$ a.s.
As $\lambda_t = \mathbf{1} \otimes \langle\lambda_t\rangle + \lambda_{\perp,t}$ and $\lambda_{\perp,t} \to 0$ a.s., we have, for each $i \in \mathcal{N}$, a.s.,
$$\lambda^i_t \xrightarrow[t \to \infty]{} B^{-1}_{\pi,\theta} \cdot A_{\pi,\theta} \cdot d^\pi.$$

Proof of Theorem 7

Let $\mathcal{F}_{t,2} = \sigma(\theta_\tau,\ \tau \le t)$ be the $\sigma$-field generated by $\{\theta_\tau,\ \tau \le t\}$, and let
$$\zeta^i_{t,1} = \psi^i_t \cdot \xi^i_t - \mathbb{E}_{s_t \sim d^\pi}\big[\psi^i_t \cdot \xi^i_t | \mathcal{F}_{t,2}\big], \qquad \zeta^i_{t,2} = \mathbb{E}_{s_t \sim d^\pi}\big[\psi^i_t \cdot (\xi^i_t - \xi^i_{t,\theta_t}) | \mathcal{F}_{t,2}\big].$$
With local projection, the actor update (6) becomes
$$\theta^i_{t+1} = \Gamma^i\big[\theta^i_t + \beta_{\theta,t}\mathbb{E}_{s_t \sim d^\pi}\big[\psi^i_t \cdot \xi^i_{t,\theta_t} | \mathcal{F}_{t,2}\big] + \beta_{\theta,t}\zeta^i_{t,1} + \beta_{\theta,t}\zeta^i_{t,2}\big]. \tag{26}$$
So, with $h^i(\theta_t) = \mathbb{E}_{s_t \sim d^\pi}\big[\psi^i_t \cdot \xi^i_{t,\theta_t} | \mathcal{F}_{t,2}\big]$ and $h(\theta) = [h^1(\theta), \ldots, h^N(\theta)]$, we have
$$h^i(\theta_t) = \sum_{s_t \in \mathcal{S}} d^\pi(s_t) \cdot \psi^i_t \cdot \xi^i_{t,\theta_t}.$$
Given (10), $\theta \mapsto \omega_\theta$ is continuously differentiable and $\theta \mapsto \nabla_\theta \omega_\theta$ is bounded, so $\theta \mapsto \omega_\theta$ is Lipschitz-continuous. Thus $\theta \mapsto \xi^i_{t,\theta}$ is Lipschitz-continuous for each $s_t \in \mathcal{S}$. Our regularity assumptions ensure that $\theta \mapsto \psi^i_{t,\theta_t}$ is continuous for each $i \in \mathcal{N}$, $s_t \in \mathcal{S}$. Moreover, $\theta \mapsto d^\theta(s)$ is also Lipschitz-continuous for each $s \in \mathcal{S}$. Hence, $\theta \mapsto h(\theta)$ is Lipschitz-continuous in $\theta$ and the ODE (12) is well posed. This holds even when using compatible features.

By the faster convergence of the critic, we have $\lim_{t \to \infty} \|\xi^i_t - \xi^i_{t,\theta_t}\| = 0$.

Let $M^i_t = \sum_{\tau=0}^{t-1} \beta_{\theta,\tau}\zeta^i_{\tau,1}$. $M^i_t$ is a martingale sequence with respect to $\mathcal{F}_{t,2}$. Since $\{\omega_t\}_t$, $\{\nabla_a\phi_k(s,a)\}_{s,k}$, and $\{\nabla_\theta\mu_\theta(s)\}_s$ are bounded (Lemma 3, Assumption 2), the sequence $\{\zeta^i_{t,1}\}$ is bounded. Thus, by Assumption 5, $\sum_t \mathbb{E}\big[\|M^i_{t+1} - M^i_t\|^2 | \mathcal{F}_{t,2}\big] = \sum_t \|\beta_{\theta,t}\zeta^i_{t,1}\|^2 < \infty$ a.s. The martingale convergence theorem ensures that $\{M^i_t\}$ converges a.s. Thus, for any $\epsilon > 0$,
$$\lim_t P\Bigg(\sup_{n \ge t}\bigg\|\sum_{\tau=t}^{n} \beta_{\theta,\tau}\zeta^i_{\tau,1}\bigg\| \ge \epsilon\Bigg) = 0.$$
Hence, by the Kushner-Clark lemma (Kushner and Clark [1978], pp. 191-196), the update in (26) converges a.s. to the set of asymptotically stable equilibria of the ODE (12).
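To make the objects analysed in these proofs concrete, here is a minimal NumPy sketch of the two coupled updates: the fast consensus critic step of equation (21) and the slow projected actor step of equation (20). All names, shapes, step-size schedules, and the uniform consensus matrix are illustrative assumptions, not the paper's implementation; in the bandit experiment $\mu_{\theta^i}(s) = \theta^i$, so $\nabla_\theta\mu_\theta$ is the identity and $H$ reduces to $\nabla_a Q$.

```python
import numpy as np

N_AGENTS, K, M = 10, 8, 10                         # agents, critic features, action dim
rng = np.random.default_rng(1)

C = np.full((N_AGENTS, N_AGENTS), 1.0 / N_AGENTS)  # doubly stochastic consensus C_t
lam = np.zeros((N_AGENTS, K))                      # critic weights lambda^i_t
theta = rng.standard_normal((N_AGENTS, M))         # policy parameters theta^i_t

def critic_step(t, delta, w_feat):
    """Eq. (21): lambda_{t+1} = (C_t kron I)(lambda_t + beta_{lambda,t} y_{t+1}),
    with y^i_{t+1} = delta^i_t w(s_t, a_t)^T; the faster of the two step sizes."""
    global lam
    beta_lam = 0.1 / (1.0 + t) ** 0.65
    y = delta[:, None] * w_feat[None, :]           # shape (N_AGENTS, K)
    lam = C @ (lam + beta_lam * y)

def actor_step(t, grad_a_Q):
    """Eq. (20): theta_{t+1} = Gamma[theta_t + beta_{theta,t} H(theta_t, s_t, omega_t)].
    With mu_{theta^i} = theta^i, H reduces to grad_a Q^i; the projection Gamma
    is realised here as clipping to a box. Slower step size, so that
    beta_theta / beta_lambda -> 0 (two-time-scale condition)."""
    global theta
    beta_th = 0.01 / (1.0 + t) ** 0.85
    theta = np.clip(theta + beta_th * grad_a_Q, -10.0, 10.0)
```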
2020
Decentralized Deterministic Multi-Agent Reinforcement Learning
SP:cc282126b689c7311c3a28f0d173a004ed24382f
[ "The paper proposes a new training objective for fine-tuning pre-trained models: a weighted sum of the classical cross-entropy (CE) and a new supervised contrastive learning term (SCP). The latter uses the (negated) softmax over the embedding distances (i.e. dot products) between a training instance and all other instances in the batch with the same label. In contrast to the more traditional self-supervised contrastive learning (where positive pairs are obtained by applying transformations to the original data instance), there is no data augmentation; two examples with the same label constitute a positive pair." ]
State-of-the-art natural language understanding classification models follow two stages: pre-training a large language model on an auxiliary task, and then fine-tuning the model on a task-specific labeled dataset using cross-entropy loss. However, the cross-entropy loss has several shortcomings that can lead to sub-optimal generalization and instability. Driven by the intuition that good generalization requires capturing the similarity between examples in one class and contrasting them with examples in other classes, we propose a supervised contrastive learning (SCL) objective for the fine-tuning stage. Combined with cross-entropy, our proposed SCL loss obtains significant improvements over a strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in few-shot learning settings, without requiring specialized architecture, data augmentations, memory banks, or additional unsupervised data. Our proposed fine-tuning objective leads to models that are more robust to different levels of noise in the fine-tuning training data, and can generalize better to related tasks with limited labeled data.
[ { "affiliations": [], "name": "Beliz Gunel" }, { "affiliations": [], "name": "Jingfei Du" }, { "affiliations": [], "name": "Alexis Conneau" }, { "affiliations": [], "name": "Ves Stoyanov" } ]
[ { "authors": [ "Armen Aghajanyan", "Akshat Shrivastava", "Anchit Gupta", "Naman Goyal", "Luke Zettlemoyer", "Sonal Gupta" ], "title": "Better fine-tuning by reducing representational collapse", "venue": null, "year": 2008 }, { "authors": [ "Philip Bachman", "R. Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Alexei Baevski", "Henry Zhou", "Abdelrahman Mohamed", "Michael Auli" ], "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In EMNLP,", "year": 2015 }, { "authors": [ "Kaidi Cao", "Colin Wei", "Adrien Gaidon", "N. Aréchiga", "Tengyu Ma" ], "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey E. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey E. Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Alexis Conneau", "Alexei Baevski", "Ronan Collobert", "Abdelrahman Mohamed", "Michael Auli" ], "title": "Unsupervised cross-lingual representation learning for speech recognition", "venue": "arXiv preprint arXiv:2006.13979,", "year": 2020 }, { "authors": [ "E. Cubuk", "Barret Zoph", "Dandelion Mané", "V. Vasudevan", "Quoc V. Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "E.D. Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V. Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2020 }, { "authors": [ "J. Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "Jesse Dodge", "Gabriel Ilharco", "Roy Schwartz", "Ali Farhadi", "Hannaneh Hajishirzi", "Noah A. Smith" ], "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", "venue": null, "year": 2002 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier" ], "title": "Understanding back-translation at scale", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Gamaleldin F. Elsayed", "Dilip Krishnan", "Hossein Mobahi", "Kevin Regan", "Samy Bengio" ], "title": "Large margin deep networks for classification", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Hongchao Fang", "Pengtao Xie" ], "title": "Cert: Contrastive self-supervised learning for language understanding", "venue": "ArXiv, abs/2005.12766,", "year": 2020 }, { "authors": [ "M. Gutmann", "A. Hyvärinen" ], "title": "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics", "venue": "J. Mach. Learn. 
Res.,", "year": 2012 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "IEEE/CVF International Conference on Computer Vision Workshop (ICCVW),", "year": 2019 }, { "authors": [ "Kaiming He", "X. Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross B. Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Olivier J. Hénaff", "A. Srinivas", "J. Fauw", "Ali Razavi", "C. Doersch", "S. Eslami", "A. Oord" ], "title": "Data-efficient image recognition with contrastive predictive", "venue": "coding. ArXiv,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Thomas G. Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "K. Hermann", "Tomás Kociský", "Edward Grefenstette", "Lasse Espeholt", "W. Kay", "Mustafa Suleyman", "P. Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Geoffrey E. Hinton", "Oriol Vinyals", "J. Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NeurIPS Deep Learning and Representation Learning Workshop,", "year": 2015 }, { "authors": [ "R. Devon Hjelm", "A. Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Tuo Zhao" ], "title": "Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "Tushar Khot", "A. Sabharwal", "Peter Clark" ], "title": "Scitail: A textual entailment dataset from science question answering", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Byeongchang Kim", "Hyunwoo Kim", "Gunhee Kim" ], "title": "Abstractive summarization of reddit posts with multi-level memory networks", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "A. Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Hao Liu", "P. 
Abbeel" ], "title": "Hybrid discriminative-generative training via contrastive learning", "venue": "ArXiv, abs/2007.09070,", "year": 2020 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Meng Yang" ], "title": "Large-margin softmax loss for convolutional neural networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Y. Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "M. Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "I. Misra", "L.V.D. Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "A. Mnih", "K. Kavukcuoglu" ], "title": "Learning word embeddings efficiently with noise-contrastive estimation", "venue": "In NeurIPS,", "year": 2013 }, { "authors": [ "Marius Mosbach", "Maksym Andriushchenko", "Dietrich Klakow" ], "title": "On the stability of fine-tuning bert: Misconceptions", "venue": "explanations, and strong baselines. ArXiv,", "year": 2020 }, { "authors": [ "R. Müller", "Simon Kornblith", "Geoffrey E. Hinton" ], "title": "When does label smoothing help", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Courtney Napoles", "Matthew R. Gormley", "Benjamin Van Durme" ], "title": "Annotated gigaword", "venue": "In AKBC-WEKEX@NAACL-HLT,", "year": 2012 }, { "authors": [ "K. Nar", "O. Ocal", "S. Sastry", "K. Ramchandran" ], "title": "Cross-entropy loss and low-rank features have responsibility for adversarial", "venue": "examples. ArXiv,", "year": 2019 }, { "authors": [ "Andrew Y. Ng", "Michael I. Jordan" ], "title": "On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes", "venue": "NeurIPS,", "year": 2001 }, { "authors": [ "Yixin Nie", "Adina Williams", "Emily Dinan", "Mohit Bansal", "J. Weston", "Douwe Kiela" ], "title": "Adversarial nli: A new benchmark for natural language understanding", "venue": null, "year": 2020 }, { "authors": [ "A. Oord", "Y. Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive", "venue": "coding. ArXiv,", "year": 2018 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "A. Radford", "Jeffrey Wu", "R. Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Rajat Raina", "Yirong Shen", "Andrew Y. Ng", "Andrew McCallum" ], "title": "Classification with hybrid generative/discriminative models", "venue": "In NeurIPS,", "year": 2003 }, { "authors": [ "Olga Russakovsky", "J. Deng", "H. Su", "J. Krause", "S. Satheesh", "S. Ma", "Zhiheng Huang", "A. Karpathy", "A. Khosla", "M. Bernstein", "A. 
Berg", "Li Fei-Fei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Nikunj Saunshi", "Orestis Plevrakis", "Sanjeev Arora", "Mikhail Khodak", "Hrishikesh Khandeparkar" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Florian Schroff", "D. Kalenichenko", "J. Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Sainbayar Sukhbaatar", "Joan Bruna", "Manohar Paluri", "Lubomir D. Bourdev", "Rob Fergus" ], "title": "Training convolutional networks with noisy labels", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Christian Szegedy", "V. Vanhoucke", "S. Ioffe", "Jon Shlens", "Z. Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "S. Yu", "D. Lin" ], "title": "Unsupervised feature learning via non-parametric instance discrimination", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard H. Hovy", "Minh-Thang Luong", "Quoc V. Le" ], "title": "Unsupervised data augmentation for consistency training", "venue": "arXiv: Learning,", "year": 2019 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "I Zeki Yalniz", "Hervé Jégou", "Kan Chen", "Manohar Paluri", "Dhruv Mahajan" ], "title": "Billion-scale semisupervised learning for image classification", "venue": null, "year": 1905 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Hongyi Zhang", "M. Cissé", "Yann Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Tianyi Zhang", "Felix Wu", "Arzoo Katiyar", "Kilian Q. Weinberger", "Yoav Artzi" ], "title": "Revisiting few-sample bert fine-tuning", "venue": "ArXiv, abs/2006.05987,", "year": 2020 }, { "authors": [ "X. Zhang", "J. Zhao", "Y. LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Zhilu Zhang", "Mert R. 
Sabuncu" ], "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "venue": "In NeurIPS,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "State-of-the-art for most existing natural language processing (NLP) classification tasks is achieved by models that are first pre-trained on auxiliary language modeling tasks and then fine-tuned on the task of interest with cross-entropy loss (Radford et al., 2019; Howard & Ruder, 2018; Liu et al., 2019; Devlin et al., 2019). Although ubiquitous, the cross-entropy loss – the KL-divergence between one-hot vectors of labels and the distribution of model’s output logits – has several shortcomings. Cross entropy loss leads to poor generalization performance (Liu et al., 2016; Cao et al., 2019), and it lacks robustness to noisy labels (Zhang & Sabuncu, 2018; Sukhbaatar et al., 2015) or adversarial examples (Elsayed et al., 2018; Nar et al., 2019). Effective alternatives have been proposed to modify the reference label distributions through label smoothing (Szegedy et al., 2016; Müller et al., 2019), Mixup (Zhang et al., 2018), CutMix (Yun et al., 2019), knowledge distillation (Hinton et al., 2015) or self-training (Yalniz et al., 2019; Xie et al., 2020).\nFine-tuning using cross entropy loss in NLP also tends to be unstable across different runs (Zhang et al., 2020; Dodge et al., 2020), especially when supervised data is limited, a scenario in which pre-training is particularly helpful. To tackle the issue of unstable fine-tuning and poor generalization, recent works propose local smoothness-inducing regularizers (Jiang et al., 2020) and regularization methods inspired by the trust region theory (Aghajanyan et al., 2020) to prevent representation collapse. Empirical evidence suggests that fine-tuning for more iterations, reinitializing top few layers (Zhang et al., 2020), and using debiased Adam optimizer during fine-tuning (Mosbach et al., 2020) can make the fine-tuning stage more stable.\nInspired by the learning strategy that humans utilize when given a few examples, we seek to find the commonalities between the examples of each class and contrast them with examples from other classes. We hypothesize that a similarity-based loss will be able to hone in on the important dimensions of the multidimensional hidden representations hence lead to better few-shot learning results and be more stable while fine-tuning pre-trained language models. We propose a novel objective for fine-tuning that includes a supervised contrastive learning (SCL) term that pushes the examples from the same class close and the examples from different classes further apart. The SCL\n∗Work done during Facebook AI research internship, correspondence to bgunel@stanford.edu.\nterm is similar to the contrastive objectives used in self-supervised representation learning across image, speech, and video domains. (Sohn, 2016; Oord et al., 2018; Wu et al., 2018; Bachman et al., 2019; Hénaff et al., 2019; Baevski et al., 2020; Conneau et al., 2020; Tian et al., 2020; Hjelm et al., 2019; Han et al., 2019; He et al., 2020; Misra & Maaten, 2020; Chen et al., 2020a;b). Unlike these methods, however, we use a contrastive objective for supervised learning of the final task, instead of contrasting different augmented views of examples.\nIn few-shot learning settings (20, 100, 1000 labeled examples), the addition of the SCL term to the finetuning objective significantly improves the performance on several natural language understanding classification tasks from the popular GLUE benchmark (Wang et al., 2019) over the very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss only. 
Furthermore, pre-trained language models fine-tuned with our proposed objective are not only robust to noise in the fine-tuning training data, but can also exhibit improved generalization to related tasks with limited labeled task data. Our approach does not require any specialized network architectures (Bachman et al., 2019; Hénaff et al., 2019), memory banks (Wu et al., 2018; Tian et al., 2020; Misra & Maaten, 2020), data augmentation of any kind, or additional unsupervised data. To the best of our knowledge, our work is the first to successfully integrate a supervised contrastive learning objective for fine-tuning pre-trained language models. We empirically demonstrate that the new objective has desirable properties across several different settings. Our contributions in this work are listed in the following:

• We propose a novel objective for fine-tuning pre-trained language models that includes a supervised contrastive learning term, as described in Section 2.

• We obtain strong improvements in the few-shot learning settings (20, 100, 1000 labeled examples), as shown in Table 2, leading up to 10.7 points improvement on a subset of GLUE benchmark tasks (SST-2, QNLI, MNLI) for the 20 labeled example few-shot setting, over a very strong baseline – RoBERTa-Large fine-tuned with cross-entropy loss.

• We demonstrate that our proposed fine-tuning objective is more robust, in comparison to RoBERTa-Large fine-tuned with cross-entropy loss, across augmented noisy training datasets (used to fine-tune the models for the task of interest) with varying noise levels, as shown in Table 3 – leading up to 7 points improvement on a subset of GLUE benchmark tasks (SST-2, QNLI, MNLI) across augmented noisy training datasets. We use a back-translation model to construct the augmented noisy training datasets of varying noise levels (controlled by the temperature parameter), as described in detail in Section 4.2.

• We show that the task models fine-tuned with our proposed objective have improved generalizability to related tasks despite having limited availability of labeled task data (Table 7). This led to a 2.9 point improvement on Amazon-2 over the task model fine-tuned with cross-entropy loss only. Moreover, it considerably reduced the variance across few-shot training samples when transferred from the source SST-2 sentiment analysis task model." }, { "heading": "2 APPROACH", "text": "We propose a novel objective that includes a supervised contrastive learning term for fine-tuning pre-trained language models. The loss is meant to capture the similarities between examples of the same class and contrast them with the examples from other classes.

For a multi-class classification problem with $C$ classes, we work with a batch of training examples of size $N$, $\{x_i, y_i\}_{i=1,\ldots,N}$. $\Phi(\cdot) \in \mathbb{R}^d$ denotes an encoder that outputs the l2-normalized final encoder hidden layer before the softmax projection; $N_{y_i}$ is the total number of examples in the batch that have the same label as $y_i$; $\tau > 0$ is an adjustable scalar temperature parameter that controls the separation of classes; $y_{i,c}$ denotes the label and $\hat{y}_{i,c}$ the model output for the probability of the $i$th example belonging to class $c$; $\lambda$ is a scalar weighting hyperparameter that we tune for each downstream task and setting.
The overall loss is then given in the following:
$$\mathcal{L} = (1 - \lambda)\,\mathcal{L}_{CE} + \lambda\,\mathcal{L}_{SCL} \tag{1}$$
$$\mathcal{L}_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c} \cdot \log \hat{y}_{i,c} \tag{2}$$
$$\mathcal{L}_{SCL} = \sum_{i=1}^{N} -\frac{1}{N_{y_i}-1}\sum_{j=1}^{N} \mathbb{1}_{i \ne j}\, \mathbb{1}_{y_i = y_j}\, \log \frac{\exp\big(\Phi(x_i) \cdot \Phi(x_j)/\tau\big)}{\sum_{k=1}^{N} \mathbb{1}_{i \ne k}\, \exp\big(\Phi(x_i) \cdot \Phi(x_k)/\tau\big)} \tag{3}$$
The overall loss is a weighted average of CE and the proposed SCL loss, as given in equation (1). The canonical definition of the multi-class CE loss that we use is given in equation (2). The novel SCL loss is given in equation (3).

This loss can be applied using a variety of encoders $\Phi(\cdot) \in \mathbb{R}^d$ – for example a ResNet for a computer vision application or a pre-trained language model such as BERT for an NLP application. In this work, we focus on fine-tuning pre-trained language models for single-sentence and sentence-pair classification settings. For single-sentence classification, each example $x_i$ consists of a sequence of tokens prepended with the special [CLS] token, $x_i = [\text{[CLS]}, t_1, t_2, \ldots, t_L, \text{[EOS]}]$. The length of the sequence $L$ is constrained such that $L < L_{max}$. Similarly, for sentence-pair classification tasks, each example $x_i$ is a concatenation of two sequences of tokens $[t_1, t_2, \ldots, t_L]$ and $[s_1, s_2, \ldots, s_M]$ corresponding to the sentences, with special tokens delimiting them: $x_i = [\text{[CLS]}, t_1, t_2, \ldots, t_L, \text{[SEP]}, s_1, s_2, \ldots, s_M, \text{[EOS]}]$. The length of the concatenated sequences is constrained such that $L + M < L_{max}$. In both cases, $\Phi(x_i) \in \mathbb{R}^d$ uses the embedding of the [CLS] token as the representation for example $x_i$. These choices follow standard practices for fine-tuning pre-trained language models for classification (Devlin et al., 2019; Liu et al., 2019).

Empirical observations show that both l2 normalization of the encoded embedding representations and an adjustable scalar temperature parameter $\tau$ improve performance. Lower temperature increases the influence of examples that are harder to separate, effectively creating harder negatives. Using hard negatives has previously been shown to improve performance in the context of margin-based loss formulations such as triplet loss (Schroff et al., 2015). The empirical behavior of the adjustable temperature parameter is consistent with the observations of previous work related to supervised contrastive learning (Chen et al., 2020a; Khosla et al., 2020).

Relationship to Self-Supervised Contrastive Learning. Self-supervised contrastive learning has shown success in learning powerful representations, particularly in the computer vision domain (Chen et al., 2020a; He et al., 2020; Tian et al., 2020; Mnih & Kavukcuoglu, 2013; Gutmann & Hyvärinen, 2012; Kolesnikov et al., 2019). Self-supervised learning methods do not require any labeled data; instead, they sample a mini-batch from unsupervised data and create positive and negative examples from these samples using strong data augmentation techniques such as AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) for computer vision. Positive examples are constructed by applying data augmentation to the same example (cropping, flipping, etc. for an image), and negative examples are simply all the other examples in the sampled mini-batch. Intuitively, self-supervised contrastive objectives are learning representations that are invariant to different views of positive pairs, while maximizing the distance between negative pairs.
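Before contrasting further with the self-supervised variant, here is a minimal PyTorch sketch of the supervised term in equation (3). This is an illustrative implementation, not the authors' released code; the tensor names and the guard for classes with no positive pair in the batch are assumptions.

```python
import torch
import torch.nn.functional as F

def scl_loss(embeddings: torch.Tensor, labels: torch.Tensor, tau: float = 0.3):
    """Sketch of the SCL term in equation (3).

    embeddings: (N, d) final-layer [CLS] vectors; l2-normalized below.
    labels:     (N,) integer class labels.
    """
    z = F.normalize(embeddings, dim=1)               # l2 normalization of Phi(x)
    sim = z @ z.t() / tau                            # Phi(x_i) . Phi(x_k) / tau
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # drop k = i from the sums
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    n_pos = pos_mask.sum(dim=1).clamp(min=1)         # N_{y_i} - 1 (guard zero)
    loss_i = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos
    return loss_i.sum()                              # outer sum over i, as in (3)
```

With this, the overall objective of equation (1) becomes `(1 - lam) * F.cross_entropy(logits, labels) + lam * scl_loss(cls_emb, labels, tau)`, matching the mean-over-batch CE of equation (2) and the sum-over-examples SCL of equation (3).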
The distance metric used is often the inner product or the Euclidean distance between vector representations of the examples.

For a batch of size $N$, the self-supervised contrastive loss is defined as
$$\mathcal{L}_{self} = \sum_{i=1}^{2N} -\log \frac{\exp\big(\Phi(x'_{2i-1}) \cdot \Phi(x'_{2i})/\tau\big)}{\sum_{k=1}^{2N} \mathbb{1}_{i \ne k}\, \exp\big(\Phi(x'_i) \cdot \Phi(x'_k)/\tau\big)} \tag{4}$$
where $\Phi(\cdot) \in \mathbb{R}^d$ denotes an encoder that outputs the l2-normalized final encoder hidden layer before the softmax projection, and $\tau > 0$ is a scalar temperature parameter. $A$ is defined as a data augmentation block that generates two randomly augmented examples, $x'_{2i}$ and $x'_{2i-1}$, from the original example $x_i$: $A(\{x_i, y_i\}_{i=1,\ldots,N}) = \{x'_i, y'_i\}_{i=1,\ldots,2N}$. As an example, $A$ can be RandAugment for a computer vision application, or it could be a back-translation model for an NLP application." }, { "heading": "3 RELATED WORK", "text": "Traditional Machine Learning and Theoretical Understanding. Several works have analyzed the shortcomings of the widely adopted cross-entropy loss, demonstrating that it leads to poor generalization performance due to poor margins (Liu et al., 2016; Cao et al., 2019), and lack of robustness to noisy labels (Zhang & Sabuncu, 2018; Sukhbaatar et al., 2015) or adversarial examples (Elsayed et al., 2018; Nar et al., 2019). On the other hand, there has been a body of work that has explored the performance difference for classifiers trained with discriminative losses (i.e., optimizing for p(y|x), where y is the label and x is the input) such as cross-entropy loss, and generative losses (i.e., optimizing for p(x|y)). Ng & Jordan (2001) show that classifiers trained with generative losses can outperform their counterparts trained with discriminative losses in the context of Logistic Regression and Naive Bayes. Raina et al. (2003) show that a hybrid discriminative and generative objective outperforms both solely discriminative and generative approaches. In the context of contrastive learning, Saunshi et al. (2019) propose a theoretical framework for analyzing contrastive learning algorithms by hypothesizing that semantically similar points are sampled from the same latent class, which allows showing formal guarantees on the quality of learned representations.

Contrastive Learning. There have been several recent investigations of the use of contrastive objectives for self-supervised, semi-supervised, and supervised learning methods, primarily in the computer vision domain. Chen et al. (2020a) propose a framework for contrastive learning of visual representations without specialized architectures or a memory bank, and show state-of-the-art results on ImageNet ILSVRC-2012 (Russakovsky et al., 2015) – outperforming previous methods for self-supervised, semi-supervised, and transfer learning. Similarly, Khosla et al. (2020) propose a supervised contrastive loss that outperforms cross-entropy loss and gets state-of-the-art results on ImageNet on both ResNet-50 and ResNet-200 (He et al., 2016) with AutoAugment (Cubuk et al., 2019) data augmentation. They also show increased robustness on the ImageNet-C dataset (Hendrycks & Dietterich, 2019), and demonstrate that supervised contrastive loss is less sensitive to different hyperparameter settings for optimizers or data augmentations compared to the cross-entropy loss.
Liu & Abbeel (2020) propose a hybrid discriminative-generative training of energy-based models where they approximate the generative term with a contrastive loss using large batch sizes and show improved classification accuracy of WideResNet-28-10 (Zagoruyko & Komodakis, 2016) on CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) datasets, outperforming state-of-the-art discriminative and generative classifiers. They also demonstrate improved performance for WideResNet-28-10 on robustness, out-of-distribution detection, and calibration, compared to other state-of-the-art generative and hybrid models. Finally, Fang & Xie (2020) propose pre-training language models using a self-supervised contrastive learning objective at the sentence level using back-translation as the augmentation method, followed by fine-tuning by predicting whether two augmented sentences originate from the same sentence – demonstrating improvements over fine-tuning BERT on a subset of GLUE benchmark tasks.\nStability and Robustness of Fine-tuning Pre-trained Language Models There has been recent works on analyzing the stability and robustness of fine-tuning pre-trained language models, since they have been shown to overfit to the labeled task data while fine-tuning and hence fail to generalize to unseen data when there is limited labeled data for the task (Aghajanyan et al., 2020). To improve the generalization performance, Jiang et al. (2020) propose a local smoothness-inducing regularizer to manage the complexity of the model and a Bregman proximal point optimization method, an instance of trust-region methods, to prevent aggressive updating of the model during fine-tuning. They show state-of-the-art performance on GLUE, SNLI (Bowman et al., 2015), SciTail (Khot et al., 2018), and ANLI (Nie et al., 2020) natural language understanding benchmarks. Similarly, Aghajanyan et al. (2020) propose a regularized fine-tuning procedure inspired by trust-region theory that replaces adversarial objectives with parametric noise sampled from normal or uniform distribution in order to prevent representation collapse during fine-tuning for better generalization performance, without hurting the performance. They show improved performance on a range of natural language understanding and generation tasks including DailyMail/CNN (Hermann et al., 2015), Gigaword (Napoles et al., 2012), Reddit TIFU (Kim et al., 2019), and the GLUE benchmark. There has also been some empirical analysis that suggests fine-tuning for more epochs, reinitializing top few layers (Zhang et al., 2020) instead of only the classification head, and using debiased Adam optimizer instead of BERTAdam (Devlin et al., 2019) during fine-tuning (Mosbach et al., 2020) can make the fine-tuning procedure more stable across different runs." }, { "heading": "4 EXPERIMENTAL SETUP", "text": "" }, { "heading": "4.1 DATASETS AND TRAINING DETAILS", "text": "We use datasets from the GLUE natural language understanding benchmark (Wang et al., 2019) for evaluation. We include both single sentence classification tasks and sentence-pair classification tasks to test whether our hypothesis is generally applicable across tasks. 
We summarize each dataset based on its main task, domain, number of training examples, and number of classes in Table 1.

In our few-shot learning experiments, we sample half of the original validation set of the GLUE benchmark and use it as our test set, and sample ∼500 examples for our validation set from the original GLUE validation set, both taking the label distribution of the original validation set into account. For each task, we want the validation set to be small enough to avoid easy overfitting on the validation set, and big enough to avoid high variance when early-stopping at various epochs for the few-shot learning experiments. For full-dataset experiments, such as the ones shown in Table 5, Table 6, Table 8, and Table 9, we sample a validation set from the original training set of the GLUE benchmark based on the size of the original validation set of GLUE, and report our test results on the original validation set of GLUE.

We run each experiment with 10 different seeds, and report the average test accuracy and standard deviation, along with p-values with respect to the baseline. We pick the best hyperparameter combination based on the average validation accuracy across 10 seeds. For few-shot learning experiments, such as the ones shown in Table 2, Table 3, and Table 10, we sample 10 different training set samples based on the total number of examples N specified, from the original training set of the GLUE benchmark, taking the label distribution of the original training set into account. We report the average and the standard deviation of the test accuracies of the top 3 models based on their validation accuracies out of the 10 random training set samples. The best hyperparameter combination is picked based on the average validation accuracy of the top 3 models. The reason why we focus on the top 3 models for this setting is that we would like to reduce the variance across training set samples.

We use the fairseq (Ott et al., 2019) library and the open-source RoBERTa-Large model for all of our experiments. During all the fine-tuning runs, we use the Adam optimizer with a learning rate of 1e-5, a batch size of 16 (unless specified otherwise), and a dropout rate of 0.1. For each experiment that includes the SCL term, we conduct a grid-based hyperparameter sweep over $\lambda \in \{0.1, 0.3, 0.5, 0.7, 0.9, 1.0\}$ and $\tau \in \{0.1, 0.3, 0.5, 0.7\}$. We observe that the models with the best test accuracies across all experimental settings overwhelmingly use the hyperparameter combination $\tau = 0.3$ and $\lambda = 0.9$." }, { "heading": "Table 1: Dataset, Task, Domain, #Train, #Classes (column headers; table content not recovered)", "text": "" }, { "heading": "4.2 CONSTRUCTING AUGMENTED NOISY TRAINING DATASETS", "text": "Machine learning researchers or practitioners often do not know how noisy their datasets are, as input examples might be corrupted or ground-truth labeling might not be perfect. Therefore, it is preferable to use robust training objectives that can get more information out of datasets of different noise levels, even when there is a limited amount of labeled data. We construct augmented noisy training datasets (used to fine-tune the pre-trained language models for the task of interest) of different noise levels using a back-translation model (Edunov et al., 2018), where we increase the temperature parameter to create more noisy examples.
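A minimal sketch of this noisy augmentation step, described in more detail in the next paragraph, is given below. It is illustrative rather than the authors' pipeline: the paper uses WMT'18 English-German models, whereas the checkpoints below are the WMT'19 ones publicly exposed via torch.hub, and the sampling/temperature keyword arguments follow fairseq's generation options.

```python
import torch

# Round-trip (back-)translation with random sampling; a higher temperature
# yields noisier paraphrases.
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de',
                       checkpoint_file='model1.pt', tokenizer='moses', bpe='fastbpe')
de2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.de-en',
                       checkpoint_file='model1.pt', tokenizer='moses', bpe='fastbpe')

def augment(sentence: str, temperature: float = 0.9, n: int = 3):
    """Return n noisy paraphrases (the paper uses a 1:3 supervised:augmented ratio)."""
    outs = []
    for _ in range(n):
        de = en2de.translate(sentence, sampling=True, temperature=temperature)
        outs.append(de2en.translate(de, sampling=True, temperature=temperature))
    return outs

print(augment('the movie was surprisingly moving.', temperature=0.7))
```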
Back-translation refers to the procedure of translating an example in language A into language B and then translating it back to language A, and it is a commonly used data augmentation procedure for NLP applications, as the new examples obtained through back-translation provide targeted inductive bias to the model while preserving the meaning of the original example. Specifically, we use WMT’18 English-German and German-English translation models, use random sampling to get more diverse examples, and employ and augmentation ratio of 1:3 for supervised examples:augmented examples. We observe that employing random sampling with a tunable temperature parameter is critical to get diverse paraphrases for the supervised examples, consistent with the previous work (Edunov et al., 2018; Xie et al., 2019), since commonly used beam search results in very regular sentences that do not provide diversity to the existing data distribution. We keep the validation and test sets same with the experiments shown in Table 2." }, { "heading": "5 ANALYSIS AND RESULTS", "text": "" }, { "heading": "5.1 GLUE BENCHMARK FEW-SHOT LEARNING RESULTS", "text": "We proposed adding the SCL term inspired by the learning strategy of humans when they are given few examples. In Table 2, we report our few-shot learning results on SST-2, QNLI, and MNLI from the GLUE benchmark with 20, 100, 1000 labeled training examples. Details of the experimental setup are explained in Section 4. We use a very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss. We observe that the SCL term improves performance over the baseline significantly across all datasets and data regimes, leading to 10.7 points improvement on QNLI, 3.4 points improvement on MNLI, and 2.2 points improvement on SST-2, where we have 20 labeled examples for fine-tuning. This shows that our proposed objective is effective both for binary single sentence classification such as sentiment analysis; and sentence pair classification tasks such as textual entailment and paraphrasing – when we are given only few labeled examples for the task. We see that as we increase the number of labeled examples, performance improvement over the baseline decreases, leading to 1.9 points improvement on MNLI for 100 examples and 0.6 points improvement on QNLI for 1000 examples. We also would like to acknowledge that improvements over the baseline when N=1000 on both SST-2 and MNLI are not statistically significant. In addition, we conduct an ablation study where we investigate the importance of l2 normalization and temperature scaling where we replace SCL loss with CE loss but keep the l2 normalization and temperature scaling, as shown in Table 10 in the Appendix under the method name CE+CE.\nIn Figure 2, we show tSNE plots of the learned representations of the CLS embeddings on SST-2 test set when RoBERTa-Large is fine-tuned with 20 labeled examples, comparing CE with and without the SCL term. We can clearly see that the SCL term enforces more compact clustering of examples with the same label; while the distribution of the embeddings learned with CE is close to random. We include a more detailed comparison for CE and CE+SCL showing learned representations of\nexamples as tSNE plots, where we have 20, 100 labeled examples and full dataset respectively for fine-tuning in Figure 3 in the Appendix." }, { "heading": "5.2 ROBUSTNESS ACROSS AUGMENTED NOISY TRAINING DATASETS", "text": "In Table 3, we report our results on augmented noisy training sets with varying levels of noise. 
{ "heading": "5 ANALYSIS AND RESULTS", "text": "" }, { "heading": "5.1 GLUE BENCHMARK FEW-SHOT LEARNING RESULTS", "text": "We proposed adding the SCL term inspired by the learning strategy of humans when they are given only a few examples. In Table 2, we report our few-shot learning results on SST-2, QNLI, and MNLI from the GLUE benchmark with 20, 100, and 1000 labeled training examples. Details of the experimental setup are explained in Section 4. We use a very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss. We observe that the SCL term improves performance over the baseline significantly across all datasets and data regimes, leading to a 10.7-point improvement on QNLI, a 3.4-point improvement on MNLI, and a 2.2-point improvement on SST-2 when we have 20 labeled examples for fine-tuning. This shows that our proposed objective is effective both for binary single-sentence classification tasks such as sentiment analysis and for sentence-pair classification tasks such as textual entailment and paraphrasing, when we are given only a few labeled examples for the task. We see that as we increase the number of labeled examples, the performance improvement over the baseline decreases, leading to a 1.9-point improvement on MNLI for 100 examples and a 0.6-point improvement on QNLI for 1000 examples. We also acknowledge that the improvements over the baseline when N=1000 on both SST-2 and MNLI are not statistically significant. In addition, we conduct an ablation study investigating the importance of l2 normalization and temperature scaling, in which we replace the SCL loss with a CE loss but keep the l2 normalization and temperature scaling, as shown in Table 10 in the Appendix under the method name CE+CE.\nIn Figure 2, we show t-SNE plots of the learned representations of the CLS embeddings on the SST-2 test set when RoBERTa-Large is fine-tuned with 20 labeled examples, comparing CE with and without the SCL term. We can clearly see that the SCL term enforces more compact clustering of examples with the same label, while the distribution of the embeddings learned with CE is close to random. We include a more detailed comparison for CE and CE+SCL showing the learned representations of examples as t-SNE plots, where we have 20 labeled examples, 100 labeled examples, and the full dataset, respectively, for fine-tuning, in Figure 3 in the Appendix." }, { "heading": "5.2 ROBUSTNESS ACROSS AUGMENTED NOISY TRAINING DATASETS", "text": "In Table 3, we report our results on augmented noisy training sets with varying levels of noise. We have 100 labeled examples for fine-tuning for each task, and we augment their training sets with noisy examples using a back-translation model, as described in detail in Section 4.2. Note that we use the back-translation model to simulate training datasets of varying noise levels, and not as a method to boost model performance. The experimental setup follows what is described in Section 4 for the few-shot learning experiments. T is the temperature of the back-translation model used to augment the training sets, and a higher temperature corresponds to more noise in the augmented training set.\nWe observe consistent improvements over the RoBERTa-Large baseline with our proposed objective across all datasets and noise levels, with average improvements across the augmented training sets of 0.4 points on SST-2, 2.5 points on QNLI, and 7 points on MNLI. The improvement is particularly significant for the inference tasks (QNLI, MNLI) when the noise levels are higher (higher temperature), leading to a 7.7-point improvement on MNLI when T=0.7 and a 4.2-point improvement on QNLI when T=0.9. We show some samples of the augmented examples used in this robustness experiment in Table 4. For T=0.3, examples mostly stay the same with minor changes in their phrasing, while for T=0.9, some grammatical mistakes and factual errors are introduced." }, { "heading": "Dataset Loss Original T=0.3 T=0.5 T=0.7 T=0.9 Average", "text": "" }, { "heading": "5.3 GLUE BENCHMARK FULL DATASET RESULTS", "text": "In Table 5, we report results using our proposed objective on six downstream tasks from the GLUE benchmark. We use a very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss, which is currently standard practice for state-of-the-art NLP classification models. Details of the experimental setup are explained in Section 4.\nWe observe that adding the SCL term to the objective improves performance over the RoBERTa-Large baseline, leading to a 3.1-point improvement on MRPC, a 3.5-point improvement on QNLI, and an average improvement of 1.2 points across all 6 datasets. We conduct these experiments to investigate the effect of the SCL term in high-data regimes, having observed that it is effective in few-shot learning settings. We acknowledge that only the MRPC and QNLI results are statistically significant, and we report the results on the other datasets as a finding for the sake of completeness.\nWe hypothesize that larger batch sizes lead to better performance, but we leave that for future work as it requires additional engineering effort. We show evidence for this hypothesis in the ablation studies in Table 6, where we conduct the full dataset experiments for CE+SCL with the same experimental setup described here for Table 5 on SST-2, CoLA, QNLI, and MNLI for batch sizes 16, 64, and 256 using RoBERTa-Base. We observe that as we increase the batch size, performance improves significantly across all datasets. Specifically, we observe improvements of 0.3 points on SST-2, 0.8 points on CoLA, 0.4 points on QNLI, and 1.3 points on MNLI when we increase the batch size from 16 to 256 for CE+SCL. We also investigate the effect of the SCL term on the overall training speed, which we measure with the average updates per second metric, shown in Table 6. For batch size 16, the batch size we use throughout the paper across all experimental settings, the effect of SCL is negligible, decreasing average updates per second from 15.9 to 15.08.
As we increase the batch size, the effect of SCL on training speed becomes more significant, decreasing average updates per second from 2.46 to 1.54 for batch size 256. In addition, we conduct an ablation study investigating the importance of l2 normalization and temperature scaling, in which we replace the SCL loss with a CE loss but keep the normalization and scaling (denoted as CE+CE), both for the full dataset results in Table 8 and for the batch size ablation in Table 9 in the Appendix." }, { "heading": "Model Loss SST-2 CoLA MRPC RTE QNLI MNLI Avg", "text": "" }, { "heading": "Model Loss Bsz SST-2 CoLA QNLI MNLI Avg ups/sec", "text": "" }, { "heading": "5.4 GENERALIZATION ABILITY OF TASK MODELS", "text": "In this experiment, we first fine-tune RoBERTa-Large on SST-2 using its full training set and obtain a task model with and without the SCL term. Then, we transfer this task model to two related single-sentence binary sentiment classification tasks in the reviews domain, Amazon-2 and Yelp-2 (Zhang et al., 2015). For both, we sample 20 labeled examples for each class, and follow the few-shot learning experimental setup described in Section 4. In Table 7, we demonstrate that using the SCL term for both the source (SST-2) and target domains (Amazon-2, Yelp-2) leads to better generalization, with a 2.9-point improvement on Amazon-2 and a 0.4-point improvement on Yelp-2, along with a significant reduction in variance across training set samples." }, { "heading": "6 CONCLUSION", "text": "We propose a supervised contrastive learning objective for fine-tuning pre-trained language models and demonstrate significant improvements over a strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in few-shot learning settings. We also show that our proposed objective leads to models that are more robust to different levels of noise in the training data and can generalize better to related tasks with limited labeled task data. Currently, data augmentation methods in NLP and their effects on downstream tasks are neither as effective nor as well understood as their counterparts in the computer vision domain. In future work, we plan to study principled and automated data augmentation techniques for NLP that would allow extending our supervised contrastive learning objective to both semi-supervised and self-supervised learning settings." }, { "heading": "Model Loss SST-2 CoLA MRPC RTE QNLI MNLI Avg", "text": "" }, { "heading": "A APPENDIX", "text": "" }, { "heading": "Model Loss Bsz SST-2 CoLA QNLI MNLI Avg ups/sec", "text": "" } ]
2021
SUPERVISED CONTRASTIVE LEARNING FOR PRE-TRAINED LANGUAGE MODEL FINE-TUNING
SP:7eb0d8278168465270570233e4af64ebb3f2f154
[ "Paper proposes to attack the challenging problem of RL with sparse feedback by leveraging a few demonstrations and learnable reward redistribution. The redistributed reward is computed by aligning the key events (a set of clustered symbols) to the demonstrations via PSSM-based seq matching. Experiments on two artificial tasks and a Minecraft task demonstrate that the presented method performs advantageously than two baselines (DQfD and BC+Q learning).", "The paper considers the challenge of improving sample efficiency of RUDDER-style algorithms in sparse MDPs. Building on prior work by Arjona-Medina et al [1], the authors incorporate demonstrations of optimal trajectories from an expert in the training pipeline. Additionally, to improve the sample efficiency and stability, the authors replace LSTM-model of RUDDER with an alignment based profile model. The approach is evaluated on two synthetic grid-world based environments and a MineCraft based environment. On both benchmarks the proposed algorithm works better than baseline RUDDER. ", "The paper uses Sequence Alignment technique to redistribute rewards, to a similar effect as with LSTM in RUDDER. The hierarchical agent is trained with behavioral cloning and fine-tuned with RL (tabular in Rooms environment / PPO in MineRL). Tasks are automatically divided into subtasks and specialized agents used for the subtasks. The main contributions seem to be: a) introduction of the Sequence Alignment technique which works well with few expert demonstrations; b) experimental demonstration in Rooms / MineRL." ]
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small and RUDDER’s LSTM as a deep learning model does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER’s safe exploration and lessons replay buffer. Second, we substitute RUDDER’s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently.
[ { "affiliations": [], "name": "STRATIONS BY" }, { "affiliations": [], "name": "REWARD REDISTRIBUTION" } ]
[ { "authors": [ "P. Abbeel", "A.Y. Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the Twenty-First International Conference on Machine Learning, pp", "year": 2004 }, { "authors": [ "S.F. Altschul", "W. Gish", "W. Miller", "E.W. Myers", "D.J. Lipman" ], "title": "Basic local alignment search tool", "venue": "J. Molec. Biol.,", "year": 1990 }, { "authors": [ "S.F. Altschul", "T.L. Madden", "A.A. Schäffer", "J. Zhang", "Z. Zhang", "W. Miller", "D.J. Lipman" ], "title": "Gapped BLAST and PSI-BLAST: a new generation of protein database search programs", "venue": "Nucleic Acids Research,", "year": 1997 }, { "authors": [ "J.A. Arjona-Medina", "M. Gillhofer", "M. Widrich", "T. Unterthiner", "S. Hochreiter" ], "title": "RUDDER: return decomposition for delayed rewards", "venue": null, "year": 2018 }, { "authors": [ "J.A. Arjona-Medina", "M. Gillhofer", "M. Widrich", "T. Unterthiner", "J. Brandstetter", "S. Hochreiter" ], "title": "RUDDER: return decomposition for delayed rewards", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "P.L. Bacon", "J. Harb", "D. Precup" ], "title": "The option-critic architecture", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "A. Bairoch", "P. Bucher" ], "title": "PROSITE: recent developments", "venue": "Nucleic acids research,", "year": 1994 }, { "authors": [ "A. Barreto", "W. Dabney", "R. Munos", "J. Hunt", "T. Schaul", "H.P. vanHasselt", "D. Silver" ], "title": "Successor features for transfer in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Y. Bengio", "Y. Lecun" ], "title": "Large-scale kernel machines, chapter Scaling learning algorithms towards AI, pp. 321–359", "venue": null, "year": 2007 }, { "authors": [ "A. Billard", "S. Calinon", "R. Dillmann", "S. Schaal" ], "title": "Robot programming by demonstration", "venue": "Springer Handbook of Robotics,", "year": 2008 }, { "authors": [ "T. Brys", "A. Harutyunyan", "H.B. Suay", "S. Chernova", "M.E. Taylor", "A. Nowé" ], "title": "Reinforcement learning from demonstration through shaping", "venue": "In Proc. of the 24th Int. Joint Conf. on Artificial Intelligence,", "year": 2015 }, { "authors": [ "K. Chao", "L. Zhang" ], "title": "Sequence comparison: theory and methods", "venue": null, "year": 2009 }, { "authors": [ "P.J.A. Cock", "T. Antao", "J.T. Chang", "B.A. Chapman", "C.J. Cox", "A. Dalke", "I. Friedberg", "T. Hamelryck", "F. Kauff", "B. Wilczynski", "M.J.L. de Hoon" ], "title": "Biopython: freely available Python tools for computational molecular biology and bioinformatics", "venue": "Bioinformatics, 25(11):1422–1423,", "year": 2009 }, { "authors": [ "G. Comanici", "D. Precup" ], "title": "Optimal policy switching algorithms for reinforcement learning", "venue": "In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS),", "year": 2010 }, { "authors": [ "F. Corpet" ], "title": "Multiple sequence alignment with hierarchical clustering", "venue": "Nucleic Acids Research,", "year": 1988 }, { "authors": [ "C. Daniel", "H. vanHoof", "J. Peters", "G. Neumann" ], "title": "Probabilistic inference for determining options in reinforcement learning", "venue": "Machine Learning,", "year": 2016 }, { "authors": [ "P. 
Dayan" ], "title": "Improving generalization for temporal difference learning: The successor representation", "venue": "Neural Computation,", "year": 1993 }, { "authors": [ "T.G. Dietterich" ], "title": "Hierarchical reinforcement learning with the MAXQ value function decomposition", "venue": "Journal of Artificial Intelligence Research,", "year": 2000 }, { "authors": [ "Y. Duan", "M. Andrychowicz", "B.C. Stadie", "J. Ho", "J. Schneider", "I. Sutskever", "P. Abbeel", "W. Zaremba" ], "title": "One-shot imitation learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "A. Ecoffet", "J. Huizinga", "J. Lehman", "K.O. Stanley", "J. Clune" ], "title": "Go-Explore: A new approach for hard-exploration problems", "venue": "arXiv,", "year": 2019 }, { "authors": [ "R.C. Edgar" ], "title": "MUSCLE: multiple sequence alignment with high accuracy and high throughput", "venue": "Nucleic Acids Research,", "year": 2004 }, { "authors": [ "L. Espeholt", "H. Soyer", "R. Munos", "K. Simonyan", "V. Mnih", "T. Ward", "Y. Doron", "V. Firoiu", "T. Harley", "I. Dunning", "S. Legg", "K. Kavukcuoglu" ], "title": "IMPALA: Scalable distributed Deep-RL with importance weighted actor-learner architectures", "venue": "In J. Dy and A. Krause (eds.), Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "B. Eysenbach", "R. Salakhutdinov", "S. Levine" ], "title": "Search on the replay buffer: Bridging planning and reinforcement learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "J. Felsenstein" ], "title": "Cases in which parsimony or compatibility methods will be positively misleading", "venue": "Systematic Zoology,", "year": 1978 }, { "authors": [ "C. Finn", "T. Yu", "T. Zhang", "P. Abbeel", "S. Levine" ], "title": "One-shot visual imitation learning via meta-learning", "venue": "In 1st Annual Conference on Robot Learning (CoRL),", "year": 2017 }, { "authors": [ "K. Frans", "J. Ho", "X. Chen", "P. Abbeel", "J. Schulman" ], "title": "Meta learning shared hierarchies", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "B.J. Frey", "D. Dueck" ], "title": "Clustering by passing messages between data points", "venue": "Science, 315(5814):", "year": 2007 }, { "authors": [ "O. Gotoh" ], "title": "An improved algorithm for matching biological sequences", "venue": "Journal of Molecular Biology,", "year": 1982 }, { "authors": [ "W.H. Guss", "C. Codel", "K. Hofmann", "B. Houghton", "N. Kuno", "S. Milani", "S.P. Mohanty", "D.P. Liebana", "R. Salakhutdinov", "N. Topin", "M. Veloso", "P. Wang" ], "title": "The MineRL competition on sample efficient reinforcement learning using human priors", "venue": "arXiv,", "year": 2019 }, { "authors": [ "W.H. Guss", "B. Houghton", "N. Topin", "P. Wang", "C. Codel", "M. Veloso", "R. Salakhutdinov" ], "title": "MineRL: A large-scale dataset of Minecraft demonstrations", "venue": "In Proc. of the 28th Int. Joint Conf. on Artificial Intelligence (IJCAI’19),", "year": 2019 }, { "authors": [ "T. Haarnoja", "A. Zhou", "P. Abbeel", "S. Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "C.R. Harris", "K.J. Millman", "S.J. van der Walt", "R. Gommers", "P. Virtanen", "D. Cournapeau", "E. Wieser", "J. Taylor", "S. Berg", "N.J. Smith", "R. Kern", "M. 
Picus", "S. Hoyer", "M.H. van Kerkwijk", "M. Brett", "A. Haldane", "J.F. del Río", "M. Wiebe", "P. Peterson", "P. Gérard-Marchant", "K. Sheppard", "T. Reddy", "W. Weckesser", "H. Abbasi", "C. Gohlke", "T.E. Oliphant" ], "title": "Array programming with NumPy", "venue": "doi: 10.1038/s41586-020-2649-2", "year": 2020 }, { "authors": [ "S. Henikoff", "J.G. Henikoff" ], "title": "Amino acid substitution matrices from protein blocks", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 1992 }, { "authors": [ "M. Hessel", "J. Modayil", "H. van Hasselt", "T. Schaul", "G. Ostrovski", "W. Dabney", "D. Horgan", "B. Piot", "M.G. Azar", "D. Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "T. Hester", "M. Vecerík", "O. Pietquin", "M. Lanctot", "T. Schaul", "B. Piot", "A. Sendonaris", "G. Dulac-Arnold", "I. Osband", "J. Agapiou", "J.Z. Leibo", "A. Gruslys" ], "title": "Learning from demonstrations for real world reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "T. Hester", "M. Vecerík", "O. Pietquin", "M. Lanctot", "T. Schaul", "B. Piot", "D. Horgan", "J. Quan", "A. Sendonaris", "I. Osband", "G. Dulac-Arnold", "J. Agapiou", "J.Z. Leibo", "A. Gruslys" ], "title": "Deep q-learning from demonstrations", "venue": "In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). Association for the Advancement of Artificial Intelligence,", "year": 2018 }, { "authors": [ "D.S. Hirschberg" ], "title": "A linear space algorithm for computing maximal common subsequences", "venue": "Communications of the ACM,", "year": 1975 }, { "authors": [ "J. Ho", "S. Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "S. Hochreiter" ], "title": "Untersuchungen zu dynamischen neuronalen Netzen", "venue": "Master’s thesis, Technische Universität München,", "year": 1991 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long short-term memory", "venue": "Technical Report FKI-207-95, Fakultät für Informatik, Technische Universität München,", "year": 1995 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "LSTM can solve hard long time lag problems", "venue": "Advances in Neural Information Processing Systems", "year": 1997 }, { "authors": [ "M. Holzleitner", "L. Gruber", "J.A. Arjona-Medina", "J. Brandstetter", "S. Hochreiter" ], "title": "Convergence proof for actor-critic methods applied to PPO and RUDDER", "venue": null, "year": 2020 }, { "authors": [ "I.A. Hosu", "T. Rebedea" ], "title": "Playing Atari games with deep reinforcement learning and human checkpoint", "venue": "replay. ArXiv,", "year": 2016 }, { "authors": [ "J.D. Hunter" ], "title": "Matplotlib: A 2d graphics environment", "venue": "Computing in Science & Engineering,", "year": 2007 }, { "authors": [ "M. Jing", "X. Ma", "W. Huang", "F. Sun", "C. Yang", "B. Fang", "H. Liu" ], "title": "Reinforcement learning from imperfect demonstrations under soft expert", "venue": "guidance. ArXiv,", "year": 2019 }, { "authors": [ "K. Judah", "A.P. Fern", "T.G. Dietterich", "P. Adepalli" ], "title": "Active imitation learning: Formal and practical reductions to i.i.d. learning", "venue": "J. Mach. Learn. Res.,", "year": 2014 }, { "authors": [ "S. 
Kakade", "J. Langford" ], "title": "Approximately optimal approximate reinforcement learning", "venue": "In 19th International Conference on Machine Learning (ICML),", "year": 2002 }, { "authors": [ "A. Kanervisto", "J. Karttunen", "V. Hautamäki" ], "title": "Playing Minecraft with behavioural cloning", "venue": "Proceedings of Machine Learning Research (PMLR),", "year": 2020 }, { "authors": [ "S. Karlin", "S.F. Altschul" ], "title": "Methods for assessing the statistical significance of molecular sequence features by using general scoring schemes", "venue": "Proceedings of the National Academy of Sciences of the United States of America,", "year": 1990 }, { "authors": [ "S. Karlin", "A. Dembo", "T. Kawabata" ], "title": "Statistical composition of high-scoring segments from molecular sequences", "venue": "Ann. Statist.,", "year": 1990 }, { "authors": [ "R. Khardon" ], "title": "Learning to take actions", "venue": "Machine Learning,", "year": 1999 }, { "authors": [ "B. Kim", "A. Farahmand", "J. Pineau", "D. Precup" ], "title": "Learning from limited demonstrations", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "T.D. Kulkarni", "K. Narasimhan", "A. Saeedi", "J.J. Tenenbaum" ], "title": "Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "A.S. Lakshminarayanan", "S. Ozair", "Y. Bengio" ], "title": "Reinforcement learning with few expert demonstrations", "venue": "In NIPS Workshop on Deep Learning for Action and Interaction,", "year": 2016 }, { "authors": [ "K.Y. Levy", "N. Shimkin" ], "title": "Unified inter and intra options learning using policy gradient methods", "venue": "Recent Advances in Reinforcement Learning,", "year": 2012 }, { "authors": [ "M. Lopes", "F.S. Melo", "L. Montesano" ], "title": "Active learning for reward estimation in inverse reinforcement learning", "venue": "In European Conference on Machine Learning and Knowledge Discovery in Databases (ECML,PKDD),", "year": 2009 }, { "authors": [ "J. Luoma", "S. Ruutu", "A.W. King", "H. Tikkanen" ], "title": "Time delays, competitive interdependence, and firm performance", "venue": "Strategic Management Journal,", "year": 2017 }, { "authors": [ "M. Machado", "C. Rosenbaum", "X. Guo", "M. Liu", "G. Tesauro", "M. Campbell" ], "title": "Eigenoption discovery through the deep successor representation", "venue": "arXiv,", "year": 2017 }, { "authors": [ "D.J. Mankowitz", "T.A. Mann", "S. Mannor" ], "title": "Adaptive skills adaptive partitions (ASAP)", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "J.A. Mccammon", "P.G. Wolynes" ], "title": "Highly specific protein sequence motifs for genome analysis", "venue": "Computational Biomolecular Science,", "year": 1998 }, { "authors": [ "D. Michie", "R. Camacho" ], "title": "Building Symbolic Representations of Intuitive Real-Time Skills from Performance Data, pp. 385–418", "venue": null, "year": 1994 }, { "authors": [ "S. Milani", "N. Topin", "B. Houghton", "W.H. Guss", "S.P. Mohanty", "K. Nakata", "O. Vinyals", "N.S. Kuno" ], "title": "Retrospective analysis of the 2019 MineRL competition on sample efficient reinforcement learning", "venue": null, "year": 2020 }, { "authors": [ "B. Morgenstern" ], "title": "DIALIGN: Multiple DNA and protein sequence alignment at BiBiServ", "venue": "Nucleic Acids Research,", "year": 2004 }, { "authors": [ "A. 
Nair", "B. McGrew", "M. Andrychowicz", "W. Zaremba", "P. Abbeel" ], "title": "Overcoming exploration in reinforcement learning with demonstrations", "venue": "IEEE International Conference on Robotics and Automation,", "year": 2018 }, { "authors": [ "S.B. Needleman", "C.D. Wunsch" ], "title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins", "venue": "Journal of Molecular Biology,", "year": 1970 }, { "authors": [ "A.Y. Ng", "S.J. Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Proceedings of the Seventeenth International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "C. Notredame", "D.G. Higgins", "J. Heringa" ], "title": "T-coffee: a novel method for fast and accurate multiple sequence alignment", "venue": "Journal of Molecular Biology,", "year": 2000 }, { "authors": [ "D.A. Pomerleau" ], "title": "Efficient training of artificial neural networks for autonomous navigation", "venue": "Neural Comput.,", "year": 1991 }, { "authors": [ "H. Rahmandad", "N. Repenning", "J. Sterman" ], "title": "Effects of feedback delay on learning", "venue": "System Dynamics Review,", "year": 2009 }, { "authors": [ "R. Ramesh", "M. Tomar", "B. Ravindran" ], "title": "Successor options: An option discovery framework for reinforcement learning", "venue": "In Proc. of the 28th Int. Joint Conf. on Artificial Intelligence", "year": 2019 }, { "authors": [ "S. Reddy", "A.D. Dragan", "S. Levine" ], "title": "SQIL: imitation learning via regularized behavioral cloning", "venue": "In Eighth International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "D. Rolnick", "A. Ahuja", "J. Schwarz", "T.P. Lillicrap", "G. Wayne" ], "title": "Experience replay for continual learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "S. Ross", "D. Bagnell" ], "title": "Efficient reductions for imitation learning", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "S. Ross", "G. Gordon", "D. Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "S. Schaal" ], "title": "Learning from demonstration", "venue": "Proceedings of the 9th International Conference on Neural Information Processing Systems", "year": 1996 }, { "authors": [ "C. Scheller", "Y. Schraner", "M. Vogel" ], "title": "Sample efficient reinforcement learning through learning from demonstrations in Minecraft", "venue": "Proceedings of Machine Learning Research (PMLR),", "year": 2020 }, { "authors": [ "J. Schulman", "F. Wolski", "P. Dhariwal", "A. Radford", "O. Klimov" ], "title": "Proximal policy optimization algorithms", "venue": null, "year": 2018 }, { "authors": [ "F. Sievers", "A. Wilm", "D. Dineen", "T.J. Gibson", "K. Karplus", "W. Li", "R. Lopez", "H. McWilliam", "M. Remmert", "J. Soding", "J.D. Thompson", "D.G. Higgins" ], "title": "Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega", "venue": "Molecular Systems Biology,", "year": 2014 }, { "authors": [ "Jr. A.R. Silva", "V. Grassi", "D.F. 
Wolf" ], "title": "Continuous deep maximum entropy inverse reinforcement learning using online POMDP", "venue": "In 19th International Conference on Advanced Robotics (ICAR),", "year": 2019 }, { "authors": [ "D. Silver", "K. Ciosek" ], "title": "Compositional planning using optimal option models", "venue": "In Proceedings of the 29th International Conference on Machine Learning (ICML),", "year": 2012 }, { "authors": [ "D. Silver", "A. Huang", "C.J. Maddison", "A. Guez", "L. Sifre", "G. van den Driessche", "J. Schrittwieser", "I. Antonoglou", "V. Panneershelvam", "M. Lanctot", "S. Dieleman", "D. Grewe", "J. Nham", "N. Kalchbrenner", "I. Sutskever", "T.P. Lillicrap", "M. Leach", "K. Kavukcuoglu", "T. Graepel", "D. Hassabis" ], "title": "Mastering the game of Go with deep neural networks and tree search", "venue": null, "year": 2016 }, { "authors": [ "D. Silver", "T. Hubert", "J. Schrittwieser", "I. Antonoglou", "M. Lai", "A. Guez", "M. Lanctot", "L. Sifre", "D. Kumaran", "T. Graepel", "T.P. Lillicrap", "K. Simonyan", "D. Hassabis" ], "title": "Mastering Chess and Shogi by self-play with a general reinforcement learning algorithm", "venue": null, "year": 2017 }, { "authors": [ "A. Skrynnik", "A. Staroverov", "E. Aitygulov", "K. Aksenov", "V. Davydov", "A.I. Panov" ], "title": "Hierarchical deep q-network with forgetting from imperfect demonstrations in Minecraft", "venue": "arXiv,", "year": 2019 }, { "authors": [ "A. Skrynnik", "A. Staroverov", "E. Aitygulov", "K. Aksenov", "V. Davydov", "A.I. Panov" ], "title": "Forgetful experience replay in hierarchical reinforcement learning from demonstrations", "venue": null, "year": 2020 }, { "authors": [ "T.F. Smith", "M.S. Waterman" ], "title": "Identification of common molecular subsequences", "venue": "Journal of Molecular Biology,", "year": 1981 }, { "authors": [ "I. Solaiman", "M. Brundage", "J. Clark", "A. Askell", "A. Herbert-Voss", "J. Wu", "A. Radford", "J. Wang" ], "title": "Release strategies and the social impacts of language models", "venue": "arXiv,", "year": 2019 }, { "authors": [ "M. Stolle", "D. Precup" ], "title": "Learning options in reinforcement learning", "venue": "In Lecture Notes in Computer Science,", "year": 2002 }, { "authors": [ "G.D. Stormo", "T.D. Schneider", "L. Gold", "A. Ehrenfeucht" ], "title": "Use of the ’Perceptron’ algorithm to distinguish translational initiation sites in E. coli", "venue": "Nucleic Acids Research,", "year": 1982 }, { "authors": [ "H.B. Suay", "T. Brys", "M.E. Taylor", "S. Chernova" ], "title": "Learning from demonstration for shaping through inverse reinforcement learning", "venue": "In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems,", "year": 2016 }, { "authors": [ "K. Subramanian", "C.L. Isbell", "A.L. Thomaz" ], "title": "Exploration from demonstration for interactive reinforcement learning", "venue": "In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems,", "year": 2016 }, { "authors": [ "W. Sun", "A. Venkatraman", "G.J. Gordon", "B. Boots", "J.A. Bagnell" ], "title": "Deeply AggreVaTeD: Differentiable imitation learning for sequential prediction", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "W. Sun", "J.A. Bagnell", "B. Boots" ], "title": "Truncated horizon policy search: Combining reinforcement learning & imitation learning", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "R.S. Sutton", "A.G. 
Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2018 }, { "authors": [ "R.S. Sutton", "D. Precup", "S.P. Singh" ], "title": "Between MDPs and Semi-MDPs: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial Intelligence,", "year": 1999 }, { "authors": [ "U. Syed", "R.E. Schapire" ], "title": "A game-theoretic approach to apprenticeship learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2007 }, { "authors": [ "M.E. Taylor", "H.B. Suay", "S. Chernova" ], "title": "Integrating reinforcement learning with human demonstrations of varying ability", "venue": "In 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Taipei,", "year": 2011 }, { "authors": [ "J.D. Thompson", "D.G. Higgins", "T.J. Gibson" ], "title": "CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice", "venue": "Nucleic Acids Research,", "year": 1994 }, { "authors": [ "A.S. Vezhnevets", "S. Osindero", "T. Schaul", "N. Heess", "M. Jaderberg", "D. Silver", "K. Kavukcuoglu" ], "title": "FeUdal networks for hierarchical reinforcement", "venue": "learning. arXiv,", "year": 2017 }, { "authors": [ "L. Wang", "T. Jiang" ], "title": "On the Complexity of Multiple Sequence Alignment", "venue": "Journal of Computational Biology,", "year": 1994 }, { "authors": [ "C.J.C.H. Watkins" ], "title": "Learning from Delayed Rewards", "venue": "PhD thesis, King’s College,", "year": 1989 }, { "authors": [ "X. Zhang", "H. Ma" ], "title": "Pretraining deep actor-critic reinforcement learning algorithms with expert", "venue": "demonstrations. ArXiv,", "year": 2018 }, { "authors": [ "A. Zhou", "E. Jang", "D. Kappler", "A. Herzog", "M. Khansari", "P. Wohlhart", "Y. Bai", "M. Kalakrishnan", "S. Levine", "C. Finn" ], "title": "Watch, try, learn: Meta-learning from demonstrations and rewards", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "X. Zuo" ], "title": "AAAI’08, pp. 1433–1438", "venue": null, "year": 2008 } ]
[ { "heading": null, "text": "Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small and RUDDER’s LSTM as a deep learning model does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER’s safe exploration and lessons replay buffer. Second, we substitute RUDDER’s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently." }, { "heading": "1 INTRODUCTION", "text": "Reinforcement learning algorithms struggle with learning complex tasks that have sparse and delayed rewards (Sutton & Barto, 2018; Rahmandad et al., 2009; Luoma et al., 2017). For delayed rewards, temporal difference (TD) suffers from vanishing information (Arjona-Medina et al., 2019). On the other hand Monte Carlo (MC) has high variance since it must average over all possible futures (ArjonaMedina et al., 2019). Monte-Carlo Tree Search (MCTS), used for Go and chess, can handle delayed and rare rewards since it has a perfect environment model (Silver et al., 2016; 2017). RUDDER (Arjona-Medina et al., 2019; 2018) has been shown to excel in model-free learning of policies when only sparse and delayed rewards are given. RUDDER requires episodes with high rewards to store them in its lessons replay buffer for learning a reward redistribution model like an LSTM network. However, for complex tasks, current exploration strategies find episodes with high rewards only after an incommensurate long time. Humans and animals obtain high reward episodes by teachers, role models, or prototypes. Along this line, we assume that episodes with high rewards are given as demonstrations. Since generating demonstrations is often tedious for humans and time-consuming for exploration strategies, typically, only a few demonstrations are available. However, RUDDER’s LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997a) as a deep learning method requires many examples for learning. Therefore, we introduce Align-RUDDER, which replaces RUDDER’s LSTM with a profile model obtained from multiple sequence alignment (MSA) of the demonstrations. Profile models are well known in bioinformatics. They are used to score new sequences according to their sequence similarity to the aligned sequences. 
Like RUDDER, Align-RUDDER also performs reward redistribution (using an alignment model), which considerably speeds up learning even if only a few demonstrations are available.\nOur main contributions are:\n\n• We suggest a reinforcement learning algorithm that works well for sparse and delayed rewards, where standard exploration fails but a few demonstrations with high rewards are available.\n\n• We adopt multiple sequence alignment from bioinformatics to construct a reward redistribution technique that works with few demonstrations.\n\n• We propose a method that uses alignment techniques and reward redistribution for identifying sub-goals and sub-tasks, which in turn allow for hierarchical reinforcement learning." }, { "heading": "2 REVIEW OF RUDDER", "text": "Basic insight: Q-functions for complex tasks are step functions. Complex tasks are typically composed of sub-tasks. Therefore, the Q-function of an optimal policy resembles a step function. The Q-function is the expected future return, and it increases (i.e., makes a step) when a sub-task is completed. Identifying large steps in the Q-function speeds up learning since it allows (i) increasing the return by performing actions that cause the step and (ii) sampling episodes with a larger return for learning.\nAn approximation to the Q-function must predict the expected future return for every state-action pair. However, a Q-function that resembles a step function is mostly constant. Therefore, predictions are only necessary at the steps. We have to identify the relevant state-actions that cause the steps and then predict the size of the steps. An LSTM network (Hochreiter, 1991; Hochreiter & Schmidhuber, 1995; 1997a;b) can identify relevant state-actions that open the input gate to store the size of the steps in the memory cells. Consequently, the LSTM only updates its states and changes its return prediction when a new relevant state-action pair is observed. Therefore, for an LSTM network that predicts the return of an episode, both a change of the prediction and the opening of input gates indicate steps of the Q-function.\nReward Redistribution. We consider episodic Markov decision processes (MDPs), i.e., the reward is only given once at the end of the sequence. The Q-function is assumed to be a step function, that is, the task can be decomposed into sub-tasks (see previous paragraph). Reward redistribution aims at giving the differences in the Q-function of an optimal policy as a new immediate reward. Since the Q-function of an optimal policy is not known, we approximate it by predicting the expected return with an LSTM network or, in this work, with an alignment model. The differences in predictions determine the reward redistribution. The prediction model will first identify the largest steps in the Q-function, as they decrease the prediction error the most. Fortunately, just identifying the largest steps, even with poor predictions, speeds up learning considerably. See Figure 1 for a description of the reward redistribution.\nLearning methods based on reward redistribution. The redistributed reward serves as the reward for a subsequent learning method: (A) The Q-values can be directly estimated (Arjona-Medina et al., 2019), which is used in the experiments for the artificial tasks and for BC pre-training for MineCraft. (B) Redistributed rewards can serve for learning with policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018), which is used in the MineCraft experiments.
(C) Redistributed rewards can serve for temporal difference learning like Q-learning (Watkins, 1989).\nLSTM models for reward redistribution. RUDDER uses an LSTM model for predicting the future return. The reward redistribution is the difference between two subsequent predictions. If a state-action pair increases the prediction of the return, then it is immediately rewarded. Using state-action sub-sequences $(s, a)_{0:t} = (s_0, a_0, \ldots, s_t, a_t)$, the redistributed reward is $R_{t+1} = g((s, a)_{0:t}) - g((s, a)_{0:t-1})$, where $g$ is an LSTM model that predicts the return of the episode. The LSTM model learns at first to approximate the largest steps of the Q-function, since they reduce the prediction error the most." }, { "heading": "3 ALIGN-RUDDER: RUDDER WITH FEW DEMONSTRATIONS", "text": "In bioinformatics, sequence alignment identifies similarities between biological sequences to determine their evolutionary relationship (Needleman & Wunsch, 1970; Smith & Waterman, 1981). The result of the alignment of multiple sequences is a profile model. The profile model is a consensus sequence, a frequency matrix, or a Position-Specific Scoring Matrix (PSSM) (Stormo et al., 1982). New sequences can be aligned to a profile model and receive an alignment score that indicates how well the new sequences agree with the profile model.\nAlign-RUDDER uses such alignment techniques to align two or more high-return demonstrations. For the alignment, we assume that the demonstrations follow the same underlying strategy and are therefore similar to each other, analogous to sequences being evolutionarily related. If the agent generates a state-action sequence $(s, a)_{0:t-1}$, then this sequence is aligned to the profile model $g$, giving a score $g((s, a)_{0:t-1})$. The next action of the agent extends the state-action sequence by one state-action pair $(s_t, a_t)$. The extended sequence $(s, a)_{0:t}$ is also aligned to the profile model $g$, giving another score $g((s, a)_{0:t})$. The redistributed reward $R_{t+1}$ is the difference of these scores: $R_{t+1} = g((s, a)_{0:t}) - g((s, a)_{0:t-1})$ (see Eq. (1)). This difference indicates how much of the return is gained or lost by adding another sequence element.\nAlign-RUDDER scores how closely an agent follows the underlying strategy that has been extracted by the profile model. Similar to the LSTM model, we identify the largest steps in the Q-function via relevant events determined by the profile model. Therefore, redistributing the reward by sequence alignment fits into the RUDDER framework with all its theoretical guarantees. RUDDER’s theory for reward redistribution is valid for LSTMs, other recurrent networks, attention mechanisms, or sequence and profile models.\nAdvantages of alignment compared to LSTM. Learning an LSTM model is severely limited when very few demonstrations are available. First, LSTM is known to require a large number of samples to generalize to new sequences. In contrast, sequence alignment requires only two examples to generalize well, as known from bioinformatics. Second, expert demonstrations have high rewards. Therefore, random demonstrations with very low rewards have to be generated. LSTM does not generalize well when only these extreme reward cases can be observed in the training set. In contrast, sequence alignment only uses examples that are closely related; that is, they belong to the same category (expert demonstrations).\nReward Redistribution by Sequence Alignment. The new reward redistribution approach consists of five steps, see Fig. 3:
(I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model, like a PSSM. (V) Redistribute the reward: each sub-sequence $\tau_t$ of a new episode $\tau$ is aligned to the profile. The redistributed reward $R_{t+1}$ is proportional to the difference of scores $S$ based on the PSSM given in step (IV), i.e., $R_{t+1} \propto S(\tau_t) - S(\tau_{t-1})$.\nIn the following, the five steps of Align-RUDDER’s reward redistribution are outlined. For the interested reader, each step is detailed in Sec. A.3 in the appendix. Finally, in Sec. A.7.3 in the appendix, we illustrate these five steps using the example of Minecraft.\n(I) Defining Events. Instead of states, we consider differences of consecutive states to detect a change caused by an important event, like achieving a sub-goal. An event is defined as a cluster of state differences. We use similarity-based clustering like affinity propagation (AP) (Frey & Dueck, 2007). If states are only enumerated, we suggest using the “successor representation” (Dayan, 1993) or “successor features” (Barreto et al., 2017). We use the demonstrations combined with state-action sequences generated by a random policy to construct the successor representation.\nA sequence of events is obtained from a state-action sequence by mapping each state $s$ to its cluster identifier $e$ (the event) and ignoring the actions. Alignment techniques from bioinformatics assume sequences composed of a few events, e.g., 20 events. If there are too many events, well-fitting alignments cannot be distinguished from random alignments. This effect is known in bioinformatics as the “Inconsistency of Maximum Parsimony” (Felsenstein, 1978).\n(II) Determining the Alignment Scoring System. A scoring matrix $S$ with entries $s_{i,j}$ determines the score for aligning event $i$ with event $j$. A priori, we only know that a relevant event should be aligned to itself but not to other events. Therefore, we set $s_{i,j} = 1/p_i$ for $i = j$ and $s_{i,j} = \alpha$ for $i \neq j$. Here, $p_i$ is the relative frequency of event $i$ in the demonstrations. $\alpha$ is a hyperparameter, which is typically a small negative number. This scoring scheme encourages the alignment of rare events, for which $p_i$ is small. For more details see Appendix Sec. A.3.\n(III) Multiple sequence alignment (MSA). An MSA algorithm maximizes the sum of all pairwise scores $S_{\text{MSA}} = \sum_{i,j,\, i<j} \sum_{t=0}^{L} s_{i,j,t_i,t_j,t}$ in an alignment, where $s_{i,j,t_i,t_j,t}$ is the score at alignment column $t$ for aligning the event at position $t_i$ in sequence $i$ to the event at position $t_j$ in sequence $j$. $L \geq T$ is the alignment length, since gaps make the alignment longer than the length of each sequence. We use ClustalW (Thompson et al., 1994) for MSA. MSA constructs a guiding tree by agglomerative hierarchical clustering of pairwise alignments between all demonstrations. This guiding tree allows identifying multiple strategies. For more details see Appendix Sec. A.3.\n(IV) Position-Specific Scoring Matrix (PSSM) and MSA profile model. From the alignment, we construct a profile model as a) column-wise event probabilities and b) a PSSM (Stormo et al., 1982). The PSSM is a column-wise scoring matrix to align new sequences to the profile model. More details are given in Appendix Sec. A.3.
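To make steps (II) and (IV) concrete, the following is a minimal sketch. The log-odds PSSM with pseudo-counts is a standard bioinformatics construction and is assumed here; the paper's exact formulas are given in its Appendix Sec. A.3, so the constants and helper names below are illustrative only.

```python
# Minimal sketch of steps (II) and (IV): an event scoring matrix and a
# column-wise profile/PSSM from an MSA of event sequences. Assumption: a
# standard log-odds PSSM with pseudo-counts stands in for the paper's
# exact construction (Appendix Sec. A.3).
import numpy as np

GAP = "-"  # alignment gap symbol

def scoring_matrix(event_probs, alpha=-1.0):
    # Step (II): s_{i,j} = 1/p_i on the diagonal and alpha off the diagonal,
    # so rare events (small p_i) get high scores when aligned to themselves.
    events = sorted(event_probs)
    return {(i, j): (1.0 / event_probs[i] if i == j else alpha)
            for i in events for j in events}

def profile_and_pssm(msa_rows, events, pseudo=1e-2):
    # Step (IV): column-wise event probabilities (profile) and a log-odds PSSM.
    # msa_rows are aligned event sequences of equal length, with GAP for gaps.
    length = len(msa_rows[0])
    counts = np.full((length, len(events)), pseudo)
    for row in msa_rows:
        for col, event in enumerate(row):
            if event != GAP:
                counts[col, events.index(event)] += 1.0
    profile = counts / counts.sum(axis=1, keepdims=True)  # per-column probabilities
    background = counts.sum(axis=0) / counts.sum()        # overall event frequencies
    pssm = np.log(profile / background)                   # log-odds score per column
    return profile, pssm
```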
(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence $\tau = e_{0:T}$ (where $e_t$ is the event at position $t$) is aligned to the profile, which gives the score $S(\tau) = \sum_{l=0}^{L} s_{l,t_l}$. Here, $s_{l,t_l}$ is the alignment score for event $e_{t_l}$ at position $l$ in the alignment. Alignment gaps are columns to which no event was aligned; they have $t_l = T+1$ with gap penalty $s_{l,T+1}$. If $\tau_t = e_{0:t}$ is the prefix sequence of $\tau$ of length $t+1$, then the reward redistribution $R_{t+1}$ for $0 \leq t \leq T$ is\n$R_{t+1} = (S(\tau_t) - S(\tau_{t-1}))\, C = g((s, a)_{0:t}) - g((s, a)_{0:t-1}), \quad R_{T+2} = \tilde{G}_0 - \sum_{t=0}^{T} R_{t+1}, \quad (1)$\nwhere $C = \mathbb{E}_{\text{demo}}[\tilde{G}_0] \, / \, \mathbb{E}_{\text{demo}}[\sum_{t=0}^{T} S(\tau_t) - S(\tau_{t-1})]$ with $S(\tau_{-1}) = 0$. The original return of the sequence $\tau$ is $\tilde{G}_0 = \sum_{t=0}^{T} \tilde{R}_{t+1}$, and the expectation of the return over demonstrations is $\mathbb{E}_{\text{demo}}$. The constant $C$ scales $R_{t+1}$ to the range of $\tilde{G}_0$. $R_{T+2}$ is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: $\mathbb{E}_{\text{demo}}[R_{T+2}] = 0$. Since $\tau_t = e_{0:t}$ and $e_t = f(s_t, a_t)$, we can set $g((s, a)_{0:t}) = S(\tau_t)\, C$. We ensure strict return equivalence (Arjona-Medina et al., 2019) by $G_0 = \sum_{t=0}^{T+1} R_{t+1} = \tilde{G}_0$. The redistributed reward depends only on the past: $R_{t+1} = h((s, a)_{0:t})$.\nSub-tasks. The reward redistribution identifies sub-tasks as alignment positions with high redistributed rewards. These sub-tasks are indicated by high scores $s$ in the PSSM. Reward redistribution also determines the terminal states of sub-tasks, since it assigns rewards for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the redistributed reward is Markov. For a redistributed Markov reward, options (Sutton et al., 1999), MAXQ (Dietterich, 2000), or recursive option composition (Silver & Ciosek, 2012) can be used.\nHigher-Order Markov Reward Redistributions. Align-RUDDER may lead to a higher-order Markov redistribution. Corollary 1 in the appendix states that the optimality criterion from Theorem 2 in Arjona-Medina et al. (2019) also holds for higher-order Markov reward redistributions. If the expected redistributed higher-order Markov reward is the difference of Q-values, then the redistribution is optimal, and there is no delayed reward. Furthermore, the optimal policies are the same as for the original problem. This corollary is the motivation for redistributing the reward to the steps in the Q-function. In the Appendix, Corollary 2 states that, under a condition, an optimal higher-order reward redistribution can be expressed as the difference of Q-values." }, { "heading": "4 EXPERIMENTS", "text": "Align-RUDDER is compared on three artificial tasks with sparse and delayed rewards and few demonstrations to Behavioral Cloning with Q-learning (BC+Q), Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), RUDDER (LSTM), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018). GAIL (Ho & Ermon, 2016) failed to solve the two artificial tasks, as reported previously for similar tasks (Reddy et al., 2020). Then, we test Align-RUDDER on the complex MineCraft ObtainDiamond task with episodic rewards (Guss et al., 2019b). All experiments use finite-time MDPs with $\gamma = 1$ and episodic reward. More details are in Appendix Sec. A.6.\nAlignment vs. LSTM in the 1D key-chest environment. We use a 1D key-chest environment to show the effectiveness of sequence alignment in a low-data regime compared to an LSTM model. The agent has to collect the key and then open the chest to get a positive reward at the last timestep. See Appendix Fig. A.9 for a schematic representation of the environment.
As the key events (important state-action pairs) in this environment are known, we can compute the key-event detection rate of a reward redistribution model. A key event is detected if the redistributed reward of an important state-action pair is larger than the average redistributed reward in the sequence. We train the reward redistribution models with 2, 5, and 10 training episodes and test on 1000 test episodes, averaged over ten trials. Align-RUDDER significantly outperforms LSTM (RUDDER) for detecting these key events in all cases, with an average key-event detection rate of 0.96 for sequence alignment vs. 0.46 for the LSTM models over all dataset sizes. See Appendix Fig. A.10 for the detailed results.\nArtificial tasks (I) and (II). They are variations of the gridworld rooms example (Sutton et al., 1999), where cells are the MDP states. In our setting, the states do not have to be time-aware for ensuring stationary optimal policies, but the unobserved used-up time introduces a random effect. The grid is divided into rooms. The agent’s goal is to reach a target from an initial state with the fewest steps. It has to cross different rooms, which are connected by doors, except for the first room, which is only connected to the second room by a teleportation portal. The portal is introduced to prevent BC initialization alone from solving the task. It enforces that the agent learns to go to the portal entry cells even when they are at positions not observed in the demonstrations. At every location, the agent can move up, down, left, or right. The state transitions are stochastic. An episode ends after $T = 200$ time steps. If the agent arrives at the target, it goes into an absorbing state where it stays until $T = 200$ without receiving further rewards. The reward is only given at the end of the episode. Demonstrations are generated by an optimal policy with a 0.2 exploration rate.\nThe five steps of Align-RUDDER’s reward redistribution are: (1) Events are clusters of states obtained by affinity propagation, using the successor representation based on demonstrations as the similarity. (2) The scoring matrix is obtained according to (II), using $\epsilon = 0$ and setting all off-diagonal values of the scoring matrix to $-1$. (3) ClustalW is used for the MSA of the demonstrations, with zero gap penalties and no biological options. (4) The MSA supplies a profile model and a PSSM as in (IV). (5) Sequences generated by the agent are mapped to sequences of events according to (I). The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. The reward redistribution determines sub-tasks like doors or portal arrival. The sub-tasks partition the Q-table into sub-tables, each representing a sub-agent. However, we optimize a single Q-table in these experiments. Defining sub-tasks has no effect on learning in the tabular case.\nAll compared methods learn a Q-table and use an $\epsilon$-greedy policy with $\epsilon = 0.2$. The Q-table is initialized by behavioral cloning (BC). State-action pairs that are not visited in the demonstrations, and are hence not initialized, are initialized by drawing a sample from a normal distribution. Align-RUDDER learns the Q-table via RUDDER’s Q-value estimation (learning method (A) from Sec. 2).
For different numbers of demonstrations, performance is measured by the number of episodes required to achieve 80% of the average return of the demonstrations. A Wilcoxon rank-sum test determines the significance of performance differences between Align-RUDDER and the other methods.

The Task (I) environment is a 12×12 gridworld with four rooms. The target is in room #4, and the start is in room #1 with 20 portal entry locations. The state contains the portal entry for each episode. Fig. 5 shows the number of episodes required to achieve 80% of the average reward of the demonstrations for different numbers of demonstrations. Results are averaged over 100 trials. Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values $< 10^{-10}$). Task (II) is a 12×24 gridworld with eight rooms: the target is in room #8, and the start is in room #1 with 20 portal entry locations. Fig. 5 shows the results with settings as in Task (I). Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values $< 10^{-19}$). We also conduct an ablation study of Align-RUDDER's performance under varying parameters, such as environment stochasticity (see Sec. A.6.4) and the number of clusters (see Sec. A.6.5).

MineCraft. We further test Align-RUDDER on the MineCraft ObtainDiamond task from the MineRL dataset (Guss et al., 2019b). We do not use the intermediate rewards given by achieving sub-goals from the challenge, since Align-RUDDER is supposed to discover such sub-goals automatically via reward redistribution. We only give a reward for mining the diamond. This requires resource gathering and tool building in a hierarchical way. To the best of our knowledge, no pure learning method (where sub-goals are also learned) has mined a diamond yet (Scheller et al., 2020). The dataset contains demonstrations which are insufficient to directly learn a single policy (117 demonstrations, 67 of which mined a diamond).

Implementation: (1) A state consists of visual input and an inventory (incl. equip state). Both inputs are normalized to the same information, that is, the same number of components and the same variance. We cluster the differences of consecutive states (Arjona-Medina et al., 2019). Very large clusters are removed and small ones are merged, giving 19 clusters corresponding to events, which are characterized by inventory changes. Finally, demonstrations are mapped to sequences of events. (2) The scoring matrix is computed according to (II). (3) The ten shortest demonstrations that obtained a diamond are aligned by ClustalW with zero gap penalties and no biological options. (4) The multiple alignment gives a profile model and a PSSM. (5) The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. Based on the reward redistribution, we define sub-goals. Sub-goals are identified as profile model positions that obtain an average redistributed reward above a threshold for the demonstrations. Demonstration sub-sequences between sub-goals are considered as demonstrations for the sub-tasks. New sub-sequences generated by the agent are aligned to the profile model to determine whether a sub-goal is achieved. The redistributed reward between two sub-goals is given at the end of the sub-sequence; therefore, the sub-tasks also have an episodic reward. Fig. 4 shows how sub-goals are identified.
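A minimal sketch of this sub-goal identification; the threshold and the reward profile are illustrative values, not those used in the experiments.

import numpy as np

def identify_subgoals(avg_redistributed, threshold):
    # Sub-goals are profile positions whose redistributed reward,
    # averaged over the demonstrations, exceeds the threshold.
    avg = np.asarray(avg_redistributed, dtype=float)
    return np.flatnonzero(avg > threshold).tolist()

# Illustrative profile with reward peaks at three positions, e.g.
# obtaining a log, crafting a pickaxe, and mining the diamond:
profile_reward = [0.0, 0.1, 0.8, 0.0, 0.1, 0.9, 0.0, 0.0, 0.1, 1.0]
print(identify_subgoals(profile_reward, threshold=0.5))  # [2, 5, 9]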
Sub-agents are pre-trained on the demonstrations for the sub-tasks using BC, and further trained in the environment using Proximal Policy Optimization (PPO) (Schulman et al., 2018). BC pre-training corresponds to RUDDER's Q-value estimation (learning method (A) from above), while PPO corresponds to RUDDER's PPO training (learning method (B) from above).

Our main agent can perform all actions but additionally can execute sub-agents, and it learns via the redistributed reward. The main agent corresponds to and is treated like a Manager module (Vezhnevets et al., 2017). The main agent is initialized by executing sub-agents according to the alignment but can deviate from this strategy. When a sub-agent successfully completes its task, the main agent executes the next sub-agent according to the alignment. More details can be found in Appendix Sec. A.7.1. Using only ten demonstrations, Align-RUDDER is able to learn to mine a diamond. A diamond is obtained in 0.1% of the cases. With a success probability of 0.5 for each of the 31 extracted sub-tasks (skilled agents, not random agents), the resulting success rate for mining the diamond would be $4.66 \times 10^{-10}$. Tab. 1 shows a comparison of methods on the MineCraft MineRL dataset by the maximum item score (Milani et al., 2020). Results are taken from (Milani et al., 2020), in particular from Figure 2, and completed by (Skrynnik et al., 2019; Kanervisto et al., 2020; Scheller et al., 2020). Align-RUDDER was not evaluated during the MineCraft MineRL challenge, but it follows the timestep limit (8 million) imposed by the challenge. Align-RUDDER did not receive the intermediate rewards provided by the challenge that hint at sub-tasks, and thus tries to solve a more difficult task. Recently, ForgER++ (Skrynnik et al., 2020) was able to mine a diamond in 0.0667% of the cases. We do not include it in Tab. 1 as it did not have any limitation on the number of timesteps. Also, ForgER++ generates sub-goals for MineCraft using a heuristic, while Align-RUDDER uses the redistributed reward to automatically obtain sub-goals.

Analysis of MineCraft Agent Behaviour. For each agent and its sub-task, we estimate the success rate and its improvement during fine-tuning by averaging the return over multiple runs (see Fig. 6). For earlier sub-tasks, the agent has a relatively higher sub-task success rate. This also corresponds to the agent having access to much more data for earlier sub-tasks. During learning from demonstrations, much less data is available for training later sub-tasks, as not all expert demonstrations achieve the later tasks. During online training using reinforcement learning, an agent has to successfully complete all earlier sub-tasks to generate trajectories for later sub-tasks. This is exponentially difficult. The lack of demonstrations and the difficulty of generating data for later sub-tasks lead to a degradation of the success rate in MineCraft.
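The baseline probability quoted above can be verified with a one-line computation, assuming independent sub-task successes:

# 31 sub-tasks, each solved with probability 0.5 by a skilled agent;
# the probability of completing all of them in sequence is 0.5**31.
print(0.5 ** 31)  # 4.656612873077393e-10, i.e. ~4.66 * 10^-10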
5 RELATED WORK

Learning from demonstrations has been widely studied over the last 50 years (Billard et al., 2008). An example is imitation learning, which uses supervised techniques when the number of demonstrations is large enough (Michie et al., 1990; Pomerleau, 1991; Michie & Camacho, 1994; Schaal, 1996; Kakade & Langford, 2002). However, policies trained with imitation learning tend to drift away from demonstration trajectories due to a distribution shift (Ross & Bagnell, 2010). This effect can be mitigated (Daumé III et al., 2009; Ross & Bagnell, 2010; Ross et al., 2011; Judah et al., 2014; Sun et al., 2017; 2018). Many approaches use demonstrations for initialization, e.g. of policy networks (Taylor et al., 2011; Silver et al., 2016), value function networks (Hester et al., 2017; 2018), both networks (Zhang & Ma, 2018; Nair et al., 2018), or an experience replay buffer (Hosu & Rebedea, 2016). Beyond initialization, demonstrations are used to define constraints (Kim et al., 2013), generate sub-goals (Eysenbach et al., 2019), enforce regularization (Reddy et al., 2020), guide exploration (Subramanian et al., 2016; Jing et al., 2019), or shape rewards (Judah et al., 2014; Brys et al., 2015; Suay et al., 2016). Demonstrations may serve for inverse reinforcement learning (Ng & Russell, 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016), which aims at learning a (non-sparse) reward function that best explains the demonstrations. Learning reward functions requires a large number of demonstrations (Syed & Schapire, 2007; Ziebart et al., 2008; Silva et al., 2019). Some approaches rely on few-shot and/or meta learning (Duan et al., 2017; Finn et al., 2017; Zhou et al., 2020). However, few-shot and meta learning demand a large set of auxiliary tasks or prerecorded data. In conclusion, most methods that learn from demonstrations rely on the availability of many demonstrations (Khardon, 1999; Lopes et al., 2009), in particular if using deep learning methods (Bengio & Lecun, 2007; Lakshminarayanan et al., 2016). Some methods can learn from few demonstrations, like Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018).

6 DISCUSSION AND CONCLUSION

Discussion. Firstly, reward redistributions do not change the optimal policies (see Theorem 1 in the appendix). Thus, suboptimal reward redistributions due to alignment errors, or due to choosing events that are non-essential for reaching the goal, might not speed up learning, but they also do not change the optimal policies. Secondly, while Align-RUDDER can speed up learning even in complex environments, the resulting performance depends on the quality of the alignment model. A low-quality alignment model can arise from multiple factors, one of which is having a large number ($\gg 20$) of distinct events. Clustering can be used to reduce the number of events, which could also lead to a low-quality alignment model if too many relevant events are clustered together. While the optimal policy is not changed by a poor demonstration alignment, the benefit of employing reward redistribution based on it diminishes. Thirdly, the alignment could fail if the demonstrations follow different underlying strategies, i.e., no events are common to the demonstrations. We assume that the demonstrations follow the same underlying strategy; therefore they are similar to each other and can be aligned. However, if no common underlying strategy exists, then identifying those relevant events via alignment, which should receive high redistributed rewards, may fail. In this case, the reward is given at the sequence end, when the redistributed reward is corrected, which leads to an episodic reward without reducing the delay of the rewards or speeding up learning.

Conclusions. We have introduced Align-RUDDER to solve highly complex tasks with delayed and sparse reward from few demonstrations.
We have shown experimentally that Align-RUDDER outperforms state-of-the-art methods designed for learning from demonstrations in the regime of few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is, to the best of our knowledge, the first pure learning method to mine a diamond.

ETHICS STATEMENT

Impact on ML and related scientific fields. Our research has the potential to positively impact a wide variety of fields of life due to its general applicability. Most importantly, it has the potential to reduce the cost of training and deploying agents in real-world applications and therefore enable systems that have not been possible until now.

However, any new development in machine learning can be applied for good or for bad. Our system can be used for medical applications, where it can save lives, but it could also be used for malevolent systems. It is society that decides how new technology is employed. However, we as scientists have to inform society and the decision makers about our technologies. We have to show the limits of our technology, give ideas of possible applications, and point out possible misuse or erroneous operation of our new technology.

Impact on society. A big danger is that users rely too much on our new approach and use it without reflecting on the outcomes. For example, in medical treatment decisions, doctors may rely on the technical system and push the responsibility toward the machine: "The machine suggested this treatment, therefore it is not my fault". Another example is self-driving cars, where we see that drivers become more careless even if they are supposed to pay attention and keep their hands on the steering wheel. They place too much trust in the technology, even if the technology does not justify this trust or is not mature.

Finally, our method can be deployed in companies for job automation. Therefore, there is the danger that some people lose their jobs, particularly those whose work is to perform predictable and repetitive tasks. An often-used example is taxi drivers, who would lose their jobs because of self-driving cars. The same holds for many jobs in the production industry, where automation can replace jobs. However, all industrialization has led to the loss of jobs, while new jobs have been created.

Consequences of failures of the method. Depending on the application area, a failure of this method might be of lesser concern, such as a failed execution of a computer program. If our method is employed within a larger automation system, a failure can result in damages such as a car accident. However, this holds for almost all reinforcement learning methods, and usage and testing fall within the responsibility of the application area. We note that in this work, the method was only used in computer game environments.

Leveraging of biases in the data and potential discrimination. Our proposed method relies on human demonstrations and thereby human decisions, which are usually strongly biased. Like almost all machine learning methods trained on human-influenced data, our method could learn to use and exploit those biases and make similar decisions (Solaiman et al., 2019). Therefore, the responsible use of our method depends on a careful selection of the training data and awareness of the potential biases within it.

REPRODUCIBILITY STATEMENT

Code for experiments on the FourRooms and EightRooms environments is included as supplementary material.
The README contains step-by-step instructions to set up an environment and run the experiments. We have specified all the training details ex. hyperparameters and how they were chosen in the Appendix (See Section A.6). We trained 100 replicates for each datapoint of the first set of experiments and are shown in Fig. 5. Using the code in the supplementary material, it is quite easy to reproduce our results for these experiments.\nWe also include code for the experiments done for MineCraft in the supplementary materials. All the preprocessing steps, hyperparameters and other implementation details are given in the Appendix (See Section A.7).\nWe also provide a deeper overview of the RUDDER (Arjona-Medina et al., 2019) theory in the Appendix (See Section A.2) as it is important for many design choices in Align-RUDDER.\nFinally, a video showcasing the MineCraft agent is also provided as supplementary material." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "CONTENTS OF THE APPENDIX", "text": "A.1 Introduction to the Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\nA.2 Review Reward Redistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\nA.3 The Five Steps of Align-RUDDER’s Reward Redistribution . . . . . . . . . . . . . 25\nA.4 Sequence Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27\nA.5 Extended Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28\nA.6 Artificial Task Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29\nA.6.1 Hyperparameter Selection . . . . . . . . . . . . . . . . . . . . . . . . . . 29\nA.6.2 Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29\nA.6.3 Artificial Task p-values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30\nA.6.4 Stochastic Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . 31\nA.6.5 Changing number of Clusters . . . . . . . . . . . . . . . . . . . . . . . . 32\nA.6.6 Key-Event Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33\nA.7 Minecraft Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34\nA.7.1 MineCraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34\nA.7.2 Related Work and Steps Towards a General Agent . . . . . . . . . . . . . 34\nA.7.3 The Five Steps of Align-RUDDER Demonstrated on Minecraft . . . . . . 36\nA.7.4 Implementation of our Algorithm for Minecraft . . . . . . . . . . . . . . . 38\nA.7.5 Policy and Value Network Architecture . . . . . . . . . . . . . . . . . . . 39\nA.7.6 Imitation Learning of Sub-Task Agents . . . . . . . . . . . . . . . . . . . 40\nA.7.7 Reinforcement Learning on Sub-Task Agents . . . . . . . . . . . . . . . . 41\nA.8 Reproducing the Artificial Task Results . . . . . . . . . . . . . . . . . . . . . . . 41\nA.9 Software Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42\nA.10 Compute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42" }, { "heading": "LIST OF FIGURES", "text": "A.2 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 29\nA.3 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 30\nA.4 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 30\nA.5 FourRooms and EightRooms environments . . . . . . . . . . . . . . . . . . . . . 31\nA.6 Reward redistribution for the FourRooms and EightRooms environments . . . . 
. . 31\nA.11 Step (I): Define events and map demonstrations into sequences of events. First, we extract the sequence of states from human demonstrations, transform images into feature vectors using a pre-trained network and transform them into a sequence of consecutive state deltas (concatenating image feature vectors and inventory states). We cluster the resulting state deltas and remove clusters with a large number of members and merge smaller clusters. In the case of demonstrations for the ObtainDiamond task in Minecraft the resulting clusters correspond to obtaining specific resources and items required to solve the task. Then we map the demonstrations to sequences of events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36\nA.12 Step (II): Construct a scoring matrix using event probabilities from demonstrations for diagonal elements and setting off-diagonal to a constant value. The scores in the diagonal position are proportional to the inverse of the event frequencies. Thus, aligning rare events has higher score. Darker colors signify higher score values. . . 36\nA.13 Step (III) Perform multipe sequence alignment (MSA) of the demonstrations. The MSA algorithm maximizes the pairwise sum of scores of all alignments. The score of an alignment at each position is given by the scoring matrix. As the off-diagonal entries are negative, the algorithm will always try to align an event to itself, while giving preference to events which give higher scores. . . . . . . . . . . . . . . . . 37\nA.14 Step (IV) Compute a position-specific scoring matrix (PSSM). This matrix can be computed using the MSA (Step (III)) and the scoring matrix (Step (II)). Every column entry is for a position from the MSA. The score at a position (column) and for an event (row) depends on the frequency of that event at that position in the MSA. For example, the event in the last position is present in all the sequences, and thus gets a high score at the last position. But it is absent in the remaining position, and thus gets a score of zero elsewhere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37\nA.15 Step (V) A new sequence is aligned step by step to the profile model using the PSSM, resulting in an alignment score for each sub-sequence. The redistributed reward is then proportional to the difference of scores of subsequent alignments. . . . . . . . 37\nA.16 Conceptual overview of our MineRL agent . . . . . . . . . . . . . . . . . . . . . . 38\nA.17 Conceptual architecture of Align-RUDDER MineRL policy and value networks . . 39\nA.18 Discretization and interpolation of camera angles . . . . . . . . . . . . . . . . . . 40\nA.19 Mapping of clusters to letters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40\nA.20 Trajectory replay given by an exemplary consensus . . . . . . . . . . . . . . . . . 41\nA.1 INTRODUCTION TO THE APPENDIX\nThis is the appendix to the paper “Align-RUDDER: Learning from few Demonstrations by Reward Redistribution”. The appendix aims at supporting the main document and provides more detailed information about the implementation of our method for different tasks. The content of this document is summarized as follows:\n• Section A.3 describes the five steps of Align-RUDDER’s reward redistribution in more detail. In particular, the scoring systems are described in more detail. • Section A.4 provides a brief overview of sequence alignment methods and the hyperparameters used in our experiments. 
• Section A.6 provides figures and tables to support the results of the experiments in Artificial Tasks (I) and (II). • Section A.7 explains in detail the experiments conducted in the Minecraft ObtainDiamond task." }, { "heading": "A.2 REVIEW REWARD REDISTRIBUTION", "text": "Reward redistribution and return decomposition are concepts introduced in RUDDER but also apply to Align-RUDDER as it is a variant of RUDDER. Reward redistribution based on return decomposition eliminates – or at least mitigates – delays of rewards while preserving the same optimal policies. Align-RUDDER is justified by the theory of return decomposition and reward redistribution when using multiple sequence alignment for constructing a reward redistribution model. In this section, we review the concepts of return decomposition and reward redistribution.\nPreliminaries. We consider a finite MDP defined by the 5-tuple P = (S,A,R, p, γ) where the state space S and the action space A are sets of finite states s and actions a and R the set of bounded rewards r. For a given time step t, the corresponding random variables are St, At and Rt+1. Furthermore, P has transition-reward distributions p(St+1 = s′, Rt+1 = r | St = s,At = a), and a discount factor γ ∈ (0, 1], which we keep at γ = 1. A Markov policy π(a | s) is a probability of an action a given a state s. We consider MDPs with finite time horizon or with an absorbing state. The discounted return of a sequence of length T at time t is Gt = ∑T−t k=0 γ\nkRt+k+1. As usual, the Q-function for a given policy π is qπ(s, a) = Eπ [Gt | St = s,At = a]. Eπ[x | s, a] is the expectation of x, where the random variable is a sequence of states, actions, and rewards that is generated with transition-reward distribution p, policy π, and starting at (s, a). The goal is to find an optimal policy π∗ = argmax π Eπ[G0] maximizing the expected return at t = 0. We assume that the states s are time-aware (time t can be extracted from each state) in order to assure stationary optimal policies. According to Proposition 4.4.3 in (Puterman, 2005), a deterministic optimal policy π∗ exists.\nDefinitions. A sequence-Markov decision process (SDP) is defined as a decision process that has Markov transition probabilities but a reward probability that is not required to be Markov. Two SDPs P̃ and P with different reward probabilities are return-equivalent if they have the same expected return at t = 0 for each policy π, and strictly return-equivalent if they additionally have the same expected return for every episode. Since for every π the expected return at t = 0 is the same, return-equivalent SDPs have the same optimal policies. A reward redistribution is a procedure that —for a given sequence of a delayed reward SDP P̃— redistributes the realization or expectation of its return G̃0 along the sequence. This yields a new SDP P with R as random variable for the redistributed reward and the same optimal policies as P̃: Theorem 1 (Arjona-Medina et al. (2019)). Both the SDP P̃ with delayed reward R̃t+1 and the SDP P with redistributed reward Rt+1 have the same optimal policies.\nProof. The proof can be found in (Arjona-Medina et al., 2019).\nThe delay of rewards is captured by the expected future rewards κ(m, t − 1) at time (t − 1). κ is defined as κ(m, t− 1) := Eπ [ ∑m τ=0Rt+1+τ | st−1, at−1], that is, at time (t− 1) the expected sum of future rewards from Rt+1 to Rt+1+m but not the immediate reward Rt. 
A reward redistribution is defined to be optimal, if κ(T − t − 1, t) = 0 for 0 6 t 6 T − 1, which is equivalent to Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1): Theorem 2 (Arjona-Medina et al. (2019)). We assume a delayed reward MDP P̃ , with episodic reward. A new SDP P is obtained by a second order Markov reward redistribution, which ensures that P is return-equivalent to P̃ . For a specific π, the following two statements are equivalent:\n(I) κ(T − t− 1, t) = 0, i.e. the reward redistribution is optimal,\n(II) Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (2)" }, { "heading": "An optimal reward redistribution fulfills for 1 6 t 6 T and 0 6 m 6 T − t: κ(m, t− 1) = 0.", "text": "Proof. The proof can be found in (Arjona-Medina et al., 2019).\nThis theorem shows that an optimal reward redistribution relies on steps q̃π(st, at)− q̃π(st−1, at−1) of the Q-function. Identifying the largest steps in the Q-function detects the largest rewards that have to be redistributed, which makes the largest progress towards obtaining an optimal reward redistribution.\nCorollary 1 (Higher order Markov reward redistribution optimality conditions). We assume a delayed reward MDP P̃ , with episodic reward. A new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is return-equivalent to P̃ . If for a specific π\nEπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (3)\nholds, then the higher order reward redistribution Rt+1 is optimal, that is, κ(T − t− 1, t) = 0.\nProof. The proof is just PART (II) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.\nWe assume that\nEπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) , (4)\nwhere we abbreviate the expected Rt+1 by ht:\nEπ [Rt+1 | st−1, at−1, st, at] = ht . (5)\nThe expectations Eπ [. | st−1, at−1] like Eπ [ R̃T+1 | st−1, at−1 ] are expectations over all episodes\nthat contain the state-action pair (st−1, at−1) at time t−1. The expectations Eπ [. | st−1, at−1, st, at] like Eπ [ R̃T+1 | st−1, at−1, st, at ] are expectations over all episodes that contain the state-action\npairs (st−1, at−1) at time t− 1 and (st, at) at time t. The Q-values are defined as\nq̃π(st, at) = Eπ [ T−t∑ k=0 R̃t+k+1 | st, at ] = Eπ [ R̃T+1 | st, at ] , (6)\nqπ(st, at) = Eπ [ T−t∑ k=0 Rt+k+1 | st, at ] , (7)\nwhich are expectations over all trajectories that contain (st, at) at time t. Since P̃ is Markov, for q̃π only the suffix trajectories beginning at (st, at) enter the expectation.\nThe definition of κ(m, t − 1) for 1 6 t 6 T and 0 6 m 6 T − t was κ(m, t − 1) = Eπ [ ∑m τ=0Rt+1+τ | st−1, at−1]. We have to proof κ(T − t− 1, t) = 0.\nFirst, we consider m = 0 and 1 6 t 6 T , therefore κ(0, t − 1) = Eπ [Rt+1 | st−1, at−1]. Since the original MDP P̃ has episodic reward, we have r̃(st−1, at−1) = E [ R̃t | st−1, at−1 ] = 0 for\n1 6 t 6 T . 
Therefore, we obtain: q̃π(st−1, at−1) = r̃(st−1, at−1) + ∑ st,at p(st, at | st−1, at−1) q̃π(st, at) (8)\n= ∑ st,at p(st, at | st−1, at−1) q̃π(st, at) .\nUsing this equation we obtain for 1 6 t 6 T :\nκ(0, t− 1) = Eπ [Rt+1 | st−1, at−1] (9) = Est,at [q̃ π(st, at) − q̃π(st−1, at−1) | st−1, at−1]\n= ∑ st,at p(st, at | st−1, at−1) (q̃π(st, at) − q̃π(st−1, at−1))\n= q̃π(st−1, at−1) − ∑ st,at p(st, at | st−1, at−1) q̃π(st−1, at−1) = q̃π(st−1, at−1) − q̃π(st−1, at−1) = 0 .\nNext, we consider the expectation of ∑m τ=0Rt+1+τ for 1 6 t 6 T and 1 6 m 6 T − t (for m > 0)\nκ(m, t− 1) = Eπ [ m∑ τ=0 Rt+1+τ | st−1, at−1 ] (10)\n= Eπ [ m∑ τ=0 (q̃π(sτ+t, aτ+t) − q̃π(sτ+t−1, aτ+t−1)) | st−1, at−1 ] = Eπ [q̃ π(st+m, at+m) − q̃π(st−1, at−1) | st−1, at−1]\n= Eπ [ Eπ [ T∑\nτ=t+m\nR̃τ+1 | st+m, at+m ] | st−1, at−1 ]\n− Eπ [ Eπ [ T∑\nτ=t−1 R̃τ+1 | st−1, at−1\n] | st−1, at−1 ] = Eπ [ R̃T+1 | st−1, at−1 ] − Eπ [ R̃T+1 | st−1, at−1\n] = 0 .\nWe used that R̃t+1 = 0 for t < T .\nFor the particualr cases t = τ + 1 and m = T − t = T − τ − 1 we have\nκ(T − τ − 1, τ) = 0 . (11)\nThat is exactly what we wanted to proof.\nCorollary 1 explicitly states that the optimality criterion ensures an optimal reward redistribution even if the reward redistribution is higher order Markov. For Align-RUDDER we may obtain a higher order Markov reward redistribution due to the profile alignment of the sub-sequences.\nCorollary 2 (Higher order Markov reward redistribution optimality representation). We assume a delayed reward MDP P̃ , with episodic reward and that a new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is strictly return-equivalent to P̃ . We assume that the reward redistribuition is optimal, that is, κ(T − t − 1, t) = 0. If the condition\nEπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] = Eπ [ T−t−1∑ τ=0 Rt+2+τ | s0, a0, . . . , st, at ] (12)\nholds, then\nEπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) . (13)\nProof. By and large, the proof is PART (I) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.\nWe assume that the reward redistribution is optimal, that is,\nκ(T − t− 1, t) = 0 . (14)\nWe abbreviate the expected Rt+1 by ht:\nEπ [Rt+1 | st−1, at−1, st, at] = ht . (15)\nIn (Arjona-Medina et al., 2019) Lemma A4 is as follows.\nLemma 1. Two strictly return-equivalent SDPs P̃ and P have the same expected return for each start state-action sub-sequence (s0, a0, . . . , st, at), 0 6 t 6 T :\nEπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] . (16)\nThe assumptions of Lemma 1 hold for for the delayed reward MDP P̃ and the redistributed reward SDP P , since a reward redistribution ensures strictly return-equivalent SDPs. Therefore for a given state-action sub-sequence (s0, a0, . . . , st, at), 0 6 t 6 T :\nEπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] (17)\nwith G0 = ∑T τ=0Rτ+1 and G̃0 = R̃T+1. The Markov property of the MDP P̃ ensures that the future reward from t+ 1 on is independent of the past sub-sequence s0, a0, . . . , st−1, at−1:\nEπ [ T−t∑ τ=0 R̃t+1+τ | st, at ] = Eπ [ T−t∑ τ=0 R̃t+1+τ | s0, a0, . . . , st, at ] . (18)\nAccording to Eq. (12), the future reward from t + 2 on is independent of the past sub-sequence s0, a0, . . . , st−1, at−1:\nEπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] = Eπ [ T−t−1∑ τ=0 Rt+2+τ | s0, a0, . . . , st, at ] . (19)\nUsing these properties we obtain\nq̃π(st, at) = Eπ [ T−t∑ τ=0 R̃t+1+τ | st, at ] (20)\n= Eπ [ T−t∑ τ=0 R̃t+1+τ | s0, a0, . . . 
, st, at ] = Eπ [ R̃T+1 | s0, a0, . . . , st, at\n] = Eπ\n[ T∑ τ=0 R̃τ+1 | s0, a0, . . . , st, at ] = Eπ [ G̃0 | s0, a0, . . . , st, at\n] = Eπ [G0 | s0, a0, . . . , st, at]\n= Eπ [ T∑ τ=0 Rτ+1 | s0, a0, . . . , st, at ]\n= Eπ [ T−t−1∑ τ=0 Rt+2+τ | s0, a0, . . . , st, at ] + t∑ τ=0 hτ\n= Eπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] + t∑ τ=0 hτ\n= κ(T − t− 1, t) + t∑\nτ=0\nhτ\n= t∑ τ=0 hτ .\nWe used the optimality condition\nκ(T − t− 1, t) = Eπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] = 0 . (21)\nIt follows that\nEπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) . (22)\nThis is exactly what we wanted to proof.\nThis corollary shows that optimal reward redistributions can be expressed as difference of Q-values if Eq. (12) holds. Eq. (12) states that the past can be averaged out. However, there may exist optimal reward redistributions for which Eq. (12) does not hold.\nIf the reward redistribution is optimal, the Q-values of P are given by qπ(st, at) = q̃π(st, at) − ψπ(st) and therefore P̃ and P have the same advantage function: Theorem 3 (Arjona-Medina et al. (2019)). If the reward redistribution is optimal, then the Q-values of the SDP P are qπ(st, at) = r(st, at) and\nqπ(st, at) = q̃ π(st, at) − Est−1,at−1 [q̃π(st−1, at−1) | st] = q̃π(st, at) − ψπ(st) . (23)\nThe SDP P and the original MDP P̃ have the same advantage function.\nProof. The proof can be found in (Arjona-Medina et al., 2019).\nFor an optimal reward redistribution only the expectation of the immediate reward r(st, at) = Eπ [Rt+1 | st, at] must be estimated. This considerably simplifies learning. Learning methods according to Arjona-Medina et al. (2019). The redistributed reward serves as reward for a subsequent learning method, which can be Type A, B, and C as described in ArjonaMedina et al. (2019). Type A methods estimate the Q-values. They can be estimated directly according to Eq. (23) assuming an optimal redistribution (Type A variant i). Q-values can be corrected for a non-optimal reward redistribution by additionally estimating κ (Type A variant ii). Q-value estimation can use eligibility traces (Type A variant iii). Type B methods use the redistributed rewards for policy gradients like Proximal Policy Optimization (PPO) Schulman et al. (2018). Type C methods use TD learning like Q-learning Watkins (1989), where immediate and future reward must be drawn together as typically done. For all these learning methods, demonstrations can be used for initialization (e.g. experience replay buffer) or pre-training (e.g. policy network with behavioral cloning). Recently, the convergence of RUDDER learning methods has been proven under commonly used assumptions (Holzleitner et al., 2020).\nNon-optimal reward redistribution and Align-RUDDER. According to Theorem 1, non-optimal reward redistributions do not change the optimal policies. The value κ(T − t− 1, t) measures the remaining delayed reward. The smaller κ is, the faster is the learning process. For Monte Carlo (MC) estimates, smaller κ reduces the variance of the future rewards, and, therefore the variance of the estimation. For temporal difference (TD) estimates, smaller κ reduces the amount of information that has to flow back. Align-RUDDER dramatically reduces the amount of delayed rewards by identifying key events via multiple sequence alignment, to which reward is redistributed. For an episodic MDP, a reward that is redistributed to time t reduces all κ(m, τ) with t 6 τ < T by the expectation of the reward. Therefore, in most cases Align-RUDDER makes κ-values much smaller." 
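As a compact illustration of the strict return equivalence used throughout this review (a sketch, not part of the original proofs), any redistribution of the form $R_{t+1} = g(\tau_t) - g(\tau_{t-1})$ with $g(\tau_{-1}) = 0$, together with the correction $R_{T+2}$, telescopes to the original return:

\begin{align*}
\sum_{t=0}^{T+1} R_{t+1}
  = \underbrace{\sum_{t=0}^{T} \big( g(\tau_t) - g(\tau_{t-1}) \big)}_{=\, g(\tau_T)}
  \;+\; \underbrace{R_{T+2}}_{=\, \tilde{G}_0 - g(\tau_T)}
  \;=\; \tilde{G}_0 .
\end{align*}

Hence $G_0 = \tilde{G}_0$ holds for every episode, independently of how well the alignment scores capture the strategy; only the speed of learning is affected.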
}, { "heading": "A.3 THE FIVE STEPS OF ALIGN-RUDDER’S REWARD REDISTRIBUTION", "text": "The new reward redistribution approach consists of five steps, see Fig. A.1: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model and the PSSM. (V) Redistribute the reward: Each sub-sequence τt of a new episode τ is aligned to the profile. The redistributed reward Rt+1 is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. Rt+1 ∝ S(τt)− S(τt−1).\n(I) Defining Events. Alignment techniques assume that sequences consist of few symbols, e.g. about 20 symbols, the events. It is crucial to keep the number of events small in order to increase the difference between a random alignment and an alignment of demonstrations. If there are many events, then two demonstrations might have few events that can be matched, which cannot be well distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978). The events can be the original state-action pairs, clusters thereof, or other representations of state-action pairs, e.g. indicating changes of inventory, health, energy, skills etc. In general, we define events as a cluster of states or state-actions. A sequence of events is obtained from a state-action sequence by substituting states or state-actions by their cluster identifier. In order to cluster states, a similarity measure between them is required. We suggest to use the “successor representation” (Dayan, 1993) of the states, which gives a similarity matrix based on how connected two states are given a policy. Successor representation have been used before (Machado et al., 2017; Ramesh et al., 2019) to obtain important events, for option learning. For computing the successor representation, we use the demonstrations combined with state-action sequences generated by a random policy. For high dimensional state spaces “successor features” (Barreto et al., 2017) can be used. We use similarity-based clustering methods like affinity propagation (AP) (Frey & Dueck, 2007). For AP the similarity matrix does not have to be symmetric and the number of clusters need not be known. State action pairs (s, a) are mapped to events e.\n(II) Determining the Alignment Scoring System. Alignment algorithms distinguish similar sequences from dissimilar sequences using a scoring system. A scoring matrix S has entries si,j that give the score for aligning event i with j. The MSA score SMSA of a multiple sequence alignment is the sum of all pairwise scores: SMSA = ∑ i,j,i<j ∑L t=0 sxi,t,xj,t , where xi,t means that event xi,t is at position t for sequence τi = ei,0:T in the alignment, analog for xj,t and the sequence τj = ej,0:T , andL is the alignment length. Note thatL ≥ T and xi,t 6= ei,t, since gaps are present in the alignment. In the alignment, events should have the same probability of being aligned as they would have if we know the strategy and align demonstrations accordingly. The theory of high scoring segments gives a scoring scheme with these alignment probabilities (Karlin & Altschul, 1990; Karlin et al., 1990; Altschul et al., 1990). Event i is observed with probability pi in the demonstrations, therefore a random alignment aligns event i with j with probability pipj . 
An alignment algorithm maximizes the MSA score $S_{\text{MSA}}$ and, thereby, aligns events $i$ and $j$ with probability $q_{ij}$ for demonstrations. High values of $q_{ij}$ mean that the MSA often aligns events $i$ and $j$ in the demonstrations using the scoring matrix $S$ with entries $s_{i,j}$. According to Theorem 2 and Equation [3] in Karlin & Altschul (1990), asymptotically with the sequence length, we have $s_{i,j} = \ln\left(q_{ij}/(p_i p_j)\right)/\lambda^*$, where $\lambda^*$ is the unique positive root of $\sum_{i=1}^{n} \sum_{j=1}^{n} p_i p_j \exp(\lambda s_{i,j}) = 1$ (Equation [4] in Karlin & Altschul (1990)). We can now choose a desired probability $q_{ij}$ and then compute the scoring matrix $S$ with entries $s_{i,j}$. High values of $q_{ij}$ should indicate events relevant for the strategy. A priori, we only know that a relevant event should be aligned to itself, while we do not know which events are relevant. Therefore we set $q_{ij}$ to large values for every $i = j$ and to low values for $i \neq j$. Concretely, we set $q_{ij} = p_i - \epsilon$ for $i = j$ and $q_{ij} = \epsilon/(n-1)$ for $i \neq j$, where $n$ is the number of different possible events. Events with smaller $p_i$ receive a higher score $s_{i,i}$ when aligned to themselves, since this self-match is less often observed when randomly matching events ($p_i p_i$ is the probability of a random self-match). Any prior knowledge about events should be incorporated into $q_{ij}$.

(III) Multiple sequence alignment (MSA). MSA first produces pairwise alignments between all demonstrations. Then, a guiding tree is produced via agglomerative hierarchical clustering of the sequences according to their pairwise alignment scores. Demonstrations which follow the same strategy appear in the same cluster in the guiding tree. Each cluster is aligned separately via MSA to address different strategies. However, if there is no cluster of demonstrations, then the alignment will fail. MSA methods like ClustalW (Thompson et al., 1994) or MUSCLE (Edgar, 2004) can be used.

(IV) Position-Specific Scoring Matrix (PSSM) and Profile. From the final alignment, we construct a) an MSA profile (column-wise event frequencies $q_{i,t}$) and b) a PSSM (Stormo et al., 1982), which is used for aligning new sequences to the profile of the MSA. To compute the PSSM (column-wise scores $s_{i,t}$), we apply Theorem 2 and Equation [3] in Karlin & Altschul (1990). Event $i$ is observed with probability $p_i$ in the data. For each position $t$ in the alignment, we compute $q_{i,t}$, which indicates the frequency of event $i$ at position $t$. The PSSM is $s_{i,t} = \ln(q_{i,t}/p_i)/\lambda_t^*$, where $\lambda_t^*$ is the unique positive root of $\sum_{i=1}^{n} p_i \exp(\lambda s_{i,t}) = 1$ (Equation [1] in Karlin & Altschul (1990)). If we align a new sequence that follows the underlying strategy (a new demonstration) to the profile model, we would see that event $i$ is aligned to position $t$ in the profile with probability $q_{i,t}$.

(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence $\tau = e_{0:T}$ ($e_t$ is the event at position $t$) is aligned to the profile, which gives the score $S(\tau) = \sum_{t=0}^{L} s_{x_t,t}$. Here, $s_{i,t}$ is the alignment score for event $i$ at position $t$, and $x_t$ is the event of $\tau$ at position $t$ in the alignment. $L$ is the profile length, where $L \ge T$ and $x_t \neq e_t$, because of gaps in the alignment. If $\tau_t = e_{0:t}$ is the prefix sequence of $\tau$ of length $t+1$, then the reward redistribution $R_{t+1}$ for $0 \le t \le T$ is

$$R_{t+1} = \left(S(\tau_t) - S(\tau_{t-1})\right) C = g((s,a)_{0:t}) - g((s,a)_{0:t-1}), \qquad R_{T+2} = \tilde{G}_0 - \sum_{t=0}^{T} R_{t+1}, \quad (24)$$

where $C = \mathrm{E}_{\text{demo}} [ \tilde{G}_0 ] / \mathrm{E}_{\text{demo}} [ \sum_{t=0}^{T} S(\tau_t) - S(\tau_{t-1}) ]$, $\tilde{G}_0 = \sum_{t=0}^{T} \tilde{R}_{t+1}$ is the original return of the sequence $\tau$, and $S(\tau_{-1}) = 0$. $\mathrm{E}_{\text{demo}}$ is the expectation over demonstrations, and $C$ scales $R_{t+1}$ to the range of $\tilde{G}_0$. $R_{T+2}$ is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: $\mathrm{E}_{\text{demo}} [R_{T+2}] = 0$. Since $\tau_t = e_{0:t}$ and $e_t = f(s_t, a_t)$, we can set $g((s,a)_{0:t}) = S(\tau_t) C$. We ensure strict return equivalence, since $G_0 = \sum_{t=0}^{T+1} R_{t+1} = \tilde{G}_0$. The redistributed reward depends only on the past, that is, $R_{t+1} = h((s,a)_{0:t})$. For computational efficiency, the alignment of $\tau_{t-1}$ can be extended to one for $\tau_t$, like exact matches are extended to high-scoring sequence pairs with the BLAST algorithm (Altschul et al., 1990; 1997).
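Steps (IV) and (V) can be sketched together as follows. This is a minimal illustration, not the original implementation: $\lambda_t^*$ is fixed to 1, which solves the root condition whenever the column frequencies $q_{\cdot,t}$ sum to one, and the alignment of prefixes to the profile is abstracted into a per-position score lookup (gaps are ignored, i.e., event $t$ of the sequence is assumed to align to profile position $t$).

import numpy as np

def pssm(column_freqs, background, pseudo=1e-6):
    # s[i, t] = ln(q[i, t] / p[i]); with columns of q summing to one,
    # lambda*_t = 1 solves sum_i p_i * exp(lambda * s[i, t]) = 1.
    q = np.asarray(column_freqs, dtype=float) + pseudo
    q = q / q.sum(axis=0, keepdims=True)
    p = np.asarray(background, dtype=float)[:, None]
    return np.log(q / p)

def redistribute(events, scores, C):
    # Eq. (24): rewards are scaled differences of prefix alignment scores.
    per_step = np.array([scores[e, t] for t, e in enumerate(events)])
    prefix = np.cumsum(per_step)                 # S(tau_t) for each t
    return np.diff(prefix, prepend=0.0) * C      # R_{t+1}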
Sub-tasks. The reward redistribution identifies sub-tasks, which are alignment positions with high redistributed reward. It also determines the terminal states and automatically assigns reward for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the reward is Markov. For redistributed reward that is Markov, the option framework (Sutton et al., 1999), the MAXQ framework (Dietterich, 2000), or recursive composition of option models (Silver & Ciosek, 2012) can be used as subsequent approaches to hierarchical reinforcement learning.

A.4 SEQUENCE ALIGNMENT

In bioinformatics, sequence alignment identifies regions of significant similarity among different biological sequences to establish evolutionary relationships between those sequences. In 1970, Needleman and Wunsch proposed a global alignment method based on dynamic programming (Needleman & Wunsch, 1970). This approach ensures the best possible alignment given a substitution matrix, such as PAM (Dayhoff, 1978) or BLOSUM (Henikoff & Henikoff, 1992), and other parameters to penalize gaps in the alignment. The method of Needleman and Wunsch is of O(mn) complexity both in memory and time, which can be prohibitive for long sequences like genomes. An optimization of this method by Hirschberg (1975) reduces memory to O(m + n) but still requires O(mn) time.

Later, Smith and Waterman developed a local alignment method for sequences (Smith & Waterman, 1981). It is a variation of Needleman and Wunsch's method, keeping the substitution matrix and the gap-scoring scheme but setting cells in the similarity matrix with negative scores to zero. The complexity of this algorithm is O(n²M). Osamu Gotoh published an optimization of this method, running in O(mn) time (Gotoh, 1982).

The main difference between both methods is the following:

• The global alignment method by Needleman and Wunsch aligns the sequences fixing the first and the last position of both sequences. It attempts to align every symbol in the sequence, allowing some gaps, but the main purpose is to get a global alignment. This is especially useful when the two sequences are highly similar. For instance:

ATCGGATCGACTGGCTAGATCATCGCTGG
CGAGCATC-ACTGTCT-GATCGACCTTAG
* *** **** ** **** * * *

• As an alternative to global methods, the local method of Smith and Waterman aligns the sequences with a higher degree of freedom, allowing the alignment to start or end with gaps. This is extremely useful when the two sequences are substantially dissimilar in general but suspected of having a highly related subregion.

ATCAAGGAGATCATCGCTGGACTGAGTGGCT----ACGTGGTATGT
ATC----CGATCATCGCTGG-CTGATCGACCTTCTACGT------
*** ************ **** * * ****

A.4.0.1 Multiple Sequence Alignment algorithms.
The sequence alignment algorithms by Needleman and Wunsch and Smith and Waterman are limited to aligning two sequences. The approaches for generalizing these algorithms to multiple sequences can be classified into four categories:\n• Exact methods (Wang & Jiang, 1994). • Progressive methods: ClustalW (Thompson et al., 1994), Clustal Omega (Sievers et al.,\n2014), T-Coffee (Notredame et al., 2000). • Iterative and search algorithms: DIALIGN (Morgenstern, 2004), MultiAlign (Corpet, 1988). • Local methods: eMOTIF (Mccammon & Wolynes, 1998), PROSITE (Bairoch & Bucher,\n1994).\nFor more details, visit Sequence Comparison: Theory and methods (Chao & Zhang, 2009).\nIn our experiments, we use ClustalW from Biopython (Cock et al., 2009) with the following parameters:\nclustalw2 -ALIGN -CLUSTERING=UPGMA -NEGATIVE \" \\ \"-INFILE={infile} -OUTFILE={outfile} \" \\ \"-PWMATRIX={scores} -PWGAPOPEN=0 -PWGAPEXT=0 \" \\ \"-MATRIX={scores} -GAPOPEN=0 -GAPEXT=0 -CASE=UPPER \" \\ \"-NOPGAP -NOHGAP -MAXDIV=0 -ENDGAPS -NOVGAP \" \\ \"-NEWTREE={outputtree} -TYPE=PROTEIN -OUTPUT=GDE\nwhere the PWMATRIX and MATRIX are computed according to step (II) in Sec. 3 of the main paper." }, { "heading": "A.5 EXTENDED RELATED WORK", "text": "Align-RUDDER allows to identify sub-goals and sub-tasks, therefore it is related to hierarchical reinforcement learning (HRL) approaches like the option framework (Sutton et al., 1999), the MAXQ framework (Dietterich, 2000), or the recursive composition of option models (Silver & Ciosek, 2012). However, these methods do not address the problem of finding good options, good sub-goals, or good sub-tasks. Methods to learn good options have been proposed. Frequently observed states in solutions are chosen as targets (Stolle & Precup, 2002). Gradient-based approaches improving the termination function for options (Comanici & Precup, 2010; Mankowitz et al., 2016). Policy gradient optimized a unified policy consisting of intra-option policies, option termination conditions, and an option selection policy (Levy & Shimkin, 2012). Parametrized options are learned by treating the termination functions as hidden variables and using expectation maximization (Daniel et al., 2016). Intrinsic rewards are used to learn the policies within options, and extrinsic rewards to learn the policy over options (Kulkarni et al., 2016). Options have been jointly learned with an associated policy using the policy gradient theorem for options (Bacon et al., 2017). A slow time-scale manager module learns sub-goals that are achieved by fast time-scale worker (Vezhnevets et al., 2017).\nNext, we relate Align-RUDDER to imitation learning and trajectory matching. Imitation learning aims at learning a behavior close to the data generating policy by matching the trajectories of single demonstrations. In contrast, Align-RUDDER does not try to match single trajectories but identifies relevant events that are shared among successful demonstrations. In complex tasks like MineCraft trajectory matching fails, since large state spaces do not allow to match one of the few demonstrations. However, relevant events can still be matched as they appear in most demonstrations, therefore Align-RUDDER excels in such complex tasks." }, { "heading": "A.6 ARTIFICIAL TASK EXPERIMENTS", "text": "This section provides additional information that supports the results reported in the main paper for Artificial Tasks (I) and (II)." 
A.6.1 HYPERPARAMETER SELECTION

For (BC)+Q-Learning and Align-RUDDER, we performed a grid search to select the learning rate from the following values: [0.1, 0.05, 0.01]. We used 20 different seeds for each value and each number of demonstrations and then selected the setting with the highest success across all numbers of demonstrations. The final learning rate for (BC)+Q-Learning and DQfD is 0.01, and for Align-RUDDER it is 0.1.

For DQfD, we set the experience buffer size to 30,000 and the number of experiences sampled at every timestep to 10. The DQfD loss weights are set to 0.01, 0.01 and 1.0 for the Q-learning loss term, the n-step loss term and the expert loss, respectively, during pre-training. During online learning, we change the loss weights to 1.0, 1.0 and 0.01 for the Q-learning loss term, the n-step loss term and the expert loss term, respectively. This was necessary to enable faster learning for DQfD. The expert action margin is 0.8.

For the successor representation, we use a learning rate of 0.1 and a gamma of 0.99. We update the successor table multiple times using the same transitions (state, action, next state) from the demonstrations.

For affinity propagation, we use a damping factor of 0.5 and set the maximum number of iterations to 1000. Furthermore, if we obtain more than 15 clusters, we combine clusters based on the similarity of the cluster centers.

A.6.2 FIGURES

Figure A.5 shows sample trajectories in the FourRooms and EightRooms environments, with the initial and target positions marked in red and green, respectively. Figure A.2 shows the clusters after performing clustering with Affinity Propagation using the successor representation, with 25 demonstrations and an environment with 1% stochasticity on the transitions. Different colors indicate different clusters. Figures A.3 and A.4 show clusters for different environment settings: Figure A.3 shows clusters when using 10 demonstrations, and for Figure A.4, environments with 5% stochasticity on the transitions were used. Figure A.6 shows the reward redistribution for the given example trajectories in the FourRooms and EightRooms environments.

A.6.3 ARTIFICIAL TASK P-VALUES

Tables A.1 and A.2 show the p-values obtained by performing a Mann-Whitney-U test between Align-RUDDER and BC+Q-Learning and DQfD, respectively.

A.6.4 STOCHASTIC ENVIRONMENTS

Figure A.7 shows results for the FourRooms environment with different levels of stochasticity (5%, 10%, 15%, 25% and 40%) on the transitions. Figure A.8 shows results for the EightRooms environment with different levels of stochasticity (5% and 10%) on the transitions.

A.6.5 CHANGING NUMBER OF CLUSTERS

We use Affinity Propagation for clustering and do not set the number of clusters directly. However, we set the maximum number of clusters allowed. If Affinity Propagation results in more clusters, they are combined and reduced to the maximum number allowed. This is necessary due to the limitations of the underlying alignment library we are using. For the experiments on FourRooms and EightRooms in the main paper, we fix the maximum number of clusters to 15.

We conduct an experimental study on how changing the maximum number of clusters affects the performance of Align-RUDDER on the FourRooms environment. The results are in Table A.3.

A.6.6 KEY-EVENT DETECTION

1D key-chest environment.
We use a 1D key-chest environment to show the effectiveness of sequence alignment in a low data regime compared to an LSTM model. The agent has to collect the key and then open the chest, to get a positive reward at the last timestep. See Appendix Fig. A.9 for a schematic representation of the environment. As the key-events (important state-action pairs) in this environment are known we can compute the key-event detection rate of a reward redistribution model. A key event is detected if the redistributed reward of an important state-action pair is larger than the average redistributed reward in the sequence. We train the reward redistribution models with 2, 5 and 10 training episodes and test on 1000 test episodes, averaged over 10 trials. Align-RUDDER significantly outperforms LSTM (RUDDER) for detecting these key events in all cases, with an average key-event detection rate of 0.96 for sequence alignment vs. 0.46 for the LSTM models over all dataset sizes. See Appendix Fig. A.10 for the detailed results." }, { "heading": "A.7 MINECRAFT EXPERIMENTS", "text": "In this section we explain in detail the implementation of Align-RUDDER for solving the task ObtainDiamond." }, { "heading": "A.7.1 MINECRAFT", "text": "We show that our approach can be applied to complex tasks by evaluating it on the MineRL Minecraft dataset (Guss et al., 2019b). This dataset provides a large collection of demonstrations from human players solving six different tasks in the sandbox game MineCraft. In addition to the human demonstrations the MineRL dataset also provides an OpenAI-Gym wrapper for MineCraft. The dataset includes demonstrations for the following tasks:\n• navigating to target location following a compass, • collecting wood by chopping trees, • obtaining an item by collecting resources and crafting, and • free play \"survival\" where the player is free to choose his own goal.\nThe demonstrations include the video showing the players’ view (without user interface), the players’ inventory at every time step and the actions performed by the player. We focus on the third task of obtaining a target item, namely a diamond. This task is very challenging as it is necessary to obtain several different resources and tools and has been the focus of a challenge (Guss et al., 2019a) at NeurIPS’19. By the end of this challenge no entry was able to obtain the diamond.\nWe show that our method is well suited for solving the task of obtaining the diamond, which can be decomposed into sub-tasks by reward redistribution after aligning successful demonstrations." }, { "heading": "A.7.2 RELATED WORK AND STEPS TOWARDS A GENERAL AGENT", "text": "In the following, we review two approaches Skrynnik et al. (2019); Scheller et al. (2020) where more details are available and compare them with our approach.\nSkrynnik et al. (2019) address the problem with a TD based hierarchical DeepQ-Network (DQN) and by utilizing the hierarchical structure of expert trajectories by extracting sequences of meta-actions and sub-goals. This approach allowed them to achieve the 1st place in the official NeurIPS’19 MineRL challenge (Skrynnik et al., 2019). In terms of pre-processing, our approaches have in common that both rely on frame skipping and action space discretization. However, they reduce the action space to ten distinct joint environment actions (e.g. move camera & attack) and treat inventory actions separately by executing a sequence of semantic actions. 
We aim at taking a next step towards a more general agent by introducing an action space preserving the agent’s full freedom of action in\nthe environment (more details are provided below). This allows us to avoid the distinction between item (environment) and semantic (inventory) agents and to train identically structured agents in the same fashion regardless of facing a mining, crafting, placing or smelting sub-task. Skrynnik et al. (2019) extract a sub-task chain by separately examining each expert trajectory and by considering the time of appearance of items in the inventory in chronological order. For agent training their approach follows a heuristic where they distinguish between collecting the item log and all remaining items. The log-agent is trained by starting with the TreeChop expert trajectories and then gradually injecting trajectories collected from interactions with the environment into the DQN’s replay buffer. For the remaining items they rely on the expert data of ObtainIronPickaxeDense and imitation learning. Given our proposed sequence alignment and reward redistribution methodology we are able to avoid this shift in training paradigm and to leverage all available training data (ObtainDiamond, ObtainIronPickaxe and TreeChop) at the same time. In short, we collect all expert trajectories in one pool, perform sequence alignment yielding a common diamond consensus along with the corresponding reward redistribution and the respective sub-task sequences. Given this restructuring of the problem into local sub-problems with redistributed reward all sub-task agents are then trained in the same fashion (e.g. imitation learning followed by RL-based fine-tuning). Reward redistribution guarantees that the optimal policies are preserved (Arjona-Medina et al., 2019).\nScheller et al. (2020) achieved the 3rd place on the official leader board following a different line of research and addressed the problem with a single end-to-end off-policy IMPALA (Espeholt et al., 2018) actor-critic agent, again utilizing experience replay to incorporate the expert data (Scheller et al., 2020). To prevent catastrophic forgetting of the behavior for later, less frequent sub-tasks they introduce value clipping and apply CLEAR (Rolnick et al., 2019) to both, policy and value networks. Treating the entire problem as a whole is already the main distinguishable feature compared to our method. To deal with long trajectories they rely on a trainable special form of frame skipping where the agent also has to predict how many frames to skip in each situation. This helps to reduce the effective length (step count) of the respective expert trajectories. In contrast to the approach of (Scheller et al., 2020) we rely on a constant frame skip irrespective of the states and actions we are facing. Finally, there are also several common features including:\n1. a supervised BC pre-training stage prior to RL fine-tuning, 2. separate networks for policy and value function, 3. independent action heads on top of a sub-sequence LSTM, 4. presenting the inventory state in a certain form to the agent and 5. applying a value-function-burn-in prior to RL fine-tuning." }, { "heading": "A.7.3 THE FIVE STEPS OF ALIGN-RUDDER DEMONSTRATED ON MINECRAFT", "text": "In this subsection, we give an example of the five steps of Align-RUDDER using demonstrations from the MineRL ObtainDiamond task. 
Figures A.11 to A.15 illustrate these steps.

A.7.4 IMPLEMENTATION OF OUR ALGORITHM FOR MINECRAFT

The architecture of the training pipeline incorporates three learning stages:

• sequence alignment and reward redistribution,

• learning from demonstrations via behavioral cloning (pre-training), and

• model fine-tuning with reinforcement learning.

Figure A.16 shows a conceptual overview of all components.

Sequence alignment and reward redistribution. First, we extract the sequence of states from the human demonstrations, turn the images into feature vectors using a standard pre-trained network, and transform them into a sequence of consecutive state deltas (concatenating image feature vectors and inventory states). A pre-trained network can be a model trained for image classification or an auto-encoder trained on images. In our case, we used an auto-encoder model trained on the MineRL ObtainDiamond dataset. We cluster the resulting state deltas, remove clusters with a large number of members, and merge smaller clusters. This results in 19 events, and we map the demonstrations to sequences of events. These events correspond to inventory changes. For each human demonstration, we thus obtain a sequence of events, which we map to letters from the amino acid code, resulting in a sequence of letters. In Fig. A.19 we show all events with their assigned letter encoding that we defined for the Minecraft environment.

We then calculate a scoring matrix according to step (II) in Sec. 3 of the main document. Next, we perform multiple sequence alignment to align the event sequences of the top N demonstrations, where shorter demonstrations are ranked higher. This results in a sequence of common events, which we denote as the consensus. In order to redistribute the reward, we use the PSSM model and assign the respective reward. Reward redistribution enables the definition of sub-goals, i.e. positions where the redistributed reward is larger than a threshold or takes a certain value. In our implementation, sub-goals are obtained by applying a threshold to the reward redistribution (a minimal sketch of this step is given below). The main agent is initialized by executing sub-agents according to the alignment. Figure 4 shows how sub-goals are identified using reward redistribution.
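A possible implementation of this thresholding step, with names of our choosing, could look as follows:

```python
import numpy as np

def extract_subgoals(consensus, redistributed, threshold):
    """Positions of the consensus whose redistributed reward exceeds a
    threshold become sub-goals; `consensus` is the aligned event sequence
    and `redistributed` the per-position redistributed reward."""
    reward = np.asarray(redistributed)
    positions = np.flatnonzero(reward > threshold)
    return [(int(i), consensus[i]) for i in positions]
```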
Learning from demonstrations via behavioral cloning. We extract demonstrations for each individual sub-task in the form of sub-sequences taken from all demonstrations. For each sub-task, we train an individual sub-agent via behavioral cloning.

Model fine-tuning with reinforcement learning. We fine-tune the agent in the environment using PPO (Schulman et al., 2018). During fine-tuning with PPO, an agent receives reward if it manages to reach its sub-goal.

To evaluate the performance of an agent for its current sub-goal, we average the return over multiple roll-outs. This gives us a good estimate of the success rate and of whether trained models have improved during fine-tuning. In Fig. 6, we plot the overall success rate of all models evaluated sequentially from start to end.

In order to shorten the training time of our agent, we use trajectory replay and state resetting, similar to the idea proposed by Ecoffet et al. (2019), allowing us to train sub-task agents in parallel. This is not necessary for the behavioral cloning stage, since there we can independently train agents according to the extracted sub-goals. However, fine-tuning a sub-task agent with reinforcement learning requires agents for all previous sub-tasks. To fine-tune agents for all sub-tasks, we record successful experiences (states, actions, rewards) for earlier goals and use them to reset the environment so that a subsequent agent can start its training there. In Fig. A.20, we illustrate a trajectory replay given by an exemplary consensus." }, { "heading": "A.7.5 POLICY AND VALUE NETWORK ARCHITECTURE", "text": "Figure A.17 shows a conceptual overview of the policy and value networks used in our MineRL experiments. The networks are structured as two separate convolutional encoders with an LSTM layer before the respective output layer, without sharing any model parameters.

The input to the model is the sequence of the 32 most recent frames, which are pre-processed in the following way: first, we add the currently equipped item as a color-coded border around each RGB frame. Next, the frames are augmented with an inventory status bar representing all 18 available inventory items (each inventory item is drawn onto the frame as an item-square of 3 × 3 pixels). Depending on the item count, the respective square is drawn with a linearly interpolated gray-scale ranging from white (no item at all) to black (item count > 95). The count of 95 is the 0.75-quantile of the total amount of collected cobblestones and dirt derived from the inventory of all expert trajectories. Intuitively, this count should be related to the depth (level) at which an agent currently is, or at least has been, throughout the episode. In the last pre-processing step, the frames are resized from 64 × 64 to 48 × 48 pixels and divided by 255, resulting in an input value range between zero and one.

The first network stage consists of four batch-normalized convolution layers with ReLU activation functions. The layers are structured as follows: Conv-Layer-1 (16 feature maps, kernel size 4, stride 2, zero padding 1), Conv-Layer-2 (32 feature maps, kernel size 4, stride 2, zero padding 1), Conv-Layer-3 (64 feature maps, kernel size 3, stride 2), and Conv-Layer-4 (32 feature maps, kernel size 3, stride 2). The flattened latent representation (∈ R^{32×288}) of the convolution stage is further processed with an LSTM layer with 256 units. Given this recurrent representation, we only keep the last time step (i.e. the prediction for the most recent frame).

The value head is a single linear layer without non-linearity, predicting the state-value for the most recent state. For action prediction, two types of output heads are used, depending on the underlying action distribution. The binary action head represents the actions attack, back, forward, jump, left, right, sneak and sprint, which can be executed concurrently and are therefore modeled based on a Bernoulli distribution. Since only one item can be equipped, placed, or crafted at a time, these actions are modeled with a categorical distribution. The equip head selects from none, air, wooden-axe, wooden-pickaxe, stone-axe, stone-pickaxe, iron-axe and iron-pickaxe. The place head selects from none, dirt, stone, cobblestone, crafting-table, furnace and torch. The craft head selects from none, torch, stick, planks and crafting-table. Items which have to be crafted nearby are none, wooden-axe, wooden-pickaxe, stone-axe, stone-pickaxe, iron-axe, iron-pickaxe and furnace. Finally, items which are smelted nearby are none, iron-ingot and coal. For predicting the camera angles (up/down as well as left/right), we introduce a custom action space outlined in Figure A.18. This space discretizes the possible camera angles into 11 distinct bins for both orientations, leading to the 22 output neurons of the camera action head. Each bin holds the probability of sampling the corresponding angle as a camera action. Since in most cases the true camera angle lies between two such bins, we share the bin selection probability between the two neighboring bins by linear interpolation with respect to the distance of their bin centers to the true camera angle. This way, we are able to train the model with standard categorical cross-entropy during behavioral cloning and to sample actions from this categorical distribution during exploration and agent deployment.
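As an illustration, the soft-binning targets for this cross-entropy loss could be built as in the following sketch; the exact bin-center spacing is our assumption, since it is only given in Figure A.18:

```python
import numpy as np

def soft_bin_target(angle, centers):
    """Distribute probability mass over the two bin centers adjacent to
    `angle` by linear interpolation; `centers` is the sorted array of the
    11 bin centers of one camera orientation. The result is used as a
    soft label for categorical cross-entropy."""
    target = np.zeros(len(centers))
    angle = np.clip(angle, centers[0], centers[-1])
    j = np.searchsorted(centers, angle)      # right neighbor index
    if j == 0 or centers[j] == angle:
        target[j] = 1.0                      # angle hits a bin center
    else:
        i = j - 1                            # left neighbor index
        w = (angle - centers[i]) / (centers[j] - centers[i])
        target[i], target[j] = 1.0 - w, w
    return target
```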
A.7.6 IMITATION LEARNING OF SUB-TASK AGENTS

Given the sub-sequences of expert data separated by task and the network architectures described above, we perform imitation learning via behavioral cloning (BC) on the expert demonstrations. All sub-task policy networks are trained with a cross-entropy loss on the respective action distributions using stochastic gradient descent with a learning rate of 0.01 and a momentum of 0.9. Mini-batches of size 256 are sampled uniformly from the set of sub-task sequences. As we have the MineRL simulator available during training, we are able to include all sub-sequences in the training set and evaluate the performance of the model by deploying it in the environment every 10 training epochs. Once training over 300 epochs is complete, we select the model checkpoint based on the total count of collected target items over 12 evaluation trials per checkpoint. Since the demonstrations contain only successful sequences, the separate value network is not pre-trained with BC." }, { "heading": "A.7.7 REINFORCEMENT LEARNING ON SUB-TASK AGENTS", "text": "After pre-training the sub-task agents, we further fine-tune them using PPO in the MineRL environment. The reward is the redistributed reward given by Align-RUDDER. The value function is initialized in a burn-in stage prior to policy improvement, where the agent interacts with the environment for 50k timesteps and only updates the value function. Finally, both the policy and the value function are trained jointly for all sub-tasks. All agents are trained between 50k and 500k timesteps. We evaluate each agent periodically during training and, in the end, select the best-performing agent per sub-task. Figures A.21 to A.25 present evaluation curves of some sub-task agents during learning from demonstrations using behavioral cloning and learning online using PPO." }, { "heading": "A.8 REPRODUCING THE ARTIFICIAL TASK RESULTS", "text": "The code to reproduce the results and figures of both artificial tasks is provided as supplementary material. The README contains step-by-step instructions to set up an environment and run the experiments. By default, instead of using 100 seeds per experiment, only 10 are used in the demonstration code.

Finally, a video showcasing the Minecraft agent is also provided as supplementary material." }, { "heading": "A.9 SOFTWARE LIBRARIES", "text": "We are thankful to the developers of Mazelab Zuo (2018), PyTorch Paszke et al. (2019), OpenAI Gym Brockman et al. (2016), Numpy Harris et al. (2020), Matplotlib Hunter (2007) and Minecraft Guss et al. (2019b)." }, { "heading": "A.10 COMPUTE", "text": "Experiments for artificial tasks (I) and (II) were performed using CPUs only, as the GPU speed-up was negligible. The final results for all methods were created on an internal CPU cluster with 128 CPU cores, with a measured wall-clock time of 10,360 hours.
The majority of compute was spent on the baseline methods.

For Minecraft, during development, 6 to 8 nodes of an internal GPU cluster, each with 4 GPUs (Nvidia Titan V and 2080 Ti), were used for roughly six months of GPU compute time.

The compute required for training the final agent was well within the challenge parameters (4 days on a single node with one GPU)." } ]
2021
ALIGN-RUDDER: LEARNING FROM FEW DEMONSTRATIONS BY REWARD REDISTRIBUTION
SP:233335a3dc327cf153bd2e8d35a9e4594cf5bc67
[ "This paper proposes a novel approach to modeling uncertainty, as an layer added-on to an otherwise black-box system. The ChePAN uses a neural network to estimate per-quantile roots of a chebyshev polynomial, then uses a quantile regression loss to fit these coefficients using backpropagation. Importantly, the Chebyshev polynomial formulation gives the practitioner some flexibility around deciding which statistic of the conditional CDF should be matched by the black box system. Examples are given for matching the 0 and 1 quantiles, for cases of min and max estimation, as well as matching either of median or mean." ]
Most predictive systems currently in use do not report any useful information for auditing their associated uncertainty and evaluating the corresponding risk. Taking it for granted that their replacement may not be advisable in the short term, in this paper we propose a novel approach to modelling confidence in such systems while preserving their predictions. The method is based on the Chebyshev Polynomial Approximation Network (the ChePAN), a new way of modelling aleatoric uncertainty in a regression scenario. In the case addressed here, uncertainty is modelled by building conditional quantiles on top of the original pointwise forecasting system considered as a black box, i.e. without making assumptions about its internal structure. Furthermore, the ChePAN allows users to consistently choose how to constrain any predicted quantile with respect to the original forecaster. Experiments show that the proposed method scales to large size data sets and transfers the advantages of quantile regression to estimating black-box uncertainty.
[]
[ { "authors": [ "M. Abadi", "P. Barham", "J. Chen", "Z. Chen", "A. Davis", "J. Dean", "M. Devin", "S. Ghemawat", "G. Irving", "M. Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th tUSENIXu Symposium on Operating Systems Design and Implementation (tOSDIu", "year": 2016 }, { "authors": [ "Christopher M Bishop" ], "title": "Mixture density networks", "venue": null, "year": 1994 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "Axel Brando", "Jose A Rodriguez-Serrano", "Jordi Vitria", "Alberto Rubio Muñoz" ], "title": "Modelling heterogeneous distributions with an uncountable mixture of asymmetric laplacians", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "Xgboost: A scalable tree boosting system", "venue": "In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "C.W. Clenshaw" ], "title": "A note on the summation of Chebyshev series", "venue": "Math. Tables Aids Comput.,", "year": 1955 }, { "authors": [ "Murray Cox" ], "title": "Inside airbnb: adding data to the debate", "venue": "Inside Airbnb [Internet].[cited", "year": 2019 }, { "authors": [ "Ward Cunningham" ], "title": "The wycash portfolio management system", "venue": "ACM SIGPLAN OOPS Messenger,", "year": 1992 }, { "authors": [ "W. Dabney", "G. Ostrovski", "D. Silver", "R. Munos" ], "title": "Implicit quantile networks for distributional reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "W. Dabney", "M. Rowland", "M.G. Bellemare", "R. Munos" ], "title": "Distributional reinforcement learning with quantile regression", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Germund Dahlquist", "Å ke Björck" ], "title": "Numerical methods in scientific computing", "venue": "Vol. I. Society for Industrial and Applied Mathematics (SIAM), Philadelphia,", "year": 2008 }, { "authors": [ "Armen Der Kiureghian", "Ove Ditlevsen" ], "title": "Aleatory or epistemic? does it matter", "venue": "Structural safety,", "year": 2009 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017a. URL http://archive. ics.uci.edu/ml", "venue": null, "year": 2017 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017b. URL http://archive. ics.uci.edu/ml", "venue": null, "year": 2017 }, { "authors": [ "D. Elliott" ], "title": "Error analysis of an algorithm for summing certain finite series", "venue": "J. Austral. Math. Soc.,", "year": 1968 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": null, "year": 2015 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "J.M. Hernández-Lobato", "R. 
Adams" ], "title": "Probabilistic backpropagation for scalable learning of bayesian neural networks", "venue": "In ICML, pp", "year": 2015 }, { "authors": [ "José Miguel Hernández-Lobato", "Ryan Adams" ], "title": "Probabilistic backpropagation for scalable learning of bayesian neural networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Andy Liaw", "Matthew Wiener" ], "title": "Classification and regression by randomforest", "venue": "R news,", "year": 2002 }, { "authors": [ "H. Majidian" ], "title": "On the decay rate of Chebyshev coefficients", "venue": "Appl. Numer. Math.,", "year": 2017 }, { "authors": [ "A.C.R. Newbery" ], "title": "Error analysis for polynomial evaluation", "venue": "Math. Comp.,", "year": 1974 }, { "authors": [ "Carl Edward Rasmussen" ], "title": "A practical monte carlo implementation of bayesian learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 1996 }, { "authors": [ "Cynthia Rudin" ], "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", "venue": "Nature Machine Intelligence,", "year": 2019 }, { "authors": [ "David Sculley", "Gary Holt", "Daniel Golovin", "Eugene Davydov", "Todd Phillips", "Dietmar Ebner", "Vinay Chaudhary", "Michael Young", "Jean-Francois Crespo", "Dan Dennison" ], "title": "Hidden technical debt in machine learning systems", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "N. Tagasovska", "D. Lopez-Paz" ], "title": "Frequentist uncertainty estimates for deep learning", "venue": "Bayesian Deep Learning workshop NeurIPS,", "year": 2018 }, { "authors": [ "Natasa Tagasovska", "David Lopez-Paz" ], "title": "Single-model uncertainties for deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mattias Teye", "Hossein Azizpour", "Kevin Smith" ], "title": "Bayesian uncertainty estimation for batch normalized deep networks", "venue": "arXiv preprint arXiv:1802.06455,", "year": 2018 }, { "authors": [ "L.N. Trefethen" ], "title": "Is Gauss quadrature better than Clenshaw-Curtis", "venue": "SIAM Rev.,", "year": 2008 }, { "authors": [ "Berk Ustun", "Cynthia Rudin" ], "title": "Supersparse linear integer models for optimized medical scoring systems", "venue": "Machine Learning,", "year": 2016 }, { "authors": [ "A. Wehenkel", "G. Louppe" ], "title": "Unconstrained monotonic neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "R. Wen", "K. Torkkola", "B. Narayanaswamy", "D. Madeka" ], "title": "A multi-horizon quantile recurrent forecaster", "venue": "arXiv preprint arXiv:1711.11053,", "year": 2017 }, { "authors": [ "H. Zheng", "Z. Yang", "W. Liu", "J Liang", "Y. 
Li" ], "title": "Improving deep neural networks using softplus units", "venue": "In 2015 International Joint Conference on Neural Networks (IJCNN),", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The present paper proposes a novel method for adding aleatoric uncertainty estimation to any pointwise predictive system currently in use. Considering the system as a black box, i.e. avoiding any hypothesis about the internal structure of the system, the method offers a solution to the technical debt debate. The concept of technical debt was introduced in 1992 to initiate a debate on the long-term costs incurred when moving quickly in software engineering (Sculley et al. (2015); Cunningham (1992)). Specifically, most of the predictive systems currently in use have previously required much effort in terms of code development, documentation writing, unit test implementation, preparing dependencies or even their compliance with the appropriate regulations (e.g., medical (Ustun & Rudin (2016)) or financial models (Rudin (2019)) may have to satisfy interpretability constraints). However, once the system is being used with real-world problems, a new requirement can arise regarding the confidence of its predictions when the cost of an erroneous prediction is high. That being said, replacing the currently-in-use system may not be advisable in the short term. To address this issue,\nk“0 the coefficients of the\nintegrated polynomial, β the black box function and P the conditional prediction of the quantile τ .\nthe aim of this work is to report any information that is useful for auditing the system’s associated uncertainty without modifying its predictions.\nIn general terms, sources of uncertainty can be understood by analysing the conditional members of this joint distribution: ppy,xq “ ş\nM ppy | x,MqppM | xqppxq dM where M P M is the family (assumed non-finite) of models being considered.\nNot all methods developed to model uncertainty can be applied in the black-box scenario, since the main hypothesis is that the black box is a fixed single model and unknown internally. Here, we refer specifically to those solutions that model epistemic uncertainty, which requires modelling ppM | xq. By epistemic, we mean that uncertainty which can derive from ignorance about the model, including, for example, ensemble models (Lakshminarayanan et al. (2017)), Bayesian neural networks (Rasmussen (1996); Blundell et al. (2015); Hernández-Lobato & Adams (2015b); Teye et al. (2018)) or MC-Dropout (Gal & Ghahramani (2016)).\nHowever, the black box could be a non-parametric predictive system or even a handcrafted rulebased system, as shown in Figure 1. Hence the reason for studying aleatoric uncertainty (Der Kiureghian & Ditlevsen (2009); Kendall & Gal (2017); Brando et al. (2019)), which originates from the variability of possible correct answers given the same input data, ppy | xq. This type of uncertainty can be tackled by modelling the response variable distribution. For instance, imposing a conditional normal distribution where the location parameter is the black-box function and the corresponding scale parameter is learnt. However, the more restricted the assumptions made about this distribution, the more difficult it will be to model heterogeneous distributions. One solution to this limitation is the type of regression analysis used in statistics and econometrics known as Quantile Regression (QR), which will provide a more comprehensive estimation.\nUnlike classic regression methods, which only estimate a selected statistic such as the mean or the median, QR allows us to approximate any desired quantile. 
The main advantage of this method is that it allows confidence intervals to be captured without having to make strong assumptions about the distribution function to be approximated.

Recently, several works (Dabney et al. (2018a); Tagasovska & Lopez-Paz (2018); Brando et al. (2019)) have proposed a single deep learning model that implicitly learns all the quantiles at the same time, i.e. the model can be evaluated for any real value τ ∈ [0, 1] to give a pointwise estimation of any quantile value of the response variable. Nevertheless, these QR solutions are not directly applicable to the uncertainty modelling of a black box, because the predicted quantiles need to be linked to the black-box prediction in some way.

In the present paper, we propose a novel method for QR based on estimating the derivative of the final function using a Chebyshev polynomial approximation to model the uncertainty of a black-box system. Specifically, this method disentangles the estimation of a selected statistic β of the distribution p(y | x) from the estimation of the quantiles of p(y | x) (shown in Figure 2). Hence, our method is not restricted to scenarios where we can jointly train both estimators, but can also be applied to pre-existing regression systems as a wrapper that produces the necessary information to evaluate aleatoric uncertainty. Additionally, the proposed method scales to several real-world data sets.

This paper is organised as follows. Section 2 states the real-world motivation of the current research, as well as the contribution to be presented. Section 3 introduces the problem of QR and reviews the classic approach used with neural networks, showing how it cannot be applied directly to constrained black-box uncertainty modelling. Section 4 explores an approach for modelling the derivative of a function using neural networks. The two previous sections provide the baseline for developing our proposed model and its properties, which is presented in Section 5. Finally, in Section 6, we show how our model can be applied to large data sets and how it defines a new way of modelling the aleatoric uncertainty of a black box. The results are then summarised in the conclusion." }, { "heading": "2 RESEARCH GOAL AND CONTRIBUTION", "text": "The present article was motivated by a real-world need that appears in a pointwise regression forecasting system of a large company. Due to the risky nature of the internal problem where it is applied, uncertainty modelling is important. However, similarly to the medical or financial cases presented in the introduction, interpretability requirements were essential in defining the model currently used by the company, which does not report confidence for any prediction made. The need for this research arises in cases where the replacement of the aforementioned system is not advisable in the short term, despite the ongoing need for uncertainty estimation of that system.

Definition of constrained black-box uncertainty modelling

From the probabilistic perspective, solving a regression problem involves determining a conditional density model, q(y | x). This model fits an observed set of samples D = (X, Y) = {(x_i, y_i) | x_i ∈ R^D, y_i ∈ R}_{i=1}^{n}, which we assume to be sampled from an unknown distribution p(y | x), i.e. the real data. Given this context, the pointwise forecasting system mentioned above is a function, β : R^D → R, which tries to approximate a certain conditional summary statistic (a percentile or moment) of p(y | x).
Regarding the notation, we will call the "constraint" the known or assumed summary statistic that is approximated by β(x) (e.g. if β reduces the mean square error, then it corresponds to the conditional mean; if it minimises the mean absolute error, it corresponds to the median).

Importantly, in the constrained black-box uncertainty modelling context, the mismatch between the real conditional statistic and the black box, β, becomes a new source of aleatoric uncertainty that is different from the one derived from the data. However, the way to model it continues to be by estimating p(y | x). Therefore, a poorly estimated β will impact the modelling of p(y | x), given that we always force the constraint to be satisfied (as shown in Figure 3 of the Experiment section).

So far, we have attempted to highlight the fact that we do not have a strong hypothesis about the internals of this β function; we have only assumed that it approximates a certain statistic of p(y | x). Accordingly, we call this function the "constrained black box". This flexible assumption will enable us to consider several pointwise models as β, as shown in Figure 1.

The overall goal of the present article is, taking a pre-defined black box β(x) that estimates a certain conditional summary statistic of p(y | x), to model q(y | x) under the constraint that, if we calculate the summary statistic of this predicted conditional distribution, it will correspond to β(x). As mentioned in the Introduction, since we have a fixed black box, we are unable to apply Bayesian techniques such as those that infer the distribution of parameters within the model, p(M | x). In general, even though they are very common techniques in generic uncertainty modelling, no such epistemic uncertainty techniques can be applied in this context, due to the limitation of only having a single fixed model.

In addition, it should be noted that not all models that estimate p(y | x) can be used in the constrained black-box uncertainty modelling context. To solve this problem, we require models that predict q(y | x) but also force the chosen conditional summary statistic of q(y | x) to have the same value as β(x). The main contribution of this work is to present a new approach that allows us not only to outperform other baseline models when tackling this problem, but also to decide which kind of constraint we wish to impose between β(x) and q(y | x). The distribution q(y | x) will be approximated using Quantile Regression (explained in Section 3), and the constraint will be created considering the integration constant of the q(y | x) derivative (shown in Section 5.1)." }, { "heading": "3 CONDITIONAL QUANTILE REGRESSION", "text": "In Quantile Regression (QR), we estimate q in a discrete manner by means of quantiles, which does not impose any typical parametric family distribution on the predicted p, i.e. it goes beyond central tendency or unimodality assumptions.

For each quantile value τ ∈ [0, 1] and each input value x ∈ R^D, the conditional quantile function will be f : [0, 1] × R^D → R. In our case, we use deep learning as a generic function approximator (Hornik et al. (1989)) to build the model f, as we shall see later. Consequently, f is a parametric function that will be optimised by minimising the following loss function with respect to its weights w,

L(x, y, τ) = ( y - f_w(τ, x) ) · ( τ - 1[y < f_w(τ, x)] )     (1)

where 1[c] denotes the indicator function that verifies the condition c.
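The loss of Equation 1 (often called the pinball or quantile loss) can be transcribed directly; a minimal PyTorch sketch of ours:

```python
import torch

def pinball_loss(y, y_hat, tau):
    """Loss of Eq. 1: residuals y - y_hat are weighted by tau when
    positive and by 1 - tau when negative; arguments broadcast."""
    diff = y - y_hat
    return (diff * (tau - (diff < 0).float())).mean()
```

Averaging over a mini-batch and over sampled τ ∼ U(0, 1) gives the training objective described next.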
Equation 1 is an asymmetric convex loss function that penalises overestimation errors with weight τ and underestimation errors with weight 1 - τ. Recently, different works (Dabney et al. (2018b;a); Wen et al. (2017)) have proposed deep learning models that minimise a QR loss function similar to Equation 1. For instance, in the field of reinforcement learning, the Implicit Quantile Network (IQN) model was proposed (Dabney et al. (2018a)) and subsequently applied to solve regression problems as the Simultaneous Quantile Regression (SQR) model (Tagasovska & Lopez-Paz (2019)) or the IQN in (Brando et al. (2019)). These models consist of a neural network ψ : [0, 1] × R^D → R that directly learns the function f minimising Equation 1, i.e. f = ψ. In order to optimise ψ for all possible τ values, these models pair up each input x with a value τ ∼ U(0, 1) sampled from a uniform distribution in each iteration of the stochastic gradient descent method. Thus, the final loss function is an expectation over τ of Equation 1.

However, these QR models cannot be applied to the constrained black-box scenario, given that they do not link their predicted quantiles with a pointwise forecasting system in a constrained way (Section 5.1). Other models, such as quantile forests, have a similar limitation. In the next section, we introduce the other main part required to define our proposed method." }, { "heading": "4 MODELLING THE DERIVATIVE WITH A NEURAL NETWORK", "text": "Recently, a non-QR approach was proposed to build a monotonic function based on deep learning: the Unconstrained Monotonic Neural Network (UMNN) (Wehenkel & Louppe (2019)). The UMNN estimates the derivative of a function by means of a neural network, φ, whose output is restricted to strictly positive values, i.e. approaching H(z) such that

H(z) = ∫_0^z φ(t) dt + H(0).     (2)

Therefore, if the neural network satisfies φ(z) ≈ ∂H/∂z (z) > 0, this is in fact a sufficient condition to force H(z) to be monotone.

To compute the integral of ∂H/∂z, the UMNN approximates the integral of Equation 2 using the Clenshaw-Curtis quadrature, which has a closed expression. The UMNN is designed to obtain a general monotonic function with respect to all the model inputs, z, but our interest is to build a partially monotonic function with respect to the quantile value, as we will explain hereafter.
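As an illustration of Equation 2, consider the following sketch; the layer sizes and the use of a simple midpoint rule (instead of the Clenshaw-Curtis quadrature that the UMNN actually uses) are our own simplifying assumptions:

```python
import torch
import torch.nn as nn

class PositiveNet(nn.Module):
    """Network with strictly positive outputs, modelling dH/dz."""
    def __init__(self, dim_in=1, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, width), nn.ELU(),
                                 nn.Linear(width, 1), nn.Softplus())

    def forward(self, z):
        return self.net(z)

def H(phi, z, h0=0.0, n=64):
    """Approximate H(z) = int_0^z phi(t) dt + H(0) by the midpoint rule;
    positivity of phi makes H increasing in z."""
    t = (torch.arange(n, dtype=torch.float32) + 0.5) / n * z
    return z / n * phi(t.unsqueeze(-1)).sum() + h0
```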
The goal is to optimise the neural networks φpτ,xq by calculating the coefficients of a truncated Chebyshev polynomial expansion ppτ,x; dq of degree d with respect to τ . That is, we will use a Chebyshev polynomial (described in Section A.1 of the Appendix) to give a representation of the neural network, φ, uniformly defined in τ P r0, 1s. After that, we will use its properties to model the uncertainty of a black box in a constrained way (described in Section 5.1).\nInternally, the ChePAN considers a finite mesh of quantile values, called Chebyshev roots, ttkud´1k“0 Ă r0, 1s and defined by\ntk – 1\n2 cos πpk ` 12 q d ` 1 2 , 0 ď k ă d. (3)\nThe truncated Chebyshev expansion of a function can be interpreted as a linear transformation using a set of evaluations of φ at the roots, i.e. tφptk,xqud´1k“0. This linear transformation gives a vector of coefficients, which are known as Chebyshev coefficients and depend on x, i.e. tckpxqud´1k“0, as illustrated in Figure 2.\nThe implementation of a linear transformation generally has a square complexity. However, the transformation involved in Chebyshev coefficients can be computed efficiently with a Θpd log dq complexity. In fact, the algorithm that speeds the computation is based on the Fast Fourier Transform (FFT) and known as the Discrete Cosine Transform of type-II (DCT-II) (discussed in Section A.1 of the Appendix).\nOnce the Chebyshev coefficients ckpxq have been computed, we can write them in a linear combination of Chebyshev polynomials Tkptq, i.e.\nppτ,x; dq– 1 2 c0pxq `\nd´1 ÿ k“1 ckpxqTkp2τ ´ 1q, (4)\nwhere Tkptq are defined recurrently as T0ptq “ 1, T1ptq “ t, and Tk`1ptq “ 2tTkptq ´ Tk´1ptq for k ě 1. These polynomials Tk do not need to be explicitly computed to evaluate p on a quantile (Clenshaw (1955)).\nNote that, given the construction of the coefficients ckpxq, the pptk,x; dq is equal to φptk,xq at each of the root points tk. These equalities must be understood in terms of machine precision in the numerical representation system, classically „ 10´16 in double-precision or „ 10´8 in singleprecision arithmetic. In Figure 2, we denote this root evaluation step as ptk .\nThe final goal is to provide P pτ,x; dq so that it approximates the integral of p, that is şτ\n0 ppt,x; dq dt.\nSpecifically, the integral will also be the integral of the neural network φ,\nP pτ,x; dq « Φpτ,xq “ ż τ\n0\nφpt,xq dt`Kpxq. (5)\nSince φpτ,xq is defined as positive for all τ P r0, 1s, then P pτ,x; dq will be an increasing function with respect to τ .\nAdditionally, given that ppτ,x; dq is a Chebyshev polynomial (defined in Equation 4), its integral w.r.t. τ is simply the integral of the Chebyshev polynomial Tk, which corresponds to a new Chebyshev polynomial. Using the recurrent definition of Tk, we deduce the indefinite integrals ż\nT0ptq dt “ T1ptq, ż T1ptq dt “ T2ptq 4 ´ T0ptq 4 , ż Tkptq dt “ Tk´1ptq 2pk ´ 1q ´ Tk`1ptq 2pk ` 1q , (6)\nwhich leads to the conclusion that P can be given in terms of Chebyshev coefficients as well. Thus,\nP pτ,x; dq– 1 2 C0pxq `\nd´1 ÿ k“1 CkpxqTkp2τ ´ 1q, (7)\nwhere the coefficientsCkpxq have a recurrent expression in terms of a Toeplitz matrix (see Clenshaw (1955)). Indeed, by ordering the coefficients of the integral in Equation 4, we deduce that\nCkpxq– ck´1pxq ´ ck`1pxq\n4k , 0 ă k ă d´ 1, Cd´1pxq– cd´2pxq 4pd´ 1q , (8)\nand C0pxq depends on the constant of integration Kpxq in Equation 5 and the other coefficient values in Equation 7. 
This freedom of the predicted τ in C0pxq allows us to impose a new condition, which becomes a uniform condition in all of the intervals r0, 1s. In Section 5.1, we will discuss how to define the C0pxq depending on the black box desired." }, { "heading": "5.1 ADDING AN UNCERTAINTY ESTIMATION TO A BLACK-BOX PREDICTION SYSTEM", "text": "In this subsection, we tackle the constrained black-box uncertainty modelling problem introduced in Section 2. The main assumption is that we have a pointwise predictive system, which we will refer to as βpxq and approximates a desired statistic such as the mean, median or a certain quantile of ppy | xq, as shown in Figure 1. It is not necessary for this system to be a deep learning model or even parametric. All that the ChePAN requires to train its neural network, φ, are the corresponding β-evaluation values of the training set, i.e. tx, βpxqu. Thus, the ChePAN calculates the conditioned response distribution to the input without assuming asymmetry or unimodality with respect to this distribution, as well as associating the desired statistic of this distribution to βpxq. The formula used to calculate the constant of integration, C0pxq, will depend on which statistic we choose1. If we impose the quantile τ “ 0 to be β (which we shall call ChePAN-β=q0), then\nC0pxq “ 2βpxq ´ 2 d´1 ÿ\nk“1 Ckpxqp´1qk. (9)\nHowever, if we force the quantile τ “ 1 to be the β (which we shall call ChePAN-β=q1), then\nC0pxq “ 2βpxq ´ 2 d´1 ÿ\nk“1 Ckpxq. (10)\nFor instance, the prediction of extreme weather events involves the forecasting system to predict the maximum or minimum values of ppy | xq. In these cases, this pre-trained system could be used as β in Equation 9 or Equation 10, respectively, to determine the overall quantile distribution of ppy | xq, taking β as a reference point.\nIf the median (equivalently, τ “ 0.5) is the β (which we shall call ChePAN-β=Med), then\nC0pxq “ 2βpxq ´ 2 d´1 ÿ\nk“1 k even\np´1qk{2Ckpxq. (11)\nFinally, the mean is forced to be the β (which we shall call ChePAN-β=Mean), then\nC0pxq “ 2βpxq ´ 2 d´1 ÿ\nk“1 k odd\nCkpxq k2 ´ 4 . (12)\nAdditionally, βpxq can be approximated by means of another neural network, which can be simultaneously optimised with φpτ,xq. We will use this approach to compare the ChePAN and other baseline models in the results section regarding black-box modelling." }, { "heading": "6 EXPERIMENTS", "text": "The source code used to reproduce the results of the ChePAN in the following experiments can be found in the Github repository2. The DCT-II method referred to in Section 5 was used in the aforementioned source code.\n1All details of how such formulas are reached can be found in the supplementary material. 2The camera-ready version of this paper will include all of the source codes to reproduce the experiments.\nTable 1: Mean and standard deviation of the QR loss value, mean ˘ std, of 10 executions for each\nBlack box -Uncertainty wrapper using all of the test distributions in Figure 3 and three data sets (described in Section A.6). 
Additionally, β(x) can be approximated by means of another neural network, which can be optimised simultaneously with φ(τ, x). We will use this approach to compare the ChePAN and other baseline models in the results section regarding black-box modelling." }, { "heading": "6 EXPERIMENTS", "text": "The source code used to reproduce the results of the ChePAN in the following experiments can be found in the Github repository.² The DCT-II method referred to in Section 5 was used in the aforementioned source code.

¹All details of how such formulas are reached can be found in the supplementary material. ²The camera-ready version of this paper will include all of the source codes to reproduce the experiments.

Table 1: Mean and standard deviation of the QR loss value, mean ± std, over 10 executions for each black-box/uncertainty-wrapper combination, using all of the test distributions in Figure 3 and three data sets (described in Section A.6). The ranges that overlap with the best range are highlighted in bold.

| Model | Asymmetric | Symmetric | Uniform | Multimodal | Year-MSD | BCN-RPF | YVC-RPF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RF - N | 42.37 ± 0.04 | 23.19 ± 1.00 | 66.44 ± 0.26 | 151.51 ± 0.24 | 57.50 ± .05 | 23.47 ± .14 | 27.27 ± .39 |
| RF - LP | 42.88 ± 0.04 | 22.10 ± 0.03 | 67.13 ± 0.09 | 153.06 ± 0.22 | 57.58 ± .02 | 23.07 ± .17 | 28.06 ± .12 |
| RF - ChePAN | 41.52 ± 0.35 | 23.19 ± 0.70 | 65.98 ± 0.20 | 148.39 ± 0.16 | 48.28 ± .18 | 23.17 ± .07 | 28.16 ± .14 |
| XGBoost - N | 42.42 ± 0.05 | 23.35 ± 0.99 | 66.38 ± 0.26 | 149.35 ± 0.40 | 51.17 ± .08 | 24.52 ± .26 | 27.79 ± .08 |
| XGBoost - LP | 42.90 ± 0.02 | 23.02 ± 0.43 | 67.13 ± 0.17 | 150.94 ± 0.12 | 51.24 ± .02 | 22.63 ± .11 | 27.86 ± .07 |
| XGB. - ChePAN | 41.95 ± 0.40 | 23.69 ± 0.68 | 65.89 ± 0.17 | 146.20 ± 0.30 | 48.54 ± .08 | 22.00 ± .04 | 27.51 ± .13 |
| N | 43.63 ± 2.89 | 23.70 ± 6.85 | 67.45 ± 1.68 | 148.78 ± 2.88 | 49.00 ± .24 | 27.28 ± 1.25 | 28.62 ± 1.61 |
| LP | 43.46 ± 0.15 | 20.72 ± 0.47 | 68.06 ± 0.82 | 149.99 ± 0.64 | 48.67 ± .28 | 23.51 ± .28 | 22.32 ± .06 |
| ChePAN | 41.72 ± 0.24 | 22.94 ± 1.81 | 68.55 ± 6.61 | 145.93 ± 3.14 | 46.76 ± .25 | 20.67 ± .40 | 21.97 ± .12 |

In this section, we describe the performance of the proposed models compared to other baselines. The main goal is to show that, by using QR, the ChePAN improves on other black-box uncertainty modelling baselines because it avoids centrality or unimodality assumptions, while also allowing users to choose how to constrain the predicted quantiles with respect to the black-box prediction." }, { "heading": "6.1 MODELS UNDER EVALUATION", "text": "Exponential power distributions satisfy the condition that one of the parameters corresponds to the mode. Thus, models that approximate such parametric distributions, where the mode parameter is the black-box function and the other, uncertainty-related parameter is estimated, can be used as baselines.

• The Heteroscedastic Normal distribution (N). Similarly to (Bishop (1994); Kendall & Gal (2017); Tagasovska & Lopez-Paz (2019); Brando et al. (2019)), two neural networks, µ and σ, can be used to approximate the conditional normal distribution, N(µ(x), σ(x)), such that they maximise the likelihood. In the black-box scenario proposed here, µ is the black-box function and we only need to optimise the σ neural network. Once optimised, the desired quantile τ can be obtained with F(τ, x) = µ(x) + σ(x) √2 · erf⁻¹(2τ - 1), τ ∈ (0, 1), where erf⁻¹ is the inverse error function.

• The Heteroscedastic Laplace distribution (LP). As an alternative that is more robust to outlier values, a conditional Laplace distribution, LP(µ(x), b(x)), can be considered. Here, the quantile function is F(τ, x) = µ(x) + b log(2τ) · 1[τ ≤ 1/2] - b log(2 - 2τ) · 1[τ > 1/2], τ ∈ (0, 1).

• The Chebyshev Polynomial Approximation Network (ChePAN). In order to use the same black boxes as the other baselines, Equation 12 is considered, given that these black boxes optimise the mean square error. Other alternative equations are considered in the pseudo-code and in Figure 6 of the supplementary material.
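The closed-form quantile functions of the two parametric baselines transcribe directly (a sketch of ours using SciPy):

```python
import numpy as np
from scipy.special import erfinv

def gaussian_quantile(mu, sigma, tau):
    """Quantile function of the heteroscedastic normal baseline (N)."""
    return mu + sigma * np.sqrt(2.0) * erfinv(2.0 * tau - 1.0)

def laplace_quantile(mu, b, tau):
    """Quantile function of the heteroscedastic Laplace baseline (LP)."""
    tau = np.asarray(tau, dtype=float)
    return np.where(tau <= 0.5,
                    mu + b * np.log(2.0 * tau),
                    mu - b * np.log(2.0 - 2.0 * tau))
```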
The\nfirst four columns correspond to each part of the synthetic distribution proposed by (Brando et al. (2019)) and shown in Figure 3, the fifth column is the full Year Prediction MSD UCI dataset (Dua & Graff (2017a)), predicting the release year of a song from 90 audio features and, finally, the last two columns correspond to predicting the room price forecasting of Airbnb flats (RPF) in Barcelona and Vancouver, extracted from (Brando et al. (2019)). The mean of the QR loss value (see Equation 1) is evaluated for ten thousand randomly selected quantiles for ten executions of each model tmku10k“1,\nLmkpXtest, Ytestq “ Ntest ÿ\ni“1\nNτ ÿ\nj“1\n` yi ´ fmkpτj ,xiq ˘ ¨ ` τj ´ 1ryi ă fmkpτj ,xiqs ˘\nNtest ¨Nτ , (13)\nwhere Ntest is the number of points in the test set, Nτ “ 10, 000 the number of Monte Carlo samplings and fmk any of the models considered in Table 1. Considering how the QR loss is defined in Equation 1, its value not only informs us about each system’s performance but also how generically calibrated its predicted quantiles are.\nFurthermore, in Table 1 we observe that the ChePAN outperforms other methods in most cases due to it transferring the capacity to capture asymmetries and multimodalities of QR in ppy | xq to the black-box problem, where our uncertainty modelling needs to be restricted in order to maintain the corresponding statistic associated with the black box.\nThis restriction of conserving the black box can be seen qualitatively in the upper part of Figure 3, where such a restriction must be met in any situation, i.e. even if performance worsens because the black box, βpxq, is not correctly fitted (as described in Section 2). In this case, βpxq is an inaccurate Random Forest predicting the mean. Importantly, the ChePAN propagates the βpxq noise to the predicted quantiles (in blue) because the constraint is always forced. On the other hand, the ability of ChePAN to model heterogeneous distributions using QR is better displayed in the lower part of Figure 3. In this case, the black box is a neural network that is learnt concurrently with the quantiles. Since the black box is better approximated, the quantiles are better.\nFinally, since Table 1 shows that there is a similar performance order between the baselines when using the RF or XGBoost, we also want to show additional experiments that directly measure the calibration of the predicted quantiles and compare the predicted width of certain desired intervals. Following the UCI data sets used in (Hernández-Lobato & Adams (2015b); Gal & Ghahramani (2016); Lakshminarayanan et al. (2017); Tagasovska & Lopez-Paz (2019)), we performed two empirical studies to assess this point in a black-box scenario where the black box is an MSE-XGBoost. Following the proposed hidden layers architecture in (Tagasovska & Lopez-Paz (2019)), the Prediction Interval Coverage Probability (PICP) and the Mean Prediction Interval Width (MPIW) are reported in Table 3 of the appendix considering the 0.025 and the 0.975 quantiles. For the sake of\ncompleteness, in Figure 4 and its associated table we have also computed an additional metric not only to verify the calibration of the 0.025 and 0.975 quantiles, but also to obtain a measure of general calibration considering the entire quantile distribution. Given Nτ -equidistant set of quantiles to evaluate, τ “ r10´2, . . . 
Furthermore, in Table 1 we observe that the ChePAN outperforms the other methods in most cases, because it transfers QR's capacity to capture asymmetries and multimodalities in p(y | x) to the black-box problem, where our uncertainty modelling needs to be restricted in order to maintain the corresponding statistic associated with the black box.

This restriction of conserving the black box can be seen qualitatively in the upper part of Figure 3, where the restriction must be met in any situation, i.e. even if performance worsens because the black box, β(x), is not correctly fitted (as described in Section 2). In this case, β(x) is an inaccurate Random Forest predicting the mean. Importantly, the ChePAN propagates the β(x) noise to the predicted quantiles (in blue) because the constraint is always enforced. On the other hand, the ability of the ChePAN to model heterogeneous distributions using QR is better displayed in the lower part of Figure 3. In this case, the black box is a neural network that is learnt concurrently with the quantiles. Since the black box is better approximated, the quantiles are better.

Finally, since Table 1 shows a similar performance ordering of the baselines when using the RF or the XGBoost, we also report additional experiments that directly measure the calibration of the predicted quantiles and compare the predicted width of certain desired intervals. Following the UCI data sets used in (Hernández-Lobato & Adams (2015b); Gal & Ghahramani (2016); Lakshminarayanan et al. (2017); Tagasovska & Lopez-Paz (2019)), we performed two empirical studies to assess this point in a black-box scenario where the black box is an MSE-XGBoost. Following the hidden-layer architecture proposed in (Tagasovska & Lopez-Paz (2019)), the Prediction Interval Coverage Probability (PICP) and the Mean Prediction Interval Width (MPIW) are reported in Table 3 of the appendix, considering the 0.025 and the 0.975 quantiles. For the sake of completeness, in Figure 4 and its associated table we have also computed an additional metric, not only to verify the calibration of the 0.025 and 0.975 quantiles, but also to obtain a measure of general calibration considering the entire quantile distribution. Given an N_τ-equidistant set of quantiles to evaluate, τ = [10^{-2}, ..., 1 - 10^{-2}], the percentage of actual test data that falls below each predicted quantile can be compared to each real quantile value as follows,

Cal(f; X_test, Y_test, τ) = (1 / N_τ) Σ_{j=1}^{N_τ} | τ_j - (1 / N_test) Σ_{i=1}^{N_test} 1[y_i < f(τ_j, x_i)] |.     (14)

In addition, two extra figures showing the visualisation of this calibration metric disentangled per quantile can be found in Figure 5 of the Appendix. As all of the figures and tables show, in terms of calibration, the ChePAN generally displays better performance in the black-box scenario than the other models." }, { "heading": "7 CONCLUSION", "text": "The uncertainty modelling of a black-box predictive system requires designing wrapper solutions that avoid assumptions about the internal structure of the system. Specifically, this could be a non-deep-learning model (such as the one presented in Table 1 and Figure 3) or even a non-parametric predictive system, as proposed in Figure 1. Therefore, not all models or types of uncertainty can be considered in this framework.

The present paper introduces the Chebyshev Polynomial Approximation Network (ChePAN) model, which is based on Chebyshev polynomials and deep learning models and has a dual purpose: firstly, it predicts the aleatoric uncertainty of any pointwise predictive system; and secondly, it respects the statistic predicted by the pointwise system.

To conclude, the ChePAN transfers the advantages of Quantile Regression (QR) to the problem of modelling the aleatoric uncertainty of another existing and fixed pointwise predictive system (denoted as β and referred to as a black box). Experiments using different large-scale real data sets, as well as a synthetic one that contains several heterogeneous distributions, confirm these novel features." } ]
2020
null
SP:eff774eddcc60e943c0a41207c21a1c9d6d5d950
[ "This paper proposes an approach to improve (supervised and unsupervised) representation learning for text using constrastive learning. The proposed approach augments standard contrastive learning with: (1) Spectral-norm regularization of the critic to estimate the Wasserstein distance instead of the KL (as in the Wasserstein GAN-style approach), (2) Active negative sampling to select hard negative examples, and (3) momentum-based updates of intermediate features. The resulting contrastive learning objective can be combined with standard supervised and unsupervised objectives to improve downstream tasks." ]
There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence. One important direction involves leveraging contrastive learning to improve learned representations. We propose an application of contrastive learning to intermediate textual feature pairs, to explicitly encourage the model to learn more distinguishable representations. To overcome the learner's degeneracy due to vanishing contrasting signals, we impose Wasserstein constraints on the critic via spectral regularization. Finally, to keep such an objective from leading to overly regularized training and to enhance learning efficiency, we further leverage, with theoretical justification, an active negative-sample-selection procedure that uses only high-quality contrast examples. We evaluate the proposed method over a wide range of natural language processing applications, from the perspectives of both supervised and unsupervised learning. Empirical results show consistent improvement over baselines.
[]
[ { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Orestis Plevrakis", "Nikunj" ], "title": "Saunshi. A theoretical analysis of contrastive unsupervised representation learning", "venue": null, "year": 2019 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeswar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "R Devon Hjelm" ], "title": "Mine: mutual information neural estimation", "venue": null, "year": 2018 }, { "authors": [ "Anthony J Bell", "Terrence J Sejnowski" ], "title": "An information-maximization approach to blind separation and blind deconvolution", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "NeurIPS,", "year": 2013 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": null, "year": 2016 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance weighted autoencoders", "venue": "arXiv preprint arXiv:1509.00519,", "year": 2015 }, { "authors": [ "Gal Chechik", "Varun Sharma", "Uri Shalit", "Samy Bengio" ], "title": "Large scale online learning of image similarity through ranking", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Liqun Chen" ], "title": "Adversarial text generation via feature-mover’s distance", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V. Le", "Christopher D. 
Manning" ], "title": "ELECTRA: Pre-training text encoders as discriminators rather than generators", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Ronan Collobert", "Jason Weston", "Léon Bottou", "Michael Karlen", "Koray Kavukcuoglu", "Pavel Kuksa" ], "title": "Natural language processing (almost) from scratch", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Bo Dai", "Dahua Lin" ], "title": "Contrastive learning for image captioning", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Yuntian Deng", "Anton Bakhtin", "Myle Ott", "Arthur Szlam", "Marc’Aurelio Ranzato" ], "title": "Residual energy-based models for text generation", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2018 }, { "authors": [ "Fartash Faghri", "David J Fleet", "Jamie Ryan Kiros", "Sanja Fidler" ], "title": "Vse++: Improved visual-semantic embeddings", "venue": "In BMVC,", "year": 2018 }, { "authors": [ "Le Fang", "Chunyuan Li", "Jianfeng Gao", "Wen Dong", "Changyou Chen" ], "title": "Implicit deep latent variable models for text generation", "venue": null, "year": 2019 }, { "authors": [ "Yash Goyal", "Tejas Khot", "Douglas Summers-Stay", "Dhruv Batra", "Devi Parikh" ], "title": "Making the v in vqa matter: Elevating the role of image understanding in visual question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Michael U Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": null, "year": 2010 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": null, "year": 2019 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 2020 }, { "authors": [ "Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Philip Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR 2019. ICLR,", "year": 2019 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization. 
ICLR, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Charles AR Hoare" ], "title": "Algorithm 65: find", "venue": "Communications of the ACM,", "year": 1961 }, { "authors": [ "Chen Huang", "Chen Change Loy", "Xiaoou Tang" ], "title": "Local similarity-aware deep feature embedding", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "Yan Huang", "Qi Wu", "Chunfeng Song", "Liang Wang" ], "title": "Learning semantic concepts and order for image and sentence matching", "venue": null, "year": 2018 }, { "authors": [ "Kevin Jarrett", "Koray Kavukcuoglu", "Marc’Aurelio Ranzato", "Yann LeCun" ], "title": "What is the best multi-stage architecture for object recognition", "venue": "In ICCV,", "year": 2009 }, { "authors": [ "Andrej Karpathy", "Li Fei-Fei" ], "title": "Deep visual-semantic alignments for generating image descriptions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Jin-Hwa Kim", "Jaehyun Jun", "Byoung-Tak Zhang" ], "title": "Bilinear attention networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Klemens Lagler", "Michael Schindelegger", "Johannes Böhm", "Hana Krásná", "Tobias Nilsson" ], "title": "Gpt2: Empirical slant delay model for radio space geodetic techniques", "venue": "Geophysical research letters,", "year": 2013 }, { "authors": [ "Kuang-Huei Lee" ], "title": "Stacked cross attention for image-text matching", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft COCO: Common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin" ], "title": "Microsoft coco: Common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Ralph Linsker" ], "title": "Self-organization in a perceptual network", "venue": null, "year": 1988 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "In arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "Andrew L Maas", "Awni Y Hannun", "Andrew Y Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In ICML,", "year": 2013 }, { "authors": [ "Mitchell Marcus", "Beatrice Santorini", "Mary Ann Marcinkiewicz" ], "title": "Building a large annotated corpus of english: The penn treebank", "venue": null, "year": 1993 }, { "authors": [ "David McAllester", "Karl Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "arXiv preprint arXiv:1811.04251,", "year": 2018 }, { "authors": [ "David McAllester", "Karl Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": null, "year": 2020 }, { "authors": [ "Tomas Mikolov", "Quoc V Le", "Ilya Sutskever" ], "title": "Exploiting similarities among languages for machine translation", "venue": null, "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "NeurIPS,", "year": 2013 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral 
normalization for generative adversarial networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A pac-bayesian approach to spectrallynormalized margin bounds for neural networks", "venue": null, "year": 2018 }, { "authors": [ "Hyun Oh Song", "Yu Xiang", "Stefanie Jegelka", "Silvio Savarese" ], "title": "Deep metric learning via lifted structured feature embedding", "venue": null, "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Sherjil Ozair", "Corey Lynch", "Yoshua Bengio", "Aaron van den Oord", "Sergey Levine", "Pierre Sermanet" ], "title": "Wasserstein dependency measure for representation learning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Bryan A Plummer" ], "title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Anshumali Shrivastava", "Ping Li" ], "title": "Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips)", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Noah A. Smith", "Jason Eisner" ], "title": "Contrastive estimation: Training log-linear models on unlabeled data", "venue": "In ACL,", "year": 2005 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive representation distillation", "venue": "ICLR,", "year": 2020 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 2020 }, { "authors": [ "Evgeniya Ustinova", "Victor Lempitsky" ], "title": "Learning deep embeddings with histogram loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Yinggong Zhao", "Victoria Fossum", "David Chiang" ], "title": "Decoding with large-scale neural language models improves translation", "venue": "In EMNLP,", "year": 2013 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": null, "year": 1910 }, { "authors": [ "Chao-Yuan Wu", "R Manmatha", "Alexander J Smola", "Philipp Krahenbuhl" ], "title": "Sampling matters in deep embedding learning", "venue": "In 
ICCV,", "year": 2017 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via non-parametric instance discrimination", "venue": null, "year": 2018 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Ruslan Salakhutdinov", "Taylor Berg-Kirkpatrick" ], "title": "Improved variational autoencoders for text modeling using dilated convolutions", "venue": null, "year": 2017 }, { "authors": [ "Zhedong Zheng" ], "title": "Dual-path convolutional image-text embedding with instance loss", "venue": "In arXiv,", "year": 2017 }, { "authors": [ "Kim" ], "title": "2018), we take the answers that appear more than 9 times in the training set as candidate answers, which results in 3129 candidates. Classification accuracy is used as the evaluation metric, defined as min(1, # humans provided ans", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representation learning is one of the pivotal topics in natural language processing (NLP), in both supervised and unsupervised settings. It has been widely recognized that some forms of “general representation” exist beyond specific applications (Oord et al., 2018). To extract such generalizable features, unsupervised representation models are generally pretrained on large-scale text corpora (e.g., BERT (Devlin et al., 2018; Liu et al., 2019; Clark et al., 2020; Lagler et al., 2013)) to avoid data bias. In supervised learning, models are typically built on top of these pre-trained representations and further fine-tuned on downstream tasks. Representation learning greatly expedites model deployment and meanwhile yields performance gains.\nThere has been growing interest in exploiting contrastive learning (CL) techniques to refine context representations in NLP (Mikolov et al., 2013a;b). These techniques aim to avoid representation collapse for downstream tasks, i.e., getting similar output sentences with different input in conditional generation tasks (Dai & Lin, 2017). Intuitively, these methods carefully engineer features from crafted (“negative”) examples to contrast against the features from real (“positive”) examples. A feature encoder can then enhance its representation power by characterizing input texts at a finer granularity. Efforts have been made to empirically investigate and theoretically understand the effectiveness of CL in NLP, including noise contrastive estimation (NCE) of word embeddings (Mikolov et al., 2013b) and probabilistic machine translation (Vaswani et al., 2013) with theoretical developments (Gutmann & Hyvärinen, 2010). More recently, InfoNCE (Oord et al., 2018) further links the CL to the optimization of mutual information, which inspired a series of practical followup works (Tian et al., 2020; Hjelm et al., 2019a; He et al., 2020; Chen et al., 2020).\nDespite the significant empirical success of CL, there are still many open challenges in its application to NLP, including (i) the propagation of stable contrastive signals. An unregularized critic function in CL can suffer from unstable training and gradient vanishing issues, especially in NLP tasks due to the discrete nature of text. The inherent differences between positive and negative textual features make those examples easily distinguished, resulting in a weak learning signal in contrastive schemes (Arora et al., 2019). (ii) Empirical evidence (Wu et al., 2017) shows that it is crucial to compare each positive example with adequate negative examples. However, recent works suggest using abundant negative examples, which are not akin to the positive examples, which can result in sub-optimal results and unstable training with additional computational overhead (Ozair et al., 2019; McAllester & Stratos, 2020).\nIn this paper, we propose two methods to mitigate the above issues. In order to stabilize the training and enhance the model’s generalization ability, we propose to use the Wasserstein dependency measure (Ozair et al., 2019) as a substitute for the Kullback-Leibler (KL) measure in the vanilla CL objective. We further actively select K high-quality negative samples to contrast with each positive sample under the current learned representations. These supply the training procedure with necessarily large and non-trivial contrastive samples, encouraging the representation network to generate more distinguishable features. 
Notably, our approach also significantly alleviates the computational burden of massive features compared with previous works (Tian et al., 2020; Hjelm et al., 2019b).
Contributions: (i) We propose a Wasserstein-regularized critic to stabilize training in a generic CL framework for learning better textual representations. (ii) We further employ an active negative-sample selection method to find high-quality contrastive samples, thus reducing the gradient noise and mitigating the computation concerns. (iii) We empirically verify the effectiveness of our approach under various NLP tasks, including variational text generation (Bowman et al., 2016), natural language understanding tasks on GLUE with supervised and semi-supervised setups (Wang et al., 2018), and image-text retrieval (Lee et al., 2018)." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 NOISE CONTRASTIVE ESTIMATION", "text": "Our formulation is inspired by Noise Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010), which was originally introduced for unnormalized density estimation, where the partition function is intractable. To estimate a parametric distribution p, which we refer to as our target distribution, NCE leverages not only the observed samples A = (a1, a2, ..., an1) (positive samples), but also samples drawn from a reference distribution q, denoted as B = (b1, b2, ..., bn2) (negative samples). Instead of estimating p directly, the density ratio p/q is estimated by training a critic between samples from A and B.
Specifically, let Z = (z1, ..., zn1+n2) denote the union of A and B. A binary class label Ct is assigned to each zt, where Ct = 1 if zt ∈ A and Ct = 0 otherwise. The label probability is therefore
P(C = 1|z) = p(z) / (p(z) + γq(z)), P(C = 0|z) = γq(z) / (p(z) + γq(z)), (1)
where γ = n2/n1 is a balancing hyperparameter accounting for the difference in the number of samples between A and B.
In practice, we do not know the analytic form of p; therefore, a classifier g : z → [0, 1] is trained to estimate P(C = 1|z). To get an estimate of the critic function g, NCE maximizes the log likelihood of the data for a binary classification task:
L(A,B) = ∑_{t=1}^{n1} log[g(at)] + ∑_{t=1}^{n2} log[1 − g(bt)] . (2)" }, { "heading": "2.2 CONTRASTIVE TEXTUAL REPRESENTATION LEARNING AND ITS CHALLENGES", "text": "Let {wi}_{i=1}^{n} be the observed text instances. We are interested in finding a vector representation u of the text w, i.e., via an encoder u = Enc(w), that can be repurposed for downstream tasks. A positive pair refers to paired instances ai = (ui, vi) associated with wi, where we are mostly interested in learning u; v is a feature at a different representation level. In unsupervised scenarios, vi can be the feature representation at the layer next to the input text wi. In supervised scenarios, vi can be either the feature representation immediately after wi or the one immediately before the label yi that corresponds to the input wi. We will use π(u,v) to denote the joint distribution of the positive pairs, with πu(u) and πv(v) for the respective marginals.
Contrastive learning follows the principle of “learning by comparison.” Specifically, one designs a negative sample distribution τ(u′,v′), and attempts to distinguish samples from π(u,v) and τ(u′,v′) with a critic function g(u,v). The heuristic is that, using samples from τ as references (i.e., to contrast against), the learner is better able to capture important properties that could otherwise have been missed (Hjelm et al., 2019a; Oord et al., 2018). 
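As a reference point, a minimal PyTorch-style sketch of the binary NCE objective in equation 2, where the critic is parameterized to output an unnormalized score whose sigmoid plays the role of g (averaging over samples, rather than summing, is an illustrative choice):

import torch
import torch.nn.functional as F

def nce_loss(critic, pos_pairs, neg_pairs):
    # critic maps a (u, v) pair to an unnormalized score; sigmoid(score)
    # estimates P(C = 1 | u, v) as in equation 1.
    pos_scores = critic(*pos_pairs)   # samples from pi(u, v)
    neg_scores = critic(*neg_pairs)   # samples from tau(u', v')
    # log g and log(1 - g), written with logsigmoid for numerical stability,
    # since log(1 - sigmoid(x)) = logsigmoid(-x)
    objective = F.logsigmoid(pos_scores).mean() + F.logsigmoid(-neg_scores).mean()
    return -objective                 # maximizing eq. 2 = minimizing this loss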
A popular choice of τ is the product of marginals, i.e., τ ← π0(u′,v′) = πu(u′)πv(v′), where (u,v) are independent of each other, so that bi = (u′i, v′i) ∼ π0. Inputting the new ai and bi to (2), we obtain the new CL loss: LNCE = E_{u,v∼π}[log g(u,v)] + γ E_{u′,v′∼π0}[log(1 − g(u′,v′))] . (3) Note that when g is trained to optimality, g∗(u,v) = P(C = 1|u,v) under π0, and it establishes a lower bound on the mutual information (MI) between u and v for the positive distribution (Tian et al., 2020; Neyshabur et al., 2018):
MI(πu, πv) = KL(π(u,v)||πu(u)πv(v)) ≥ E_{u,v∼π}[log g∗(u,v)] + log γ . (4) However, there are three concerns regarding why directly applying equation 3 might not work well in practice for learning the contrastive representation of the input text w.
• Robustness. The first issue concerns the MI’s strong sensitivity to small differences in data samples (Ozair et al., 2019; Tschannen et al., 2020). By definition in equation 4, mutual information is a KL divergence. It is well known that the KL divergence is not a metric-aware divergence measure, which implies that a minor difference in representation can induce drastic changes in the mutual information, itself a special case of the KL divergence. Consequently, the learned g can be numerically unstable (Ozair et al., 2019), which makes the learned representations less robust and less able to generalize to downstream tasks, especially when features come from text (Chen et al., 2018).
• Weak/vanishing contrastive signal. With a poor initialization or a poor choice of negative samples, the MI will vanish as π(u,v) and πu(u)πv(v) become far apart, delivering a faint and non-smooth gradient for training. In an extreme case, the supports of π(u,v) and πu(u)πv(v) do not overlap, and the MI and the gradient will vanish to zero (Arjovsky et al., 2017).
• Negative-sample selection strategy. Learning MI is generally considered sample inefficient. This point can be corroborated from several perspectives, ranging from theoretical arguments to practical considerations. To confidently estimate a lower bound on the MI, one would need a sample size exponential in the mutual information (i.e., N ≥ exp(Iπ(u,v))) (Ozair et al., 2019; McAllester & Stratos, 2018). Also, both theoretical predictions and empirical evidence suggest a large ratio γ is needed for good performance (Tian et al., 2020; Hjelm et al., 2019a), imposing potential computational concerns for large training datasets. On the other hand, some studies report that a large γ can instead deteriorate model performance (Tschannen et al., 2020; Arora et al., 2019). Such a large γ is also believed to be problematic especially when a strong association is expected between u and v. In that case, the majority of negative samples are so different from positive samples that the comparisons do not lend effective learning signals, but instead randomly drift the training (Gutmann & Hyvärinen, 2010)." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 MODEL OVERVIEW", "text": "We consider two remedies to mitigate the three issues mentioned in Section 2.2. (i) Regarding the robustness and the gradient-vanishing issues, we switch from the MI-based NCE to a Wasserstein-based NCE, by imposing a Wasserstein constraint on the critic function g. The Wasserstein distance yields a continuous discrepancy measure over two distributions, even when they have no overlapping support, and suffers less from the issue of numerical instability (Neyshabur et al., 2018). 
(ii) Regarding the issue of negative samples, we propose an active negative-sample-selection strategy, to dynamically select the most challenging negative examples on-the-fly. This strategy smooths the learning signal (Wu et al., 2017), effectively improving the CL, meanwhile significantly reducing the computational overhead.
To this end, we propose RECLAIM (RElaxed Contrastive Learning with ActIve Memory selection) as a robust and efficient CL framework. Our learning framework is illustrated in Figure 1. Details are explained in the following sections." }, { "heading": "3.2 WASSERSTEIN CONSTRAINED CRITIC", "text": "In previous work, the critic g is usually chosen to be a naive feed-forward neural network (Tian et al., 2020). However, as discussed in Section 2.2, such a choice of critic function typically leads to a KL-based objective, which suffers from instability and vanishing-gradient issues. Inspired by (Ozair et al., 2019), we replace the KL divergence in equation 4 with a Wasserstein distance. Specifically, we ensure the 1-Lipschitz constraint on the critic function g (Arjovsky et al., 2017). A function f is said to be L-Lipschitz if |f(x) − f(y)| ≤ L‖x − y‖, i.e., the difference in function outputs is controlled by the discrepancy in the inputs.
Instead of using a gradient penalty as in (Gulrajani et al., 2017; Ozair et al., 2019), we employ the spectral normalization (SN) (Miyato et al., 2018). We use the SN because it is efficient and also stable. It provides a more strict 1-Lipschitz constraint than the gradient penalty. Specifically, SN controls the Lipschitz constant of the critic function, by constraining each layer gl in g. 
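In PyTorch, for example, this constraint can be enforced by simply wrapping each affine layer; a one-line sketch (the layer sizes are placeholders):

import torch.nn as nn
from torch.nn.utils import spectral_norm

# The wrapped layer re-normalizes its weight by its largest singular value
# (estimated via power iteration) on every forward pass, making the layer
# at most 1-Lipschitz.
layer = spectral_norm(nn.Linear(512, 64))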
Formally, it can be formulated as ‖gl‖Lip = sup_x δ(∇gl(x)), where δ(·) is the spectral norm, i.e., the largest singular value of its argument. For an affine transformation, such as a linear function gl(x) = Wl x, the spectral norm is ‖gl‖Lip = δ(Wl). When the activation function al has Lipschitz norm equal to 1 (such as ReLU (Jarrett et al., 2009) and Leaky ReLU (Maas et al., 2013)), we have the following inequality: ‖g1 ◦ g2‖Lip ≤ ‖g1‖Lip ‖g2‖Lip. With this inequality, we obtain the following bound:
‖g‖Lip = ‖g1 ◦ a1 ◦ g2 ◦ a2 ◦ · · · ◦ gL‖Lip ≤ ‖g1‖Lip ‖a1‖Lip ‖g2‖Lip ‖a2‖Lip · · · ‖gL‖Lip = ∏_{l=1}^{L} ‖gl‖Lip = ∏_{l=1}^{L} δ(Wl) . (5)
Applying the spectral normalization operation to each weight Wl using W_SN = Wl/δ(Wl) enforces δ(W_SN) = 1, so that the right-hand side of (5) is upper-bounded by 1. This imposes the 1-Lipschitz constraint on the critic function g, thus stabilizing the learning signal for training. In practice, the spectral normalization can be estimated efficiently using power iteration." }, { "heading": "3.3 ACTIVE NEGATIVE-SAMPLE SELECTION (ANS)", "text": "Following the discussion in Section 2.2, stable training requires adequate and high-quality negative samples. However, involving too many negative samples far apart from their positive counterparts does not yield effective training, and wastes computational resources. Inspired by the triplet loss in deep metric learning (Wu et al., 2017; Tschannen et al., 2020), we propose to actively select a relatively small set of negative examples that are most challenging to the discriminator at the current step, thus enabling the model to effectively extract features to distinguish the positives from the negatives.
Specifically, we use two memory banks Bu and Bv, which store all previously extracted features u and v from seen training instances, respectively. When processing each new training instance, we actively select the top-K nearest neighbors Unn ⊂ Bu and Vnn ⊂ Bv for u and v via cosine distance. With the QuickSelect algorithm (Hoare, 1961), we are able to identify the top-K negative samples with time complexity O(KN). Under this setup, (3) can be written as:
L_ANS = E_{π(u,v)}[ log g(u,v) − (1/2) ∑_{v′∈Vnn} log(1 − g(u,v′)) − (1/2) ∑_{u′∈Unn} log(1 − g(u′,v)) ] (6)
When the dataset is large, the feature search can still be time-consuming. Asymmetric Locality Sensitive Hashing (ALSH) (Shrivastava & Li, 2014) can be applied to hash the representations in a proximity-relevant manner. This helps to reduce the time complexity of ANS to sub-linear (Shrivastava & Li, 2014). In (Schroff et al., 2015) it was found empirically that relaxing the most challenging negative samples to semi-hard negative samples sometimes leads to better results in supervised tasks like classification. This indicates that an approximate retrieval method like ALSH can still perform well. Our observations in experiments are consistent with these findings." }, { "heading": "3.4 RECLAIM LEARNING PROCEDURE", "text": "Momentum update Before training, the two memory banks Bu, Bv are initialized with Gaussian noise: {u′ ∼ N(0, I) | ∀u′ ∈ Bu}, {v′ ∼ N(0, I) | ∀v′ ∈ Bv}. Naively, one can directly replace old features in the memory banks with the newly processed features corresponding to the same input data. However, the performance of such a solution is suboptimal in practice. 
This is because the target model may change rapidly during the early training stages; such a practice reduces the consistency of the representations, and results in noisy learning signals (He et al., 2020; Wu et al., 2018).
Therefore, we apply a momentum update to mitigate this issue. Specifically, assume a seen input instance x reappears. There will be a snapshot feature pair for this x previously stored in Bu, Bv, denoted as {ũ, ṽ}. We further denote the newly computed (with the current encoder) feature pair corresponding to x as {u, v}, and let ρ ∈ (0, 1] be the momentum update parameter. We update the feature pairs in Bu, Bv as:
ũ = (1 − ρ)u + ρũ, ṽ = (1 − ρ)v + ρṽ (7)
Note that the features in the memory bank are only updated in this way, and they are detached from the computational graph.
Joint Objective Optimization With our ANS module, we can obtain the top-K nearest negative features for both u and v, denoted {u′i}_{i=1}^{K} and {v′i}_{i=1}^{K}, respectively. To get our CL loss, we apply the Wasserstein-distance-based NCE loss by simply imposing the 1-Lipschitz constraint on the critic g in (6). Assuming the task-specific loss is Ltask, the total loss is therefore formulated as
Ltotal = Ltask − λ L_ANS , (8)
where λ > 0 is the hyper-parameter controlling the importance of the NCE loss. By minimizing equation 8, we can improve the performance of the trained model. The detailed training procedure is listed in Algorithm 1." }, { "heading": "4 RELATED WORK", "text": "Connection to Mutual Information Estimation (MIE): MIE represents a family of MI-based representation learning methods. Early works on MIE, such as InfoMax (Linsker, 1988; Bell & Sejnowski, 1995), advocate maximizing the MI between the input and output. MINE (Belghazi et al., 2018) uses a bi-directional generative model to estimate the MI via the Donsker-Varadhan representation theorem. Deep InfoMax (Hjelm et al., 2019a) further developed MINE by removing one generator, seeking to maximize the MI between local features and global features. Our method is more related to the InfoNCE and NCE objectives introduced in (Oord et al., 2018; Gutmann & Hyvärinen, 2010), which are used to estimate a lower bound on the MI. In the proposed RECLAIM, the Wasserstein distance is employed to estimate the distribution gap rather than the KL-divergence, because it is more stable and robust in practice.
Contrastive Learning in NLP: Contrastive learning has been widely used in NLP (Smith & Eisner, 2005; Collobert et al., 2011; Bordes et al., 2013; Hjelm et al., 2019a; Deng et al., 2020). Broadly, CL methods attempt to differentiate observed data from artificial negative examples, to learn representations. In (Gutmann & Hyvärinen, 2010), the Noise Contrastive Estimation (NCE) metric is leveraged to differentiate the target sample from noise samples, to learn word embeddings. Negative sampling, proposed by Mikolov et al. (2013b), is a simplified variation of the NCE loss, and it achieves great success in learning embeddings. Our CL approach is different from the above, since (i) we incorporate the Wasserstein distance in NCE, and (ii) the proposed negative-sample-selection strategy is different from all other NCE-related works.
Connection to deep metric learning: The triplet loss is a classic approach in deep metric learning, and has already been widely used in retrieval models (Lee et al., 2018) and recommendation systems (Chechik et al., 2010). It minimizes the distance between an anchor point and a positive input, while maximizing its distance to a negative input. 
Motivated by the triplet loss, some works enforce constraints on more than one negative example. For example, PDDM (Huang et al., 2016) and Histogram Loss (Ustinova & Lempitsky, 2016) use quadruplets. Moreover, the n-pair loss (Sohn, 2016) and Lifted Structure (Oh Song et al., 2016) define constraints on all images in a batch. In these previous works, researchers have focused on enlarging the batch size, since they only sample the negative examples within the batch. Our approach incorporates the advantages of those works,
and moves beyond them to allow active sampling of the most challenging negative pairs from all seen instances within the memory banks. Such a global sampling strategy ensures that the negative pairs selected for the same positive pair are more consistent during training, so that the training can be more stable." }, { "heading": "5 EXPERIMENTS", "text": "Three experiments are conducted to evaluate our approach: (i) supervised and semi-supervised natural language understanding (NLU) tasks on the GLUE dataset via BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019); (ii) an image-text cross-domain retrieval task; and (iii) a text generation task with the VAE framework (Bowman et al., 2016). We set λ = 0.1 in (8) across all experiments, which are run on two NVIDIA TITAN X GPUs." }, { "heading": "5.1 GLUE DATASET CLASSIFICATION", "text": "Dataset: The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of 9 datasets for evaluating natural language understanding models. Six of these tasks are either single-sentence classification or paired-sentence classification tasks.
Implementation details: We develop our approach on the open-sourced Hugging Face Transformers codebase (Wolf et al., 2019)1. The hyper-parameter settings, i.e., learning rate, batch size, number of epochs, etc., are all set to the default setup recommended by the official Transformers repository, for fair comparisons and reproducibility. We report results on the development sets after fine-tuning the pre-trained models (BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019)) on each downstream task's training data (no ensemble model or multi-task training is used). We utilize the 12-layer architecture (base model) for both BERT and RoBERTa, due to limited computational resources. Parameters u, v are set to be the classification token embedding (CLS) and the embedded input text representation in BERT/RoBERTa, respectively.
We present comparisons on fully supervised tasks in Table 1, with K = 100 for RECLAIM. According to this table, RECLAIM consistently improves the performance on all GLUE datasets. For the semi-supervised experiments, we randomly draw {1%, 5%, 10%, 20%, 30%, 50%} from each training dataset in GLUE, and results are provided in Table 2. Note that in this case, we only use the top-10 (K = 10) negative examples, since the training dataset size is greatly reduced. It can be seen from Table 2 that our approach generally achieves better results than just fine-tuning BERT on a small dataset.
We note that both fully-supervised and semi-supervised tasks on GLUE serve not only as classic natural language inference (NLI) problems, but also as a testbed for the generalization ability of a representation learner to new tasks. 
Presumably, the results in Tables 1 and 2 suggest that the large-scale pre-trained BERT knowledge has been better transferred to each task in GLUE in our CL approach, as improving the Wasserstein dependency measure (Ozair et al., 2019) can be seen as encouraging the model to be as “lossless” as possible." }, { "heading": "5.2 TEXT-IMAGE RETRIEVAL", "text": "Dataset: We evaluate the proposed RECLAIM on both the Flickr30K (Plummer et al., 2015) and MS-COCO (Lin et al., 2014b) datasets. Flickr30K contains 31,000 images, and each photo has five captions. The Flickr30K dataset is split following the same setup as (Karpathy & Fei-Fei, 2015; Faghri et al., 2018). We have a training dataset with 29,000 images, a validation set with 1,000 images, and a test dataset with 1,000 images. MS-COCO contains 123,287 images, with 5 human-annotated descriptions per image. We use the same split as in (Faghri et al., 2018), i.e., the dataset is split into 113,287 training images, 5,000 validation images, and 5,000 test images.
1 https://github.com/huggingface/transformers, version 2.5.1 using PyTorch 1.2.0 (Paszke et al., 2017)
Implementation details. For this image-text matching task, we use the Adam optimizer (Kingma & Ba, 2015) to train the models. Note that we developed our approach upon SCAN (Lee et al., 2018)2, by substituting the triplet loss with our RECLAIM. In this experiment, u is the textual feature extracted from a GRU, and v is the image feature extracted from a ResNet. Training details are provided in the Appendix.
The performance of sentence retrieval with an image query or image retrieval with a sentence query is measured by recall at T (R@T) (Karpathy & Fei-Fei, 2015), defined as the percentage of queries that retrieve the correct objects within the top T highest similarity scores as determined by the model. In this experiment, we further improve the image-text retrieval results while keeping the basic model design of SCAN. The improvement indicates that features extracted by our CL approach can better capture the common salient information between text and image pairs." }, { "heading": "5.3 TEXT GENERATION", "text": "Dataset: We further evaluate our model on an unsupervised text generation task. Two commonly-used datasets are employed for this task, the Penn Treebank (PTB) (Marcus et al., 1993; Bowman et al., 2016) and the Yelp corpora (Yang et al., 2017; He et al., 2019). PTB is a relatively small dataset with sentences of varying lengths, whereas Yelp contains larger amounts of long sentences. Detailed statistics of these two datasets are summarized in the Appendix.
Implementation details: To ensure a fair comparison and reproducibility between models, we develop our model based on an existing codebase3. The encoder and decoder are both 1-layer LSTMs, and the hyper-parameter setup follows the instructions within the original codebase. In this task, u is the latent code in the text VAE and v comprises the word embedding vectors of the input text. The most commonly used metrics are applied to evaluate the learned language model, as listed in Table 4.
2 https://github.com/kuanghuei/SCAN 3 https://github.com/fangleai/Implicit-LVM
According to Table 4, by simply adding our proposed CL method, the base model can be further improved in terms of most of the automatic metrics. A lower negative ELBO indicates our approach yields a better language model. A larger KL divergence and a larger number of Active Units (AU) (Burda et al., 2015) indicate that the latent space is used more fully. 
We also observed that the posterior-collapse problem is alleviated, with improved mutual information and KL (Fang et al., 2019); this is presumably due to the fact that we add an additional CL objective on the latent code and output to improve the MI between them." }, { "heading": "5.4 ABLATION STUDY", "text": "Choice of K We seek to further investigate how the negative sample size K influences the effectiveness of our model. To this end, we choose different K = {1, 10, 100, 300} in ANS for comparison. Besides testing different Ks with ANS, we also test an alternative approach, where we randomly sample 80% of the features from the memory bank as negative samples instead of applying ANS. We denote this method simply as the 80% Method. Also, the in-batch method denotes that we only use negative samples within the batch. The results can be found in Table 5. Note that those two tricks can be viewed as two different ways of constructing a vanilla contrastive learning (Vanilla CL) algorithm.
From Table 5 we observe that when K is small, the improvement is limited. In some tasks, such as MNLI, it is even worse than the BERT-base model. This finding is consistent with arguments from previous works (Wu et al., 2017; Tschannen et al., 2020). For K = 100 and K = 300, comparable results are often observed, and either of them outperforms the others on certain tasks; K = 300 seems to work better on tasks with larger datasets such as MNLI/QNLI. We hypothesize that this is because larger datasets contain more high-quality contrastive examples than smaller datasets, thus allowing a large K to be used without introducing much noise. Both of them show better results than the 80% Method without ANS.
Computational Efficiency MNLI, the biggest dataset in GLUE, is employed as a running-time benchmark to evaluate the computational efficiency of the different approaches. We record the training time for the original BERT, K = 100, and the 80% Method. Without any contrastive regularization, BERT takes approximately 45 minutes per epoch. For K = 100, RECLAIM needs 47 minutes per epoch. The 80% Method takes 81 minutes per epoch. The memory usage for BERT-base is around 7.5GB per TITAN X GPU at the batch size used. K = 100 takes an additional 200MB, and the 80% Method takes the full 12GB memory capacity. These empirical findings provide evidence that our method can be both efficient and effective. Due to space limitations, other ablation studies, including the investigation of different λ choices, are provided in the Appendix." }, { "heading": "6 CONCLUSIONS", "text": "We have proposed a novel contrastive learning (CL) framework, RECLAIM, for natural language processing tasks. Our approach improves the “contrast” in the feature space, in an attempt to make the features learned by the representation encoder more distinguishable, representative, and informative. We identified the challenges in CL training and proposed several remedies to alleviate
these issues. Extensive experiments show consistent improvements over a variety of NLP tasks, demonstrating the effectiveness of our approach." }, { "heading": "A ALGORITHM", "text": "Here is the detailed algorithm for our RECLAIM framework.
Algorithm 1 RECLAIM training procedure.
1: Input: batch size n, dataset X, momentum parameter ρ, maximum number of iterations N.
2: Initialize the two memory banks Bu, Bv with Gaussian noise
3: for itr = 1, . . . , N do
4: Sample input x ∼ X
5: Get features: u, v = Network(x)
6: Update Bu, Bv using Equation 7
7: Use the ANS algorithm to draw K negative features for u and v: {u′i}_{i=1}^{K} ∼ Bu, {v′i}_{i=1}^{K} ∼ Bv
8: Stop gradients for {u′i}_{i=1}^{K}, {v′i}_{i=1}^{K}, Bu, Bv
9: Calculate Equation 6 with the Wasserstein-constrained critic g
10: Minimize the ensemble loss: Ltotal = Ltask − λ L_ANS
11: end for" }, { "heading": "B MORE RESULTS", "text": "B.1 MORE RESULTS FOR SEMI-SUPERVISED EXPERIMENT
Results for QQP, MRPC and QNLI are shown in Table 6.
B.2 MORE VAE RESULTS
B.3 ABLATION STUDY
Importance of the Wasserstein regularizer
Importance of λ choices We also test the effect of choosing different λ ∈ {0.01, 0.1, 0.5, 1}. As shown in Table 9, we can see that when λ ≤ 0.5, RECLAIM can consistently outperform the BERT-base model. This may be because we need to re-scale the contrastive loss to the same numerical scale as the task-specific loss.
Experiment on BERT-Large We also run the GLUE experiments on the BERT-large model, to see whether our proposed algorithm is still effective. Results can be found in Table 10.
B.4 VQA TEST
We also tested our approach on the Visual Question Answering (VQA) 2.0 task (Goyal et al., 2017), which contains human-annotated QA pairs on COCO images (Lin et al., 2014a). For each image, an average of 3 questions are collected, with 10 candidate answers per question. The most frequent answer from the annotators is selected as the correct answer.
Following previous work (Kim et al., 2018), we take the answers that appear more than 9 times in the training set as candidate answers, which results in 3129 candidates. Classification accuracy is used as the evaluation metric, defined as min(1, (# humans that provided that ans.)/3).
In this setup, we choose u, v as the question features and image features, respectively. By applying our RECLAIM approach directly to the BAN model, we see an improvement on the VQA task, as shown in Table 2." }, { "heading": "C TRAINING DETAILS", "text": "Image-Text Retrieval For the Flickr30K data, we train the model for 30 epochs. The initial learning rate is set to 0.0002, and decays by a factor of 10 after 15 epochs. For the MS-COCO data, we train the model for 20 epochs. The initial learning rate is set to 0.0005, and decays by a factor of 10 after 10 epochs. We set the batch size to 128, and threshold the maximum gradient norm to 2.0 for gradient clipping. We also set the dimension of the GRU and the joint embedding space to 1024, and the dimension of the word embedding to 300.
GLUE We choose a batch size of 32 for all 9 GLUE tasks, with a starting learning rate of 2 × 10−5. For each task, we only train for 3 epochs, since some datasets, such as RTE, are quite small and can easily be over-fitted.
Dimension of features u and v Note that our RECLAIM formulation does not require u and v to have matching dimensions. Contrastive learning seeks to compare π(u,v) to π(u)π(v), not to compare u directly with v. Though in practice, we often map u and v to matching dimensions via an MLP or RNN to balance their respective contributions to the loss.
Architecture for g: We use two different MLP layers first to map u, v into the same dimension, and then we feed both into a three-layer MLP. Details will be included in our next revision. 
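As an illustration, a minimal sketch of this design (dim = 64 matches the BERT instantiation mentioned next; everything else, including the LeakyReLU slope, is an illustrative placeholder rather than the exact configuration):

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class Critic(nn.Module):
    # Two separate projections bring u and v to a common dimension; their
    # concatenation is then scored by a three-layer MLP whose affine layers
    # are spectrally normalized, as described in Section 3.2.
    def __init__(self, dim_u, dim_v, dim=64):
        super().__init__()
        self.proj_u = nn.Linear(dim_u, dim)
        self.proj_v = nn.Linear(dim_v, dim)
        self.mlp = nn.Sequential(
            spectral_norm(nn.Linear(2 * dim, dim)),
            nn.LeakyReLU(0.2),
            spectral_norm(nn.Linear(dim, dim)),
            nn.LeakyReLU(0.2),
            spectral_norm(nn.Linear(dim, 1)),  # scalar score for the pair (u, v)
        )

    def forward(self, u, v):
        return self.mlp(torch.cat([self.proj_u(u), self.proj_v(v)], dim=-1))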
For instance, in the BERT model, we map word representations and hidden states to dim = 64.
Choice of u and v: In the GLUE experiments: u is chosen as the word vectors (from one-hot tokens to real vectors), v is chosen as the BERT output features with dim = 768.
In the VAE setup: u is chosen as the word vectors (from one-hot tokens to real vectors), v is chosen as the encoded latent variable z.
In the image-text retrieval setup: u is chosen as the word vectors (from one-hot tokens to real vectors), v is chosen as the image features." } ]
2020
null
SP:a8bb14b514e474691be63b51582544a9befa7125
[ "The paper finds that at extreme sparsities (>95%), existing approaches to pruning neural networks at initialization devolve to worse than random pruning. The paper posits that this degenerate behavior is due to the fact that weights are pruned in groups, though the saliency metrics only capture pointwise changes. The paper presents a modified saliency metric based on SNIP, allowing for calculating salience of partially pruned networks; this in turn allows for applying an iterative version of SNIP, as well as a variant of iterative SNIP that allows for rejuvenation. These pruning techniques are evaluated, showing that they maintain accuracy at high sparsities." ]
Recent studies have shown that skeletonization (pruning parameters) of networks at initialization provides all the practical benefits of sparsity both at inference and training time, while only marginally degrading their performance. However, we observe that beyond a certain level of sparsity (approx. 95%), these approaches fail to preserve the network performance, and to our surprise, in many cases perform even worse than trivial random pruning. To this end, we propose an objective to find a skeletonized network with maximum foresight connection sensitivity (FORCE), whereby the trainability, in terms of connection sensitivity, of a pruned network is taken into consideration. We then propose two approximate procedures to maximize our objective: (1) Iterative SNIP, which allows parameters that were unimportant at earlier stages of skeletonization to become important at later stages; and (2) FORCE, an iterative process that allows exploration by letting already pruned parameters resurrect at later stages of skeletonization. Empirical analysis on a large suite of experiments shows that our approach, while providing at least as good a performance as other recent approaches at moderate pruning levels, provides remarkably improved performance at higher pruning levels (up to 99.5% of the parameters can be removed while keeping the networks trainable).
[ { "affiliations": [], "name": "Pau de Jorge" }, { "affiliations": [], "name": "Amartya Sanyal" }, { "affiliations": [], "name": "Harkirat S. Behl" }, { "affiliations": [], "name": "Puneet K. Dokania" } ]
[ { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Miguel Á. Carreira-Perpiñán", "Yerlan Idelbayev" ], "title": "learning-compression” algorithms for neural net pruning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Yves Chauvin" ], "title": "A back-propagation algorithm with optimal use of hidden units", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "Xiaoliang Dai", "Hongxu Yin", "Niraj K Jha" ], "title": "Nest: A neural network synthesis tool based on a grow-and-prune paradigm", "venue": "IEEE Transactions on Computers,", "year": 2019 }, { "authors": [ "Tim Dettmers", "Luke Zettlemoyer" ], "title": "Sparse networks from scratch: Faster training without losing performance, 2020", "venue": "URL https://openreview.net/forum?id=ByeSYa4KPS", "year": 2020 }, { "authors": [ "Erich Elsen", "Marat Dukhan", "Trevor Gale", "Karen Simonyan" ], "title": "Fast sparse convnets", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Utku Evci", "Trevor Gale", "Jacob Menick", "Pablo Samuel Castro", "Erich Elsen" ], "title": "Rigging the lottery: Making all tickets winners", "venue": null, "year": 1911 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M. Roy", "Michael Carbin" ], "title": "Stabilizing the lottery ticket hypothesis, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yiwen Guo", "Anbang Yao", "Yurong Chen" ], "title": "Dynamic network surgery for efficient dnns", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Babak Hassibi", "David G Stork", "Gregory J Wolff" ], "title": "Optimal brain surgeon and general network pruning", "venue": "In IEEE international conference on neural networks,", "year": 1993 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Aditya Kusupati", "Vivek Ramanujan", "Raghav Somani", "Mitchell Wortsman", "Prateek Jain", "Sham M. 
Kakade", "Ali Farhadi" ], "title": "Soft threshold weight reparameterization for learnable sparsity", "venue": null, "year": 2002 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip Torr" ], "title": "SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Stephen Gould", "Philip H.S. Torr" ], "title": "A signal propagation perspective for pruning neural networks at initialization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tao Lin", "Sebastian U. Stich", "Luis Barba", "Daniil Dmitriev", "Martin Jaggi" ], "title": "Dynamic model pruning with feedback", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P. Kingma" ], "title": "Learning sparse neural networks through l0 regularization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Arun Mallya", "Dillon Davis", "Svetlana Lazebnik" ], "title": "Piggyback: Adapting a single network to multiple tasks by learning to mask weights", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz" ], "title": "Pruning convolutional neural networks for resource efficient inference", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Hesham Mostafa", "Xin Wang" ], "title": "Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization, 2019", "venue": "URL https://openreview.net/forum? id=S1xBioR5KX", "year": 2019 }, { "authors": [ "Michael C Mozer", "Paul Smolensky" ], "title": "Skeletonization: A technique for trimming the fat from a network via relevance assessment", "venue": "In Advances in Neural Information Processing Systems,", "year": 1989 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch, 2017", "venue": null, "year": 2017 }, { "authors": [ "M. Sandler", "A. Howard", "M. Zhu", "A. Zhmoginov", "L. 
Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hidenori Tanaka", "Daniel Kunin", "Daniel L Yamins", "Surya Ganguli" ], "title": "Pruning neural networks without any data by iteratively conserving synaptic flow", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Stijn Verdenius", "Maarten Stol", "Patrick Forré" ], "title": "Pruning via iterative ranking of sensitivity statistics", "venue": "arXiv preprint arXiv:2006.00896,", "year": 2020 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "of Paszke" ], "title": "2017) and we use the default settings. In this case, we use the Resnet50 and VGG19", "venue": null, "year": 2017 }, { "authors": [ "Wang" ], "title": "Table 2: Percentage of weights per layer for each network and dataset. Layer type Conv Fully connected BatchNorm Bias Prunable Total CIFAR10 Resnet50", "venue": null, "year": 2020 }, { "authors": [ "Wang" ], "title": "In Fig 8 we visualize the structure of the networks after pruning 99.9% of the parameters. We show the fraction of remaining weights and the total number of remaining weights per layer after pruning. As seen in (a) and (d), all analysed methods show a tendency to preserve the initial and final layers and to prune more heavily the deep convolutional layers, this is consistent with results reported", "venue": null, "year": 2020 }, { "authors": [ "Liu" ], "title": "GRASP’s approximation iteratively", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "2018), reason that manually designed networks have layers which are more redundant than others. Therefore, pruning methods even this redundancies by pruning layers with different percentages. We extend this reasoning, and hypothesize that pruning algorithms should always have preference for pruning the same (redundant) layers across all levels of sparsity", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The majority of pruning algorithms for Deep Neural Networks require training dense models and often fine-tuning sparse sub-networks in order to obtain their pruned counterparts. In Frankle & Carbin (2019), the authors provide empirical evidence to support the hypothesis that there exist sparse sub-networks that can be trained from scratch to achieve similar performance as the dense ones. However, their method to find such sub-networks requires training the full-sized model and intermediate sub-networks, making the process much more expensive.\nRecently, Lee et al. (2019) presented SNIP. Building upon almost a three decades old saliency criterion for pruning trained models (Mozer & Smolensky, 1989), they are able to predict, at initialization, the importance each weight will have later in training. Pruning at initialization methods are much cheaper than conventional pruning methods. Moreover, while traditional pruning methods can help accelerate inference tasks, pruning at initialization may go one step further and provide the same benefits at train time Elsen et al. (2020).\nWang et al. (2020) (GRASP) noted that after applying the pruning mask, gradients are modified due to non-trivial interactions between weights. Thus, maximizing SNIP criterion before pruning might be sub-optimal. They present an approximation to maximize the gradient norm after pruning, where they treat pruning as a perturbation on the weight matrix and use the first order Taylor’s approximation. While they show improved performance, their approximation involves computing a Hessian-vector product which is expensive both in terms of memory and computation. ∗Correspondence to pau@robots.ox.ac.uk †www.europe.naverlabs.com\nWe argue that both SNIP and GRASP approximations of the gradients after pruning do not hold for high pruning levels, where a large portion of the weights are removed at once. In this work, while we rely on the saliency criteria introduced by Mozer & Smolensky (1989), we optimize what this saliency would be after pruning, rather than before. Hence, we name our criteria Foresight Connection sEnsitivity (FORCE). We introduce two approximate procedures to progressively optimize our objective. The first, which turns out to be equivalent to applying SNIP iteratively, removes a small fraction of weights at each step and re-computes the gradients after each pruning round. This allows to take into account the intricate interactions between weights, re-adjusting the importance of connections at each step. The second procedure, which we name FORCE, is also iterative in nature, but contrary to the first, it allows pruned parameters to resurrect. Hence, it supports exploration, which otherwise is not possible in the case of iterative SNIP. Moreover, one-shot SNIP can be viewed as a particular case of using only one iteration. Empirically, we find that both SNIP and GRASP have a sharp drop in performance when targeting higher pruning levels. Surprisingly, they perform even worse than random pruning as can be seen in Fig 1. In contrast, our proposed pruning procedures prove to be significantly more robust on a wide range of pruning levels." 
}, { "heading": "2 RELATED WORK", "text": "Pruning trained models Most of the pruning works follow the train – prune – fine-tune cycle (Mozer & Smolensky, 1989; LeCun et al., 1990; Hassibi et al., 1993; Han et al., 2015; Molchanov et al., 2017; Guo et al., 2016), which requires training the dense network until convergence, followed by multiple iterations of pruning and fine-tuning until a target sparsity is reached. Particularly, Molchanov et al. (2017) present a criterion very similar to Mozer & Smolensky (1989) and therefore similar to Lee et al. (2019) and our FORCE, but they focus on pruning whole neurons, and involve training rounds while pruning. Frankle & Carbin (2019) and Frankle et al. (2020) showed that it was possible to find sparse sub-networks that, when trained from scratch or an early training iteration, were able to match or even surpass the performance of their dense counterparts. Nevertheless, to find them they use a costly procedure based on Han et al. (2015). All these methods rely on having a trained network, thus, they are not applicable before training. In contrast, our algorithm is able to find a trainable sub-network with randomly initialized weights. Making the overall pruning cost much cheaper and presenting an opportunity to leverage the sparsity during training as well.\nInduce sparsity during training Another popular approach has been to induce sparsity during training. This can be achieved by modifying the loss function to consider sparsity as part of the optimization (Chauvin, 1989; Carreira-Perpiñán & Idelbayev, 2018; Louizos et al., 2018) or by dynamically pruning during training (Bellec et al., 2018; Mocanu et al., 2018; Mostafa & Wang, 2019; Dai et al., 2019; Dettmers & Zettlemoyer, 2020; Lin et al., 2020; Kusupati et al., 2020; Evci et al., 2019). These methods are usually cheaper than pruning after training, but they still need to train the network to select the final sparse sub-network. We focus on finding sparse sub-networks before any weight update, which is not directly comparable.\nPruning at initialization These methods present a significant leap with respect to other pruning methods. While traditional pruning mechanisms focused on bringing speed-up and memory reduction at inference time, pruning at initialization methods bring the same gains both at training and inference time. Moreover, they can be seen as a form of Neural Architecture Search (Zoph & Le, 2016) to find more efficient network topologies. Thus, they have both a theoretical and practical interest.\nLee et al. (2019) presented SNIP, a method to estimate, at initialization, the importance that each weight could have later during training. SNIP analyses the effect of each weight on the loss function when perturbed at initialization. In Lee et al. (2020), the authors studied pruning at initialization from a signal propagation perspective, focusing on the initialization scheme. Recently, Wang et al. (2020) proposed GRASP, a different method based on the gradient norm after pruning and showed a significant improvement for higher levels of sparsity. However, neither SNIP nor GRASP perform sufficiently well when larger compressions and speed-ups are required and a larger fraction of the weights need to be pruned. In this paper, we analyse the approximations made by SNIP and GRASP, and present a more suitable solution to maximize the saliency after pruning." 
}, { "heading": "3 PROBLEM FORMULATION: PRUNING AT INITIALIZATION", "text": "Given a dataset D = {(xi,yi)}ni=1, the training of a neural network f parameterized by θ ∈ Rm can be written as minimizing the following empirical risk:\narg min θ\n1\nn ∑ i L((f(xi;θ)),yi) s.t. θ ∈ C, (1)\nwhere L and C denote the loss function and the constraint set, respectively. Unconstrained (standard) training corresponds to C = Rm. Assuming we have access to the gradients (batch-wise) of the empirical risk, an optimization algorithm (e.g. SGD) is generally used to optimize the above objective, that, during the optimization process, produces a sequence of iterates {θi}Ti=0, where θ0 and θT denote the initial and the final (optimal) parameters, respectively. Given a target sparsity level of k < m, the general parameter pruning problem involves C with a constraint ‖θT ‖0 ≤ k, i.e., the final optimal iterate must have a maximum of k non-zero elements. Note that there is no such constraint with the intermediate iterates.\nPruning at initialization, the main focus of this work, adds further restrictions to the above mentioned formulation by constraining all the iterates to lie in a fixed subspace of C. Precisely, the constraints are to find an initialization θ0 such that ‖θ0‖0 ≤ k 1, and the intermediate iterates are θi ∈ C̄ ⊂ C, ∀i ∈ {1, . . . , T}, where C̄ is the subspace of Rm spanned by the natural basis vectors {ej}j∈supp(θ0). Here, supp(θ0) denotes the support of θ0, i.e., the set of indices with non-zero entries. The first condition defines the sub-network at initialization with k parameters, and the second fixes its topology throughout the training process. Since there are ( m k ) such possible sub-spaces, exhaustive search\nto find the optimal sub-space to optimize (1) is impractical as it would require training ( m k ) neural networks. Below we discuss two recent approaches that circumvent this problem by maximizing a hand-designed data-dependent objective function. These objectives are tailored to preserve some relationships between the parameters, the loss, and the dataset, that might be sufficient to obtain a reliable θ0. For the ease of notation, we will use θ to denote the dense initialization.\nSNIP Lee et al. (2019) present a method based on the saliency criterion from Mozer & Smolensky (1989). They add a key insight and show this criteria works surprisingly well to predict, at initialization, the importance each connection will have during training. The idea is to preserve the parameters that will have maximum impact on the loss when perturbed. Let c ∈ {0, 1}m be a binary vector, and the Hadamard product. Then, the connection sensitivity in SNIP is computed as:\ng(θ) := ∂L(θ c)\n∂c\n∣∣∣∣ c=1 = ∂L(θ) ∂θ θ. (2)\nOnce g(θ) is obtained, the parameters corresponding to the top-k values of |g(θ)i| are then kept. Intuitively, SNIP favors those weights that are far from the origin and provide high gradients (irrespective of the direction). We note that SNIP objective can be written as the following problem:\nmax c S(θ, c) := ∑ i∈supp(c) |θi ∇L(θ)i| s.t. c ∈ {0, 1}m, ‖c‖0 = k. (3)\nIt is trivial to note that the optimal solution to the above problem can be obtained by selecting the indices corresponding to the top-k values of |θi ∇L(θ)i|.\n1In practice, as will be done in this work as well, a subset of a given dense initialization is found using some saliency criterion (will be discussed soon), however, note that our problem statement is more general than that.\nGRASP Wang et al. 
GRASP Wang et al. (2020) note that the SNIP saliency measures the connection sensitivity of the weights before pruning; however, it is likely to change after pruning. Moreover, they argue that, at initialization, it is more important to preserve the gradient signal than the loss itself. They propose to use as saliency the gradient norm of the loss, ∆L(θ) = ∇L(θ)^T ∇L(θ), but measured after pruning. To maximize it, Wang et al. (2020) adopt the same approximation introduced in LeCun et al. (1990) and treat pruning as a perturbation on the initial weights. Their method is equivalent to solving:

$\max_{c} \; G(\theta, c) := \sum_{\{i:\, c_i = 0\}} -\theta_i [Hg]_i \quad \text{s.t.} \quad c \in \{0, 1\}^m, \; \|c\|_0 = k,$ (4)

where H and g denote the Hessian and the gradient of the loss, respectively." }, { "heading": "4 FORESIGHT CONNECTION SENSITIVITY", "text": "Since removing connections of a neural network will have significant impact on its forward and backward signals, we are interested in obtaining a pruned network that is easy to train. We use connection sensitivity of the loss function as a proxy for the so-called trainability of a network. To this end, we first define connection sensitivity after pruning, which we name Foresight Connection sEnsitivity (FORCE), and then propose two procedures to optimize it in order to obtain the desired pruned network. Let θ̄ = θ ⊙ c denote the pruned parameters once a binary mask c with ‖c‖_0 = k ≤ m is applied. The FORCE at θ̄ for a given mask ĉ is then obtained as:

$g(\bar{\theta}) := \frac{\partial L(\bar{\theta})}{\partial c}\Big|_{c=\hat{c}} = \frac{\partial L(\bar{\theta})}{\partial \bar{\theta}}\Big|_{c=\hat{c}} \odot \frac{\partial \bar{\theta}}{\partial c}\Big|_{c=\hat{c}} = \frac{\partial L(\bar{\theta})}{\partial \bar{\theta}}\Big|_{c=\hat{c}} \odot \theta.$ (5)

The last equality is obtained by rewriting θ̄ as diag(θ)c, where diag(θ) is a diagonal matrix with θ as its elements, and then differentiating w.r.t. c. Note, when k < m, the sub-network θ̄ is obtained by removing connections corresponding to all the weights for which the binary variable is zero. Therefore, only the weights corresponding to the indices for which c(i) = 1 contribute in equation (5); all other weights do not participate in forward and backward propagation and are to be ignored. We now discuss the crucial differences between our formulation (5), SNIP (2) and GRASP (4).

• When ĉ = 1, the formulation is exactly the same as the connection sensitivity used in SNIP. However, ĉ = 1 is too restrictive in the sense that it assumes that all the parameters are active in the network and they are removed one by one with replacement; therefore, it fails to capture the impact of removing a group of parameters.
• Our formulation uses weights and gradients corresponding to θ̄ and thus, compared to SNIP, provides a better indication of the training dynamics of the pruned network. However, the GRASP formulation is based on the assumption that pruning is a small perturbation on the gradient norm, which, as also shown experimentally, is not always a reliable assumption.
• When ‖ĉ‖_0 ≪ ‖1‖_0, i.e., extreme pruning, the gradients before and after pruning will have very different values as ‖θ ⊙ ĉ‖_2 ≪ ‖θ‖_2, making SNIP and GRASP unreliable (empirically we find SNIP and GRASP fail in the case of high sparsity).

FORCE saliency Note that FORCE (5) is defined for a given sub-network which is unknown a priori, as our objective itself is to find the sub-network with maximum connection sensitivity. Similar to the reformulation of SNIP in (3), the objective to find such a sub-network corresponding to the foresight connection sensitivity can be written as:

$\max_{c} \; S(\theta, c) := \sum_{i \in \mathrm{supp}(c)} |\theta_i \cdot \nabla L(\theta \odot c)_i| \quad \text{s.t.} \quad c \in \{0, 1\}^m, \; \|c\|_0 = k.$ (6)

Here, ∇L(θ ⊙ c)_i represents the i-th index of $\frac{\partial L(\bar{\theta})}{\partial \bar{\theta}}\big|_{c}$.
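As a concrete illustration of (6), the sketch below (our illustrative code, with assumed names) evaluates S(θ, c) for a fixed mask: the network is temporarily sparsified to θ ⊙ c, the gradient is taken at that point, and only surviving connections contribute to the score.

```python
import torch
import torch.nn.functional as F

def force_saliency(model, masks, inputs, targets):
    """Evaluate S(theta, c) of Eq. (6) for a fixed list of binary masks."""
    params = [p for p in model.parameters() if p.requires_grad]
    backups = [p.detach().clone() for p in params]
    with torch.no_grad():
        for p, c in zip(params, masks):
            p.mul_(c)  # theta_bar = theta ⊙ c
    loss = F.cross_entropy(model(inputs), targets)
    grads = torch.autograd.grad(loss, params)  # gradient at theta ⊙ c
    with torch.no_grad():
        # On supp(c) the masked weight equals theta_i, so (p * g) * c matches (6).
        saliency = sum(((p * g).abs() * c).sum()
                       for p, g, c in zip(params, grads, masks))
        for p, b in zip(params, backups):
            p.copy_(b)  # restore the dense initialization
    return saliency.item()
```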
As opposed to (3), finding the optimal solution of (6) is non-trivial, as it requires computing the gradients of all possible $\binom{m}{k}$ sub-networks in order to find the one with maximum sensitivity. To this end, we present two approximate solutions to the above problem that primarily involve (i) progressively increasing the degree of pruning, and (ii) solving an approximation of (6) at each stage of pruning.

Progressive Pruning (Iterative SNIP) Let k be the number of parameters to be kept after pruning. Let us assume that we know a schedule (discussed later) to divide k into a set of natural numbers {k_t}_{t=1}^{T} such that k_t > k_{t+1} and k_T = k. Now, given the mask c_t corresponding to k_t, pruning from k_t to k_{t+1} can be formulated using the connection sensitivity (5) as:

$c_{t+1} = \arg\max_{c} \; S(\bar{\theta}, c) \quad \text{s.t.} \quad c \in \{0, 1\}^m, \; \|c\|_0 = k_{t+1}, \; c \odot c_t = c,$ (7)

where θ̄ = θ ⊙ c_t. The additional constraint c ⊙ c_t = c ensures that no parameter that had been pruned earlier is activated again. Assuming that the pruning schedule ensures a smooth transition from one topology to another (‖c_t‖_0 ≈ ‖c_{t+1}‖_0) such that the gradient approximation $\frac{\partial L(\bar{\theta})}{\partial \bar{\theta}}\big|_{c_t} \approx \frac{\partial L(\bar{\theta})}{\partial \bar{\theta}}\big|_{c_{t+1}}$ is valid, (7) can be approximated as solving (3) at θ̄. Thus, for a given schedule over k, our first approximate solution to (6) involves solving (3) iteratively, as sketched below. This allows re-assessing the importance of connections after changing the sub-network. For a schedule with T = 1, we recover SNIP, where a crude gradient approximation between the dense network c_0 = 1 and the final mask c is used. This approach of ours turns out to be algorithmically similar to a concurrent work (Verdenius et al., 2020). However, our motivation comes from a novel objective function (5), which also gives rise to our second approach (FORCE). Tanaka et al. (2020) also concurrently study the effect of iterative pruning and report, similar to our findings, that pruning progressively is needed for high sparsities.

Progressive Sparsification (FORCE) The constraint c ⊙ c_t = c in (7) (Iterative SNIP) might be restrictive in the sense that, while re-assessing the importance of unpruned parameters, it does not allow previously pruned parameters to resurrect (even if they could become important). This hinders exploration, which can be unfavourable in finding a suitable sub-network. Here we remove this constraint, meaning the weights for which c(i) = 0 are not removed from the network but rather assigned a value of zero. Therefore, while not contributing to the forward signal, they might have a non-zero gradient. This relaxation modifies the saliency in (5) whereby the gradient is now computed at a sparsified network instead of a pruned network. Similar to the above approach, we sparsify the network progressively and, once the desired sparsity is reached, all connections with c(i) = 0 are pruned. Note, the step of removing zero weights is valid if removing such connections does not adversely impact the gradient flow of the unpruned parameters. We, in fact, found this to be true in our experiments shown in Fig 7 (Appendix). However, this assumption might not always hold.

An overview of Iterative SNIP and FORCE is presented in Algorithm 1.

Sparsity schedule Both the above discussed iterative procedures approximately optimize (5); however, they depend on a sparsity/pruning schedule favouring small steps to be able to reliably apply the mentioned gradient approximation.
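Concretely, a single step of this procedure under the gradient approximation can be sketched as follows (our illustrative code; names are assumptions), taking the current mask c_t and the next budget k_{t+1} from the schedule:

```python
import torch
import torch.nn.functional as F

def iterative_snip_step(model, masks, inputs, targets, keep_next):
    """One progressive pruning step (Eq. (7)); assumes `model` has already
    been masked by c_t and keep_next is below the number of alive weights."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = F.cross_entropy(model(inputs), targets)
    grads = torch.autograd.grad(loss, params)
    # Multiplying by the current mask enforces c ⊙ c_t = c: already-pruned
    # weights score zero and can never re-enter the kept set.
    scores = torch.cat([((p * g).abs() * c).flatten()
                        for p, g, c in zip(params, grads, masks)])
    threshold = torch.topk(scores, keep_next).values.min()
    # With keep_next below the alive count, threshold > 0 generically, so
    # zero-scored (pruned) entries stay pruned.
    return [(((p * g).abs() * c) >= threshold).float()
            for p, g, c in zip(params, grads, masks)]
```

Looping this step over a decreasing schedule {k_t} recovers Iterative SNIP (T = 1 recovers plain SNIP).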
One such valid schedule would be one where the proportion of newly removed weights with respect to the remaining weights is small. We find a simple exponential decay schedule, defined below, to work very well in all our experiments:

Exp mode: $k_t = \exp\{\alpha \log k + (1-\alpha)\log m\}, \quad \alpha = \frac{t}{T}.$ (8)

In section 5.3 we empirically show that these methods are very robust to the hyperparameter T.

Some theoretical insights When pruning weights gradually, we are looking for the best possible sub-network in a neighbourhood defined by the previous mask and the amount of weights removed at that step. The problem being non-convex and non-smooth makes it challenging to prove whether the mask obtained by our method is globally optimal. However, in Appendix D we prove that each intermediate mask obtained with Iterative SNIP is indeed an approximate local minimum, where the degree of sub-optimality increases with the pruning step size. This gives some validation of why SNIP fails at higher sparsity. We cannot provide the same guarantees for FORCE (there is no obvious link between the step size and the distance between masks); nevertheless, we empirically observe that FORCE is quite robust and more often than not improves over the performance of Iterative SNIP, which is not able to recover weights once pruned. We present further analysis in Appendix C.4.

Algorithm 1 FORCE/Iter SNIP algorithms to find a pruning mask
1: Inputs: Training set D, final sparsity k, number of steps T, weights θ_0 ∈ R^m.
2: Obtain {k_t}_{t=1:T} using the chosen schedule (refer to Eq (8))
3: Define initial mask c_0 = 1
4: for t = 0, . . . , T − 1 do
5:   Sample mini-batch {z_i}_{i=1}^{n} from D
6:   Define θ̄ = θ ⊙ c_t (as sparsified (FORCE) vs pruned (Iter SNIP) network)
7:   Compute g(θ̄) (refer to Eq (5))
8:   I = {i_1, . . . , i_{k_{t+1}}} are the indices of the top-k_{t+1} values of |g_i|
9:   Build c_{t+1} by setting to 0 all indices not included in I.
10: end for
11: Return: c_T.
" }, { "heading": "5 EXPERIMENTS", "text": "In the following we evaluate the efficacy of our approaches across different architectures and datasets. Training settings, architecture descriptions, and implementation details are provided in Appendix A." }, { "heading": "5.1 RESULTS ON CIFAR-10", "text": "Fig 2 compares the accuracy of the described iterative approaches with both SNIP and GRASP. We also report the performance of a dense and a random pruning baseline. Both SNIP and GRASP consider a single batch to approximate the saliencies, while we employ a different batch of data at each stage of our gradual skeletonization process. For a fair comparison, and to understand how the number of batches impacts performance, we also run these methods averaging the saliencies over T batches, where T is the number of iterations. SNIP-MB and GRASP-MB respectively refer to these multi-batch (MB) counterparts. In these experiments, we use T = 300. We study the hyper-parameter robustness regarding T later in section 5.3.

We observe that for moderate sparsity levels, one batch is sufficient for both SNIP and GRASP, as reported in Lee et al. (2019); Wang et al. (2020). However, as we increase the level of sparsity, the performance of SNIP and GRASP degrades dramatically. For example, at 99.0% sparsity, SNIP drops down to 10% accuracy for both ResNet50 and VGG19, which is equivalent to random guessing as there are 10 classes. Note, in the case of randomly pruned networks, accuracy is nearly 75% and 82% for ResNet50 and VGG19, respectively, which is significantly better than the performance of SNIP.
However, to our surprise, just using multiple batches to compute the connection sensitivity used in SNIP improves it from 10% to almost 90%. This clearly indicates that a better approximation of the connection sensitivity is necessary for good performance in the case of high sparsity regime. Similar trends, although not this extreme, can be observed in the case of GRASP as well. On the other hand, gradual pruning approaches are much more robust in terms of sparsity for example, in the case of 99.9% pruning, while one-shot approaches perform as good as a random classifier (nearly 10% accuracy), both FORCE and Iterative SNIP obtain more than 80% accuracy. While the accuracies obtained at higher sparsities might have degraded too much for some use cases, we argue this is an encouraging result, as no approach before has pruned a network at initialization to such extremes while keeping the network trainable and these results might encourage the community to improve the performance further. Finally, gradual pruning methods consistently improve over other methods even at moderate sparsity levels (refer to Fig 5), this motivates the use of FORCE or Iterative SNIP instead of other methods by default at any sparsity regime. Moreover, the additional cost of using iterative\npruning instead of SNIP-MB is negligible compared to the cost of training and is significantly cheaper than GRASP-MB, further discussed in section 5.3." }, { "heading": "5.2 RESULTS ON LARGER DATASETS", "text": "We now present experiments on large datasets. Wang et al. (2020) and Lee et al. (2019) suggest using a batch of size ∼10 times the number of classes, which is very large in these experiments. Instead, for memory efficiency, we average the saliencies over several mini-batches. For CIFAR100 and Tiny-ImageNet, we average 10 and 20 batches per iteration respectively, with 128 examples per batch. As we increase the number of batches per iteration, computing the pruning mask becomes more expensive. From Fig 4, we observe that the accuracy converges after just a few iterations. Thus, for the following experiments we used 60 iterations. For a fair comparison, we run SNIP and GRASP with T ×B batches, where T is the number of iterations and B the number of batches per iteration in our method. We find the results, presented in Fig 3, consistent with trends in CIFAR-10.\nIn the case of Imagenet, we use a batch size of 256 examples and 40 batches per iteration. We use the official implementation of VGG19 with batch norm and Resnet50 from Paszke et al. (2017). As presented in Table 1, gradual pruning methods are consistently better than SNIP, with a larger gap as we increase sparsity. We would like to emphasize that FORCE is able to prune 90% of the weights of VGG while losing less than 3% of the accuracy, we find this remarkable for a method that prunes before any weight update. Interestingly, GRASP performs better than other methods at 95% sparsity (VGG), moreover, it also slightly surpasses FORCE for Resnet50 at 90%, however, it under-performs random pruning at 95%. In fact, we find all other methods to perform worse than random pruning for Resnet50. We hypothesize that, for a much more challenging task (Imagenet with 1000 classes), Resnet50 architecture might not be extremely overparametrized. For instance,\nVGG19 has 143.68M parameters while Resnet50 uses 25.56M (refer to Table 2). 
On the other hand, the fact that random pruning can yield relatively trainable architectures for these sparsity levels is somewhat surprising and might indicate that there still is room for improvement in this direction. Results seem to indicate that the FORCE saliency is a step in the right direction and we hypothesize further improvements on its optimization might lead to even better performance. In Appendix C.6, we show superior performance of our approach on the Mobilenet-v2 architecture (Sandler et al., 2018) as well, which is much more \"slim\" than Resnet and VGG 2." }, { "heading": "5.3 ANALYSIS", "text": "Saliency optimization To experimentally validate our approach (7), we conduct an ablation study where we compute the FORCE saliency after pruning (5) while varying the number of iterations T for different sparsity levels. In Fig 4 (left) we present the relative change in saliency as we vary the number of iterations T , note when T = 1 we recover one-shot SNIP. As expected, for moderate levels of sparsity, using multiple iterations does not have a significant impact on the saliency. Nevertheless, as we target higher sparsity levels, we can see that the saliency can be better optimized when pruning iteratively. In Appendix C.1 we include results for FORCE where we observe similar trends.\nHyperparameter robustness As shown in Figures 2 and 3, for low sparsity levels, all methods are comparable, but as we move to higher sparsity levels, the gap becomes larger. In Fig 4 (middle) we fix sparsity at 99.5% and study the accuracy as we vary the number of iterations T . Each point is averaged over 3 trials. SNIP (T = 1) yields sub-networks unable to train (10% acc), but as we move to iterative pruning (T > 1) accuracy increases up to 90% for FORCE and 89% for Iter SNIP. Moreover, accuracy is remarkably robust to the choice of T , the best performance for both FORCE and Iter SNIP is with more iterations, however a small number of iterations already brings a huge boost. This suggests these methods might be used by default by a user without worrying too much about hyper-parameter tuning, easily adapting the amount of iterations to their budget.\nPruning cost As shown in Fig 2, SNIP performance quickly degrades beyond 95% sparsity. Wang et al. (2020) suggested GRASP as a more robust alternative, however, it needs to compute a Hessian vector product which is significantly more expensive in terms of memory and time. In Fig 4 (right), we compare the time cost of different methods to obtain the pruning masks along with the corresponding accuracy. We observe that both SNIP and GRASP are fragile when using only one batch (red accuracy indicates performance below random baseline). When using multiple batches their robustness increases, but so does the pruning cost. Moreover, we find that gradual pruning based on the FORCE saliency is much cheaper than GRASP-MB when using equal amount of batches, this is because GRASP involves an expensive Hessian vector product. Thus, FORCE (or Iterative SNIP) would be preferable over GRASP-MB even when they have comparable accuracies.\nFORCE vs Iterative SNIP Empirically, we find that FORCE tends to outperform Iter SNIP more often than not, suggesting that allowing weights to recover is indeed beneficial despite having less theoretical guarantees (see gradient approximation in Section 4). 
Thus, we would make FORCE\n2Mobilenet has 2.3M params compared to 20.03M and 23.5M of VGG and Resnet, respectively.\nalgorithm our default choice, especially for Resnet architectures. In Appendix C.4 we empirically observe two distinct phases when pruning with FORCE. The first one involves exploration (early phase) when the amount of pruned and recovered weights seem to increase, indicating exploration of masks that are quite different from each other. The second, however, shows rapid decrease in weight recovery, indicating a phase where the algorithm converges to a more constrained topology. As opposed to Iter SNIP, the possibility of the exploration of many possible sub-networks before converging to a final topology might be the reason behind the slightly improved performance of FORCE. But this exploration comes at a price, in Fig 4 (middle and left) we observe how, despite FORCE reaching a higher accuracy when using enough steps, if we are under a highly constrained computational budget and can only afford a few pruning iterations, Iter SNIP is more likely to obtain a better pruning mask. This is indeed expected as FORCE might need more iterations to converge to a good sub-space, while Iter SNIP will be forced to converge by construction. A combination of FORCE and Iter SNIP might lead to an even better approach, we leave this for future work.\nEarly pruning as an additional baseline Our gradual pruning approaches (SNIP-MB and GRASPMB as well) use multiple batches to obtain a pruned mask, considering that pruning can be regarded as a form of training (Mallya et al., 2018), we create another baseline for the sake of completeness. We train a network for one epoch, a similar number of iterations as used by our approach, and then use magnitude pruning to obtain the final mask, we call this approach early pruning (more details in Appendix C.5). Interestingly, we find that early pruning tends to perform worse than SNIP-MB (and gradual pruning) for Resnet, and shows competitive performance at low sparsity level for VGG but with a sharp drop in the performance as the sparsity level increases. Even though these experiments support the superiority of our approach, we would like to emphasize that they do not conclude that any early pruning strategy would be suboptimal compared to pruning at initialization as an effective approach in this direction might require devising a well thought objective function.\nIterative pruning to maximize the Gradient Norm In Sec 4, we have seen Iterative SNIP can be used to optimize the FORCE saliency. We also tried to use GRASP iteratively, however, after a few iterations the resulting networks were not trainable. Interestingly, if we apply the gradient approximation to GRASP saliency (instead of Taylor), we can come up with a different iterative approximation to maximize the gradient norm after pruning. We empirically observe this method is more robust than GRASP to high sparsity levels. This suggests that 1) Iterative pruning, although beneficial, can not be trivially applied to any method. 2) The gradient approximation is more general than in the context of FORCE/SNIP sensitivity. We present further details and results in Appendix E." }, { "heading": "6 DISCUSSION", "text": "Pruning at initialization has become an active area of research both for its practical and theoretical interests. In this work, we discovered that existing methods mostly perform below random pruning at extreme sparsity regime. 
We presented FORCE, a new saliency to compute the connection sensitivity after pruning, and two approximations to progressively optimize FORCE in order to prune networks at initialization. We showed that our methods are significantly better than the existing approaches for pruning at extreme sparsity levels, and are at least as good as the existing ones for pruning at moderate sparsity levels. We also provided theoretical insights on why progressive skeletonization is beneficial at initialization, and showed that the cost of iterative methods is reasonable compared to the existing ones. Although pruning iteratively has been ubiquitous in the pruning community, it was not evident that pruning at initialization might benefit from this scheme. Particularly, not every approximation could be used for gradual pruning as we have shown with GRASP. However, the gradient approximation allowed us to gradually prune while maximizing either the gradient norm or FORCE. We consider our results might encourage future work to further investigate the exploration/exploitation trade-off in pruning and find more efficient pruning schedules, not limited to the pruning at initialization." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the Royal Academy of Engineering under the Research Chair and Senior Research Fellowships scheme, EPSRC/MURI grant EP/N019474/1 and Five AI Limited. Pau de Jorge was fully funded by NAVER LABS Europe. Amartya Sanyal acknowledges support from The Alan Turing Institute under the Turing Doctoral Studentship grant TU/C/000023. Harkirat was supported using a Tencent studentship through the University of Oxford." }, { "heading": "A PRUNING IMPLEMENTATION DETAILS", "text": "We present experiments on CIFAR-10/100 (Krizhevsky et al., 2009), which consists of 60k 32×32 colour images divided into 10/100 classes, and also on Imagenet challenge ILSVRC-2012 (Russakovsky et al., 2015) and its smaller version Tiny-ImageNet, which respectively consist of 1.2M/1k and 100k/200 images/classes. Networks are initialized using the Kaiming normal initialization (He et al., 2015). For CIFAR datasets, we train Resnet503 and VGG194 architectures during 350 epochs with a batch size of 128. We start with a learning rate of 0.1 and divide it by 10 at 150 and 250 epochs. As optimizer we use SGD with momentum 0.9 and weight decay 5× 10−4. We separate 10% of the training data for validation and report results on the test set. We perform mean and std normalization and augment the data with random crops and horizontal flips. For Tiny-Imagenet, we use the same architectures. We train during 300 epochs and divide the learning rate by 10 at 1/2 and 3/4 of the training. Other hyper-parameters remain the same. For ImageNet training, we adapt the official code5 of Paszke et al. (2017) and we use the default settings. In this case, we use the Resnet50 and VGG19 with batch normalization architectures as implemented in Paszke et al. (2017).\nIn the case of FORCE and Iter SNIP, we adapt the same public implementation6 of SNIP as Wang et al. (2020). Instead of defining an auxiliary mask to compute the saliencies, we compute the product of the weight times the gradient, which was shown to be equivalent in Lee et al. (2020). As for GRASP, we use their public code.7 After pruning, we implement pruned connections by setting the corresponding weight to 0 and forcing the gradient to be 0. 
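A minimal sketch of this masking mechanism (our illustration, not the exact released code; names are assumptions) is:

```python
import torch

def enforce_masks(model, masks):
    """Zero pruned weights and block their gradients on every backward pass."""
    params = [p for p in model.parameters() if p.requires_grad]
    for p, c in zip(params, masks):
        with torch.no_grad():
            p.mul_(c)  # pruned weights start at exactly zero
        p.register_hook(lambda grad, c=c: grad * c)  # and stay zero under SGD
```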
This way, a pruned weight will remain 0 during training.\nAn important difference between SNIP and GRASP implementations is in the way they select the mini-batch to compute the saliency. SNIP implementation simply loads a batch from the dataloader. In contrast, in GRASP implementation they keep loading batches of data until they obtain exactly 10 examples of each class, discarding redundant samples. In order to compare the methods in equal conditions, we decided to use the way SNIP collects the data since it is simpler to implement and does not require extra memory. This might cause small discrepancies between our results and the ones reported in Wang et al. (2020).\nA meaningful design choice regarding SNIP and GRASP implementations is that they only prune convolutional and fully connected layers. These layers constitute the vast majority of parameters in most networks, however, as we move to high sparsity regimes, batch norm layers constitute a non-negligible amount. For CIFAR10, batch norm plus biases constitute 0.2% and 0.05% of the parameters of Resnet50 and VGG19 networks respectively. For consistency, we have as well restricted pruning to convolutional and fully connected layers and reported percentage sparsity with respect to the prunable parameters, as is also done in Lee et al. (2019) and Wang et al. (2020) to the best of our knowledge. In Table 2 we show the percentage of prunable weights for each network and\n3https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py 4https://github.com/alecwangcq/GraSP/blob/master/models/base/vgg.py 5https://github.com/pytorch/examples/tree/master/imagenet 6https://github.com/mi-lad/snip 7https://github.com/alecwangcq/GraSP\ndataset we use. In future experiments we will explore the performance of pruning at initialization when including batch norm layers and biases as well." }, { "heading": "B ADDITIONAL ACCURACY-SPARSITY PLOTS", "text": "In the main text we show the complete range of the accuracy-sparsity curves for the different methods so it is clear why more robust methods are needed. However, it makes it more difficult to appreciate the smaller differences at lower sparsities. In Fig 5 we show the accuracy-sparsity curves where we cut the y axis to show only the higher accuracies." }, { "heading": "C FURTHER ANALYSIS OF PRUNING AT INITIALIZATION", "text": "" }, { "heading": "C.1 SALIENCY VS T", "text": "In Fig 4 (left) we have seen that for higher sparsity levels, FORCE obtains a higher saliency when we increase the number of iterations. In Fig 6 we compare the relative saliencies as we increase the number of iterations for FORCE and Iterative SNIP. As can be seen, both have a similar behaviour." }, { "heading": "C.2 PRUNING VS SPARSIFICATION", "text": "FORCE algorithm is able to recover pruned weights in later iterations of pruning. In order to do that, we do not consider the intermediate masks as pruning masks but rather as sparsification masks, where connections are set to 0 but not their gradients. In order to understand how does computing the FORCE (5) on a sparsified vs pruned network affect the saliency, we prune several masks with FORCE algorithm at varying sparsity levels. For each mask, we then compute their FORCE saliency either considering the pruned network (gradients of pruned connections will be set to 0 during the backward pass) or the sparsified network (we only set to 0 the connections, but let the gradient signal\nflow through all the connections). Results are presented in Fig 7. 
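One way to realize the two variants compared in Fig 7 (our illustrative sketch, not the authors' code) is a straight-through construction of the masked weight:

```python
import torch

def masked_weight(theta, c, pruned):
    """Forward value is theta ⊙ c in both cases; only the gradient path differs."""
    if pruned:
        return theta * c  # d(out)/d(theta) = c: masked entries get zero gradient
    # Sparsified: (theta - theta.detach()) is zero in value but has gradient 1,
    # so every entry of theta receives the gradient of the loss at theta ⊙ c.
    return theta * c + (theta - theta.detach()) * (1 - c)
```

With `pruned=True`, gradients at masked entries vanish; with `pruned=False`, the forward pass is identical but the full gradient at the sparsified point is exposed.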
We observe that the two methods to compute the saliency are strongly correlated, thus, we can assume that when we use the FORCE algorithm that maximizes the saliency of sparsified networks we will also maximize the saliency of the corresponding pruned networks." }, { "heading": "C.3 NETWORK STRUCTURE AFTER PRUNING", "text": "In Fig 8 we visualize the structure of the networks after pruning 99.9% of the parameters. We show the fraction of remaining weights and the total number of remaining weights per layer after pruning. As seen in (a) and (d), all analysed methods show a tendency to preserve the initial and final layers and to prune more heavily the deep convolutional layers, this is consistent with results reported in Wang et al. (2020). In (b) and (e), we note that FORCE has a structure that stands out compared to other methods that are more similar. This is reasonable since, it is the only method that allows pruned weights to recover. In the zoomed plots (c) and (f) we would like to point out that FORCE and Iterative SNIP preserve more weights on the deeper layers than GRASP and SNIP for VGG19 while we observe the opposite behaviour for Resnet50.\nIn Fig 2, we observe that gradual pruning is able to prune the Resnet50 network up to 99.99% sparsity without falling to random accuracy. In contrast, with VGG19 we observe Iterative SNIP is not able to prune more than 99.9%. In Fig 8 we observe that for Resnet50, all methods prune some layers completely. However, in the case of ResNets, even if a convolutional layer is entirely pruned, skip connections still allow the flow of forward and backward signal. On the other hand, architectures without skip connections, such as VGG, require non-empty layers to keep the flow of information. Interestingly, in (c) we observe how FORCE and Iter SNIP have a larger amount of completely pruned layers than GRASP, however, there are a few deep layers with a significantly larger amount of unpruned weights. This seems to indicate that when a high sparsity is required, it is more efficient to have fewer layers with more weights than several extremely sparse layers." }, { "heading": "C.4 EVOLUTION OF PRUNING MASKS", "text": "As discussed in the main text, FORCE allows weights that have been pruned at earlier iterations to become non-zero again, we argue this might be beneficial compared to Iterative SNIP which will not be able to correct any possible \"mistakes\" made in earlier iterations. In particular, it seems to give certain advantage to prune VGG to high sparsities without breaking the flow of information (pruning a layer entirely) as can be seen in Fig 2 (b). In order to gain a better intuition of how does the amount of pruned/recovered weights ratio evolve during pruning, in Fig 9 we plot the normalized amount of pruned and recovered weights (globally on the whole network) at each iteration of FORCE and also for Iter SNIP as a sanity check. Note that Iterative SNIP does not recover weights and the amount of weights pruned at each step decays exponentially (this is expected since we derived Iterative SNIP as a constrained optimization of FORCE where each network needs to be a sub-network of the previous iteration. On the other hand, FORCE does recover weights. 
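The per-step quantities plotted in Fig 9 can be read off consecutive masks; a tiny sketch (ours, with assumed names) follows.

```python
def mask_transitions(prev_masks, next_masks):
    """Count weights newly pruned (1 -> 0) and recovered (0 -> 1) per step."""
    pruned = sum(((cp == 1) & (cn == 0)).sum().item()
                 for cp, cn in zip(prev_masks, next_masks))
    recovered = sum(((cp == 0) & (cn == 1)).sum().item()
                    for cp, cn in zip(prev_masks, next_masks))
    return pruned, recovered
```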
Moreover, the amount of pruned/recovered weights does not decay monotonically but has a clear peak, indicating there are two phases during pruning: While the amount of pruned weights increases, the algorithm explores masks which are quite far away from each other, although this might be harmful for the gradient approximation (refer to section 4), we argue that during the initial pruning iterations the network is still quite over-parametrized. After reaching a peak, both the pruning and recovery rapidly decay, thus the masks converge to a more constrained subset." }, { "heading": "C.5 COMPARISON WITH EARLY PRUNING", "text": "For fair comparison, we provided the same amount of data to SNIP and GRASP as was used by our approach and call this variant SNIP-MB and GRASP-MB. Similarly, under this new baseline which we call early pruning, we train the network on 1 epoch of data of CIFAR-10, that is slightly more examples than our pruning at initialization methods which use 128 · 300 = 38400 examples (see section 5). After training for 1 epoch we perform magnitude pruning which requires no extra cost\n(results presented in Fig 10). Although early pruning yields competitive results for VGG at moderate sparsity levels, it soon degrades its performance as we prune more weights. On the other hand, for Resnet architecture it is sub-optimal at all evaluated sparsity levels. Note, this result does not mean that any early pruning strategy would be sub-optimal compared to pruning at initialization, however exploring this further is out of the scope of this work." }, { "heading": "C.6 MOBILENET EXPERIMENTS", "text": "All our experiments were on overparameterized architectures such as Resnet and VGG. To test the wider usability of our methods, in this section we prune the Mobilenet-v2 architecture8 (Sandler et al., 2018) which is much more \"slim\" than Resnet and VGG (Mobilenet has 2.3M params compared to 20.03M and 23.5M of VGG and Resnet respectively). Results are provided in Fig 11. Similarly to Resnet and VGG architectures, we see that gradual pruning tends to better preserve accuracy at higher sparsity levels than one-shot methods. Moreover, both FORCE and Iter SNIP improve over SNIP at moderate sparsity levels as well. FORCE and Iter SNIP have comparable accuracies except for high sparsity where Iter SNIP surpasses FORCE. We hypothesize for such a slim architecture (≈ 10 × fewer parameters than Resnet and VGG) the gradient approximation becomes even more sensitive to the distance between iterative masks and perhaps the exploration of FORCE is harmful in this case. As discussed in the main paper, we believe that further research to understand the exploration/exploitation trade-off when pruning might yield to even more efficient pruning schemes, especially for very high sparsity levels. We train using the same settings as described in Appendix A except for the weight decay which is set to 4× 10−5, following the settings of the original Mobilenet paper." }, { "heading": "D LOCAL OPTIMAL MASKS", "text": "Definition 1 ((p, )-local optimal mask). Consider any two sets9 ct ⊆ {1, · · · ,m} and ct+1 ⊂ ct. For any > 0 and 0 ≤ p ≤ |ct \\ ct+1|, ct+1 is a (p, ) local optimal with respect to ct if the following holds\nS (θ, ct+1) ≥ S (θ, (ct+1 \\ S−) ∪ S+)− (9) for all S− ⊂ ct+1, |S−| = p and S+ ⊂ (ct \\ ct+1) , |S+| = p. Definition 2 (CRS; Coordinate-Restricted-Smoothness). 
Given a function L : Rm → R (which encodes both the network architecture and the dataset), L is said to be λc-Coordinated Restricted\n8https://github.com/kuangliu/pytorch-cifar/blob/master/models/ mobilenetv2.py\n9For ease of notation, we will use this representation interchangeably with its binary encoding i.e. a m-dimensional binary vector with its support equal to c\nSmooth with respect to c ⊆ {1, · · · ,m} if there exists a real number λc such that\n‖c ∇L (w c)− c ∇L (w ĉ)‖∞ ≤ λc ‖w c−w ĉ‖1 (10)\nfor all w ∈ Rm and ĉ ⊂ c. When s = |c \\ ĉ|, an application of Holder’s inequality shows\nλc ‖w c−w ĉ‖1 ≤ λc ‖w‖∞ ‖c− ĉ‖1 = λcs ‖w‖∞\nWe defineL to be Λ-total CRS if there exists a function Λ : {0, 1}m → R such that for all c ∈ {0, 1}m L is Λ (c)-Coordinate-Restricted-Smooth with respect to c (for ease of notation we use Λ (c) = λc).\nTheorem 1 (Informal). The mask ct+1 produced from ct by FORCE is ( p, 2λp ‖θ‖2∞ |ct| ) -local optimal if the L is Λ-CRS..\nProof. Consider the masks ct and ct+1 where the latter is obtained by one step of FORCE on the former. Let S− and S+ be any set of size p such that S− ⊂ ct+1 and S+ ⊂ (ct \\ ct+1). Finally, for ease of notation we define ζ = (ct+1 \\ S−) ∪ S+\nS (θ, ζ)− S (θ, ct+1) = ∑ i∈ζ |θi · ∇L (θ ζ)i| − ∑ i∈ct+1 |θi · ∇L (θ ct+1)i|\n= ∑ i∈ct+1 |θi · ∇L (θ ζ)i| − ∑ i∈ct+1\n|θi · ∇L (θ ct+1)i|︸ ︷︷ ︸ Γ1\n+ ∑ i∈S+\n|θi · ∇L (θ ζ)i|︸ ︷︷ ︸ Γ2\n− ∑ i∈S−\n|θi · ∇L (θ ζ)i|︸ ︷︷ ︸ Γ3\n(11)\nLet us look at the three terms individually. We assume that L is Λ-CRS. Γ1 = ∑ i∈ct+1 |θi · ∇L (θ ζ)i| − |θi · ∇L (θ ct+1)i|\n≤ ‖ct+1 θ ∇L (θ ct+1)− ct+1 θ ∇L (θ ζ)‖1 By Triangle Inequality ≤ ‖ct+1 θ‖1 ‖ct+1 ∇L (θ ct+1)− ct+1 ∇L (θ ζ)‖∞ By Holder’s Inequality ≤ 2λct+1 |ct+1| p ‖θ‖ 2 ∞ ∵ L is Λ-CRS (12)\nΓ2 = ∑ i∈S+ |θi · ∇L (θ ζ)i| − |θi · ∇L (θ ct)i|+ |θi · ∇L (θ ct)i|\n≤ ∑ i∈S+ |θi · ∇L (θ ct)i|+ λctp ‖θ‖ 2 ∞ (|ct| − |ct+1|) ∵ |ct \\ ζ| = |ct| − |ct+1| ,\n(13) Γ3 = − ∑ i∈S− |θi · ∇L (θ ζ)i|+ |θi · ∇L (θ ct)i| − |θi · ∇L (θ ct)i|\n≤ − ∑ i∈S− |θi · ∇L (θ ct)i|+ λctp ‖θ‖ 2 ∞ (|ct| − |ct+1|) , (14)\nAdding eqs. (13) and (14), we get Γ2 + Γ3 ≤ ∑ i∈S+ |θi · ∇L (θ ct)i| − ∑ i∈S− |θi · ∇L (θ ct)i|+ 2λctp ‖θ‖ 2 ∞ (|ct| − |ct+1|)\n≤ 2λctp ‖θ‖ 2 ∞ (|ct| − |ct+1|)− γp (15) where γ = mini∈ct+1,j∈(ct\\ct+1) |θi∇L (θ ct)i| − ∣∣∣θj∇L (θ ct)j∣∣∣ ≥ 0\nSubstituting eqs. (12) and (15) into (11) we get\nS (θ, ζ)− S (θ, ct+1) ≤ 2λct+1 |ct+1| p ‖θ‖ 2 ∞ + 2λctp ‖θ‖ 2 ∞ (|ct| − |ct+1|)− γp\n= 2λct+1p ‖θ‖ 2 ∞ (|ct+1|+ (|ct| − |ct+1|))− γp\nS (θ, ct+1) ≥ S (θ, ζ)− 2λct+1p ‖θ‖ 2 ∞ |ct|+ γp S (θ, ct+1) ≥ S (θ, ζ)− 2λct+1p ‖θ‖ 2 ∞ |ct|\nE ITERATIVE PRUNING TO MAXIMIZE THE GRADIENT NORM" }, { "heading": "E.1 MAXIMIZING THE GRADIENT NORM USING THE GRADIENT APPROXIMATION", "text": "In order to maximize the gradient norm after pruning, the authors in Wang et al. (2020) use the first order Taylor’s approximation. While this seems to be better suited than SNIP for higher levels of sparsity, it assumes that pruning is a small perturbation on the weight matrix. We argue that this approximation will not be valid as we push towards extreme sparsity values. Our gradient approximation (refer to section 4) can also be applied to maximize the gradient norm after pruning. In this case, we have\nG(θ, c) := ∆L(θ c)−∆L(θ) ≈ ∑\n{i: ci=0}\n−[∇L(θ)i]2, (16)\nwhere we assume pruned connections have null gradients (this is equivalent to the restriction used for Iterative SNIP) and we assume gradients remain unchanged for unpruned weights (gradient\napproximation). Combining this approximation with Eq. 
(7), we obtain a new pruning method we name Iterative GRASP, although it is not the same as applying GRASP iteratively. Unlike FORCE, Iterative GRASP does not recover GRASP when T = 1.\nIn Fig 12 we compare Iterative GRASP to other pruning methods. We use the same settings as described in section 5. We observe that Iterative GRASP outperforms GRASP in the high sparsity region. Moreover, for VGG19 architecture Iterative GRASP achieves comparable performance to Iterative SNIP. Nevertheless, for Resnet50 we see that Iterative GRASP performance falls below that of FORCE and Iterative SNIP as we prune more weights. FORCE saliency takes into account both the gradient and the magnitude of the weights when computing the saliency, on the other hand, the Gradient Norm only takes gradients into account, therefore it is using less information. We hypothesize this might the reason why Iterative GRASP does not match Iterative SNIP.\nE.2 ITERATIVE GRASP (PRUNING CONSISTENCY)\nAs explained in the main text, we tried to apply GRASP iteratively with the Taylor approximation described in Wang et al. (2020). Unfortunately, we found that all resulting masks yield networks unable to train. In light of this result, we performed some analysis of the behaviour of GRASP compared to that of SNIP and, in the following, we provide some insights as to why we can not use GRASP’s approximation iteratively.\nIn Liu et al. (2018), the authors show that applying a pruning method to the same architecture with different random initializations would yield consistent pruning masks. Specifically, they find that the percentage of pruned weights in each layer had very low variance. We reproduce the same experiment and additionally explore another dimension, the (global) sparsity level. Given an architecture, we prune it at varying levels of sparsity and extract the percentage of remaining weights at each layer. For each level of sparsity, we average the results over 3 trials of initialize-prune. As shown in Fig 2, both SNIP and GRASP have very low variance across initializations, on the other hand, as we vary the global sparsity with GRASP, the percentage of remaining weights for each layer is inconsistent. The layers that are most preserved at high levels of sparsity, such as the initial and last layers, are the most heavily pruned at low sparsity levels.\nThe authors in Liu et al. (2018), reason that manually designed networks have layers which are more redundant than others. Therefore, pruning methods even this redundancies by pruning layers with different percentages. We extend this reasoning, and hypothesize that pruning algorithms should always have preference for pruning the same (redundant) layers across all levels of sparsity. We denote this as pruning consistency. We observe that when applying iterative pruning to GRASP, the resulting masks tend to prune almost all of the weights at the initial and final layers, producing\nnetworks that are unable to converge. When using iterative pruning, we prune a small portion of remaining weights at each step. Thus, we are always in the low sparsity regime, where the GRASP behaviour is reversed. Conversely, when we use SNIP the behaviour changes completely. In this case, the preserved layers are consistent across sparsity levels, and when we use iterative pruning we obtain networks that reach high accuracy values." } ]
2021
PROGRESSIVE SKELETONIZATION: TRIMMING MORE FAT FROM A NETWORK AT INITIALIZATION
SP:ee89d3273df8b3b082c0e72a8768dff7cd3b7f56
[ "Paper proposed to generate the communication message in MARL with the predicted trajectories of all the agents (include the agent itself). An extra self-attention model is also stacked over the trajectories to trade off the length of prediction and the possible explaining away issue. The whole model is trained via a canonical MARL objective while the trajectory prediction model utilizes direct supervision collected from the environments. Experiments on several toy MARL benchmark demonstrates the effectiveness of the proposed method." ]
Communication is one of the core components for learning coordinated behavior in multi-agent systems. In this paper, we propose a new communication scheme named Intention Sharing (IS) for multi-agent reinforcement learning in order to enhance the coordination among agents. In the proposed IS scheme, each agent generates an imagined trajectory by modeling the environment dynamics and other agents’ actions. The imagined trajectory is a simulated future trajectory of each agent based on the learned model of the environment dynamics and other agents and represents each agent’s future action plan. Each agent compresses this imagined trajectory capturing its future action plan to generate its intention message for communication by applying an attention mechanism to learn the relative importance of the components in the imagined trajectory based on the received message from other agents. Numerical results show that the proposed IS scheme significantly outperforms other communication schemes in multi-agent reinforcement learning.
[ { "affiliations": [], "name": "INTENTION SHARING" }, { "affiliations": [], "name": "Woojun Kim" }, { "affiliations": [], "name": "Jongeui Park" }, { "affiliations": [], "name": "Youngchul Sung" } ]
[ { "authors": [ "Abhishek Das", "Théophile Gervet", "Joshua Romoff", "Dhruv Batra", "Devi Parikh", "Mike Rabbat", "Joelle Pineau" ], "title": "Tarmac: Targeted multi-agent communication", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jakob Foerster", "Ioannis Alexandros Assael", "Nando De Freitas", "Shimon Whiteson" ], "title": "Learning to communicate with deep multi-agent reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jakob N Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy Lillicrap", "Sergey Levine" ], "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "Jayesh K Gupta", "Maxim Egorov", "Mykel Kochenderfer" ], "title": "Cooperative multi-agent control using deep reinforcement learning", "venue": "In International Conference on Autonomous Agents and Multiagent Systems,", "year": 2017 }, { "authors": [ "Shariq Iqbal", "Fei Sha" ], "title": "Actor-attention-critic for multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1810.02912,", "year": 2018 }, { "authors": [ "Natasha Jaques", "Angeliki Lazaridou", "Edward Hughes", "Caglar Gulcehre", "Pedro A Ortega", "DJ Strouse", "Joel Z Leibo", "Nando De Freitas" ], "title": "Social influence as intrinsic motivation for multi-agent deep reinforcement learning", "venue": "arXiv preprint arXiv:1810.08647,", "year": 2018 }, { "authors": [ "Jiechuan Jiang", "Zongqing Lu" ], "title": "Learning attentional communication for multi-agent cooperation", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Woojun Kim", "Whiyoung Jung", "Myungsik Cho", "Youngchul Sung" ], "title": "A maximum mutual information framework for multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:2006.02732,", "year": 2020 }, { "authors": [ "Michael L Littman" ], "title": "Markov games as a framework for multi-agent reinforcement learning", "venue": "In Machine learning proceedings", "year": 1994 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Emanuele Pesce", "Giovanni Montana" ], "title": "Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication", "venue": "arXiv preprint arXiv:1901.03887,", "year": 2019 }, { "authors": [ "Neil C Rabinowitz", 
"Frank Perbet", "H Francis Song", "Chiyuan Zhang", "SM Eslami", "Matthew Botvinick" ], "title": "Machine theory of mind", "venue": "arXiv preprint arXiv:1802.07740,", "year": 2018 }, { "authors": [ "Sébastien Racanière", "Théophane Weber", "David Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomenech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Roberta Raileanu", "Emily Denton", "Arthur Szlam", "Rob Fergus" ], "title": "Modeling others using oneself in multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1802.09640,", "year": 2018 }, { "authors": [ "Mariacristina Roscia", "Michela Longo", "George Cristian" ], "title": "Lazaroiu. Smart city by multi-agent systems", "venue": "In 2013 International Conference on Renewable Energy Research and Applications (ICRERA),", "year": 2013 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "DJ Strouse", "Max Kleiman-Weiner", "Josh Tenenbaum", "Matt Botvinick", "David J Schwab" ], "title": "Learning to share and hide intentions using information regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning multiagent communication with backpropagation", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) has achieved remarkable success in various complex control problems such as robotics and games (Gu et al. (2017); Mnih et al. (2013); Silver et al. (2017)). Multi-agent reinforcement learning (MARL) extends RL to multi-agent systems, which model many practical real-world problems such as connected cars and smart cities (Roscia et al. (2013)). There exist several distinct problems in MARL inherent to the nature of multi-agent learning (Gupta et al. (2017); Lowe et al. (2017)). One such problem is how to learn coordinated behavior among multiple agents and various approaches to tackling this problem have been proposed (Jaques et al. (2018); Pesce & Montana (2019); Kim et al. (2020)). One promising approach to learning coordinated behavior is learning communication protocol among multiple agents (Foerster et al. (2016); Sukhbaatar et al. (2016); Jiang & Lu (2018); Das et al. (2019)). The line of recent researches on communication for MARL adopts end-to-end training based on differential communication channel (Foerster et al. (2016); Jiang & Lu (2018); Das et al. (2019)). That is, a message-generation network is defined at each agent and connected to other agents’ policies or critic networks through communication channels. Then, the message-generation network is trained by using the gradient of other agents’ policy or critic losses. Typically, the message-generation network is conditioned on the current observation or the hidden state of a recurrent network with observations as input. Thus, the trained message encodes the past and current observation information to minimize other agents’ policy or critic loss. It has been shown that due to the capability of sharing observation information, this kind of communication scheme has good performance as compared to communication-free MARL algorithms such as independent learning, which is widely used in MARL, in partially observable environments.\nIn this paper, we consider the following further question for communication in MARL:\n”How to harness the benefit of communication beyond sharing partial observation.”\nWe propose intention of each agent as the content of message to address the above question. Sharing intention using communication has been used in natural multi-agent systems like human society.\n∗Corresponding author\nFor example, drivers use signal light to inform other drivers of their intentions. A car driver may slow down if a driver in his or her left lane turns the right signal light on. In this case, the signal light encodes the driver’s intention, which indicates the driver’s future behavior, not current or past observation such as the field view. By sharing intention using signal light, drivers coordinate their drive with each other. In this paper, we formalize and propose a new communication scheme for MARL named Intention sharing (IS) in order to go beyond existing observation-sharing schemes for communication in MARL. The proposed IS scheme allows each agent to share its intention with other agents in the form of encoded imagined trajectory. That is, each agent generates an imagined trajectory by modeling the environment dynamics and other agents’ actions. Then, each agent learns the relative importance of the components in the imagined trajectory based on the received messages from other agents by using an attention model. 
The output of the attention model is an encoded imagined trajectory capturing the intention of the agent and used as the communication message. We evaluate the proposed IS scheme in several multi-agent environments requiring coordination among agents. Numerical result shows that the proposed IS scheme significantly outperforms other existing communication schemes for MARL including the state-of-the-art algorithms such as ATOC and TarMAC." }, { "heading": "2 RELATED WORKS", "text": "Under the asymmetry in learning resources between the training and execution phases, the framework of centralized training and decentralized execution (CTDE), which assumes the availability of all system information in the training phase and distributed policy in the execution phase, has been adopted in most recent MARL researches (Lowe et al. (2017); Foerster et al. (2018); Iqbal & Sha (2018); Kim et al. (2020)). Under the framework of CTDE, learning communication protocol has been considered to enhance performance in the decentralized execution phase for various multi-agent tasks (Foerster et al. (2016); Jiang & Lu (2018); Das et al. (2019)). For this purpose, Foerster et al. (2016) proposed Differentiable Inter-Agent Learning (DIAL). DIAL trains a message-generation network by connecting it to other agents’ Q-networks and allowing gradient flow through communication channels in the training phase. Then, in the execution phase the messages are generated and passed to other agents through communication channels. Jiang & Lu (2018) proposed an attentional communication model named ATOC to learn when to communicate and how to combine information received from other agents through communication based on attention mechanism. Das et al. (2019) proposed Targeted Multi-Agent Communication (TarMAC) to learn the message-generation network in order to produce different messages for different agents based on a signature-based attention model. The message-generation networks in the aforementioned algorithms are conditioned on the current observation or a hidden state of LSTM. Under partially observable environments, such messages which encode past and current observations are useful but do not capture any future information. In our approach, we use not only the current information but also future information to generate messages and the weight between the current and future information is adaptively learned according to the environment. This yields further performance enhancement, as we will see in Section 5.\nIn our approach, the encoded imagined trajectory capturing the intention of each agent is used as the communication message in MARL. Imagined trajectory was used in other problems too. Racanière et al. (2017) used imagined trajectory to augment it into the policy and critic for combining modelbased and model-free approaches in single-agent RL. It is shown that arbitrary imagined trajectory (rolled-out trajectory by using a random policy or own policy) is useful for single-agent RL in terms of performance and data efficiency. Strouse et al. (2018) introduced information-regularizer to share or hide agent’s intention to other agents for a multi-goal MARL setting in which some agents know the goal and other agents do not know the goal. By maximizing (or minimizing) the mutual information between the goal and action, an agent knowing the goal learns to share (or hide) its intention to other agents not knowing the goal in cooperative (or competitive) tasks. 
They showed that sharing intention is effective in the cooperative case.\nIn addition to our approach, Theory of Mind (ToM) and Opponent Modeling (OM) use the notion of intention. Rabinowitz et al. (2018) proposed the Theory of Mind network (ToM-net) to predict other agents’ behaviors by using meta-learning. Raileanu et al. (2018) proposed Self Other-Modeling (SOM) to infer other agents’ goal in an online manner. Both ToM and OM take advantage of predicting other agents’ behaviors capturing the intention. One difference between our approach and\nthe aforementioned two methods is that we use communication to share the intention instead of inference. That is, the agents in our approach allow other agents to know their intention directly through communication, whereas the agents in ToM and OM should figure out other agents’ intention by themselves. Furthermore, the messages in our approach include future information by rolling out the policy, whereas ToM and CM predict only the current or just next time-step information." }, { "heading": "3 SYSTEM MODEL", "text": "We consider a partially observable N -agent Markov game (Littman (1994)) and assume that communication among agents is available. At time step t, Agent i observes its own observation oit, which is a part of the global environment state st, and selects action ait ∈ Ai and messagemit ∈Mi based on its own observation oit and its own previous time step message m i t−1 plus the received messages from other agents, i.e., mt−1 = (m1t−1, · · · ,mNt−1) . We assume that the message mit of Agent i is sent to all other agents and available at other agents at the next time step, i.e., time step t + 1. The joint actions at = (a1t , · · · , aNt ) yield the next environment state st+1 and rewards {rit}Ni=1 according to the transition probability T : S × A × S → [0, 1] and the reward function Ri : S × A → R, respectively, where S and A = ∏N i=1Ai are the environment state space and the joint action space, respectively. The goal of Agent i is to find the policy πi that maximizes its discounted return Rit = ∑∞ t′=t γ\nt′rit′ . Hence, the objective function of Agent i is defined as Ji(π i) = Eπ [ Ri0 ] , where π = (π1, · · · , πN ) and γ ∈ [0, 1] are the joint policy and the discounting factor, respectively." }, { "heading": "4 THE PROPOSED INTENTION SHARING SCHEME", "text": "The key idea behind the IS scheme is that multiple agents communicate with other agents by sending their implicit future plans, which carry their intention. The received messages capturing the intention of other agents enable the agent to coordinate its action with those of other agents. We now describe the architecture of the proposed IS scheme. At time step t, Agent i selects an action ait ∼ πi(·|oit,mt−1) and a message mit = MGN i(oit,mt−1, πi) based on its own observation oit and received messages mt−1, where MGN i is the message-generation network (MGN) of Agent i. The MGN consists of two components: Imagined trajectory generation module (ITGM) and attention module (AM). Each agent generates an imagined trajectory by using ITGM and learns the importance of each imagined step in the imagined trajectory by using AM. The output of AM is an encoded imagined trajectory reflecting the importance of imagined steps and is used as the communication message. The overall architecture of the proposed IS scheme is shown in Fig. 1. In the following we describe the detail of each module." 
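Before the module-level details, a minimal skeleton of this message-generation flow may help fix ideas; all class and method names below are our assumptions for exposition, not the authors' implementation.

```python
import torch.nn as nn

class MessageGenerationNetwork(nn.Module):
    """MGN = stacked ITGMs producing an imagined trajectory, followed by an
    attention module (AM) that compresses it into the message m_t^i."""
    def __init__(self, itgm, attention, horizon):
        super().__init__()
        self.itgm = itgm            # predicts others' actions, next obs, next action
        self.attention = attention  # scaled dot-product attention over the steps
        self.horizon = horizon      # H, the length of the imagined trajectory

    def forward(self, obs, action, received_msgs):
        trajectory = [(obs, action)]  # tau_t uses the true observation/action
        for _ in range(self.horizon - 1):
            obs, action = self.itgm(obs, action, received_msgs)
            trajectory.append((obs, action))  # imagined steps tau_hat
        return self.attention(trajectory, received_msgs)  # message m_t^i
```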
}, { "heading": "4.1 IMAGINED TRAJECTORY GENERATION MODULE (ITGM)", "text": "The role of ITGM is to produce the next imagined step. ITGM takes the received messages, observation, and action as input and yields the predicted next observation and predicted action as output. By stacking ITGMs, we generate an imagined trajectory, as shown in Fig. 1. For Agent i at time step t, we define an H-length imagined trajectory as\nτ i = (τ it , τ̂ i t+1, · · · , τ̂ it+H−1), (1)\nwhere τ̂ it+k = (ô i t+k, â i t+k) is the imagined step at time step t + k. Note that τ i t = (o i t, a i t) is the true values of observation and action, but the imagined steps except τ it are predicted values.\nITGM consists of a roll-out policy and two predictors: Other agents’ action predictor f ia(o i t) (we will call this predictor simply action predictor) and observation predictor f io(o i t, a i t, a −i t ). First, we model the action predictor which takes the observation as input and produces other agents’ predicted actions. The output of the action predictor is given by\nf ia(o i t) = (â 1 t , · · · , âi−1t , âi+1t , · · · , âN ) =: â−it (2)\nNote that the action predictor can be trained by the previously proposed opponent modeling method (Rabinowitz et al. (2018); Raileanu et al. (2018)) and can take the received messages as input. Next,\nwe model the observation predictor f io(o i t, a i t, â −i t ) which is conditioned on the observation o i t, own action ait, and the output of the action predictor â −i t . Here, we adopt the dynamics function that predicts the difference between the next observation and the current observation, i.e., oit+1 − oit instead of the next observation oit+1 proposed in (Nagabandi et al. (2018)) in order to reduce model bias in the early stage of learning. Hence, the next observation can be written as\nôit+1 = o i t + f i o(o i t, a i t, â −i t ). (3)\nBy injecting the predicted next observation and the received messages into the roll-out policy in ITGM, we obtain the predicted next action âit+1 = π\ni(ôit+1,mt−1). Here, we use the current policy as the roll-out policy. Combining ôit+1 and â i t+1, we obtain next imagined step at time step t + 1, τ it+1 = (ô i t+1, â i t+1). In order to produce an H-length imagined trajectory, we inject the output of ITGM and the received messages mt−1 into the input of ITGM recursively. Note that we use the received messages at time step t, mt−1, in every recursion of ITGM.1" }, { "heading": "4.2 ATTENTION MODULE (AM)", "text": "Instead of the naive approach that uses the imagined trajectory [τt, · · · , τt+H−1] directly as the message, we apply an attention mechanism in order to learn the relative importance of imagined steps and encode the imagined trajectory according to the relative importance. We adopt the scaledot product attention proposed in (Vaswani et al. (2017)) as our AM. Our AM consists of three components: query, key, and values. The output of AM is the weighted sum of values, where the weight of values is determined by the dot product of the query and the corresponding key. In our model, the query consists of the received messages, and the key and value consist of the imagined trajectory. 
}, { "heading": "4.2 ATTENTION MODULE (AM)", "text": "Instead of the naive approach that uses the imagined trajectory [τ_t, ..., τ_{t+H-1}] directly as the message, we apply an attention mechanism in order to learn the relative importance of the imagined steps and encode the imagined trajectory according to this relative importance. We adopt the scaled dot-product attention proposed in (Vaswani et al. (2017)) as our AM. Our AM consists of three components: query, keys, and values. The output of the AM is the weighted sum of the values, where the weight of each value is determined by the dot product of the query and the corresponding key. In our model, the query consists of the received messages, and the keys and values consist of the imagined trajectory. For Agent i at time step t, the query, keys, and values are defined as\nq^i_t = W^i_Q m_{t-1} = W^i_Q [m^1_{t-1} ‖ m^2_{t-1} ‖ ... ‖ m^{N-1}_{t-1} ‖ m^N_{t-1}] ∈ R^{d_k}, (4)\nk^i_t = [W^i_K τ_t, ..., W^i_K τ_{t+h-1}, ..., W^i_K τ_{t+H-1}] ∈ R^{H×d_k}, with k^{i,h}_t := W^i_K τ_{t+h-1}, (5)\nv^i_t = [W^i_V τ_t, ..., W^i_V τ_{t+h-1}, ..., W^i_V τ_{t+H-1}] ∈ R^{H×d_m}, with v^{i,h}_t := W^i_V τ_{t+h-1}, (6)\nwhere W^i_Q ∈ R^{d_k×Nd_m}, W^i_K ∈ R^{d_k×d_τ} and W^i_V ∈ R^{d_m×d_τ} are learnable parameters and the operation ‖ denotes the concatenation of vectors. The output m^i_t of the attention module, which is used as the message, is the weighted sum of the values:\nm^i_t = ∑_{h=1}^H α^i_h v^{i,h}_t, (7)\nwhere the weight vector α^i = (α^i_1, ..., α^i_H) is computed as\nα^i = softmax[(q^i_t)^T k^{i,1}_t / √d_k, ..., (q^i_t)^T k^{i,h}_t / √d_k, ..., (q^i_t)^T k^{i,H}_t / √d_k], (8)\nwith α^i_h denoting the h-th entry. The weight of each value is computed by the dot product of the corresponding key and the query. Since the projections of the imagined trajectory and of the received messages are used as the keys and the query, respectively, each weight can be interpreted as the relative importance of an imagined step given the received messages. Note that W_Q, W_K and W_V are updated through the gradients from the other agents."
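Before moving to training, a compact PyTorch sketch of the AM of Equations (4)-(8), written unbatched for clarity; module and dimension names follow the text, everything else is illustrative.

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Scaled dot-product AM of Eqs. (4)-(8); unbatched for clarity."""
    def __init__(self, d_tau, d_m, d_k, n_agents):
        super().__init__()
        self.W_Q = nn.Linear(n_agents * d_m, d_k, bias=False)   # Eq. (4)
        self.W_K = nn.Linear(d_tau, d_k, bias=False)            # Eq. (5)
        self.W_V = nn.Linear(d_tau, d_m, bias=False)            # Eq. (6)
        self.d_k = d_k

    def forward(self, m_prev, tau):
        # m_prev: concatenated messages, shape (N * d_m,); tau: (H, d_tau)
        q = self.W_Q(m_prev)
        k, v = self.W_K(tau), self.W_V(tau)
        alpha = torch.softmax(k @ q / self.d_k ** 0.5, dim=0)   # Eq. (8)
        return alpha @ v                                        # Eq. (7): message m^i_t
```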
}, { "heading": "4.3 TRAINING", "text": "We implement the proposed IS scheme on top of MADDPG (Lowe et al. (2017)), but it can be applied to other MARL algorithms. MADDPG is a well-known MARL algorithm and is briefly explained in Appendix A. In order to handle continuous state-action spaces, the actor, critic, observation predictor, and action predictor are parameterized by deep neural networks. For Agent i, let θ^i_µ, θ^i_Q, θ^i_o, and θ^i_a be the deep neural network parameters of the actor, critic, observation predictor, and action predictor, respectively. Let W^i = (W^i_Q, W^i_K, W^i_V) be the trainable parameters in the attention module of Agent i. The centralized critic for Agent i, Q^i, is updated to minimize the following loss:\nL_Q(θ^i_Q) = E_{x,a,r^i,x'}[(y^i − Q^i(x, a))^2], y^i = r^i + γQ^{i-}(x', a')|_{a'^j = µ^{j-}(o'^j, m)}, (9)\nwhere Q^{i-} and µ^{i-} are the target Q-function and the target policy of Agent i, parameterized by θ^{i-}_Q and θ^{i-}_µ, respectively. The policy is updated using the policy gradient:\n∇_{θ^i_µ} J(θ^i_µ) = E_{x,a}[∇_{θ^i_µ} µ^i(o^i, m) ∇_{a^i} Q^i(x, a)|_{a^i = µ^i(o^i, m)}]. (10)\nSince the MGN is connected to the agent’s own policy and other agents’ policies, the attention module parameters W^i are trained by gradient flow from all agents. The gradient of Agent i’s attention module parameters is given by\n∇_{W^i} J(W^i) = (1/N) ∑_{j=1}^N E_{x,m,a}[∇_{W^i} MGN(m̃^i|o^i, m) ∇_{m̃^i} µ^j(o^j, m̃^i, m̃^{-i}) ∇_{a^j} Q^j(x, a)|_{a^j = µ^j(o^j, m)}], (11)\nwhere o^i and m are the previous observation and received messages, respectively. The gradient of the attention module parameters is obtained by applying the chain rule to the policy gradient.\nBoth the action predictor and the observation predictor are trained based on supervised learning, and the loss functions for Agent i are given by\nL(θ^i_a) = E_{o^i,a}[(f^i_{θ^i_a}(o^i) − a^{-i})^2], (12)\nL(θ^i_o) = E_{o^i,a,o'^i}[((o'^i − o^i) − f^i_{θ^i_o}(o^i, a^i, â^{-i}))^2]. (13)"
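A compact sketch of the MADDPG-style updates of Equations (9)-(10) for a single agent. The batch layout, network call signatures, and optimizer handling are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def critic_update(batch, critic, target_critic, target_actors, opt, gamma=0.95):
    """One critic step for Agent i, Eq. (9)."""
    x, a, r, x_next, obs_next, m = batch
    with torch.no_grad():
        a_next = torch.cat([mu(o, m) for mu, o in zip(target_actors, obs_next)], -1)
        y = r + gamma * target_critic(x_next, a_next)   # TD target y^i
    loss = F.mse_loss(critic(x, a), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

def actor_update(batch, i, actor_i, critic, opt):
    """One actor step for Agent i, Eq. (10): differentiate Q^i only
    through agent i's own action."""
    x, obs, actions, m = batch
    a = list(actions)
    a[i] = actor_i(obs[i], m)
    loss = -critic(x, torch.cat(a, -1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```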
}, { "heading": "5 EXPERIMENT", "text": "In order to evaluate the proposed algorithm and compare it with other communication schemes fairly, we implemented the existing baselines on top of the same MADDPG used for the proposed scheme.\nThe considered baselines are as follows. 1) MADDPG (Lowe et al. (2017)): from this baseline we can assess the gain of introducing communication. 2) DIAL (Foerster et al. (2016)): we modified DIAL, which is based on Q-learning, to our setting by connecting the message-generation network to other agents’ policies and allowing gradient flow through the communication channel. 3) TarMAC (Das et al. (2019)): we adopted the key concept of TarMAC, in which the agent sends targeted messages using a signature-based attention model. 4) Comm-OA: the message consists of the agent’s own observation and action. 5) ATOC (Jiang & Lu (2018)): an attentional communication model which learns when communication is needed and how to combine the information of agents. We considered three multi-agent environments: predator-prey, cooperative navigation, and traffic junction, and we slightly modified the conventional environments to require more coordination among agents." }, { "heading": "5.1 ENVIRONMENTS", "text": "Predator-prey (PP) The predator-prey environment is a standard task in multi-agent systems. We used a PP environment that consists of N predators and M fixed preys in a continuous state-action domain. We control the actions of the predators, and the goal is to capture as many preys as possible in a given time. Each agent observes the positions of the predators and preys. When C predators catch a prey simultaneously, the prey is captured and all predators get a shared reward R1. Every time all the preys are captured, the preys are respawned and the shared reward value R1 increases by one (with initial value one) to accelerate the capture speed within the given time. We simulated three cases: (N = 2, C = 1), (N = 3, C = 1), and (N = 4, C = 2), all with M = 9 preys, where the fixed positions of the preys are shown in Fig. 2(a). In the cases of (N = 2, C = 1) and (N = 3, C = 1), the initial positions of all predators are the same and randomly determined. Thus, the predators should learn not only how to capture preys but also how to spread out. In the case of (N = 4, C = 2), the initial positions of all predators are randomly determined independently. Thus, the predators should learn to capture preys in groups of two.\nCooperative-navigation (CN) The goal of cooperative navigation, introduced in (Lowe et al. (2017)), is for N agents to cover L landmarks while avoiding collisions among the agents. We modified the original environment so that collisions occur more easily. We set L = N, increased the size of the agents, and assigned each agent a specific landmark to cover (i.e., each agent should cover the landmark of the same color in Fig. 2(b)). Each agent observes the positions of the agents and landmarks. Each agent receives a shared reward R1, given by the sum of the distances between each agent and its corresponding landmark at each time step, and a success reward N' × R2, where N' is the number of covered landmarks. Agents that collide with other agents receive a negative reward R3. We simulated the environment with N = L = 3, R1 = 1/3, R2 = 1 and R3 = −5.\nTraffic-junction (TJ) We modified the traffic junction introduced in Sukhbaatar et al. (2016) to a continuous state-action domain. At the beginning of an episode, each agent is randomly located at a predefined initial position and assigned one of three routes: left, right or straight, as seen in Fig. 2(c). The observation of each agent consists of the positions of all agents (no route information of other agents) and 2 one-hot vectors which encode the initial position and assigned route of the agent. The action of each agent is a real value in (0, 1), which indicates the distance to go along the assigned route from the current position. The goal is to get to the destination as fast as possible while avoiding collisions with other agents. To achieve this goal, we design the reward with three components. Each agent receives a success reward R1 if it arrives at the destination without any collision with other agents, a collision negative reward R2 if its position overlaps with that of another agent, and a time negative reward R3 to avoid traffic jams. When an agent arrives at the destination, the agent is assigned a new initial position and route. An episode ends when T time steps elapse. We set R1 = 20, R2 = −10, and R3 = −0.01τ, where τ is the total number of time steps after the agent is initialized." }, { "heading": "5.2 RESULTS", "text": "Fig. 3 shows the performance of the proposed IS scheme and the considered baselines on the PP, CN, and TJ environments. Figs. 3(a)-(d) show the learning curves of the algorithms on PP and CN, and Figs. 3(e)-(f) show the average return using a deterministic policy over 100 episodes every 250000 time steps. All performances are averaged over 10 different seeds. It is seen that Comm-OA performs similarly to MADDPG in the considered environments. Since the received messages come from other agents at the previous time step, Comm-OA, in which the communication message consists of the agent’s observation and action, performs similarly to MADDPG. Unlike Comm-OA, DIAL, TarMAC, and ATOC outperform MADDPG, and the performance gain comes from the benefit of learning a communication protocol in the considered environments, except PP with N = 4. In PP with N = 4, four agents need to coordinate to spread out in groups of two to capture preys. Under this complicated coordination requirement, simply learning a communication protocol based on past and current information did not obtain a benefit from communication. On the contrary, the proposed IS scheme, which shares intention with other agents, achieved the required coordination even in this complicated environment." }, { "heading": "5.3 ANALYSIS", "text": "Imagined trajectory The proposed IS scheme uses the encoded imagined trajectory as the message content. Each agent rolls out an imagined trajectory based on its own policy and trained models, including the action predictor and observation predictor. Since access to other agents’ policies is not available, the true trajectory and the imagined trajectory can mismatch. The mismatch is especially large at the beginning of an episode, because each agent has not yet received any messages from other agents (in this case, we inject a zero vector instead of the received messages into the policy). We expect that the mismatch will gradually decrease as the episode progresses, and this can be interpreted as the procedure of coordination among agents. Fig. 5 shows the positions of all agents and each agent’s imagined trajectory over time steps in one episode of predator-prey with N = 3 predators after the end of training, where the initial positions of the agents (t = 0) are at the bottom right of the map. Note that each agent estimates the future positions of other agents as well as its own future position, due to the assumption of full observability. The first, second, and third rows of Fig. 5 show the imagined trajectories of all agents at Agent 1 (red), Agent 2 (green) and Agent 3 (blue), respectively. Note that the imagined trajectory of each agent represents its future plan for the environment.
As seen in Fig. 5, at t = 0 the intention of both Agent 1 and Agent 3 is to move to the left to catch preys. At t = 1, all agents receive the messages from the other agents. It is observed that Agent 3 changes its future plan to catch preys around the center, while Agent 1 maintains its future plan. This procedure shows that coordination between Agent 1 and Agent 3 starts to occur. It is seen that, as time goes on, each agent roughly predicts other agents’ future actions.\nWe conducted experiments to examine the impact of the length of the imagined trajectory, H. Fig. 4 shows the performance of the proposed method for different values of H. It is seen that the training speed is reduced when H = 7 as compared to H = 3 or H = 5. However, the final performance outperforms the baseline in all cases.\nAttention In the proposed IS scheme, the imagined trajectory is encoded by the attention module to capture the importance of the components in the imagined trajectory. Recall that the message of Agent i is expressed as m^i_t = ∑_{h=1}^H α^i_h v^{i,h}_t, as seen in (7), where α^i_h denotes the importance of v^{i,h}_t, the encoded imagined step. Note that the previously proposed communication schemes are the special case corresponding to α^i = (1, 0, ..., 0). In Fig. 5, the brightness of each circle is proportional to the attention weight. At time step t = K, where K = 37, α^1_2, which indicates when Agent 1 moves to the prey in the bottom middle, is the highest. In addition, α^3_4, which indicates when Agent 3 moves to the prey in the left middle, is the highest. Hence, an agent tends to send future information when it is near a prey. A similar attention weight tendency is also captured at time steps t = K + 1 and t = K + 2.\nAs aforementioned, the aim of the IS scheme is to communicate with other agents based on the agents’ own future plans. How far into the future is important depends on the environment and on the task. In order to analyze the tendency in the importance of future plans, we averaged the attention weights over the trajectories on the fully observable PP environment with 3 agents and on a partially observable PP environment with 3 agents, in which each agent knows the locations of other agents only within a certain range. The result is summarized in Table 1. It is observed that the current information (time k) and the farthest future information (time k + 4) are mainly used as the message content in the fully observable case, whereas the current information and the information next to the present (times k and k + 1) are mainly used in the partially observable environment. This is because sharing observation information is more critical in the partially observable case than in the fully observable case. A key aspect of the proposed IS scheme is that it adaptively selects the most important steps as the message content, depending on the environment, by using the attention module.\nWe conducted an ablation study for the attention module, and the result is shown in Fig. 4. We compared the proposed IS scheme with and without the attention module. We replaced the attention module with an averaging layer, which is the special case corresponding to α^i = (1/H, ..., 1/H). Fig. 4 shows that the proposed IS scheme with the attention module yields better performance than the one without the attention module. This shows the necessity of the attention module.
In the PP environment with 4 agents, the imagined trajectory alone without the attention module improves the training speed, while the final performance is similar to that of MADDPG. In the TJ environment with 3 agents, the imagined trajectory alone without the attention module improves both the final performance and the training speed." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed the IS scheme, a new communication protocol based on sharing intention among multiple agents for MARL. The message-generation network in the proposed IS scheme consists of the ITGM, which is used for producing predicted future trajectories, and the AM, which learns the importance of imagined steps based on the received messages. The message in the proposed scheme is an encoded imagined trajectory capturing the agent’s intention, so that the communication message includes the future information as well as the current information, and their weights are adaptively determined depending on the environment. We studied examples of imagined trajectories and attention weights. It is observed that the proposed IS scheme generates meaningful imagined trajectories and attention weights. Numerical results show that the proposed IS scheme outperforms other communication algorithms, including state-of-the-art algorithms. Furthermore, we expect that combining the key idea of the proposed IS scheme with other communication algorithms such as ATOC and TarMAC would yield even better performance." }, { "heading": "7 ACKNOWLEDGMENTS", "text": "This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1E1A1A03070788)." }, { "heading": "A MULTI-AGENT DEEP DETERMINISTIC POLICY GRADIENT (MADDPG)", "text": "MADDPG is an extended version of DDPG for multi-agent systems under the framework of CTDE (Lowe et al. (2017)). Each agent has a deterministic policy a_t = µ^i_{θ_µ}(o_t) conditioned on its own observation o_t and a centralized critic Q^i_{θ_Q}(x, a) = E[R^i_t | x_t = x, a_t = a] conditioned on the joint action a_t and state information x_t. Here, x_t can be the state s_t or the set of observations (o^1_t, ..., o^N_t). The centralized critic is trained by minimizing the following loss:\nL_Q(θ_Q) = E_{x,a,r^i,x'}[(y^i − Q^i_{θ_Q}(x, a))^2], y^i = r^i + γQ^i_{θ^-_Q}(x', a')|_{a'^j = µ^{j-}(o^j)}, (14)\nwhere θ^-_Q is the parameter of the target Q-function and µ^{i-} is the target policy of Agent i. The policy is trained by the Deterministic Policy Gradient (DPG), and the gradient of the objective with respect to the policy parameter θ_{µ^i} is given by\n∇_{θ_{µ^i}} J(µ^i) = E_{x,a}[∇_{θ_{µ^i}} µ^i(o^i) ∇_{a^i} Q^i_{θ_Q}(x, a)|_{a^i = µ^i(o^i)}]. (15)" }, { "heading": "B TRAINING DETAILS AND HYPERPARAMETERS", "text": "" }, { "heading": "C ADDITIONAL ABLATION STUDY", "text": "We conducted an additional experiment to examine whether the performance improvement is gained from sharing intention or from having a prediction of the future. We compared the proposed IS scheme with MADDPG-p, in which the agent does not use communication but uses its own imagined trajectory as additional input. Fig. 6 shows that the proposed IS scheme outperforms MADDPG-p. Thus, sharing intention, which is a core idea of this paper, is more important than having a prediction of the future."
}, { "heading": "D PSEUDO CODE", "text": "Algorithm 1 Intention Sharing (IS) Communication Scheme\nInitialize parameter θiµ, θ i Q, θ i− µ , θ i− Q , θ i o, θ i a,W i, ∀i ∈ {1, · · · , N} for episode = 1, 2, · · · do\nInitialize state s1, messages m0 = ( −→ 0 , · · · ,−→0 ) and each agent observes oi1 for t <= T and st 6= terminal do Each agent receives the messages mt−1 = (m1t−1, · · · ,mNt−1) Each agent selects action ait ∼ πi(·|oit,mt−1) for each agent i Execute at and each agent i receives rt and oit+1 for h = 1, 2, · · · , H do\nPredict other agents’ actions â−it+h−1 from the action predictor f i a Generate ôit+h from observation predictor f i o(o i t+h−1, â i t+h−1, â −i t+h−1)\nGenerate âit+h ∼ πi(·|ôit+h,mt−1) end for Each agent generates the messages mit by injecting τ i = (τ it , τ̂ i t+1, · · · , τ̂ it+H−1) into Attention Module (AM) Store transitions in D\nend for for each gradient step do\nUpdate θiQ and (θ i o, θ i a) by minimizing the loss (9) and the loss (12) Update θiµ and W i based on the gradient (10) and the gradient (11)\nend for Update θi−µ , θ i− Q using the moving average method\nend for" } ]
2021
null
SP:b24e79d30d19c99f1093779bdba8bd8b2aed9ec0
[ "In this paper, the authors focus on keystroke inference attacks in which an attacker leverages machine learning approaches, In particular, a new framework is proposed for low-resource video domain adaptation using supervised disentangled learning, and another method to assess the threat of keystroke inference attacks by an attacker using a deep learning system, given limited real-life data. The novelty of the approach and its theoretical foundation is appreciated. For a given domain, they decompose the data into real-life style, synthetic style, real-life content, and synthetic content, and then combine them into feature representations from all combinations of style-content pairings across domains to train a model, This allows classify the content of a sample in the style of another domain. Results indicate that training with these pairs to disentangle style and content prevents their model from overfitting to a small real-world training sets, and thereby provides an effective form of data augmentation that prevents overfitting." ]
Keystroke inference attacks are a form of side-channel attack in which an attacker leverages various techniques to recover a user’s keystrokes as she inputs information into some display (for example, while sending a text message or entering her pin). Typically, these attacks leverage machine learning approaches, but assessing the realism of the threat space has lagged behind the pace of machine learning advancements, due in part to the challenges in curating large real-life datasets. This paper aims to overcome the challenge of having a limited amount of real data by introducing a video domain adaptation technique that is able to leverage synthetic data through supervised disentangled learning. Specifically, for a given domain, we decompose the observed data into two factors of variation: Style and Content. Doing so provides four learned representations: real-life style, synthetic style, real-life content and synthetic content. Then, we combine them into feature representations from all combinations of style-content pairings across domains, and train a model on these combined representations to classify the content (i.e., labels) of a given datapoint in the style of another domain. We evaluate our method on real-life data using a variety of metrics to quantify the amount of information an attacker is able to recover. We show that our method prevents our model from overfitting to a small real-life training set, indicating that our method is an effective form of data augmentation.
[]
[ { "authors": [ "M. Backes", "M. Dürmuth", "D. Unruh" ], "title": "Compromising reflections-or-how to read lcd monitors around the corner", "venue": "IEEE Symposium on Security and Privacy (sp", "year": 2008 }, { "authors": [ "M. Backes", "T. Chen", "M. Duermuth", "H.P.A. Lensch", "M. Welk" ], "title": "Tempest in a teapot: Compromising reflections revisited", "venue": "In 2009 30th IEEE Symposium on Security and Privacy,", "year": 2009 }, { "authors": [ "Satanjeev Banerjee", "Alon Lavie" ], "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "venue": "In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization,", "year": 2005 }, { "authors": [ "Liang Cai", "Hao Chen" ], "title": "On the practicality of motion based keystroke inference attack", "venue": "pp. 273–290,", "year": 2012 }, { "authors": [ "Yimin Chen", "Tao Li", "Rui Zhang", "Yanchao Zhang", "Terri Hedgpeth" ], "title": "Eyetell: Video-assisted touchscreen keystroke inference from eye movements", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2018 }, { "authors": [ "Fred J. Damerau" ], "title": "A technique for computer detection and correction of spelling errors", "venue": "Commun. ACM,", "year": 1964 }, { "authors": [ "Emily Denton", "Vighnesh Birodkar" ], "title": "Unsupervised learning of disentangled representations from video, 2017", "venue": null, "year": 2017 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": null, "year": 2014 }, { "authors": [ "Judy Hoffman", "Eric Tzeng", "Taesung Park", "Jun-Yan Zhu", "Phillip Isola", "Kate Saenko", "Alexei A. Efros", "Trevor Darrell" ], "title": "Cycada: Cycle-consistent adversarial domain adaptation, 2017", "venue": null, "year": 2017 }, { "authors": [ "Ehsan Hosseini-Asl", "Yingbo Zhou", "Caiming Xiong", "Richard Socher" ], "title": "Augmented cyclic adversarial learning for low resource domain adaptation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jun-Ting Hsieh", "Bingbin Liu", "De-An Huang", "Li Fei-Fei", "Juan Carlos Niebles" ], "title": "Learning to decompose and disentangle representations for video prediction, 2018", "venue": null, "year": 2018 }, { "authors": [ "Rohit Kulkarni" ], "title": "A Million News Headlines, 2018. 
URL https://doi.org/10.7910/DVN/SYBGZL", "venue": null, "year": 2018 }, { "authors": [ "Alon Lavie" ], "title": "Evaluating the output of machine translation systems", "venue": null, "year": 2010 }, { "authors": [ "Yingzhen Li", "Stephan Mandt" ], "title": "Disentangled sequential autoencoder, 2018", "venue": null, "year": 2018 }, { "authors": [ "John Lim", "True Price", "Fabian Monrose", "Jan-Michael Frahm" ], "title": "Revisiting the threat space for vision-based keystroke inference attacks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Chin-Yew Lin" ], "title": "ROUGE: A package for automatic evaluation of summaries", "venue": "In Text Summarization Branches Out,", "year": 2004 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations, 2018", "venue": null, "year": 2018 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations, 2019", "venue": null, "year": 2019 }, { "authors": [ "L.V.D. Maaten", "Geoffrey E. Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Saeid Motiian", "Quinn Jones", "Seyed Iranmanesh", "Gianfranco Doretto" ], "title": "Few-shot adversarial domain adaptation", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Rahul Raguram", "Andrew M White", "Dibyendusekhar Goswami", "Fabian Monrose", "Jan-Michael Frahm" ], "title": "ispy: automatic reconstruction of typed input from compromising reflections", "venue": "In Proceedings of the 18th ACM conference on Computer and communications security,", "year": 2011 }, { "authors": [ "Ashish Shrivastava", "Tomas Pfister", "Oncel Tuzel", "Josh Susskind", "Wenda Wang", "Russ Webb" ], "title": "Learning from simulated and unsupervised images through adversarial training, 2016", "venue": null, "year": 2016 }, { "authors": [ "Matthew Snover", "Bonnie Dorr", "Richard Schwartz", "Linnea Micciulla", "John Makhoul" ], "title": "A study of translation edit rate with targeted human annotation", "venue": "Proceedings of Association for Machine Translation in the Americas,", "year": 2006 }, { "authors": [ "Jingchao Sun", "Xiaocong Jin", "Yimin Chen", "Jinxue Zhang", "Yanchao Zhang", "Rui Zhang" ], "title": "Visible: Video-assisted keystroke inference from tablet backside motion", "venue": null, "year": 2016 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V. Le" ], "title": "Sequence to sequence learning with neural networks", "venue": null, "year": 2014 }, { "authors": [ "J.B. Tenenbaum", "W.T. Freeman" ], "title": "Separating style and content with bilinear models", "venue": "Neural Computation,", "year": 2000 }, { "authors": [ "Joshua B. Tenenbaum", "William T. 
Freeman" ], "title": "Separating style and content", "venue": "Advances in Neural Information Processing Systems", "year": 1997 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee" ], "title": "Decomposing motion and content for natural video sequence prediction, 2017", "venue": null, "year": 2017 }, { "authors": [ "Yi Xu", "Jared Heinly", "Andrew M White", "Fabian Monrose", "Jan-Michael Frahm" ], "title": "Seeing double: Reconstructing obscured typed input from repeated compromising reflections", "venue": "In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security,", "year": 2013 }, { "authors": [ "Guixin Ye", "Zhanyong Tang", "Dingyi Fang", "Xiaojiang Chen", "Kwang In Kim", "Ben Taylor", "Zheng Wang" ], "title": "Cracking android pattern lock in five attempts", "venue": null, "year": 2017 }, { "authors": [ "Qinggang Yue", "Zhen Ling", "Xinwen Fu", "Benyuan Liu", "Kui Ren", "Wei Zhao" ], "title": "Blind recognition of touched keys on mobile devices", "venue": "In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2014 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "ÊC = EC", "Ĝ = G A" ], "title": "ADDITIONAL TRAINING DETAILS - CYCLEGAN CycleGAN (Zhu et al., 2017) learns pixel-wise transformation functions F : S → T andG : T → S that can transform data from the source domain to the target domain, and vice versa. We take 10,000 random frames from the synthetic and real-life video datasets and train F and G in an unpaired", "venue": null, "year": 2017 }, { "authors": [ "Zhu" ], "title": "2017) and train for 200 epochs. We take G and transform every frame in our real-life dataset to the synthetic space. We found that images transformed under G (i.e, real to synthetic) yielded higher quality transformations (e.g., the thumb was not malformed, the thumb", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "We are exceedingly reliant on our mobile devices in our everyday lives. Numerous activities, such as banking, communications, and information retrieval, have gone from having separate channels to collapsing into one: through our mobile phones. While this has made many of our lives more convenient, this phenomena further incentivizes attackers seeking to steal information from users. Therefore, studying different attack vectors and understanding the realistic threats that arise from attackers’ abilities to recover user information is imperative to formulating defenses. The argument for studying these attacks is not a new one. A rich literature of prior works studying both attacks and defenses has assessed a wide array of potential attack vectors. The majority of these attacks utilize various machine learning algorithms to predict the user’s keystrokes, (Raguram et al., 2011; Cai & Chen, 2012; Xu et al., 2013; Sun et al., 2016; Chen et al., 2018; Lim et al., 2020), but the ability to assess attackers leveraging deep learning methods has lagged due to the high costs of curating real-life datasets for this domain, and the lack of publicly available datasets.\nDespite all the recent attention to keystroke inference attacks, numerous questions have gone unanswered. Which defenses work against adversaries who leverage deep learning systems? Which defenses are easily undermined? Are there weaknesses in deep learning systems that we can use to develop better defenses to thwart state-of-the-art attacks? These questions capture the essence of the underlying principles for research into defenses for keystroke inference atttacks. Given the backand-forth nature of researching attacks and defenses, these questions can not be addressed because of the current inability to assess attacks with deep learning methods.\nThis paper aims to overcome the challenge of having limited number of labeled, real-life data by introducing a video domain adaptation technique that is able to leverage abundantly labeled synthetic\ndata. We show that by disentangling our data into separate style and content representations, we can subsequently create style-content pairs across both domains, and combine them into representations that contain the content in the style of its inputs, i.e., style transfer in the feature space. This is especially attractive in the case of pairs of real-life style and synthetic content, as this is an effective data augmentation scheme. Style representations need to be well separated between domains whereas content needs to be indistinguishable. To do this, we introduce auxiliary losses on the latent spaces to enforce disentanglement. Through a series of ablations, we show that doing so improves performance. In our context, Content answers the question: What was typed?. For example, the sentence that a user types. Style answers the question: How was it typed?. For example, the texting pattern.\nThe majority of visual domain adaptation methods do not work well in our problem setting because they mainly focus on tasks in which the domain shift is limited to a shift in texture, e.g., image classification, semantic segmentation, etc. (Ganin & Lempitsky, 2014; Shrivastava et al., 2016; Tzeng et al., 2017; Hoffman et al., 2017; Motiian et al., 2017). When predicting keystroke sequences, addressing the domain shift with respect to texture is not sufficient. 
While there is a clear difference in texture, we also have to address the temporal domain shift, e.g., different finger motions, speeds, etc. Notice the difference between the trajectories of the thumbs in the two example videos displayed in Figure 1. The synthetic thumb is linearly interpolated, whereas the real one moves in a more complex fashion. Our pairing mechanism is inspired by the one introduced by Motiian et al. (2017). They devise a training regime that pairs the scarce data in the target domain with the data from the source domain. This strategy aims to augment the data in the target domain to the order of the source domain. In our work, we loosen the restriction of needing pairs with the same label to adapt to our setting of not having paired sentences. This makes our pairing mechanism more general and applicable to other settings. To summarize, our main contributions are: 1) a framework for low-resource video domain adaptation using supervised disentangled learning, and 2) a novel method to assess the threat of keystroke inference attacks by an attacker using a deep learning system while having limited real-life data." }, { "heading": "2 BACKGROUND", "text": "Keystroke Inference Attacks Some of the early works in (vision-based) keystroke inference attacks have focused on direct line of sight and reflective surfaces (i.e., teapots, sunglasses, eyes) (Backes et al., 2008; 2009; Raguram et al., 2011; Xu et al., 2013; Yue et al., 2014; Ye et al., 2017; Lim et al., 2020) to infer sensitive data. The attackers train models that account for various capture angles by aligning the user’s mobile phone to a template keyboard. Collectively, these works showed that attackers are able to successfully recover pins and full sentences. In this work, we advance the state-of-the-art under the direct line of sight model, wherein the attacker uses a mobile camera to record a victim’s mobile phone usage. None of these works adequately explore the capabilities of an attacker that leverages deep learning systems because of the costs to collect large-scale datasets. Lim et al. (2020) created a simulator that generates synthetic data for keystroke inference attacks and showed that training with both synthetic and real data, in a supervised domain adaptation framework, yielded a CNN that generalized to a real-life test set, despite having limited labels in the real domain. This work is limited due to the restricted threat scenario of inferring single keypresses. In our work, we assess the ability of an attacker to recover complete sequences. Predicting entire sequences from an input video is not only a more challenging task, but also a more realistic threat scenario.\nStyle and Content Disentanglement in Videos Tenenbaum & Freeman (1997); Tenenbaum & Freeman (2000) observe that by learning to factor observations of data into two independent factors of variation, style and content, models learn separate representations that can extrapolate style onto novel content, classify content in different styles, and translate new content into new styles. This framework has been extended to videos and has been explored in a variety of settings. Prior works have disentangled videos into a time-dependent style representation and time-independent content with adversarial training (Denton & Birodkar, 2017; Villegas et al., 2017) or with variational autoencoders (Li & Mandt, 2018; Hsieh et al., 2018). In our setting, style and content are both time-dependent. Style encapsulates the trajectory of the finger between keys and the speed of the user’s typing. The difference in texture on a per-frame basis is also encapsulated by style. Content represents the entire trajectory, as that determines the sentence that was typed. These methods are all unsupervised methods to disentangle style and content. Since we have labels, we are able to leverage the observation made by Locatello et al. (2018; 2019), arguing that learning disentangled representations is impossible without supervision, and that the unsupervised methods leveraging temporal inductive biases do not lead to improved disentangled representations.\nLow Resource Domain Adaptation We are operating in a low-resource setting in which we have abundant labels in the source domain and have very few, albeit labeled, data points in the target domain. Hosseini-Asl et al. (2019) extend the CyCada (Hoffman et al., 2017) and CycleGAN (Zhu et al., 2017) frameworks to the low-resource domain adaptation setting by adding a semantic consistency loss. Motiian et al. (2017) address this problem by learning a feature space that is domain invariant but semantically aligned across both domains by introducing a pairing process that pairs feature samples in the training set into four groups: 1) both samples from domain A with the same labels; 2) a sample from each domain with the same labels; 3) both samples from domain A with different labels; 4) a sample from each domain with different labels. They use adversarial training to learn a feature representation such that a discriminator can’t distinguish samples from groups 1 and 2, and also from groups 3 and 4. We extend this pairing mechanism by relaxing the constraint of needing the same labels in both domains, i.e., pairs of synthetic and real sentences. Since we are effectively transferring different styles onto the content latent space, we do not need labels in the target domain so long as they are effectively disentangled." }, { "heading": "3 METHODS", "text": "We first give a brief introduction to keystroke inference attacks and define the problem setup. Then, we describe our proposed framework to disentangle the style and content latent spaces and train on all style-content pairs. An overview of our method is in Figure 2 and in Algorithm 1." }, { "heading": "3.1 KEYSTROKE INFERENCE ATTACKS", "text": "We model the keystroke inference attack as a Seq2Seq (Sutskever et al., 2014) problem where the input X = {x_1, x_2, ..., x_k} is a video with k frames and Y = {y_1, y_2, ..., y_j} is a sequence of j characters. The videos are of users typing on their mobile phones, cropped and aligned to a template image. The tokens are the sequence of characters of the sentence the user typed. We do not use any paired data (i.e., the synthetic and real-life datasets do not contain the same sentences), and do not have access to any auxiliary labels such as the exact frame in which a key was pressed. Our goal is to learn the parameters of a model that maximizes the conditional probability of Y given X. We use a Transformer (Vaswani et al., 2017) encoder-decoder as our model. In our setting we have a dataset of synthetic videos, D_s = {(X^s_i, Y^s_i)}, and a dataset of real-life videos, D_t = {(X^t_i, Y^t_i)}, where the number of real-life videos is significantly smaller than the number of synthetic ones (Figure 3). While a large synthetic dataset can be easily generated, there exists a distribution shift between the two domains (Figure 1).
When the amount of labeled data is scarce, it becomes difficult to train neural networks that generalize to samples outside of the training set." }, { "heading": "3.2 DISENTANGLING STYLE AND CONTENT", "text": "Our method to address the lack of real-life data is to train on combinations of style and content representation pairs from the synthetic and real domains. We introduce auxiliary losses to enforce disentanglement of style and content, ensuring that the style latent space does not contain any information about the content, and vice versa. Our training framework consists of a Content Encoder E_C, a Style Encoder E_S, a Decoder G, a Feature Aggregation Module M, a Style Discriminator D_S, a Content Discriminator D_C, and a Domain-Class Discriminator D_M.\nPretraining Synthetic Model We first pretrain an Encoder-Decoder Transformer on only synthetic data. We train this network with a multi-class cross-entropy loss, where the goal is to predict the correct sentence for a given video. Then E_C, E_S, and D_C are initialized with the weights of the pretrained Encoder, and G is initialized with the weights of the pretrained Decoder.\nStyle Disentanglement Style disentanglement ensures that style information is removed from the content latent space. The content latent space is defined as z^f_content = E_C(X^f_i; θ_{E_C}), where f ∈ {s, t} and s and t denote the synthetic and real domains, respectively. Similar to the setup of GANs (Goodfellow et al., 2014), the Style Discriminator D_S is trained to classify whether z^f_content is real or synthetic. Next, E_C is trained to spoof D_S and generate a content feature representation that is domain invariant. D_S is trained using Equation 1. E_C is trained using the same equation, but with the labels flipped and D_S not updated.\nL_{Adv_{D_S}} = −E[log(D_S(E_C(X^s_i))) + log(1 − D_S(E_C(X^t_i)))] (1)\nContent Disentanglement Content disentanglement ensures that content information is removed from the style latent space. The style latent space is defined as z^f_style = E_S(X^f_i; θ_{E_S}), where f ∈ {s, t}. The Content Discriminator D_C is a Transformer Decoder and is trained to predict the correct sentence given the input style representation. E_S is trained to spoof D_C and generate a style feature representation z^f_style such that D_C cannot predict the correct sentence. This is done by maximizing the entropy, H, of the predictions of D_C. D_C is trained by minimizing Equation 2. E_S is trained by maximizing Equation 3 with the weights of D_C kept frozen.\nL_{Adv_{D_C}} = −log p(Y^f_i | D_C(E_S(X^f_i))) (2)\nL_{Adv_{E_S}} = H(Y^s_i | D_C(E_S(X^s_i))) (3)\nFeature Aggregation A Feature Aggregation Module, M, combines the disentangled representations from the previous two steps. For any given pair of style and content representations we have:\nM(z^f_style, z^{f'}_content) = m(z^f_style + z^{f'}_content) (4)\nIn Equation 4, m is the LayerNorm operation (Ba et al., 2016), f ∈ {s, t} and f' ∈ {s, t}.
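A minimal sketch of the disentanglement losses of Equations (1)-(3) and the aggregation of Equation (4). The discriminator output conventions (sigmoid probabilities for D_S, per-step logits for D_C) are assumptions.

```python
import torch
import torch.nn.functional as F

def style_losses(z_content_s, z_content_t, D_S):
    """Eq. (1): D_S tells synthetic from real content codes; E_C gets the
    same objective with flipped labels (D_S assumed to output probabilities)."""
    d_loss = -(torch.log(D_S(z_content_s)) + torch.log(1 - D_S(z_content_t))).mean()
    ec_loss = -(torch.log(D_S(z_content_t)) + torch.log(1 - D_S(z_content_s))).mean()
    return d_loss, ec_loss

def content_losses(z_style, labels, D_C):
    """Eqs. (2)-(3): D_C decodes the sentence from style codes; E_S is trained
    on the negated entropy, so minimizing it maximizes H."""
    logits = D_C(z_style)                      # (batch, seq_len, vocab), assumed
    dc_loss = F.cross_entropy(logits.flatten(0, 1), labels.flatten())
    p = logits.softmax(-1)
    es_loss = (p * p.clamp_min(1e-8).log()).sum(-1).mean()   # equals -H(predictions)
    return dc_loss, es_loss

def aggregate(z_style, z_content, layer_norm):
    """Eq. (4): M combines any style-content pair; m is LayerNorm."""
    return layer_norm(z_style + z_content)
```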
The objective is:\nLcls = − log p(Y f ′|G(M(ES(Xf ), EC(Xf ′)))) (5)\nAt test time, the model outputs the most likely sentence given a real-life video:\nargmax Y\np(Y t|G(htt)) (6)\nSemantic Aligment We extend the framework of Motiian et al. (2017) to create training pairs to compensate for limited data in one domain. Rather than train on pairs of feature samples, we train on the outputs of a feature aggregation module that takes in as input style-content pairs. Furthermore, we do not need to have the same labels for both domains, i.e., we do not need to have the same synthetic and real sentences. We create four pairs Gk, k ∈ {1, 2, 3, 4}. G1 and G2 are outputs of M that share synthetic content: (Synthetic Style, Synthetic Content) and (Real Style, Synthetic Content). G3 and G4 share real content: (Synthetic Style, Real Content) and (Real Style, Real Content). A multi-class discriminator, DM , is trained using Equation 7 to correctly identify which group every output of M belongs to. lk is the corresponding label for a given Gk. EC , ES , and M are updated with Equation 8 such that DM can’t distinguish outputs of M that are in G1 and G2 and outputs of M that are in G3 and G4.\nLAdvDM = −E[ 4∑\nk=1\nlk log(DM (M(Gk)))] (7)\nLAdvM = −E[l1 log(DM (M(G2)))− l3 log(DM (M(G4)))] (8) The final loss function to train our model is show in Equation 9 where the weightings for each term are tuned using a validation set. An overview of the training procedure is shown in Algorithm A.9.\nL = λ1Lcls + λ2LAdvM + λ3LAdvDM + λ4LAdvES + λ5LAdvDC + λ6LAdvDS (9)" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we detail the datasets, the motivation and interpretation of our evaluation metrics, and experimental results. Further details regarding data collection, network architectures, and training details are located in section Appendix A.\nDatasets Figure 3 shows different statistics for the synthetic and real datasets. We set aside 10% of the training set as a validation set. The real-life dataset was collected by recording participants typing sentences into a mobile phone. Three different participants were asked to type conversational text messages into their mobile devices while we recorded them in both indoor and outdoor settings. We used a mobile camera and captured from a distance of 3 meters. The details to preprocess and align the real data is detailed in A.2. The synthetic data was generated using the simulator by Lim et al. (2020). It generates aligned videos of a synthetic thumb typing various sentences. We generated sentences from the “A Million News Headlines” dataset released by Kulkarni (2018). We add a START and STOP token to the beginning and end of a sentence, respectively. In total, there are 30 tokens in which the decoder can predict: 26 letters and 4 special tokens (START, STOP, SPACE, PAD).\nEvaluation Metrics We propose to use a variety of metrics to quantify the amount of information the attacker is able to recover from the user as there is no single, agreed-upon metric for keystroke inference attacks. In the first scenario, we postprocess the outputs of our model with a language model, similar to the methodologies of Raguram et al. (2011); Xu et al. (2013); Sun et al. (2016); Chen et al. (2018). Appropriate metrics for this scenario are Bleu-n (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee & Lavie, 2005). Bleu-n scores are scored on n-gram precision, i.e., the n-grams in the predicted sentence that are also in the ground truth sentence. 
}, { "heading": "4 EXPERIMENTS", "text": "In this section, we detail the datasets, the motivation and interpretation of our evaluation metrics, and the experimental results. Further details regarding data collection, network architectures, and training are located in Appendix A.\nDatasets Figure 3 shows different statistics for the synthetic and real datasets. We set aside 10% of the training set as a validation set. The real-life dataset was collected by recording participants typing sentences into a mobile phone. Three different participants were asked to type conversational text messages into their mobile devices while we recorded them in both indoor and outdoor settings. We used a mobile camera and captured from a distance of 3 meters. The details of preprocessing and aligning the real data are given in A.2. The synthetic data was generated using the simulator by Lim et al. (2020). It generates aligned videos of a synthetic thumb typing various sentences. We generated sentences from the “A Million News Headlines” dataset released by Kulkarni (2018). We add a START and STOP token to the beginning and end of each sentence, respectively. In total, there are 30 tokens which the decoder can predict: 26 letters and 4 special tokens (START, STOP, SPACE, PAD).\nEvaluation Metrics We propose to use a variety of metrics to quantify the amount of information the attacker is able to recover from the user, as there is no single, agreed-upon metric for keystroke inference attacks. In the first scenario, we postprocess the outputs of our model with a language model, similar to the methodologies of Raguram et al. (2011); Xu et al. (2013); Sun et al. (2016); Chen et al. (2018). Appropriate metrics for this scenario are Bleu-n (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee & Lavie, 2005). Bleu-n scores are based on n-gram precision, i.e., the n-grams in the predicted sentence that are also in the ground truth sentence. For brevity, we do not report Bleu-2 and Bleu-3. ROUGE scores are based on n-gram recall, i.e., the n-grams in the ground truth that are also in the predicted sentence. METEOR is a metric based on the harmonic mean of unigram precision and recall and was developed to address some of the drawbacks of ROUGE and Bleu-n. METEOR scores range from 0 to 1. Scores above 0.5 reflect understandable translations, and scores above 0.7 reflect fluent ones (Lavie, 2010).\nWhile these scores have merit in the context of keystroke inference attacks, they are not without shortcomings. These scores are especially harsh for predictions that contain slight typographical errors (e.g., “hello” vs. “hellp”), and there is no guarantee that the previously mentioned postprocessing steps will address every error. Also, there are settings in which these metrics do not apply, e.g., recovering alphanumeric passwords. Thus, we also need evaluation metrics for the raw outputs of our model. Two appropriate metrics are the Translation Edit Rate (TER) (Snover et al., 2006) and a QWERTY-keyboard-based edit distance. Both metrics measure the number of edits required to translate a hypothesis sentence into the ground truth. The latter is a form of the Damerau–Levenshtein (DL) distance (Damerau, 1964) that weights the edit operations (i.e., insertions, deletions, substitutions, character swaps) according to the QWERTY keyboard layout. For example, if “hello” were the ground truth word, “hellp” should be penalized less than “hellv”, as the former is a more likely output than the latter given the assumed keyboard layout.
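Since the exact QWERTY weighting is not spelled out, the following sketch shows one plausible instantiation of the QWERTY-keyboard-based edit distance: a Damerau-Levenshtein distance where substitutions between physically adjacent keys cost 0.5 (an assumed weight; key adjacency via row/column indices is also an approximation, since QWERTY rows are staggered) and all other edits cost 1.

```python
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
POS = {c: (r, i) for r, row in enumerate(ROWS) for i, c in enumerate(row)}

def sub_cost(a, b):
    """Substitution cost, cheaper for adjacent QWERTY keys (assumed weights)."""
    if a == b:
        return 0.0
    if a not in POS or b not in POS:
        return 1.0
    (r1, c1), (r2, c2) = POS[a], POS[b]
    return 0.5 if abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1 else 1.0

def qwerty_dl(hyp, ref):
    """Damerau-Levenshtein distance with QWERTY-weighted substitutions."""
    n, m = len(hyp), len(ref)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = float(i)
    for j in range(m + 1):
        d[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                               # deletion
                          d[i][j - 1] + 1,                               # insertion
                          d[i - 1][j - 1] + sub_cost(hyp[i - 1], ref[j - 1]))
            if (i > 1 and j > 1 and hyp[i - 1] == ref[j - 2]
                    and hyp[i - 2] == ref[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)              # swap
    return d[n][m]
```

Under these weights, qwerty_dl("hellp", "hello") = 0.5 while qwerty_dl("hellv", "hello") = 1.0, matching the intuition in the example above.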
Simply applying ADDA to our sequence prediction task leads to severe overfitting due to the limited real-life data, indicating that this task is more challenging than the single keypress classification task studied by Lim et al. (2020). While these are common approaches used in domain adaptation, we found that these approaches are not suitable in our problem setting. We carried out extensive experiments to tune our baselines and maximize their performance; a full listing of these experiments and corresponding hyperparameters is available in Appendix A. Despite an extensive search of hyperparameters, we still overfit. We report the best results in Table 1." }, { "heading": "4.2 ADAPTING TO REAL-LIFE VIDEOS", "text": "Our method, unlike the above baselines, does not overfit to the real-life training set. Our results show that training with our pairing mechanism with disentangled representations across domains is an effective form of data augmentation. We outperform the baselines in both raw output evaluations and post-processed evaluations as shown in Table 1. We found that our training was not sensitive to the hyperparameters and weightings of the loss terms in 9, and use the same hyperparameters for all experiments. Additional training details are in A.12.\nWhile a direct comparison to the state of the art in direct line of sight attacks is difficult due to the differences in datasets, it is worth noting how our model performs relative to current methods. Raguram et al. (2011) achieve a METEOR score of 0.89 whereas Xu et al. (2013) achieve a score of 0.71, but recording from much farther distances. To measure an attacker’s ability to recover passwords, Raguram et al. (2011) report precision and recall for individual word units and characters. They achieve word-level precision and recall of 75% and 78%, respectively, and character-level scores of 94% and 98%. We achieve a word-level precision and recall of 78% and 79%, respectively, and a precision and recall of 96% and 95%, respectively, for characters. They do not report METEOR scores for this scenario; we do so in Table 1 under “No LM”\nFeature Visualization The t-sne (Maaten & Hinton, 2008) plots, in Figure 4, for the feature representations ofES , EC , andM on synthetic and real test data show that sentences with different styles have a noticeable separation, whereas the content representations are intertwined. The last figure on the right shows outputs of our feature aggregation module, M , and shows the transfer of styles in the feature space. There is a clear separation between styles, while the datapoints within one style cluster are mixed. To obtain inputs suitable for t-sne, we perform a max pooling operation along the temporal dimension of the outputs of the networks.\nAblation Next, we conduct a series ablation studies to explore the effectiveness of our proposed framework. We introduce seven different models: I is our base method without the use of any adversarial losses, just the pairing mechanism. II uses style disentanglement, i.e., I + style disentanglement. III uses content disentanglement, i.e., I + content disentanglement. IV uses both style and content disentanglement. V is the base method with only the semantic alignment loss (Motiian et al., 2017). VI uses style and content disentanglement, along with the semantic alignment, but does not use LayerNorm inM . VII uses style and content disentanglement, along with the semantic alignment. 
Ablation Next, we conduct a series of ablation studies to explore the effectiveness of our proposed framework. We introduce seven different models: I is our base method without the use of any adversarial losses, just the pairing mechanism. II uses style disentanglement, i.e., I + style disentanglement. III uses content disentanglement, i.e., I + content disentanglement. IV uses both style and content disentanglement. V is the base method with only the semantic alignment loss (Motiian et al., 2017). VI uses style and content disentanglement, along with the semantic alignment, but does not use LayerNorm in M. VII uses style and content disentanglement, along with the semantic alignment. This is our proposed method trained with Algorithm 1, i.e., IV + semantic alignment.\nFirst, we find that our base model (I) achieves competitive results without any losses on the latent spaces. This indicates that training on paired representations across domains is an effective method for data augmentation. Second, we find that adding auxiliary losses on the latent spaces to enforce style and content disentanglement improves performance. The performance of models II and III shows the base model is benefiting from the added loss terms. The results for Model IV align with our hypothesis that explicitly disentangling style and content allows us to overcome the lack of training data in the target domain by training with all combinations of the factors of variation. Finally, we trained model V to apply the semantic alignment step on our paired outputs without any additional adversarial losses. This is quite competitive with IV, but we find the greatest performance boost when training model VII using both semantic alignment and disentanglement. A closer look into the distribution of the scores in Figure 5 shows that the distribution of scores for Model VII (Full) indicates higher overall performance compared to Model I (Base). Our results show that explicitly disentangling style and content by adding the adversarial losses on the latent spaces supplements the pairing mechanism to achieve the highest performance against our evaluation metrics." }, { "heading": "5 CONCLUSION", "text": "Our work provides the important initial step needed to formulate defenses for keystroke inference attacks in the age of deep learning. We provide the first assessment of an attacker’s ability to recover sentence-level information using deep learning systems, demonstrating that this attack is plausible and a significant threat to users. Such a task has been challenging due to the costs of curating a real-life dataset. We address this problem by introducing a framework for low resource video domain adaptation that disentangles the style and content across both domains and creates representations from all pairs of style and content combinations. Our results indicate that training with these pairs, along with auxiliary losses to explicitly disentangle style and content, serves as an effective form of data augmentation that prevents overfitting." }, { "heading": "A ADDITIONAL TRAINING DETAILS", "text": "" }, { "heading": "A.1 SYNTHETIC DATASET COLLECTION", "text": "We use the synthetic dataset generator released by Lim et al. (2020) to generate our synthetic data. We take the headlines dataset released by Kulkarni (2018) for our sentences. To preprocess every sentence, we change it to lower case and remove special characters and punctuation marks. The simulated thumb is that of a right-handed user only. The simulator takes in parameters for the user’s device and the attacker’s camera. We use the iPhone 6 as the camera parameters and the iPhone XR as the user’s device." }, { "heading": "A.2 REAL-LIFE DATASET COLLECTION", "text": "We preprocess the real-life capture data in order to compensate for the various capturing angles and positions an adversary can take. More specifically, for every video: 1. We crop out the location of the mobile phone. 2. We manually label the 4 corners of the phone. 3. We calculate a homography matrix H between the labeled 4 corners and the 4 corners of a template image. 4. We warp the captured image to the template image using H. A sketch of steps 3 and 4 is given below.
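A minimal sketch of steps 3 and 4 using OpenCV; the corner coordinates, file name, and template size are placeholders.

import cv2
import numpy as np

# Step 2: four manually labeled phone corners in the captured frame (placeholder values).
src_corners = np.float32([[412, 310], [980, 295], [1005, 1480], [390, 1495]])
# Matching corners of the template image the frame is warped onto.
dst_corners = np.float32([[0, 0], [200, 0], [200, 100], [0, 100]])

H, _ = cv2.findHomography(src_corners, dst_corners)   # step 3
frame = cv2.imread("captured_frame.png")
aligned = cv2.warpPerspective(frame, H, (200, 100))   # step 4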
All aligned images are resized to 200 × 100 and then cropped to only display the keyboard. We capture from distances up to 3 meters, where the attacker is using an iPhone 6 camera and the user is using an iPhone XR." }, { "heading": "A.3 ALGORITHM BLOCK", "text": "Algorithm 1: Learning Algorithm for Disentangling Style and Content." }, { "heading": "Input: EC, ES, M, DS, DC, G, DM; datasets DS (synthetic), DT (real-life)", "text": "Result: Well-trained ÊC, ÊS, M̂, and Ĝ\n1 while Not Converged do\n2 sample mini-batch of b synthetic samples, {(X_1^s, Y_1^s), . . . , (X_b^s, Y_b^s)} from DS\n3 sample mini-batch of b real-life samples, {(X_1^t, Y_1^t), . . . , (X_b^t, Y_b^t)} from DT\n4 Style Disentanglement: Remove Style information from the Content Space\n5 update DS and EC with LAdvDS\n6 Content Disentanglement: Remove Content information from the Style Space\n7 update DC with LAdvDC\n8 update ES with LAdvES\n9 Sequence Prediction\n10 update EC, ES, G, M with Lcls\n11 Semantic Alignment\n12 update DM with LAdvDM\n13 update M, EC, and ES with LAdvM\n14 end\n15 return ÊC = EC, ÊS = ES, M̂ = M, Ĝ = G" }, { "heading": "A.4 SYNTHETIC SINGLE KEY PRESS CLASSIFIER", "text": "We train a CNN, φ(·), for the task of single key press classification in order to learn a d-dimensional (d = 128) feature extractor. Training with almost 50k synthetic videos, where the average sequence length is 387 frames, is computationally expensive. Once this network is fully trained for the task of single key press classification, we can extract the features of each video on a per-frame basis.\nData Preparation First, we use the simulator (Lim et al., 2020) to generate 70,000 single key press images. These images contain the synthetic thumb over one of 27 keys on the QWERTY keyboard (26 letters + the space bar). Once generated, these images are preprocessed in a similar fashion to the synthetic video dataset. We resize the images to 200 × 100 and crop the phone such that only the keyboard is showing. We use 50,000 images for training and 10,000 images each for testing and validation.\nTraining Details and Results We use a CNN where each layer consists of a Convolution Layer, ReLU activation, and MaxPool operation. We use 3 such layers and 2 fully connected layers. We were able to train a network to achieve 95% accuracy on a held-out test set without much hyperparameter or architecture search, as this is a fairly simple 27-way classification task. We use this final model to preprocess every frame in our synthetic video dataset. Every video is now a sequence of these d-dimensional feature representations. We use the Adam optimizer with a learning rate of 1e−3. A sketch of this classifier is given below.
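The following is a minimal sketch of such a classifier; the exact channel widths are not specified above and are assumptions, and the input is assumed to be a 3 × 100 × 200 keyboard crop.

import torch
import torch.nn as nn

class KeyPressCNN(nn.Module):
    # Three (Conv -> ReLU -> MaxPool) layers followed by two fully connected
    # layers; the first FC output doubles as the 128-d per-frame feature phi(x).
    def __init__(self, num_keys=27, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc1 = nn.Linear(128 * 12 * 25, feat_dim)  # for 100 x 200 keyboard crops
        self.fc2 = nn.Linear(feat_dim, num_keys)

    def forward(self, x):
        h = self.features(x).flatten(1)
        h = torch.relu(self.fc1(h))
        return self.fc2(h)

model = KeyPressCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)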
" }, { "heading": "A.5 REAL-LIFE SINGLE KEY PRESS CLASSIFIER", "text": "When extracting the visual features for the real-life videos, we cannot use a feature extractor that was trained only on synthetic data. There is a distribution shift between the synthetic and real-life data, so the features we extract would not be informative. Instead of using a φ(·) that was trained for single keypress classification on just synthetic data, we train φ(·) with a combination of synthetic and real-life data. Specifically, we adopt the ADDA (Tzeng et al., 2017) framework for unsupervised domain adaptation to train φ(·).\nData Preparation We treat the individual frames of all the videos in our real-life training set as unlabeled data. Even though we do not have labels for individual keypresses for real-life data, we can leverage the fact that we have abundant labels for synthetic data by adopting the unsupervised domain adaptation technique ADDA.\nTraining Details and Results We use the CNN for single key press classification on synthetic data as our pretrained network. The Discriminator is a 1-layer, 128-dimensional fully connected layer followed by a sigmoid. We follow the same guidelines to train ADDA as the original paper (Tzeng et al., 2017), and refer the reader to this work for the full description of their training process. We use the Adam optimizer and a learning rate of 1e−3 for both the Discriminator and the CNN." }, { "heading": "A.6 NETWORK ARCHITECTURES", "text": "For all experiments, the Encoders (EC, ES) and Decoders (DC, G) are Transformers with 4 layers, 4 attention heads, an embedding size of 128, and a hidden size of 256. DM and DS are both 1-layer fully connected layers. Since the output of the Encoder is a sequence of n continuous representations, where n is the input sequence length, we perform a max pooling operation along the temporal dimension to obtain a fixed vector representation. These fixed vector representations are the direct inputs to DM and DS. The max sequence length is set to 300, and the max phrase length is set to 70. If an input sequence has more than 300 frames, we randomly sample 300 frames at each epoch. If a video in the testing or validation set has more than 300 frames, we fix the indices of the sampled frames to remove any randomness for evaluation. For input sequences that are shorter than 300 frames, we zero-pad the remaining sequence. A sketch of this sampling and padding step is given below.
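A minimal sketch of this sampling and padding step; the feature dimension and function names are placeholders.

import torch

MAX_FRAMES = 300

def sample_or_pad(video, train=True, seed=0):
    # video: [num_frames, 128] per-frame feature sequence.
    n, d = video.shape
    if n > MAX_FRAMES:
        gen = None if train else torch.Generator().manual_seed(seed)
        idx = torch.randperm(n, generator=gen)[:MAX_FRAMES].sort().values
        return video[idx]  # random subsample; indices fixed for evaluation
    return torch.cat([video, torch.zeros(MAX_FRAMES - n, d)])  # zero-pad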
" }, { "heading": "A.7 ADDITIONAL TRAINING DETAILS - SYNTHETIC SEQ2SEQ", "text": "We use a dropout value of p = 0.1 and the Adam optimizer with a learning rate of 1e−4. We train the network for 400 epochs." }, { "heading": "A.8 ADDITIONAL TRAINING DETAILS - FINETUNING", "text": "We use the φ(·) trained on a combination of real-life and synthetic data described in A.5. Unlike for the synthetic data, we do not extract a fixed d-dimensional representation for every frame of the real-life videos as a preprocessing step to reduce computational costs. Since we are dealing with so few real-life videos, handling raw videos is not as computationally expensive. Doing this lets us apply data augmentation techniques in the image space rather than the feature space. Also, we can continue to adjust the weights of φ(·). Table 3 shows the various configurations we experimented with for finetuning. We experimented with different combinations of freezing networks and tuning learning rates. We were unable to discover any combination that did not overfit to the training data. For all experiments, we use the Adam optimizer." }, { "heading": "A.9 ADDITIONAL TRAINING DETAILS - ADDA", "text": "ADDA (Tzeng et al., 2017) is an unsupervised domain adaptation method that learns a feature representation that is domain invariant and discriminative. While this framework was originally proposed for UDA, it can be easily extended to a supervised domain adaptation scenario. To follow the ADDA framework, we initialize the weights of our Encoder and Decoder with the ones pretrained on synthetic data. Then, we train on mini-batches of both synthetic and real-life videos such that the Decoder can predict the input video’s labels, but the Encoder produces representations that are invariant to domain, i.e., synthetic vs. real. We train in an adversarial fashion, where the Discriminator is trained to successfully predict the domain of its input and the Encoder is trained to spoof the Discriminator. The Discriminator used in this experiment is the same one we use for our method, DS. To get the input for DS, we first perform a maxpool operation on the output of the Encoder. We also tried using a 1-layer LSTM with a hidden size of 256 as our Discriminator. For the LSTM Discriminator, the input is the entire sequence of the Transformer output — no pooling. The output of the last time step is used as input to a linear layer followed by a sigmoid for binary classification. Table 4 highlights the different hyperparameters we tried to maximize performance." }, { "heading": "A.10 ALGORITHM BLOCK", "text": "Algorithm 2: Learning Algorithm for ADDA." }, { "heading": "Input: EC, G, DS; datasets DS (synthetic), DT (real-life)", "text": "Result: Well-trained Encoder ÊC and Well-trained Decoder Ĝ\n1 while Not Converged do\n2 sample mini-batch of b synthetic samples, {(X_1^s, Y_1^s), . . . , (X_b^s, Y_b^s)} from DS\n3 sample mini-batch of b real-life samples, {(X_1^t, Y_1^t), . . . , (X_b^t, Y_b^t)} from DT\n4 Domain Invariance: Make Encoder outputs domain invariant\n5 update DS and EC with LAdvDS\n6 Sequence Prediction: Make Encoder outputs discriminative\n7 update EC and G with Lcls\n8 end\n9 return ÊC = EC, Ĝ = G
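The following is a minimal sketch of the adversarial update in lines 4–7, assuming encoder outputs of shape [batch, time, features] and a discriminator that outputs one logit per example; the network and optimizer names are placeholders.

import torch
import torch.nn.functional as F

def adversarial_step(encoder, discriminator, opt_d, opt_e, x_syn, x_real):
    # Discriminator step: predict the domain of max-pooled encoder outputs.
    f_syn = encoder(x_syn).max(dim=1).values.detach()
    f_real = encoder(x_real).max(dim=1).values.detach()
    logits = torch.cat([discriminator(f_syn), discriminator(f_real)])
    labels = torch.cat([torch.ones(len(f_syn), 1), torch.zeros(len(f_real), 1)])
    d_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder step: spoof the discriminator on real-life inputs (flipped labels).
    f_real = encoder(x_real).max(dim=1).values
    e_loss = F.binary_cross_entropy_with_logits(
        discriminator(f_real), torch.ones(len(f_real), 1))
    opt_e.zero_grad(); e_loss.backward(); opt_e.step()
    return d_loss.item(), e_loss.item()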
}, { "heading": "A.12 ADDITIONAL TRAINING DETAILS - OURS", "text": "We use the Adam optimizer with learning rate of 1e−4 for all of our networks. We train for 60k iterations with a batch size of 8, and use a dropout value of 0.15. We use the validation set to tune the\ndifferent weightings for our loss function. We set λ1 = 1.0, λ2 = 0.25, λ3 = 1.0, λ4 = 1.0, λ5 = 1.0 and λ6 = 1.0." } ]
2020
DISENTANGLING STYLE AND CONTENT FOR LOW RESOURCE VIDEO DOMAIN ADAPTATION: A CASE STUDY ON KEYSTROKE INFERENCE ATTACKS
SP:181ce6eaacf4be8ede3fbdd82c63200278f63cc4
[ "The paper considers the problem of approximating Sinkhorn divergence and corresponding transportation plan by combining low-rank and sparse approximation for the Sinkhorn kernel and using Nystrom iterations as a substitute for Sinkhorn's iterations. The corresponding approach is amenable to differentiation and can be used as a building block in different architectures. Numerical experiments in several settings are performed to compare the proposed approach with existing ones and demonstrate its scalability." ]
Optimal transport (OT) is a cornerstone of many machine learning tasks. The current best practice for computing OT is via entropy regularization and Sinkhorn iterations. This algorithm runs in quadratic time and requires calculating the full pairwise cost matrix, which is prohibitively expensive for large sets of objects. To alleviate this limitation we propose to instead use a sparse approximation of the cost matrix based on locality sensitive hashing (LSH). Moreover, we fuse this sparse approximation with the Nyström method, resulting in the locally corrected Nyström method (LCN). These approximations enable general log-linear time algorithms for entropy-regularized OT that perform well even in complex, high-dimensional spaces. We thoroughly demonstrate these advantages via a theoretical analysis and by evaluating multiple approximations both directly and as a component of two real-world models. Using approximate Sinkhorn for unsupervised word embedding alignment enables us to train the model full-batch in a fraction of the time while improving upon the original on average by 3.1 percentage points without any model changes. For graph distance regression we propose the graph transport network (GTN), which combines graph neural networks (GNNs) with enhanced Sinkhorn and outcompetes previous models by 48 %. LCN-Sinkhorn enables GTN to achieve this while still scaling log-linearly in the number of nodes.
[]
[ { "authors": [ "Pierre Ablin", "Gabriel Peyré", "Thomas Moreau" ], "title": "Super-efficiency of automatic differentiation for functions defined as a minimum", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Mokhtar Z. Alaya", "Maxime Berar", "Gilles Gasso", "Alain Rakotomamonjy" ], "title": "Screening Sinkhorn Algorithm for Regularized Optimal Transport", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Jason Altschuler", "Francis Bach", "Alessandro Rudi", "Jonathan Niles-Weed" ], "title": "Massively scalable Sinkhorn distances via the Nyström method", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "David Alvarez-Melis", "Tommi S. Jaakkola" ], "title": "Gromov-Wasserstein Alignment of Word Embedding Spaces", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Alexandr Andoni", "Piotr Indyk", "Robert Krauthgamer" ], "title": "Earth mover distance over high-dimensional spaces", "venue": "In ACM-SIAM symposium on Discrete algorithms (SODA),", "year": 2008 }, { "authors": [ "Alexandr Andoni", "Piotr Indyk", "Thijs Laarhoven", "Ilya P. Razenshteyn", "Ludwig Schmidt" ], "title": "Practical and Optimal LSH for Angular Distance", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein Generative Adversarial Networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Arturs Backurs", "Yihe Dong", "Piotr Indyk", "Ilya Razenshteyn", "Tal Wagner" ], "title": "Scalable Nearest Neighbor Search for Optimal Transport", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Yunsheng Bai", "Hao Ding", "Yizhou Sun", "Wei Wang" ], "title": "Convolutional Set Matching for Graph Similarity", "venue": "In NeurIPS-W,", "year": 2018 }, { "authors": [ "Yunsheng Bai", "Hao Ding", "Song Bian", "Ting Chen", "Yizhou Sun", "Wei Wang" ], "title": "SimGNN: A Neural Network Approach to Fast Graph Similarity Computation", "venue": null, "year": 2019 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation", "venue": null, "year": 2013 }, { "authors": [ "Christian Berg", "Jens Peter Reus Christensen", "Paul Ressel" ], "title": "Harmonic Analysis on Semigroups", "venue": "Number 100 in Graduate Texts in Mathematics", "year": 1984 }, { "authors": [ "Kristian Birchall", "Valerie J. Gillet", "Gavin Harper", "Stephen D. Pickett" ], "title": "Training Similarity Measures for Specific Activities: Application to Reduced Graphs", "venue": "Journal of Chemical Information and Modeling,", "year": 2006 }, { "authors": [ "Mathieu Blondel", "Vivien Seguy", "Antoine Rolet" ], "title": "Smooth and Sparse Optimal Transport", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "Olivier Bousquet", "Sylvain Gelly", "Ilya Tolstikhin", "Carl-Johann Simon-Gabriel", "Bernhard Schoelkopf" ], "title": "From optimal transport to generative modeling: the VEGAN", "venue": "cookbook. 
arXiv,", "year": 2017 }, { "authors": [ "Horst Bunke", "Kim Shearer" ], "title": "A graph distance metric based on the maximal common subgraph", "venue": "Pattern Recognition Letters,", "year": 1998 }, { "authors": [ "Moses Charikar" ], "title": "Similarity estimation techniques from rounding algorithms", "venue": "In ACM symposium on Theory of computing (STOC),", "year": 2002 }, { "authors": [ "Lénaïc Chizat", "Gabriel Peyré", "Bernhard Schmitzer", "François-Xavier Vialard" ], "title": "Scaling algorithms for unbalanced optimal transport problems", "venue": "Mathematics of Computation,", "year": 2018 }, { "authors": [ "Henry Cohn" ], "title": "A Conceptual Breakthrough in Sphere Packing", "venue": "Notices of the American Mathematical Society,", "year": 2017 }, { "authors": [ "Alexis Conneau", "Guillaume Lample", "Marc’Aurelio Ranzato", "Ludovic Denoyer", "Hervé Jégou" ], "title": "Word translation without parallel data", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Nicolas Courty", "Rémi Flamary", "Amaury Habrard", "Alain Rakotomamonjy" ], "title": "Joint distribution optimal transportation for domain adaptation", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn Distances: Lightspeed Computation of Optimal Transport", "venue": "In NeurIPS,", "year": 2013 }, { "authors": [ "Pavel E. Dvurechensky", "Alexander Gasnikov", "Alexey Kroshnin" ], "title": "Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn’s Algorithm", "venue": null, "year": 2018 }, { "authors": [ "Montacer Essid", "Justin Solomon" ], "title": "Quadratically Regularized Optimal Transport on Graphs", "venue": "SIAM Journal on Scientific Computing,", "year": 2018 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast Graph Representation Learning with PyTorch Geometric", "venue": "In ICLR-W,", "year": 2019 }, { "authors": [ "Aden Forrow", "Jan-Christian Hütter", "Mor Nitzan", "Philippe Rigollet", "Geoffrey Schiebinger", "Jonathan Weed" ], "title": "Statistical Optimal Transport via Factored Couplings", "venue": null, "year": 2019 }, { "authors": [ "Charlie Frogner", "Chiyuan Zhang", "Hossein Mobahi", "Mauricio Araya-Polo", "Tomaso A. Poggio" ], "title": "Learning with a Wasserstein Loss", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Aude Genevay", "Marco Cuturi", "Gabriel Peyré", "Francis R. Bach" ], "title": "Stochastic Optimization for Large-scale Optimal Transport", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Aude Genevay", "Gabriel Peyré", "Marco Cuturi" ], "title": "Learning Generative Models with Sinkhorn Divergences", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "Samuel Gerber", "Mauro Maggioni" ], "title": "Multiscale Strategies for Computing Optimal Transport", "venue": "J. Mach. Learn. Res.,", "year": 2017 }, { "authors": [ "Edouard Grave", "Armand Joulin", "Quentin Berthet" ], "title": "Unsupervised Alignment of Embeddings with Wasserstein Procrustes", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "Paul L. Houston", "Apurba Nandi", "Joel M. 
Bowman" ], "title": "A Machine Learning Approach for Prediction of Rate Constants", "venue": "The Journal of Physical Chemistry Letters,", "year": 2019 }, { "authors": [ "Piotr Indyk", "Nitin Thaper" ], "title": "Fast image retrieval via embeddings", "venue": "In ICCV-W,", "year": 2003 }, { "authors": [ "Arun Jambulapati", "Aaron Sidford", "Kevin Tian" ], "title": "A Direct tilde{O}(1/epsilon) Iteration Parallel Algorithm for Optimal Transport", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Justin Johnson", "Ranjay Krishna", "Michael Stark", "Li-Jia Li", "David A. Shamma", "Michael S. Bernstein", "Fei-Fei Li" ], "title": "Image retrieval using scene graphs", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Armand Joulin", "Piotr Bojanowski", "Tomas Mikolov", "Hervé Jégou", "Edouard Grave" ], "title": "Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion", "venue": null, "year": 2018 }, { "authors": [ "Sofia Ira Ktena", "Sarah Parisot", "Enzo Ferrante", "Martin Rajchl", "Matthew Lee", "Ben Glocker", "Daniel Rueckert" ], "title": "Distance Metric Learning Using Graph Convolutional Networks: Application to Functional Brain Networks", "venue": "In MICCAI,", "year": 2017 }, { "authors": [ "Julien Lerouge", "Zeina Abu-Aisheh", "Romain Raveaux", "Pierre Héroux", "Sébastien Adam" ], "title": "New binary linear programming formulation to compute the graph edit distance", "venue": "Pattern Recognit.,", "year": 2017 }, { "authors": [ "Yujia Li", "Chenjie Gu", "Thomas Dullien", "Oriol Vinyals", "Pushmeet Kohli" ], "title": "Graph Matching Networks for Learning the Similarity of Graph Structured Objects", "venue": null, "year": 2019 }, { "authors": [ "Hermina Petric Maretic", "Mireille El Gheche", "Giovanni Chierchia", "Pascal Frossard" ], "title": "GOT: An Optimal Transport framework for Graph comparison", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Cheng Meng", "Yuan Ke", "Jingyi Zhang", "Mengrui Zhang", "Wenxuan Zhong", "Ping Ma" ], "title": "Large-scale optimal transport map estimation using projection pursuit", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Cameron Musco", "Christopher Musco" ], "title": "Recursive Sampling for the Nystrom Method", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Michel Neuhaus", "Horst Bunke" ], "title": "An Error-Tolerant Approximate Matching Algorithm for Attributed Planar Graphs and Its Application to Fingerprint Classification", "venue": "In Structural, Syntactic, and Statistical Pattern Recognition,", "year": 2004 }, { "authors": [ "Giannis Nikolentzos", "Polykarpos Meladianos", "Michalis Vazirgiannis" ], "title": "Matching Node Embeddings for Graph Similarity", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "David Nistér", "Henrik Stewénius" ], "title": "Scalable Recognition with a Vocabulary Tree", "venue": "In CVPR,", "year": 2006 }, { "authors": [ "Nicolas Papadakis", "Gabriel Peyré", "Édouard Oudet" ], "title": "Optimal Transport with Proximal Splitting", "venue": "SIAM J. 
Imaging Sciences,", "year": 2014 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Köpf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Loïc Paulevé", "Hervé Jégou", "Laurent Amsaleg" ], "title": "Locality sensitive hashing: A comparison of hash function types and querying mechanisms", "venue": "Pattern Recognit. Lett.,", "year": 2010 }, { "authors": [ "Allon G Percus", "Olivier C Martin" ], "title": "Scaling Universalities ofkth-Nearest Neighbor Distances on Closed Manifolds", "venue": "Advances in Applied Mathematics,", "year": 1998 }, { "authors": [ "Gabriel Peyré", "Marco Cuturi" ], "title": "Computational Optimal Transport", "venue": "Foundations and Trends in Machine Learning,", "year": 2019 }, { "authors": [ "Julien Rabin", "Gabriel Peyré", "Julie Delon", "Marc Bernot" ], "title": "Wasserstein Barycenter and Its Application to Texture Mixing", "venue": "In Scale Space and Variational Methods in Computer Vision (SSVM),", "year": 2011 }, { "authors": [ "Pau Riba", "Andreas Fischer", "Josep Lladós", "Alicia Fornés" ], "title": "Learning Graph Distances with Message Passing Neural Networks", "venue": "In ICPR,", "year": 2018 }, { "authors": [ "Kaspar Riesen", "Horst Bunke" ], "title": "IAM Graph Database Repository for Graph Based Pattern Recognition and Machine Learning", "venue": "In Structural, Syntactic, and Statistical Pattern Recognition,", "year": 2008 }, { "authors": [ "Kaspar Riesen", "Horst Bunke" ], "title": "Approximate graph edit distance computation by means of bipartite graph matching", "venue": "Image Vis. Comput.,", "year": 2009 }, { "authors": [ "Sebastian Ruder", "Ivan Vulic", "Anders Søgaard" ], "title": "A Survey of Cross-lingual Word Embedding Models", "venue": "J. Artif. Intell. Res.,", "year": 2019 }, { "authors": [ "Bernhard Schmitzer" ], "title": "Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems", "venue": "SIAM Journal on Scientific Computing,", "year": 2019 }, { "authors": [ "Anshumali Shrivastava", "Ping Li" ], "title": "Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Justin Solomon", "Raif M. Rustamov", "Leonidas J. Guibas", "Adrian Butscher" ], "title": "Earth mover’s distances on discrete surfaces", "venue": "ACM Trans. Graph.,", "year": 2014 }, { "authors": [ "Justin Solomon", "Fernando de Goes", "Gabriel Peyré", "Marco Cuturi", "Adrian Butscher", "Andy Nguyen", "Tao Du", "Leonidas J. Guibas" ], "title": "Convolutional wasserstein distances: efficient optimal transportation on geometric domains", "venue": "ACM Trans. Graph.,", "year": 2015 }, { "authors": [ "Robert E. Tarjan" ], "title": "Dynamic trees as search trees via euler tours, applied to the network simplex algorithm", "venue": "Mathematical Programming,", "year": 1997 }, { "authors": [ "Evgeny Tenetov", "Gershon Wolansky", "Ron Kimmel" ], "title": "Fast Entropic Regularized Optimal Transport Using Semidiscrete Cost Approximation", "venue": "SIAM J. Sci. 
Comput.,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is All you Need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Titouan Vayer", "Nicolas Courty", "Romain Tavenard", "Laetitia Chapel", "Rémi Flamary" ], "title": "Optimal Transport for structured data with application on graphs", "venue": null, "year": 2019 }, { "authors": [ "Jingdong Wang", "Heng Tao Shen", "Jingkuan Song", "Jianqiu Ji" ], "title": "Hashing for Similarity Search: A Survey", "venue": null, "year": 2014 }, { "authors": [ "Runzhong Wang", "Junchi Yan", "Xiaokang Yang" ], "title": "Learning Combinatorial Embedding Networks for Deep Graph Matching", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Christopher K.I. Williams", "Matthias Seeger" ], "title": "Using the Nyström Method to Speed Up Kernel Machines", "venue": "In NeurIPS,", "year": 2001 }, { "authors": [ "Bing Xiao", "Xinbo Gao", "Dacheng Tao", "Xuelong Li" ], "title": "HMM-based graph edit distance for image indexing", "venue": "Int. J. Imaging Systems and Technology,", "year": 2008 }, { "authors": [ "Kai Zhang", "Ivor W. Tsang", "James T. Kwok" ], "title": "Improved Nyström low-rank approximation and error analysis", "venue": "In ICML,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Measuring the distance between two distributions or sets of objects is a central problem in machine learning. One common method of solving this is optimal transport (OT). OT is concerned with the problem of finding the transport plan for moving a source distribution (e.g. a pile of earth) to a sink distribution (e.g. a construction pit) with the cheapest cost w.r.t. some pointwise cost function (e.g. the Euclidean distance). The advantages of this method have been shown numerous times, e.g. in generative modelling (Arjovsky et al., 2017; Bousquet et al., 2017; Genevay et al., 2018), loss functions (Frogner et al., 2015), set matching (Wang et al., 2019), or domain adaptation (Courty et al., 2017). Motivated by this, many different methods for accelerating OT have been proposed in recent years (Indyk & Thaper, 2003; Papadakis et al., 2014; Backurs et al., 2020). However, most of these approaches are specialized methods that do not generalize to modern deep learning models, which rely on dynamically changing high-dimensional embeddings.\nIn this work we aim to make OT computation for point sets more scalable by proposing two fast and accurate approximations of entropy-regularized optimal transport: Sparse Sinkhorn and LCNSinkhorn, the latter relying on our newly proposed locally corrected Nyström (LCN) method. Sparse Sinkhorn uses a sparse cost matrix to leverage the fact that in entropy-regularized OT (also known as the Sinkhorn distance) (Cuturi, 2013) often only each point’s nearest neighbors influence the result. LCN-Sinkhorn extends this approach by leveraging LCN, a general similarity matrix approximation that fuses local (sparse) and global (low-rank) approximations, allowing us to simultaneously capture both kinds of behavior. LCN-Sinkhorn thus fuses sparse Sinkhorn and Nyström-Sinkhorn (Altschuler et al., 2019). Both sparse Sinkhorn and LCN-Sinkhorn run in log-linear time.\nWe theoretically analyze these approximations and show that sparse corrections can lead to significant improvements over the Nyström approximation. We furthermore validate these approximations by showing that they are able to reproduce both the Sinkhorn distance and transport plan significantly better than previous methods across a wide range of regularization parameters and computational\nbudgets (as e.g. demonstrated in Fig. 1). We then show the impact of these improvements by employing Sinkhorn approximations end-to-end in two high-impact machine learning tasks. First, we incorporate them into Wasserstein Procrustes for word embedding alignment (Grave et al., 2019). LCN-Sinkhorn improves upon the original method’s accuracy by 3.1 percentage points using a third of the training time without any further model changes. Second, we develop the graph transport network (GTN), which combines graph neural networks (GNNs) with optimal transport, and further improve it via learnable unbalanced OT and multi-head OT. GTN with LCN-Sinkhorn is the first model that both overcomes the bottleneck of using a single embedding per graph and scales log-linearly in the number of nodes. In summary, our paper’s main contributions are:\n• Locally Corrected Nyström (LCN), a flexible, log-linear time approximation for similarity matrices, leveraging both local (sparse) and global (low-rank) approximations.\n• Entropy-regularized optimal transport (a.k.a. Sinkhorn distance) with log-linear runtime via sparse Sinkhorn and LCN-Sinkhorn. 
These are the first log-linear approximations that are stable enough to substitute full entropy-regularized OT in models that leverage high-dimensional spaces.\n• The graph transport network (GTN), which combines a graph neural network (GNN) with multihead unbalanced LCN-Sinkhorn. GTN both sets the state of the art on graph distance regression and still scales log-linearly in the number of nodes." }, { "heading": "2 SPARSE SINKHORN", "text": "Entropy-regularized optimal transport. In this work we focus on optimal transport between two discrete sets of points. We furthermore add entropy regularization, which enables fast computation and often performs better than regular OT (Cuturi, 2013). Formally, given two categorical distributions modelled via the vectors p ∈ R^n and q ∈ R^m supported on two sets of points Xp = {x_p1, . . . , x_pn} and Xq = {x_q1, . . . , x_qm} in R^d and the cost function c : R^d × R^d → R (e.g. the squared L2 distance) giving rise to the cost matrix C_ij = c(x_pi, x_qj), we aim to find the Sinkhorn distance d^λ_c and the associated optimal transport plan P̄ (Cuturi, 2013)\nd^λ_c = min_P 〈P, C〉_F − λ H(P), s.t. P 1_m = p, P^T 1_n = q, (1)\nwith the Frobenius inner product 〈., .〉_F and the entropy H(P) = −Σ_{i=1}^n Σ_{j=1}^m P_ij log P_ij. Note that d^λ_c includes the entropy and can thus be negative, while Cuturi (2013) originally used d^{1/λ}_{Cuturi,c} = 〈P̄, C〉_F. This optimization problem can be solved by finding the vectors s̄ and t̄ that normalize the columns and rows of the matrix P̄ = diag(s̄) K diag(t̄) with the similarity matrix K_ij = e^{−C_ij/λ}, so that P̄ 1_m = p and P̄^T 1_n = q. This is usually achieved via the Sinkhorn algorithm, which initializes the normalization vectors as s^(1) = 1_n and t^(1) = 1_m and then updates them alternatingly via\ns^(i) = p ⊘ (K t^(i−1)), t^(i) = q ⊘ (K^T s^(i)) (2)\nuntil convergence, where ⊘ denotes elementwise division. Sparse Sinkhorn. The Sinkhorn algorithm is faster than non-regularized EMD algorithms, which run in O(n^2 m log n log(n max(C))) (Tarjan, 1997). However, its computational cost is still quadratic in time, i.e. O(nm), which is prohibitively expensive for large n and m. We propose to overcome this by observing that the entries of the matrix K, and hence also of P̄, are negligibly small everywhere except at each point’s closest neighbors because of the exponential used in K’s computation. We propose to leverage this by approximating C via the sparse matrix C^sp, where\nC^sp_ij = { C_ij if x_pi and x_qj are “near”; ∞ otherwise. } (3)\nK^sp and P̄^sp follow according to the definitions of K and P̄. In this work we primarily consider neighbors with distance lower than r1 as “near”. Such neighbors can be found efficiently via locality sensitive hashing (LSH) on Xp ∪ Xq. Locality sensitive hashing. LSH tries to filter “near” from “far” data points by putting them into different hash buckets. Points closer than a certain distance r1 are put into the same bucket with probability at least p1, while those beyond some distance r2 = c · r1 with c > 1 are put into the same bucket with probability at most p2 ≪ p1. There is a plethora of LSH methods for different cost functions (Wang et al., 2014; Shrivastava & Li, 2014), so we do not have to restrict our approach to a limited set of functions. In this work we focus on cross-polytope LSH (Andoni et al., 2015) and k-means LSH (Paulevé et al., 2010), depending on the cost function (see App. H). Sparse Sinkhorn with LSH scales log-linearly with the number of points, i.e. O(n log n) for n ≈ m (see App. A and App. K for details). Unfortunately, LSH can fail when e.g. the cost between pairs is very similar (see App. B). However, we can alleviate these limitations by fusing K^sp with the Nyström approximation.
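As a minimal sketch, the updates in Eq. (2) can be run directly on a sparse kernel matrix; here the LSH step is abstracted into a precomputed sparse cost matrix (SciPy CSR), and the returned cost omits the entropy term of d^λ_c.

import numpy as np
import scipy.sparse as sp

def sparse_sinkhorn(C_sp, p, q, lam=0.05, n_iter=100):
    # C_sp: CSR cost matrix holding only "near" pairs (Eq. 3); entries that are
    # not stored correspond to infinite cost, i.e. K_ij = 0.
    K = C_sp.copy()
    K.data = np.exp(-K.data / lam)
    t = np.ones(K.shape[1])
    for _ in range(n_iter):          # Eq. (2), with all operations sparse
        s = p / (K @ t)
        t = q / (K.T @ s)
    P = sp.diags(s) @ K @ sp.diags(t)
    cost = P.multiply(C_sp).sum()    # <P, C>_F; the entropy term is omitted here
    return P, cost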
" }, { "heading": "3 LOCALLY CORRECTED NYSTRÖM AND LCN-SINKHORN", "text": "Nyström method. The Nyström method is a popular way of approximating similarity matrices that provides performance guarantees for many important tasks (Williams & Seeger, 2001; Musco & Musco, 2017). It approximates a positive semi-definite (PSD) similarity matrix K via the low-rank decomposition K_Nys = U A^{−1} V. Since the optimal decomposition via SVD is too expensive to compute, Nyström instead chooses a set of l landmarks L = {x_l1, . . . , x_ll} and obtains the matrices via U_ij = k(x_pi, x_lj), A_ij = k(x_li, x_lj), and V_ij = k(x_li, x_qj), where k(x1, x2) is an arbitrary PSD kernel, e.g. k(x1, x2) = e^{−c(x1, x2)/λ} for Sinkhorn. Common methods of choosing landmarks from Xp ∪ Xq are uniform and ridge leverage score (RLS) sampling. We instead focus on k-means Nyström and sampling via k-means++, which we found to be significantly faster than recursive RLS sampling (Zhang et al., 2008) and to perform better than both uniform and RLS sampling (see App. H).\nSparse vs. Nyström. Exponential kernels like the one used for K (e.g. the Gaussian kernel) typically have a reproducing kernel Hilbert space that is infinite-dimensional. The resulting Gram matrix K thus always has full rank. A low-rank approximation like the Nyström method can therefore only account for its global structure and not the local structure around each point x. As such, it is ill-suited for any moderately low entropy regularization parameter, where the transport matrix P̄ resembles a permutation matrix. Sparse Sinkhorn, on the other hand, cannot account for global structure and instead approximates all non-selected distances as infinity. It will hence fail if more than a handful of neighbors are required per point. These approximations are thus opposites of each other, and as such not competing but rather complementary approaches.\nLocally corrected Nyström. Since we know that the entries in our sparse approximation are exact, fusing this matrix with the Nyström method is rather straightforward. For all non-zero values in the sparse approximation K^sp we first calculate the corresponding Nyström approximations, obtaining the sparse matrix K^sp_Nys. To obtain the locally corrected Nyström (LCN) approximation we remove these entries from K_Nys and replace them with their exact values, i.e.\nK_LCN = K_Nys + K^sp_Δ = K_Nys − K^sp_Nys + K^sp. (4)\nLCN-Sinkhorn. To obtain the approximate transport plan P̄_LCN we run the Sinkhorn algorithm with K_LCN instead of K. However, we never fully instantiate K_LCN. Instead, we only save the decomposition and directly use these parts in Eq. (2) via K_LCN t = U(A^{−1} V t) + K^sp_Δ t, similarly to Altschuler et al. (2019). As a result we obtain the decomposition of P̄_LCN = P̄_Nys + P̄^sp_Δ = P̄_U P̄_W + P̄^sp − P̄^sp_Nys and the approximate distance (using Lemma A from Altschuler et al. (2019))\nd^λ_{LCN,c} = λ (s^T P̄_U P̄_W 1_m + 1_n^T P̄_U P̄_W t + s^T P̄^sp_Δ 1_m + 1_n^T P̄^sp_Δ t). (5)\nThis approximation scales log-linearly with dataset size (see App. A and App. K for details). It allows us to smoothly move from Nyström-Sinkhorn to sparse Sinkhorn by varying the number of neighbors and landmarks. We can thus freely choose the optimal “operating point” based on the underlying problem and regularization parameter. We discuss the limitations of LCN-Sinkhorn in App. B.
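A minimal sketch of the decomposed matrix–vector product behind LCN-Sinkhorn, which never materializes K_LCN; all names and shapes are placeholders, and in practice the sparse Nyström part would hold the Nyström values on the sparsity pattern of K^sp.

import numpy as np
import scipy.sparse as sp

def lcn_matvec(U, A_inv, V, K_sp, K_sp_nys, t):
    # K_LCN t = U A^{-1} (V t) + (K^sp - K^sp_Nys) t, without building K_LCN.
    return U @ (A_inv @ (V @ t)) + K_sp @ t - K_sp_nys @ t

# Placeholder shapes: U (n x l), A_inv (l x l), V (l x m), sparse parts (n x m).
n, m, l = 1000, 800, 20
rng = np.random.default_rng(0)
U, A_inv, V = rng.random((n, l)), np.eye(l), rng.random((l, m))
K_sp = sp.random(n, m, density=0.005, format="csr", random_state=0)
K_sp_nys = K_sp.copy()  # placeholder for the Nystrom values on the same pattern
out = lcn_matvec(U, A_inv, V, K_sp, K_sp_nys, rng.random(m))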
}, { "heading": "4 THEORETICAL ANALYSIS", "text": "Approximation error. The main question we aim to answer in our theoretical analysis is what improvements to expect from adding sparse corrections to Nyström Sinkhorn. To do so, we first analyse approximations ofK in a uniform and a clustered data model. In these we use Nyström and LSH schemes that largely resemble k-means, as used in most of our experiments. Relevant proofs and notes for this section can be found in App. C to G. Theorem 1. Let Xp and Xq have n samples that are uniformly distributed in a d-dimensional closed, locally Euclidean manifold with unit volume. Let furthermore Cij = ‖xpi − xqj‖2 and Kij = e\n−Cij/λ. Let the l landmarks L be arranged optimally and regularly so that the expected L2 distance to the closest landmark is minimized. Denote R = 12 minx,y∈L,x6=y ‖x− y‖2. Assume that the sparse correctionKspij = Kij if and only if xqj is one of the k − 1 nearest neighbors of xpi, and that the distance to xpi’s k-nearest neighbor δk R. Then the expected maximum error in row i of the LCN approximationKLCN is\nE[‖Ki,: −KLCN,i,:‖∞] = E[e−δk/λ]− E[KLCN,i,j ], (6) with j denoting the index of xpi’s k-nearest neighbor. Using the upper incomplete Gamma function Γ(., .) we can furthermore bound the second term by\ne− √ dR/λ ≤ E[KLCN,i,j ] ≤ 2d(Γ(d)− Γ(d, 2R/λ)) (2R/λ)d(1 + e−2R/λ) +O(e−2 √ 3R/λ). (7)\nThe error in Eq. (6) is dominated by the first term since δk R. Note that R only decreases slowly with the number of landmarks since R ≥ ( (d/2)!l ) 1/d 1 2 √ π\n(Cohn, 2017). Moving from pure Nyström to LCN by correcting the nearest neighbors’ entries thus provides significant benefits, even for uniform data. For example, by just correcting the first neighbor we obtain a 68 % improvement in the first term (d=32, λ=0.05, n=1000). This is even more pronounced in clustered data. Theorem 2. Let Xp, Xq ⊆ Rd be distributed inside the same c clusters with cluster centers xc. Let r be the maximum L2 distance of a point to its cluster center and D the minimum distance between two points from different clusters, with r D. Let each LSH bucket used for the sparse approximation Ksp cover at least one cluster. Let KNys use 1 ≤ l ≤ d and KLCN use l = 1 optimally distributed landmarks per cluster. Then the maximum error is\nmax ‖K −KNys‖∞ = 1− max ∆∈[0,r]\nle−2 √ r2+ l−12l ∆ 2/λ 1 + (l − 1)e−∆/λ −O(e−D/λ), (8)\nmax ‖K −Ksp‖∞ = e−D/λ, (9) max ‖K −KLCN‖∞ = e−D/λ(1− e−2r/λ(2− e−2r/λ) +O(e−D/λ)). (10)\nSince we can lower bound Eq. (8) by 1 − le−2r/λ − O(e−D/λ) we can conclude that the error in KNys is close to 1 for any reasonably large rλ (which is the maximum error possible). The errors in Ksp andKLCN on the other hand are vanishingly small, since r D. Moreover, these maximum approximation error improvements directly translate to improvements in the Sinkhorn approximation. We can show this by slightly adapting the error bounds for an approximate Sinkhorn transport plan and distance due to Altschuler et al. (2019). Theorem 3 (Altschuler et al. (2019)). Let Xp, Xq ⊆ Rd have n samples. Denote ρ as the maximum distance between two samples. Let K̃ be an approximation of the similarity matrixK withKij = e−‖xpi−xqj‖2/λ and ‖K̃ −K‖∞ ≤ ε ′ 2 e −ρ/λ, where ε′ = min(1, ε\n50(ρ+λ log λnε ) ). When performing\nthe Sinkhorn algorithm until ‖P̃1N − p‖1 + ‖P̃ T1N − q‖1 ≤ ε′/2, the resulting approximate transport plan P̃ and distance d̃λc are bounded by\n|dλc − d̃λc̃ | ≤ ε, DKL(P̄ ‖P̃ ) ≤ ε/λ. (11)\nConvergence rate. 
We next show that approximate Sinkhorn converges as fast as regular Sinkhorn by slightly adapting the convergence bound by Dvurechensky et al. (2018) to account for sparsity.\nTheorem 4 (Dvurechensky et al. (2018)). Given the matrix K̃ ∈ R^{n×n} and p, q, the Sinkhorn algorithm gives a transport plan satisfying ‖P̃ 1_N − p‖_1 + ‖P̃^T 1_N − q‖_1 ≤ ε in a number of iterations\nk ≤ 2 + (−4 ln(min_{i,j}{K̃_ij | K̃_ij > 0} · min_{i,j}{p_i, q_j})) / ε. (12)\nBackpropagation. Efficient gradient computation is almost as important for modern deep learning models as the algorithm itself. These models usually aim at learning the embeddings in Xp and Xq and therefore need gradients w.r.t. the cost matrix C. We can estimate these either via automatic differentiation of the unrolled Sinkhorn iterations or via the analytic solution that assumes exact convergence. Depending on the problem at hand, either the automatic or the analytic estimator will lead to faster overall convergence (Ablin et al., 2020). LCN-Sinkhorn works flawlessly with automatic backpropagation since it only relies on basic linear algebra (except for choosing Nyström landmarks and LSH neighbors, for which we use a simple straight-through estimator (Bengio et al., 2013)). To enable fast analytic backpropagation we provide analytic gradients in Proposition 1. Note that both backpropagation methods have runtime linear in the number of points n and m. Proposition 1. The derivatives of the distances d^λ_c and d^λ_{LCN,c} (Eqs. (1) and (5)) and the optimal transport plan P̄ ∈ R^{n×m} w.r.t. the (decomposed) cost matrix C ∈ R^{n×m} in entropy-regularized OT and LCN-Sinkhorn are\n∂d^λ_c / ∂C = P̄, ∂P̄_ij / ∂C_kl = −(1/λ) P̄_ij δ_ik δ_jl, (13)\n∂d^λ_{LCN,c} / ∂U = −λ s̄ (W t̄)^T, ∂d^λ_{LCN,c} / ∂W = −λ (s̄^T U)^T t̄^T, ∂d^λ_{LCN,c} / ∂ log K^sp = −λ P̄^sp, ∂d^λ_{LCN,c} / ∂ log K^sp_Nys = −λ P̄^sp_Nys, (14)\n∂P̄_{U,ij} / ∂U_kl = δ_ik δ_jl s_i, ∂P̄_{W,ij} / ∂U_kl = P̄†_{U,ik} s_k P̄_{W,lj}, ∂P̄_{U,ij} / ∂W_kl = P̄_{U,ik} t_l P̄†_{W,lj}, ∂P̄_{W,ij} / ∂W_kl = δ_ik δ_jl t_j, ∂P̄^sp_ij / ∂ log K^sp_kl = P̄^sp_ij δ_ik δ_jl, ∂P̄^sp_{Nys,ij} / ∂ log K^sp_{Nys,kl} = P̄^sp_{Nys,ij} δ_ik δ_jl, (15)\nwith δ_ij denoting the Kronecker delta and † the Moore-Penrose pseudoinverse. Using these decompositions we can backpropagate through LCN-Sinkhorn in time O((n+m)l^2 + l^3).
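As a minimal dense sketch (assuming exact convergence, so that the first identity in Eq. (13) applies), the analytic gradient can be wired into automatic differentiation via a custom function; all sizes and iteration counts are placeholders.

import torch

class SinkhornDistance(torch.autograd.Function):
    @staticmethod
    def forward(ctx, C, p, q, lam=0.05, n_iter=50):
        K = torch.exp(-C / lam)
        t = torch.ones_like(q)
        for _ in range(n_iter):          # Sinkhorn iterations, Eq. (2)
            s = p / (K @ t)
            t = q / (K.T @ s)
        P = s[:, None] * K * t[None, :]
        ctx.save_for_backward(P)
        # d = <P, C>_F - lam * H(P), evaluated on the (approximately) converged plan.
        return (P * C).sum() + lam * (P * torch.log(P + 1e-30)).sum()

    @staticmethod
    def backward(ctx, grad_out):
        (P,) = ctx.saved_tensors
        return grad_out * P, None, None, None, None   # Eq. (13): dd/dC = P

C = torch.rand(50, 40, requires_grad=True)
p, q = torch.full((50,), 1 / 50), torch.full((40,), 1 / 40)
SinkhornDistance.apply(C, p, q).backward()            # C.grad now holds the plan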
" }, { "heading": "5 GRAPH TRANSPORT NETWORK", "text": "Graph distance learning. The ability to predict similarities or distances between graph-structured objects is useful across a wide range of applications. It can e.g. be used to predict the reaction rate between molecules (Houston et al., 2019), search for similar images (Johnson et al., 2015), similar molecules for drug discovery (Birchall et al., 2006), or similar code for vulnerability detection (Li et al., 2019). We propose the graph transport network (GTN) to evaluate approximate Sinkhorn and advance the state of the art on this task.\nGraph transport network. GTN first uses a Siamese graph neural network (GNN) to embed two graphs independently as sets of node embeddings. These embeddings are then matched using enhanced entropy-regularized optimal transport. Given an undirected graph G = (V, E), with node set V and edge set E, node attributes x_i ∈ R^{H_x} and (optional) edge attributes e_{i,j} ∈ R^{H_e}, with i, j ∈ V, we update the node embeddings in each GNN layer via\nh^(l)_{self,i} = σ(W^(l)_node h^(l−1)_i + b^(l)), (16)\nh^(l)_i = h^(l)_{self,i} + Σ_{j∈N_i} η^(l)_{i,j} h^(l)_{self,j} W_edge e_{i,j}, (17)\nwith N_i denoting the neighborhood of node i, h^(0)_i = x_i, h^(l)_i ∈ R^{H_N} for l ≥ 1, the bilinear layer W_edge ∈ R^{H_N × H_N × H_e}, and the degree normalization η^(1)_{i,j} = 1 and η^(l)_{i,j} = 1/√(deg_i deg_j) for l > 1.\nThis choice of η_{i,j} allows our model to handle highly skewed degree distributions while still being able to represent node degrees. We found the choice of non-linearity σ not to be critical and chose a LeakyReLU. We do not use the bilinear layer W_edge e_{i,j} if there are no edge attributes. We aggregate each layer’s node embeddings to obtain the final embedding of node i\nh^final_i = [h^(1)_{self,i} ‖ h^(1)_i ‖ h^(2)_i ‖ . . . ‖ h^(L)_i]. (18)\nHaving obtained the embedding sets H^final_1 and H^final_2 of both graphs we use the L2 distance as a cost function and then calculate the Sinkhorn distance, which is symmetric and permutation invariant w.r.t. the sets H^final_1 and H^final_2. We obtain the embeddings for matching via h^(0)_i = MLP(h^final_i) and obtain the final prediction via d = d^λ_c w_out + b_out, with learnable w_out and b_out. All weights in GTN are trained end-to-end via backpropagation. For small graphs we use the full Sinkhorn distance and scale to large graphs by leveraging LCN-Sinkhorn. GTN is more expressive than models that aggregate node embeddings to a single fixed-size embedding for the entire graph but still scales log-linearly in the number of nodes, as opposed to previous approaches that scale quadratically. Note that GTN inherently performs graph matching and can therefore also be applied to this task.\nLearnable unbalanced OT. Since GTN regularly encounters graphs with disagreeing numbers of nodes it needs to be able to handle cases where ‖p‖_1 ≠ ‖q‖_1 or where not all nodes in one graph have a corresponding node in the other and thus P 1_m < p or P^T 1_n < q. Unbalanced OT allows us to handle both of these cases (Peyré & Cuturi, 2019). Previous methods did so by swapping these requirements with a uniform divergence loss term on p and q (Frogner et al., 2015; Chizat et al., 2018). However, these approaches uniformly penalize deviations from balanced OT and therefore cannot adapt to only ignore parts of the distribution. We propose to alleviate this limitation by swapping the cost matrix C with the bipartite matching (BP) matrix (Riesen & Bunke, 2009)\nC_BP = [ C, C^(p,ε); C^(ε,q), C^(ε,ε) ], C^(p,ε)_ij = { c_{i,ε} if i = j; ∞ if i ≠ j }, C^(ε,q)_ij = { c_{ε,j} if i = j; ∞ if i ≠ j }, C^(ε,ε)_ij = 0, (19)\nand adaptively computing the costs c_{i,ε}, c_{ε,j} and c_{ε,ε} based on the input sets Xp and Xq. Using the BP matrix adds minor computational overhead since we only need to save the diagonals c_{p,ε} and c_{ε,q} of C^(p,ε) and C^(ε,q). We can then include the additional parts of C_BP in the Sinkhorn algorithm (Eq. (2)) via\nK_BP t = [ K t̂ + c_{p,ε} ⊙ ť; c_{ε,q} ⊙ t̂ + 1_n^T ť ], K_BP^T s = [ K^T ŝ + c_{ε,q} ⊙ š; c_{p,ε} ⊙ ŝ + 1_m^T š ], (20)\nwhere t̂ denotes the upper and ť the lower part of the vector t, and ⊙ the elementwise product. To calculate d^λ_c we can decompose the transport plan P_BP in the same way as C_BP, with a single scalar for P_{ε,ε}. For GTN we obtain the deletion cost via c_{i,ε} = ‖α ⊙ x_pi‖_2, with a learnable vector α ∈ R^d.
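A minimal sketch of the augmented matrix–vector product in Eq. (20); here c_p_eps and c_eps_q are assumed to hold the kernel values e^{−c/λ} of the deletion and insertion diagonals, and all names are placeholders.

import numpy as np

def kbp_matvec(K, c_p_eps, c_eps_q, t):
    # K_BP t from Eq. (20); t stacks t_hat (length m) and t_check (length n).
    m = K.shape[1]
    t_hat, t_check = t[:m], t[m:]
    top = K @ t_hat + c_p_eps * t_check        # rows of the original points
    bottom = c_eps_q * t_hat + t_check.sum()   # insertion rows + the eps-eps block
    return np.concatenate([top, bottom])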
Multi-head OT. Inspired by attention models (Vaswani et al., 2017) we further improve GTN by using multiple OT heads. Using K heads means that we calculate OT in parallel for K separate sets of embeddings representing the same pair of objects and obtain a set of distances d^λ_c ∈ R^K. We obtain the embeddings for head k via a separate linear layer h^(k)_i = W^(k) h^final_i and transform the resulting set of distances into the final prediction via d = MLP(d^λ_c). Note that both learnable unbalanced OT and multi-head OT might be of independent interest." }, { "heading": "6 RELATED WORK", "text": "Log-linear optimal transport. For an overview of optimal transport and its foundations see Peyré & Cuturi (2019). On low-dimensional grids and surfaces OT can be solved using dynamical OT (Papadakis et al., 2014; Solomon et al., 2014), convolutions (Solomon et al., 2015), or embedding/hashing schemes (Indyk & Thaper, 2003; Andoni et al., 2008). In higher dimensions we can use tree-based algorithms (Backurs et al., 2020) or hashing schemes (Charikar, 2002), which are however limited to a previously fixed set of points Xp, Xq, on which only the distributions p and q change. For sets that change dynamically (e.g. during training) one common method of achieving log-linear runtime is a multiscale approximation of entropy-regularized OT (Schmitzer, 2019; Gerber & Maggioni, 2017). Tenetov et al. (2018) recently proposed using a low-rank approximation of the Sinkhorn similarity matrix obtained via a semidiscrete approximation of the Euclidean distance. Altschuler et al. (2019) improved upon this approach by using the Nyström method for the approximation. These approaches still struggle with high-dimensional real-world problems, as we will show in Sec. 7.\nSliced Wasserstein distance. Another approach to reducing the computational complexity of optimal transport (without entropy regularization) is the sliced Wasserstein distance (Rabin et al., 2011). However, it requires the L2 distance as a cost function and is either unstable in convergence or prohibitively expensive for high-dimensional problems (O(nd^3)) (Meng et al., 2019). Fast Sinkhorn. Another line of work pursues accelerating entropy-regularized OT without changing its computational complexity w.r.t. the number of points. Original Sinkhorn requires O(1/ε^2) iterations (Dvurechensky et al., 2018), and Jambulapati et al. (2019) recently proposed an algorithm that reduces them to O(1/ε). Alaya et al. (2019) proposed to reduce the size of the Sinkhorn problem by screening out neglectable components, which allows for approximation guarantees. Genevay et al. (2016) proposed using a stochastic optimization scheme instead of Sinkhorn iterations. Essid & Solomon (2018) and Blondel et al. (2018) proposed alternative regularizations to obtain OT problems with similar runtimes as the Sinkhorn algorithm. This work is largely orthogonal to ours.\nEmbedding alignment. For an overview of cross-lingual word embedding models see Ruder et al. (2019). Unsupervised word embedding alignment was proposed by Conneau et al. (2018), with subsequent advances by Alvarez-Melis & Jaakkola (2018); Grave et al. (2019); Joulin et al. (2018).\nGraph matching and distance learning. Most recent approaches for graph matching and graph distance learning either rely on a single fixed-dimensional graph embedding (Bai et al., 2019; Li et al., 2019), or only use attention or some other strongly simplified variant of optimal transport (Bai et al., 2019; Riba et al., 2018; Li et al., 2019). Others break permutation invariance and are thus ill-suited for this task (Ktena et al., 2017; Bai et al., 2018). 
So far only approaches using a single graph embedding allow faster than quadratic scaling in the number of nodes. Compared to the Sinkhorn-based image model concurrently proposed by Wang et al. (2019), GTN uses no CNN or cross-graph attention, but an enhanced GNN and embedding aggregation scheme. OT has recently been proposed for graph kernels (Maretic et al., 2019; Vayer et al., 2019), which can (to a limited extent) be used for graph matching, but not for distance learning." }, { "heading": "7 EXPERIMENTS", "text": "Approximating Sinkhorn. We start by directly investigating different Sinkhorn approximations. To do so we compute entropy-regularized OT on pairs of 10 000 word embeddings from Conneau et al. (2018), which we preprocess with Wasserstein Procrustes alignment in order to obtain both close and distant neighbors. We let every method use the same total number of 40 neighbors and landmarks (LCN uses 20 each) and set λ = 0.05 (as in Grave et al. (2019)). We measure transport plan approximation quality by (a) calculating the Pearson correlation coefficient (PCC) between all entries in the approximated plan and the true P̄ and (b) comparing the sets of 0.1 % largest entries in the approximated and true P̄ using the Jaccard similarity (intersection over union, IoU). In all figures the error bars denote the standard deviation across 5 runs, which is often too small to be visible.
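The two plan-quality measures can be sketched as follows; P_approx and P_true are dense arrays here purely for clarity.

import numpy as np

def plan_quality(P_approx, P_true, top_frac=0.001):
    # (a) Pearson correlation between all transport plan entries.
    pcc = np.corrcoef(P_approx.ravel(), P_true.ravel())[0, 1]
    # (b) Jaccard similarity (IoU) of the sets of 0.1 % largest entries.
    k = max(1, int(top_frac * P_true.size))
    top_a = set(np.argpartition(P_approx.ravel(), -k)[-k:])
    top_t = set(np.argpartition(P_true.ravel(), -k)[-k:])
    return pcc, len(top_a & top_t) / len(top_a | top_t)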
The only change we make is using the full set of 20 000 word embeddings and training for 300 steps, while reducing the learning rate by half every 100 steps. We do not change any other hyperparameters and do not use unbalanced OT. After training we match pairs via cross-domain similarity local scaling (CSLS) (Conneau et al., 2018). We use 10 Sinkhorn iterations, 40 neighbors for sparse Sinkhorn, and 20 neighbors and landmarks for LCN-Sinkhorn (for details see App. H). We allow both multiscale OT and Nyström Sinkhorn to use as many landmarks and neighbors as can fit into GPU memory and finetune both methods. Table 2 shows that using full Sinkhorn yields a significant improvement in accuracy on this task compared to the original approach of performing Sinkhorn on randomly sampled subsets of embeddings (Grave et al., 2019). LCN-Sinkhorn even outperforms the full version in most cases, which is likely due to regularization effects from the approximation. It also runs 4.6x faster than full Sinkhorn and 3.1x faster than the original scheme. Sparse Sinkhorn runs 1.8x faster than LCN-Sinkhorn but cannot match its accuracy. LCN-Sinkhorn still outcompetes the original method after refining the embeddings with iterative local CSLS (Conneau et al., 2018). Both multiscale OT and Nyström Sinkhorn fail at this task, despite their larger computational budget. This shows that the improvements achieved by sparse Sinkhorn and LCN-Sinkhorn have an even larger impact in practice.

Table 3: RMSE for GED regression across 3 runs and the targets' standard deviation σ. GTN outperforms previous models by 48 %.

              | Linux    | AIDS30  | Pref. att.
σ             | 0.184    | 16.2    | 48.3
SiamMPNN      | 0.090(7) | 13.8(3) | 12.1(6)
SimGNN        | 0.039    | 4.5(3)  | 8.3(1.4)
GMN           | 0.015()  | 10.3(6) | 7.8(3)
GTN, 1 head   | 0.022(1) | 3.7(1)  | 4.5(3)
  8 OT heads  | 0.012(1) | 3.2(1)  | 3.6(2)
  Balanced OT | 0.034(1) | 15.3(1) | 27.4(9)

Table 4: RMSE for graph distance regression across 3 runs. Using LCN-Sinkhorn with GTN increases the error by only 10 % and allows log-linear scaling.

              |          GED           | PM [10^-2]
              | AIDS30     | Pref. att. | Pref. att. 200
σ             | 16.2       | 48.3       | 10.2
Full Sinkhorn | 3.7(1)     | 4.5(3)     | 1.27(6)
Nyström Skh.  | 3.6(3)     | 6.2(6)     | 2.43(7)
Multiscale OT | 11.2(3)    | 27.4(5.4)  | 6.71(44)
Sparse Skh.   | 44.0(30.4) | 40.7(8.1)  | 7.57(1.09)
LCN-Skh.      | 4.0(1)     | 5.1(4)     | 1.41(15)

Graph distance regression. The graph edit distance (GED) is useful for various tasks, such as image retrieval (Xiao et al., 2008) or fingerprint matching (Neuhaus & Bunke, 2004), but its computation is NP-complete (Bunke & Shearer, 1998). Therefore, to use it on larger graphs we need to learn an approximation. We use the Linux dataset by Bai et al. (2019) and generate 2 new datasets by computing the exact GED using the method by Lerouge et al. (2017) on small graphs (≤ 30 nodes) from the AIDS dataset (Riesen & Bunke, 2008) and a set of preferential attachment graphs. We compare GTN to 3 state-of-the-art baselines: SiameseMPNN (Riba et al., 2018), SimGNN (Bai et al., 2019), and the Graph Matching Network (GMN) (Li et al., 2019). We tune the hyperparameters of all baselines and GTN on the validation set via a grid search. For more details see App. H to J. We first test both GTN and the proposed OT enhancements. Table 3 shows that GTN improves upon competing models by 20 % with a single head and by 48 % with 8 OT heads. These improvements break down when using regular balanced OT, showing the importance of learnable unbalanced OT. Having established GTN as a state-of-the-art model we next ask whether we can sustain its performance when using approximate OT.
To test this we additionally generate a set of larger graphs with around 200 nodes and use the Pyramid matching (PM) kernel (Nikolentzos et al., 2017) as the prediction target, since these graphs are too large to compute the GED. See App. J for hyperparameter details. Table 4 shows that both sparse Sinkhorn and the multiscale method using 4 (expected) neighbors fail at this task, demonstrating that the low-rank approximation in LCN has a crucial stabilizing effect during training. Nyström Sinkhorn with 4 landmarks performs surprisingly well on the AIDS30 dataset, suggesting an overall low-rank structure with Nyström acting as regularization. However, it does not perform as well on the other two datasets. Using LCN-Sinkhorn with 2 neighbors and landmarks works well on all three datasets, with an RMSE increased by only 10 % compared to full GTN. App. K furthermore shows that GTN with LCN-Sinkhorn indeed scales linearly in the number of nodes across multiple orders of magnitude. This model thus makes it possible to perform graph matching and distance learning on graphs that are considered large even for simple node-level tasks (20 000 nodes)." }, { "heading": "8 CONCLUSION", "text": "Locality sensitive hashing (LSH) and the novel locally corrected Nyström (LCN) method enable fast and accurate approximations of entropy-regularized OT with log-linear runtime: Sparse Sinkhorn and LCN-Sinkhorn. The graph transport network (GTN) is one example of such a model, which can be substantially improved with learnable unbalanced OT and multi-head OT. It sets the new state of the art for graph distance learning while still scaling log-linearly with graph size. These contributions enable new applications and models that are both faster and more accurate, since they can sidestep workarounds such as pooling." }, { "heading": "A COMPLEXITY ANALYSIS", "text": "Sparse Sinkhorn. A common way of achieving a high $p_1$ and low $p_2$ in LSH is via the AND-OR construction. In this scheme we calculate $B \cdot r$ hash functions, divided into $B$ sets (hash bands) of $r$ hash functions each. A pair of points is considered as neighbors if any hash band matches completely. Calculating the hash buckets for all points with $b$ hash buckets per function scales as $O((n+m)dBbr)$ for the hash functions we consider. As expected, for the tasks and hash functions we investigated we obtain approximately $m/b^r$ and $n/b^r$ neighbors, with $b^r$ hash buckets per band. Using this we can fix the number of neighbors to a small, constant $\beta$ in expectation with $b^r = \min(n, m)/\beta$. We thus obtain a sparse cost matrix $C^{\text{sp}}$ with $O(\max(n, m)\beta)$ non-infinite values and can calculate $s$ and $t$ in linear time $O(N_{\text{sink}}\max(n, m)\beta)$, where
$$N_{\text{sink}} \le 2 + \frac{-4\ln\big(\min_{i,j}\{\tilde K_{ij} \mid \tilde K_{ij} > 0\}\min_{i,j}\{p_i, q_j\}\big)}{\varepsilon}$$
(see Theorem 4) denotes the number of Sinkhorn iterations. Calculating the hash buckets with $r = \frac{\log\min(n,m) - \log\beta}{\log b}$ takes $O((n+m)dBb(\log\min(n,m) - \log\beta)/\log b)$. Since $B$, $b$, and $\beta$ are small, we obtain roughly log-linear scaling with the number of points overall, i.e. $O(n\log n)$ for $n \approx m$.
LCN-Sinkhorn. Both choosing landmarks via k-means++ sampling and via k-means with a fixed number of iterations have the same runtime complexity of $O((n+m)ld)$. Precomputing $W$ can be done in time $O(nl^2 + l^3)$. The low-rank part of updating the vectors $s$ and $t$ can be computed in $O(nl + l^2 + lm)$, with $l$ chosen constant, i.e. independently of $n$ and $m$. Since sparse Sinkhorn with LSH has a log-linear runtime we again obtain log-linear overall runtime for LCN-Sinkhorn."
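To illustrate the bookkeeping behind this analysis, here is a minimal NumPy sketch of building the sparse neighbor set with B hash bands of r hash functions each. The sign-projection hash (b = 2 buckets per function) is an illustrative stand-in for the cross-polytope and k-means hashes actually used (App. H); function names and defaults are ours.

```python
import numpy as np
from collections import defaultdict

def lsh_neighbor_pairs(Xp, Xq, B=4, r=7, seed=0):
    """Return index pairs (i, j) whose codes collide in at least one of the
    B hash bands; only these entries of the cost matrix are kept, yielding
    the sparse C^sp with O(max(n, m) * beta) finite values in expectation."""
    rng = np.random.default_rng(seed)
    d = Xp.shape[1]
    pairs = set()
    for _ in range(B):                    # OR over bands
        P = rng.normal(size=(d, r))       # AND over r sign hashes within a band
        hp = (Xp @ P > 0).astype(int) @ (2 ** np.arange(r))
        hq = (Xq @ P > 0).astype(int) @ (2 ** np.arange(r))
        buckets = defaultdict(list)
        for j, c in enumerate(hq):
            buckets[c].append(j)
        for i, c in enumerate(hp):
            pairs.update((i, j) for j in buckets[c])
    return pairs

Xp = np.random.default_rng(1).normal(size=(1000, 16))
Xq = np.random.default_rng(2).normal(size=(800, 16))
# For random data, roughly m / b^r = 800 / 2^7 ≈ 6 collisions per point per band.
print(len(lsh_neighbor_pairs(Xp, Xq)) / 1000)
```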
}, { "heading": "B LIMITATIONS", "text": "Sparse Sinkhorn. Using a sparse approximation for K works well in the common case when the regularization parameter λ is low and the cost function varies enough between data pairs, such that the transport plan P resembles a sparse matrix. However, it can fail if the cost between pairs is very similar or the regularization is very high, if the dataset contains many hubs, i.e. points with a large number of neighbors, or if the distributions p or q are spread very unevenly. Furthermore, sparse Sinkhorn can be too unstable to train a model from scratch, since randomly initialized embeddings often have no close neighbors (see Sec. 7). LCN-Sinkhorn largely alleviates these limitations.\nLCN-Sinkhorn. Since we cannot calculate the full cost matrix, LCN-Sinkhorn cannot provide accuracy guarantees in general. Highly concentrated distributions p and q might have adverse effects on LCN-Sinkhorn. However, we can compensate for these by sampling landmarks or neighbors proportional to each point’s probability mass. We therefore do not expect LCN-Sinkhorn to break down in this scenario. If the regularization parameter is low or the cost function varies greatly, we sometimes observed stability issues (over- and underflows) with the Nyström approximation because of the inverseA−1, which cannot be calculated in log-space. Due to its linearity the Nyström method furthermore sometimes approximates similarities as negative values, which leads to a failure if the result of the matrix product in Eq. (2) becomes negative. In these extreme cases we also observed catastrophic elimination caused by the correction Ksp∆ . Since this essentially means that optimal transport will be very local, we recommend using sparse Sinkhorn in these scenarios. This again demonstrates the complementarity of the sparse approximation and Nyström: In cases where one fails we can often resort to the other." }, { "heading": "C PROOF OF THEOREM 1", "text": "We first prove a lemma that will be useful later on. Lemma A. Let K̃ be the Nyström approximation of the similarity matrixKij = e−‖xi−xj‖2/λ. Let xi and xj be data points with equal L2 distance ri and rj to all l landmarks, which have the same distance ∆ to each other. Then\nK̃ij = le−(ri+rj)/λ\n1 + (l − 1)e−∆/λ (21)\nProof. The inter-landmark distance matrix is A = e−∆/λ1l×l + (1− e−∆/λ)Il, (22)\nwhere 1l×l denotes the constant 1 matrix. Using the identity\n(b1n×n + (a− b)In)−1 = −b\n(a− b)(a+ (n− 1)b) 1n×n +\n1\na− b In (23)\nwe compute\nK̃ij = Ui,:A −1V:,j\n= ( e−ri/λ e−ri/λ · · · )( −e−∆/λ (1− e−∆/λ)(1 + (l − 1)e−∆/λ) 1l×l + 1 1− e−∆/λ Il )e −rj/λ e−rj/λ\n... = e−(ri+rj)/λ\n1− e−∆/λ\n( −l2e−∆/λ\n1 + (l − 1)e−∆/λ + l\n) = e−(ri+rj)/λ\n1− e−∆/λ l − le−∆/λ 1 + (l − 1)e−∆/λ\n= le−(ri+rj)/λ\n1 + (l − 1)e−∆/λ (24)\nNow consider the error ‖Ki,: −KLCN,i,:‖∞. The k − 1 nearest neighbors are covered by the sparse correction and therefore the next nearest neighbor has distance δk. The expected distance from the closest landmark is greater than the expected distance inside the surrounding d-ball of radius R, i.e. E[r] ≥ EV (R)[r] = dd+1R. Because furthermore δk R, the error is dominated by the first term and the maximum error in row i is given by the k-nearest neighbor of i, denoted by j. Thus\nE[‖Ki,: −KLCN,i,:‖∞] = E[Ki,j −KLCN,i,j ] = E[Ki,j ]− E[KLCN,i,j ] = E[e−δk/λ]− E[KLCN,i,j ] (25)\nNote that we can lower bound the first term using Jensen’s inequality. 
However, we were unable to find a reasonably tight upper bound, and the resulting integral (ignoring exponentially small boundary effects, see Percus & Martin (1998))
$$\mathbb{E}[e^{-\delta_k/\lambda}] = \frac{n!}{(n-k)!(k-1)!}\int_0^{((d/2)!)^{1/d}/\sqrt{\pi}} e^{-r/\lambda}\,V(r)^{k-1}(1 - V(r))^{n-k}\,\frac{dV(r)}{dr}\,dr, \qquad (26)$$
with the volume of the $d$-ball
$$V(r) = \frac{\pi^{d/2} r^d}{(d/2)!} \qquad (27)$$
does not have an analytical solution. We thus have to resort to calculating this expectation numerically.
We lower bound the second term by (1) ignoring every landmark except the closest one, since additional landmarks can only increase the estimate $K_{\text{LCN},i,j}$. We then (2) upper bound the $L_2$ distance to the closest landmark $r$ by $\sqrt{d}R/2$, since this would be the furthest distance to the closest point in a $d$-dimensional grid. Any optimal arrangement minimizing $\mathbb{E}[\min_{y \in L}\|x - y\|_2 \mid x \in X_p]$ would be at least as good as a grid and thus have furthest distances as small or smaller than those in a grid. Thus,
$$\mathbb{E}[K_{\text{LCN},i,j}] \overset{(1)}{\ge} e^{-2r/\lambda} \overset{(2)}{\ge} e^{-\sqrt{d}R/\lambda}. \qquad (28)$$
We upper bound this expectation by considering that any point outside the inscribed sphere of the space closest to a landmark (which has radius $R$) would be further away from the landmarks and thus have a lower value $e^{-d/\lambda}$. We can therefore reduce the space over which the expectation is taken to the ball with radius $R$, i.e.
$$\mathbb{E}[K_{\text{LCN},i,j}] \le \mathbb{E}_{V(R)}[K_{\text{LCN},i,j}] \qquad (29)$$
Next we (1) ignore the contributions of all landmarks except for the closest 2, since a third landmark must be further away from the data point than $\sqrt{3}R$, adding an error of $O(e^{-2\sqrt{3}R/\lambda})$. We then (2) lower bound the distances of both points to both landmarks by the closest distance to a landmark $r = \min\{\|x_i - x_{l_1}\|_2, \|x_i - x_{l_2}\|_2, \|x_j - x_{l_1}\|_2, \|x_j - x_{l_2}\|_2\}$ and use Lemma A to obtain
$$\mathbb{E}_{V(R)}[K_{\text{LCN},i,j}] \overset{(1)}{=} \mathbb{E}_{V(R)}[K_{\text{LCN, 2 landmarks},i,j}] + O(e^{-2\sqrt{3}R/\lambda}) \overset{(2)}{\le} \frac{2\,\mathbb{E}_{V(R)}[e^{-2r/\lambda}]}{1 + e^{-2R/\lambda}} + O(e^{-2\sqrt{3}R/\lambda}). \qquad (30)$$
Assuming Euclideanness in $V(R)$ we obtain
$$\mathbb{E}_{V(R)}[e^{-2r/\lambda}] = \frac{1}{V(R)}\int_0^R e^{-2r/\lambda}\frac{dV(r)}{dr}\,dr = \frac{d}{R^d}\int_0^R e^{-2r/\lambda} r^{d-1}\,dr = \frac{d}{(2R/\lambda)^d}\big(\Gamma(d) - \Gamma(d, 2R/\lambda)\big) \qquad (31)$$
" }, { "heading": "D PROOF OF THEOREM 2", "text": "Note that this theorem does not use probabilistic arguments but rather geometrically analyzes the maximum possible error. $K^{\text{sp}}$ is correct for all pairs inside a cluster and 0 otherwise. We therefore obtain the maximum error by considering the closest possible pair between clusters. By definition, this pair has distance $D$ and thus
$$\max \|K - K^{\text{sp}}\|_\infty = e^{-D/\lambda} \qquad (32)$$
LCN is also correct for all pairs inside a cluster, so we again consider the closest possible pair $x_i, x_j$ between clusters. We furthermore only consider the landmarks of the two concerned clusters, adding an error of $O(e^{-D/\lambda})$. Hence,
$$K_{\text{LCN, 2 landmarks},ij} = \begin{pmatrix} e^{-r/\lambda} & e^{-(r+D)/\lambda} \end{pmatrix}\begin{pmatrix} 1 & e^{-(2r+D)/\lambda} \\ e^{-(2r+D)/\lambda} & 1 \end{pmatrix}^{-1}\begin{pmatrix} e^{-(r+D)/\lambda} \\ e^{-r/\lambda} \end{pmatrix} = \frac{1}{1 - e^{-(4r+2D)/\lambda}}\big(e^{-(2r+D)/\lambda} - e^{-(4r+D)/\lambda} + e^{-(2r+D)/\lambda} - e^{-(4r+3D)/\lambda}\big) = \frac{e^{-(2r+D)/\lambda}}{1 - e^{-(4r+2D)/\lambda}}\big(2 - e^{-2r/\lambda} - e^{-(2r+2D)/\lambda}\big) = e^{-D/\lambda}e^{-2r/\lambda}(2 - e^{-2r/\lambda}) - O(e^{-2D/\lambda}) \qquad (33)$$
and thus
$$\max \|K - K_{\text{LCN}}\|_\infty = e^{-D/\lambda}\big(1 - e^{-2r/\lambda}(2 - e^{-2r/\lambda}) + O(e^{-D/\lambda})\big). \qquad (34)$$
For pure Nyström we need to consider the distances inside a cluster. In the worst case two points overlap, i.e. $K_{ij} = 1$, and lie at the boundary of the cluster. Since $r \ll D$ we again only consider the landmarks in the concerned cluster, adding an error of $O(e^{-D/\lambda})$.
Because of symmetry we can optimize the worst-case distance from all landmarks by putting them on an $(l-1)$-simplex centered on the cluster center. Since there are at most $d$ landmarks in each cluster there is always one direction in which the worst-case points are $r$ away from all landmarks. The circumradius of an $(l-1)$-simplex with side length $\Delta$ is $\sqrt{\frac{l-1}{2l}}\Delta$. Thus, the maximum distance to all landmarks is $\sqrt{r^2 + \frac{l-1}{2l}\Delta^2}$. Using Lemma A we therefore obtain the Nyström approximation
$$K_{\text{Nys},ij} = \frac{l\,e^{-2\sqrt{r^2 + \frac{l-1}{2l}\Delta^2}/\lambda}}{1 + (l-1)e^{-\Delta/\lambda}} + O(e^{-D/\lambda}) \qquad (35)$$
" }, { "heading": "E NOTE ON THEOREM 3", "text": "Lemmas C-F and thus Theorem 1 by Altschuler et al. (2019) are also valid for $Q$ outside the simplex so long as $\|Q\|_1 = n$ and it only has non-negative entries. Any $\tilde P$ returned by Sinkhorn fulfills these conditions. Therefore the rounding procedure given by their Algorithm 4 is not necessary for this result.
Furthermore, to be more consistent with Theorems 1 and 2 we use the $L_2$ distance instead of $L_2^2$ in this theorem, which only changes the dependence on $\rho$." }, { "heading": "F NOTES ON THEOREM 4", "text": "To adapt Theorem 1 by Dvurechensky et al. (2018) to sparse matrices (i.e. matrices with some $K_{ij} = 0$) we need to redefine
$$\nu := \min_{i,j}\{K_{ij} \mid K_{ij} > 0\}, \qquad (36)$$
i.e. take the minimum only w.r.t. non-zero elements in their Lemma 1." }, { "heading": "G PROOF OF PROPOSITION 1", "text": "Theorem A (Danskin's theorem). Consider a continuous function $\varphi: \mathbb{R}^k \times Z \to \mathbb{R}$, with the compact set $Z \subset \mathbb{R}^j$. If $\varphi(x, z)$ is convex in $x$ for every $z \in Z$ and $\varphi(x, z)$ has a unique maximizer $\bar z$, the derivative of
$$f(x) = \max_{z \in Z} \varphi(x, z) \qquad (37)$$
is given by the derivative at the maximizer, i.e.
$$\frac{\partial f}{\partial x} = \frac{\partial \varphi(x, \bar z)}{\partial x}. \qquad (38)$$
We start by deriving the derivatives of the distances. To show that the Sinkhorn distance fulfills the conditions for Danskin's theorem we first identify $x = C$, $z = P$, and $\varphi(C, P) = -\langle P, C\rangle_F + \lambda H(P)$. We next observe that the restrictions $P\mathbf{1}_m = p$ and $P^T\mathbf{1}_n = q$ define a compact, convex set for $P$. Furthermore, $\varphi$ is a continuous function and linear in $C$, i.e. both convex and concave for any finite $P$. Finally, $\varphi(C, P)$ is concave in $P$ since $\langle P, C\rangle_F$ is linear and $\lambda H(P)$ is concave. Therefore the maximizer $\bar P$ is unique and Danskin's theorem applies to the Sinkhorn distance. Using
$$\frac{\partial C_{\text{Nys},ij}}{\partial U_{kl}} = \frac{\partial}{\partial U_{kl}}\Big(-\lambda\log\big(\textstyle\sum_a U_{ia}W_{aj}\big)\Big) = -\lambda\delta_{ik}\frac{W_{lj}}{\sum_a U_{ia}W_{aj}} = -\lambda\delta_{ik}\frac{W_{lj}}{K_{\text{Nys},ij}}, \qquad (39)$$
$$\frac{\partial C_{\text{Nys},ij}}{\partial W_{kl}} = \frac{\partial}{\partial W_{kl}}\Big(-\lambda\log\big(\textstyle\sum_a U_{ia}W_{aj}\big)\Big) = -\lambda\delta_{jl}\frac{U_{ik}}{\sum_a U_{ia}W_{aj}} = -\lambda\delta_{jl}\frac{U_{ik}}{K_{\text{Nys},ij}}, \qquad (40)$$
$$\frac{\bar P_{\text{Nys},ij}}{K_{\text{Nys},ij}} = \frac{\sum_b \bar P_{U,ib}\bar P_{W,bj}}{\sum_a U_{ia}W_{aj}} = \frac{\bar s_i \bar t_j \sum_b U_{ib}W_{bj}}{\sum_a U_{ia}W_{aj}} = \bar s_i \bar t_j \qquad (41)$$
and the chain rule we can calculate the derivative w.r.t. the cost matrix as
$$\frac{\partial d^\lambda_c}{\partial C} = -\frac{\partial}{\partial C}\big(-\langle\bar P, C\rangle_F + \lambda H(\bar P)\big) = \bar P, \qquad (42)$$
$$\frac{\partial d^\lambda_{\text{LCN},c}}{\partial U_{kl}} = \sum_{i,j}\frac{\partial C_{\text{Nys},ij}}{\partial U_{kl}}\frac{\partial d^\lambda_{\text{LCN},c}}{\partial C_{\text{Nys},ij}} = -\lambda\sum_{i,j}\delta_{ik}W_{lj}\,\bar s_i\bar t_j = -\lambda\bar s_k\sum_j W_{lj}\bar t_j = \big(-\lambda\bar s(W\bar t)^T\big)_{kl}, \qquad (43)$$
$$\frac{\partial d^\lambda_{\text{LCN},c}}{\partial W_{kl}} = \sum_{i,j}\frac{\partial C_{\text{Nys},ij}}{\partial W_{kl}}\frac{\partial d^\lambda_{\text{LCN},c}}{\partial C_{\text{Nys},ij}} = -\lambda\sum_{i,j}\delta_{jl}U_{ik}\,\bar s_i\bar t_j = -\lambda\Big(\sum_i \bar s_i U_{ik}\Big)\bar t_l = \big(-\lambda(\bar s^T U)^T\bar t^T\big)_{kl}, \qquad (44)$$
and $\partial d^\lambda_{\text{LCN},c}/\partial\log K^{\text{sp}}$ and $\partial d^\lambda_{\text{LCN},c}/\partial\log K^{\text{sp}}_{\text{Nys}}$ directly follow from $\partial d^\lambda_c/\partial C$.
Theorem B (Implicit function theorem). Let $f: \mathbb{R}^{n'} \times \mathbb{R}^{m'} \to \mathbb{R}^{m'}$ be a continuously differentiable function with $f(a, b) = 0$. If its Jacobian matrix $J_{f_i(x,y),y_j} = \frac{\partial f_i}{\partial y_j}(a, b)$ is invertible, then there exists an open set $a \in U \subset \mathbb{R}^{n'}$ on which there exists a unique continuously differentiable function $g: U \to \mathbb{R}^{m'}$ with $g(a) = b$ and $\forall x \in U: f(x, g(x)) = 0$.
Moreover,
$$\frac{\partial g_i}{\partial x_j}(x) = -\big(J_{f(x,y),y}(x, g(x))\big)^{-1}_{i,:}\Big(\frac{\partial}{\partial x}f(x, g(x))\Big)_{:,j} \qquad (45)$$
Next we derive the transport plan derivatives. To apply the implicit function theorem we identify $x = C$ and $y = P$ as flattened matrices. We will index these flat matrices via index pairs to simplify interpretation. We furthermore identify
$$f(C, P) = \frac{\partial}{\partial \bar P}\big(\langle\bar P, C\rangle_F - \lambda H(\bar P)\big) = C + \lambda(\log\bar P + 1). \qquad (46)$$
The minimizer $\bar P$ cannot lie on the boundary of $P$'s valid region since $\lim_{p\to 0}\frac{\partial}{\partial p}\,p\log p = \lim_{p\to 0}\log p + 1 = -\infty$ and therefore $\bar P_{ij} > 0$. Hence, $f(C, \bar P) = 0$ with $\bar P(C) = \arg\min_P\langle P, C\rangle_F - \lambda H(P)$ and we find that $g(x) = \bar P(C)$. We furthermore obtain
$$J_{f_{ij}(C,P),P_{ab}}(C, \bar P(C)) = \frac{\partial}{\partial \bar P_{ab}}\big(C_{ij} + \lambda(\log\bar P_{ij} + 1)\big) = \delta_{ia}\delta_{jb}\,\lambda/\bar P_{ij}, \qquad (47)$$
$$\frac{\partial}{\partial C_{kl}} f_{ab}(C, \bar P(C)) = \frac{\partial}{\partial C_{kl}}\big(C_{ab} + \lambda(\log\bar P_{ab} + 1)\big) = \delta_{ak}\delta_{bl}. \qquad (48)$$
$J_{f_{ij}(C,P),P_{ab}}(C, \bar P(C))$ is hence a diagonal matrix and invertible since $\lambda/\bar P_{ij} > 0$. We can thus use the implicit function theorem and obtain
$$\frac{\partial\bar P_{ij}}{\partial C_{kl}} = -\sum_{a,b}\big(J_{f_{ij}(C,P),P_{ab}}(C, \bar P(C))\big)^{-1}\frac{\partial}{\partial C_{kl}}f_{ab}(C, \bar P(C)) = -\sum_{a,b}\delta_{ia}\delta_{jb}\frac{1}{\lambda}\bar P_{ij}\delta_{ak}\delta_{bl} = -\frac{1}{\lambda}\bar P_{ij}\delta_{ik}\delta_{jl}. \qquad (49)$$
To extend this result to LCN-OT we use
$$\frac{\partial\bar P_{U,ij}}{\partial\bar P_{\text{Nys},ab}} = \frac{\partial}{\partial\bar P_{\text{Nys},ab}}\sum_k\bar P_{\text{Nys},ik}\bar P^\dagger_{W,kj} = \delta_{ia}\bar P^\dagger_{W,bj}, \qquad (50)$$
$$\frac{\partial\bar P_{W,ij}}{\partial\bar P_{\text{Nys},ab}} = \frac{\partial}{\partial\bar P_{\text{Nys},ab}}\sum_k\bar P^\dagger_{U,ik}\bar P_{\text{Nys},kj} = \delta_{jb}\bar P^\dagger_{U,ia} \qquad (51)$$
and the chain rule to obtain
$$\frac{\partial\bar P_{U,ij}}{\partial U_{kl}} = \sum_{a,b,c,d}\frac{\partial C_{\text{Nys},cd}}{\partial U_{kl}}\frac{\partial\bar P_{\text{Nys},ab}}{\partial C_{\text{Nys},cd}}\frac{\partial\bar P_{U,ij}}{\partial\bar P_{\text{Nys},ab}} = \delta_{ik}\sum_b W_{lb}\frac{\bar P_{\text{Nys},ib}}{K_{\text{Nys},ib}}\bar P^\dagger_{W,bj} = \delta_{ik}s_i\sum_b\bar P_{W,lb}\bar P^\dagger_{W,bj} = \delta_{ik}\delta_{jl}s_i, \qquad (52)$$
$$\frac{\partial\bar P_{W,ij}}{\partial U_{kl}} = W_{lj}\frac{\bar P_{\text{Nys},kj}}{K_{\text{Nys},kj}}\bar P^\dagger_{U,ik} = W_{lj}t_j\bar P^\dagger_{U,ik}s_k = \bar P^\dagger_{U,ik}s_k\bar P_{W,lj}, \qquad (53)$$
$$\frac{\partial\bar P_{U,ij}}{\partial W_{kl}} = U_{ik}\frac{\bar P_{\text{Nys},il}}{K_{\text{Nys},il}}\bar P^\dagger_{W,lj} = s_i U_{ik}t_l\bar P^\dagger_{W,lj} = \bar P_{U,ik}t_l\bar P^\dagger_{W,lj}, \qquad (54)$$
$$\frac{\partial\bar P_{W,ij}}{\partial W_{kl}} = \delta_{jl}\sum_a U_{ak}\frac{\bar P_{\text{Nys},aj}}{K_{\text{Nys},aj}}\bar P^\dagger_{U,ia} = \delta_{jl}t_j\sum_a\bar P^\dagger_{U,ia}\bar P_{U,ak} = \delta_{ik}\delta_{jl}t_j. \qquad (55)$$
$\partial\bar P^{\text{sp}}/\partial\log K^{\text{sp}}$ and $\partial\bar P^{\text{sp}}_{\text{Nys}}/\partial\log K^{\text{sp}}_{\text{Nys}}$ directly follow from $\partial\bar P/\partial C$.
We can calculate the pseudoinverses $\bar P^\dagger_U = (\bar P^T_U\bar P_U)^{-1}\bar P^T_U$ and $\bar P^\dagger_W = \bar P^T_W(\bar P_W\bar P^T_W)^{-1}$ in time $O((n+m)l^2 + l^3)$ since $\bar P_U \in \mathbb{R}^{n\times l}$ and $\bar P_W \in \mathbb{R}^{l\times m}$. We do not fully instantiate the matrices required for backpropagation but instead save their decompositions, similar to the transport plan $\bar P_{\text{Nys}} = \bar P_U\bar P_W$. We can then compute backpropagation in time $O((n+m)l^2)$ by applying the sums over $i$ and $j$ in the right order. We thus obtain $O((n+m)l^2 + l^3)$ overall runtime for backpropagation." }, { "heading": "H CHOOSING LSH NEIGHBORS AND NYSTRÖM LANDMARKS", "text": "We focus on two LSH methods for obtaining near neighbors. Cross-polytope LSH (Andoni et al., 2015) uses a random projection matrix $R \in \mathbb{R}^{d\times b/2}$ with the number of hash buckets $b$, and then decides on the hash bucket via $h(x) = \arg\max([x^T R \parallel -x^T R])$, where $\parallel$ denotes concatenation. K-means LSH computes k-means and uses the clusters as hash buckets.
We further improve the sampling probabilities of cross-polytope LSH via the AND-OR construction. In this scheme we calculate $B \cdot r$ hash functions, divided into $B$ sets (hash bands) of $r$ hash functions each. A pair of points is considered as neighbors if any hash band matches completely.
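A minimal NumPy sketch of the cross-polytope hash just described; drawing R as a Gaussian matrix is our assumption for illustration (the text above only specifies a random projection matrix).

```python
import numpy as np

def cross_polytope_hash(X, R):
    """h(x) = argmax([x^T R || -x^T R]); with R of shape (d, b/2) this yields
    b buckets per hash function."""
    proj = X @ R
    return np.concatenate([proj, -proj], axis=1).argmax(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 300))       # e.g. word embeddings
R = rng.normal(size=(300, 4))       # b = 8 buckets
print(cross_polytope_hash(X, R))    # one bucket index in {0, ..., 7} per row
```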
K-means LSH does not work well with the AND-OR construction since its samples are highly correlated. For large datasets we use hierarchical k-means instead (Paulevé et al., 2010; Nistér & Stewénius, 2006).
Since the graph transport network (GTN) uses the $L_2$ distance between embeddings as a cost function, we use (hierarchical) k-means LSH and k-means Nyström in both sparse OT and LCN-OT. For embedding alignment we use cross-polytope LSH for sparse OT since similarities are measured via the dot product. For LCN-OT we found that using k-means LSH works better with Nyström using k-means++ sampling than cross-polytope LSH. This is most likely due to a better alignment between LSH samples and Nyström. We convert the cosine similarity to a distance via $d_{\cos} = \sqrt{1 - \frac{x_p^T x_q}{\|x_p\|_2\|x_q\|_2}}$ (Berg et al., 1984) to use k-means with dot product similarity. Note that this is actually based on cosine similarity, not the dot product. Due to the balanced nature of OT we found this more sensible than maximum inner product search (MIPS). For both experiments we also experimented with uniform and recursive RLS sampling but found that the above mentioned methods work better.

I IMPLEMENTATIONAL DETAILS

Our implementation runs in batches on a GPU via PyTorch (Paszke et al., 2019) and PyTorch Scatter (Fey & Lenssen, 2019). To avoid over- and underflows we use log-stabilization throughout, i.e. we save all values in log-space and compute all matrix-vector products and additions via the log-sum-exp trick $\log\sum_i e^{x_i} = \max_j x_j + \log(\sum_i e^{x_i - \max_j x_j})$. Since the matrix $A$ is small we compute its inverse using double precision to improve stability. Surprisingly, we did not observe any benefit from using the Cholesky decomposition or from not calculating $A^{-1}$ and instead solving the equation $B = AX$ for $X$. We furthermore precompute $W = A^{-1}V$ to avoid unnecessary operations.
We use 3 layers and an embedding size $H_N = 32$ for GTN. The MLPs use a single hidden layer, biases, and LeakyReLU non-linearities. The single-head MLP uses an output size of $H_{N,\text{match}} = H_N$ and a hidden embedding size of $4H_N$, i.e. the same as the concatenated node embedding, and the multi-head MLP uses a hidden embedding size of $H_N$. To stabilize initial training we scale the node embeddings by $\frac{\bar d}{\bar n\sqrt{H_{N,\text{match}}}}$ directly before calculating OT. $\bar d$ denotes the average graph distance in the training set, $\bar n$ the average number of nodes per graph, and $H_{N,\text{match}}$ the matching embedding size, i.e. 32 for single-head and 128 for multi-head OT."
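The log-stabilization mentioned at the start of this appendix boils down to evaluating matrix-vector products in log space. A minimal PyTorch sketch (the function name is ours):

```python
import torch

def log_matvec(log_K, log_v):
    """Stable log(K @ exp(log_v)) via the log-sum-exp trick:
    log sum_i e^{x_i} = max_j x_j + log sum_i e^{x_i - max_j x_j}."""
    return torch.logsumexp(log_K + log_v[None, :], dim=1)

log_K, log_v = torch.randn(4, 3), torch.randn(3)
naive = torch.log(torch.exp(log_K) @ torch.exp(log_v))
print(torch.allclose(log_matvec(log_K, log_v), naive))  # True
```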
}, { "heading": "J GRAPH DATASET GENERATION AND EXPERIMENTAL DETAILS", "text": "The dataset statistics are summarized in Table 5. Each dataset contains the distances between all graph pairs in each split, i.e. 10 296 and 1128 distances for preferential attachment. The AIDS dataset was generated by randomly sampling graphs with at most 30 nodes from the original AIDS dataset (Riesen & Bunke, 2008). Since not all node types are present in the training set and our choice of GED is permutation-invariant w.r.t. types, we permuted the node types so that there are no previously unseen types in the validation and test sets.
For the preferential attachment datasets we first generated 12, 4, and 4 undirected "seed" graphs (for train, val, and test) via the initial attractiveness model with randomly chosen parameters: 1 to 5 initial nodes, an initial attractiveness of 0 to 4, and between $\bar n/2$ and $3\bar n/2$ total nodes, where $\bar n$ is the average number of nodes (20, 200, 2000, and 20 000). We then randomly label every node (and edge) in these graphs uniformly. To obtain the remaining graphs we edit the "seed" graphs between $\bar n/40$ and $\bar n/20$ times by randomly adding, type editing, or removing nodes and edges. Editing nodes and edges is 4x and adding/deleting edges 3x as likely as adding/deleting nodes. Most of these numbers were chosen arbitrarily, aiming to achieve a somewhat reasonable dataset and process. We found that the process of first generating seed graphs and subsequently editing these is crucial for obtaining meaningfully structured data to learn from. For the GED we choose an edit cost of 1 for changing a node or edge type and 2 for adding or deleting a node or an edge.
We represent node and edge types as one-hot vectors. We train all models except SiamMPNN (which uses SGD) and GTN on Linux with the Adam optimizer and mean squared error (MSE) loss for up to 300 epochs and reduce the learning rate by a factor of 10 every 100 steps. On Linux we train for up to 1000 epochs and reduce the learning rate by a factor of 2 every 100 steps. We use the parameters from the best epoch based on the validation set. We choose hyperparameters for all models using multiple steps of grid search on the validation set, see Tables 6 to 8 for the final values. We use the originally published result of SimGNN on Linux and thus don't provide its hyperparameters. GTN uses 500 Sinkhorn iterations. We obtain the final entropy regularization parameter from $\lambda_{\text{base}}$ via $\lambda = \lambda_{\text{base}}\frac{\bar d}{\bar n}\frac{1}{\log n}$, where $\bar d$ denotes the average graph distance and $\bar n$ the average number of nodes per graph in the training set. The factor $\bar d/\bar n$ serves to estimate the embedding distance scale and $1/\log n$ counteracts the entropy scaling with $n\log n$. Note that the entropy regularization parameter was small, but always far from 0, which shows that entropy regularization actually has a positive effect on learning.

Table 9: Runtimes (ms) of Sinkhorn approximations for EN-DE embeddings at different dataset sizes. Full Sinkhorn scales quadratically, while all approximations scale at most linearly with the size. Sparse approximations are 2-4x faster than low-rank approximations, and factored OT is multiple times slower due to its iterative refinement scheme. Note that similarity matrix computation time (K) primarily depends on the LSH/Nyström method, not the OT approximation.

              | N = 10000  | N = 20000  | N = 50000
              | K   | OT   | K   | OT    | K   | OT
Full Sinkhorn | 8   | 2950 | 29  | 11760 | OOM | OOM
Factored OT   | 29  | 809  | 32  | 1016  | 55  | 3673
Multiscale OT | 90  | 48   | 193 | 61    | 521 | 126
Nyström Skh.  | 29  | 135  | 41  | 281   | 79  | 683
Sparse Skh.   | 42  | 46   | 84  | 68    | 220 | 137
LCN-Sinkhorn  | 101 | 116  | 242 | 205   | 642 | 624

[Figure 5 plot: runtime (ms) vs. number of neighbors + landmarks, with curves for Multisc. OT, Nys. Skh., Sparse Skh., and LCN-Skh.]
Figure 5: Runtime scales linearly with the number of neighbors/landmarks for all relevant Sinkhorn approximation methods.

[Figure 6 plot: log-log axes, time per epoch (s) vs. avg. graph size, with curves for Full and LCN.]
Figure 6: Log-log runtime per epoch for GTN with full Sinkhorn and LCN-Sinkhorn. LCN-Sinkhorn scales almost linearly with graph size while sustaining similar accuracy.
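For illustration, a minimal sketch of the initial attractiveness growth model described at the start of this appendix (random node/edge labeling and the subsequent edit steps are omitted; the function name and parameters are ours):

```python
import random

def initial_attractiveness_graph(n_total, n_init, attractiveness, seed=0):
    """Grow an undirected graph: each new node attaches to an existing node
    with probability proportional to (degree + attractiveness)."""
    rng = random.Random(seed)
    edges, deg = [], [0] * n_total
    for v in range(n_init, n_total):
        # Tiny constant avoids an all-zero weight vector when attractiveness = 0.
        weights = [deg[u] + attractiveness + 1e-9 for u in range(v)]
        u = rng.choices(range(v), weights=weights)[0]
        edges.append((u, v))
        deg[u] += 1
        deg[v] += 1
    return edges

print(len(initial_attractiveness_graph(n_total=20, n_init=3, attractiveness=2)))
```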
On the pref. att. 200 dataset we use no L2 regularization, $\lambda_{\text{base}} = 0.5$, and a batch size of 200. For pref. att. 2k we use $\lambda_{\text{base}} = 2$ and a batch size of 20 for full Sinkhorn and 100 for LCN-OT. For pref. att. 20k we use $\lambda_{\text{base}} = 50$ and a batch size of 4. $\lambda_{\text{base}}$ scales with graph size due to normalization of the PM kernel.
For LCN-OT we use roughly 10 neighbors for LSH (20 k-means clusters) and 10 k-means landmarks for Nyström on pref. att. 200. We double these numbers for pure Nyström Sinkhorn, sparse OT, and multiscale OT. For pref. att. 2k we use around 15 neighbors (10 · 20 hierarchical clusters) and 15 landmarks, and for pref. att. 20k we use roughly 30 neighbors (10 · 10 · 10 hierarchical clusters) and 20 landmarks. The number of neighbors for the 20k dataset is higher and strongly varies per iteration due to the unbalanced nature of hierarchical k-means. This increase in neighbors and landmarks and PyTorch's missing support for ragged tensors largely explains LCN-OT's deviation from perfectly linear runtime scaling.
We perform all runtime measurements on a compute node using one Nvidia GeForce GTX 1080 Ti, two Intel Xeon E5-2630 v4, and 256GB RAM." }, { "heading": "K RUNTIMES", "text": "Table 9 compares the runtime of the full Sinkhorn distance with different approximation methods using 40 neighbors/landmarks. We separate the computation of the approximate $K$ from the optimal transport computation (Sinkhorn iterations), since the former primarily depends on the LSH and Nyström methods we choose. We observe a 2-4x speed difference between sparse (multiscale OT and sparse Sinkhorn) and low-rank approximations (Nyström Sinkhorn and LCN-Sinkhorn), while factored OT is multiple times slower due to its iterative refinement scheme. In Fig. 5 we observe that this runtime gap stays constant independent of the number of neighbors/landmarks, i.e. the relative difference decreases as we increase the number of neighbors/landmarks. This gap could either be due to details in low-level CUDA implementations and hardware or the fact that low-rank approximations require 2x as many multiplications for the same number of neighbors/landmarks. In either case, both Table 9 and Fig. 5 show that the runtimes of all approximations scale linearly both in the dataset size and the number of neighbors and landmarks, while full Sinkhorn scales quadratically.
We furthermore investigate whether GTN with approximate Sinkhorn indeed scales log-linearly with the graph size by generating preferential attachment graphs with 200, 2000, and 20 000 nodes (±50 %). We use the Pyramid matching (PM) kernel (Nikolentzos et al., 2017) as prediction target. Fig. 6 shows that the runtime of LCN-Sinkhorn scales almost linearly (dashed line) and regular full Sinkhorn quadratically (dash-dotted line) with the number of nodes, despite both achieving similar accuracy and LCN using slightly more neighbors and landmarks on larger graphs to sustain good accuracy. Full Sinkhorn went out of memory for the largest graphs." }, { "heading": "L DISTANCE APPROXIMATION", "text": "Figs. 7 and 8 show that for the chosen $\lambda = 0.05$ sparse Sinkhorn offers the best trade-off between computational budget and distance approximation, with LCN-Sinkhorn and multiscale OT coming in second. Factored OT is again multiple times slower than the other methods and thus not included in Fig. 7. Note that $d^\lambda_c$ can be negative due to the entropy offset. This picture changes as we increase the regularization.
For higher regularizations LCN-Sinkhorn is the most precise at constant computational budget (number of neighbors/landmarks), as shown in Fig. 9. Note that the crossover points in this figure roughly coincide with those in Fig. 4. Keep in mind that in most cases the OT plan is more important than the raw distance approximation, since it determines the training gradient and tasks like embedding alignment don’t use the distance at all. This becomes evident in the fact that sparse Sinkhorn achieves a better distance approximation than LCN-Sinkhorn but performs worse in both downstream tasks investigated in Sec. 7." } ]
2020
null
SP:06414ad3c4b2438227a6d0749755106ee30f1564
[ "The submission presents three contributions. First, the authors show the inconsistencies in the existing annealed Langevin sampling used in score-matching generative models and propose to correct it with the newly proposed Consistent Annealed Sampling (CAS) algorithm. The second contribution claimed is in providing evidence of the benefits of Expected Denoised Sample (EDS). Furthermore, the submission introduces a hybrid adversarial score-matching model that demonstrates improvements in terms of FID on simpler architectures." ]
Denoising Score Matching with Annealed Langevin Sampling (DSM-ALS) has recently found success in generative modeling. The approach works by first training a neural network to estimate the score of a distribution, and then using Langevin dynamics to sample from the data distribution assumed by the score network. Despite the convincing visual quality of samples, this method appears to perform worse than Generative Adversarial Networks (GANs) under the Fréchet Inception Distance, a standard metric for generative models. We show that this apparent gap vanishes when denoising the final Langevin samples using the score network. In addition, we propose two improvements to DSM-ALS: 1) Consistent Annealed Sampling as a more stable alternative to Annealed Langevin Sampling, and 2) a hybrid training formulation, composed of both Denoising Score Matching and adversarial objectives. By combining these two techniques and exploring different network architectures, we elevate score matching methods and obtain results competitive with state-of-the-art image generation on CIFAR-10.
[]
[ { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Pascal Vincent" ], "title": "A connection between score matching and denoising autoencoders", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Martin Raphan", "Eero P Simoncelli" ], "title": "Least squares estimation without priors or supervision", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Gareth O Roberts", "Richard L Tweedie" ], "title": "Exponential convergence of langevin distributions and their discrete approximations", "venue": null, "year": 1996 }, { "authors": [ "Zahra Kadkhodaie", "Eero P Simoncelli" ], "title": "Solving linear inverse problems using the prior implicit in a denoiser", "venue": "arXiv preprint arXiv:2007.13640,", "year": 2020 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Improved techniques for training score-based generative models", "venue": "arXiv preprint arXiv:2006.09011,", "year": 2020 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Song and Ermon (2019) recently proposed a novel method of generating samples from a target distribution through a combination of Denoising Score Matching (DSM) (Hyvärinen, 2005; Vincent, 2011; Raphan and Simoncelli, 2011) and Annealed Langevin Sampling (ALS) (Welling and Teh, 2011; Roberts et al., 1996). Since convergence to the distribution is guaranteed by the ALS, their approach (DSM-ALS) produces high-quality samples and guarantees high diversity. Though, this comes at the cost of requiring an iterative process during sampling, contrary to other generative methods. These generative methods can notably be used to diverse tasks like colorization, image restoration and image inpainting (Song and Ermon, 2019; Kadkhodaie and Simoncelli, 2020).\nSong and Ermon (2020) further improved their approach by increasing the stability of score matching training and proposing theoretically sound choices of hyperparameters. They also scaled their approach to higher-resolution images and showed that DSM-ALS is competitive with other generative models. Song and Ermon (2020) observed that the images produced by their improved model were more visually appealing than the ones from their original work; however, the reported Fréchet Inception Distance (FID) (Heusel et al., 2017) did not correlate with this improvement.\nAlthough DSM-ALS is gaining traction, Generative adversarial networks (GANs) (Goodfellow et al., 2014) remain the leading approach to generative modeling. GANs are a very popular class of generative models; they have been successfully applied to image generation (Brock et al., 2018; Karras et al., 2017; 2019; 2020) and have subsequently spawned a wealth of variants (Radford et al., 2015a; Miyato et al., 2018; Jolicoeur-Martineau, 2018; Zhang et al., 2019). The idea behind this method is to train a Discriminator (D) to correctly distinguish real samples from fake samples generated by a second agent, known as the Generator (G). GANs excel at generating high-quality samples as the discriminator captures features that make an image plausible, while the generator learns to emulate them.\nStill, GANs often have trouble producing data from all possible modes, which limits the diversity of the generated samples. A wide variety of tricks have been developed to address this issue in GANs (Kodali et al., 2017; Gulrajani et al., 2017; Arjovsky et al., 2017; Miyato et al., 2018; JolicoeurMartineau and Mitliagkas, 2019), though it remains an issue to this day. DSM-ALS, on the other hand, does not suffer from that problem since ALS allows for sampling from the full distribution\ncaptured by the score network. Nevertheless, the perceptual quality of DSM-ALS higher-resolution images has so far been inferior to that of GAN-generated images. Generative modeling has since seen some incredible work from Ho et al. (2020), who achieved exceptionally low (better) FID on image generation tasks. Their approach showcased a diffusion-based method (Sohl-Dickstein et al., 2015; Goyal et al., 2017) that shares close ties with DSM-ALS, and additionally proposed a convincing network architecture derived from Salimans et al. (2017).\nIn this paper, after introducing the necessary technical background in the next section, we build upon the work of Song and Ermon (2020) and propose improvements based on theoretical analyses both at training and sampling time. 
Our contributions are as follows:
• We propose Consistent Annealed Sampling (CAS) as a more stable alternative to ALS, correcting inconsistencies relating to the scaling of the added noise;
• We show how to recover the expected denoised sample (EDS) and demonstrate its unequivocal benefits w.r.t. the FID. Notably, we show how to resolve the mismatch observed in DSM-ALS between the visual quality of generated images and its high (worse) FID;
• We propose to further exploit the EDS through a hybrid objective function, combining GAN and Denoising Score Matching objectives, thereby encouraging the EDS of the score network to be as realistic as possible.
In addition, we show that the network architecture used by Ho et al. (2020) significantly improves sample quality over the RefineNet (Lin et al., 2017a) architecture used by Song and Ermon (2020). In an ablation study performed on CIFAR-10 and LSUN-church, we demonstrate how these contributions bring DSM-ALS in range of the state-of-the-art for image generation tasks w.r.t. the FID. The code to replicate our experiments is publicly available at [Available in supplementary material]." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 DENOISING SCORE MATCHING", "text": "Denoising Score Matching (DSM) (Hyvärinen, 2005) consists of training a score network to approximate the gradient of the log density of a certain distribution ($\nabla_x \log p(x)$), referred to as the score function. This is achieved by training the network to approximate a noisy surrogate of $p$ at multiple levels of Gaussian noise corruption (Vincent, 2011). The score network $s$, parametrized by $\theta$ and conditioned on the noise level $\sigma$, is tasked to minimize the following loss:
$$\frac{1}{2}\,\mathbb{E}_{p(\tilde x, x, \sigma)}\left[\left\|\sigma s_\theta(\tilde x, \sigma) + \frac{\tilde x - x}{\sigma}\right\|_2^2\right], \qquad (1)$$
where $p(\tilde x, x, \sigma) = q_\sigma(\tilde x|x)p(x)p(\sigma)$. We further define $q_\sigma(\tilde x|x) = \mathcal{N}(\tilde x|x, \sigma^2 I)$ the corrupted data distribution, $p(x)$ the training data distribution, and $p(\sigma)$ the uniform distribution over a set $\{\sigma_i\}$ corresponding to different levels of noise. In practice, this set is defined as a geometric progression between $\sigma_1$ and $\sigma_L$ (with $L$ chosen according to some computational budget):
$$\{\sigma_i\}_{i=1}^L = \left\{\gamma^i\sigma_1 \,\middle|\, i \in \{0, \ldots, L-1\},\ \gamma \triangleq \frac{\sigma_2}{\sigma_1} = \cdots = \left(\frac{\sigma_L}{\sigma_1}\right)^{\frac{1}{L-1}} < 1\right\}. \qquad (2)$$
Rather than having to learn a different score function for every $\sigma_i$, one can train an unconditional score network by defining $s_\theta(\tilde x, \sigma_i) = s_\theta(\tilde x)/\sigma_i$, and then minimizing Eq. 1. While unconditional networks are less heavy computationally, it remains an open question whether conditioning helps performance. Li et al. (2019) and Song and Ermon (2020) found that the unconditional network produced better samples, while Ho et al. (2020) obtained better results than both of them using a conditional network. Additionally, the denoising autoencoder described in Lim et al. (2020) gives evidence supporting the benefits of conditioning when the noise becomes small (also see App. D and E for a theoretical discussion of the difference). While our experiments are conducted with unconditional networks, we believe our techniques can be straightforwardly applied to conditional networks; we leave that extension for future work." },
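A minimal PyTorch sketch of the conditional DSM objective in Eq. 1; the toy score network and the shape conventions are assumptions for illustration.

```python
import math
import torch

def dsm_loss(score_net, x, sigmas):
    """0.5 * E || sigma * s(x_t, sigma) + (x_t - x)/sigma ||_2^2 with sigma
    drawn uniformly from the geometric grid `sigmas` (Eq. 2)."""
    idx = torch.randint(len(sigmas), (x.shape[0],))
    sigma = sigmas[idx].view(-1, *([1] * (x.dim() - 1)))  # broadcast over dims
    x_tilde = x + sigma * torch.randn_like(x)             # x_t ~ q_sigma(.|x)
    residual = sigma * score_net(x_tilde, sigma) + (x_tilde - x) / sigma
    return 0.5 * residual.flatten(1).pow(2).sum(dim=1).mean()

# Geometric noise grid as in Eq. 2, and a toy score network.
sigmas = torch.exp(torch.linspace(math.log(1.0), math.log(0.01), 10))
toy_score = lambda x_tilde, sigma: -x_tilde / (1 + sigma ** 2)
print(dsm_loss(toy_score, torch.randn(8, 3, 32, 32), sigmas))
```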
{ "heading": "2.2 ANNEALED LANGEVIN SAMPLING", "text": "Given a score function, one can use Langevin dynamics (or Langevin sampling) (Welling and Teh, 2011) to sample from the corresponding probability distribution. In practice, the score function is generally unknown and estimated through a score network trained to minimize Eq. 1. Song and Ermon (2019) showed that Langevin sampling has trouble exploring the full support of the distribution when the modes are too far apart and proposed Annealed Langevin Sampling (ALS) as a solution. ALS starts sampling with a large noise level and progressively anneals it down to a value close to 0, ensuring both proper mode coverage and convergence to the data distribution. Its precise description is shown in Algorithm 1.

Algorithm 1 Annealed Langevin Sampling
Require: $s_\theta$, $\{\sigma_i\}_{i=1}^L$, $\epsilon$, $n_\sigma$
1: Initialize $x$
2: for $i \leftarrow 1$ to $L$ do
3:   $\alpha_i \leftarrow \epsilon\,\sigma_i^2/\sigma_L^2$
4:   for $n_\sigma$ steps do
5:     Draw $z \sim \mathcal{N}(0, I)$
6:     $x \leftarrow x + \alpha_i s_\theta(x, \sigma_i) + \sqrt{2\alpha_i}\,z$
return $x$

Algorithm 2 Consistent Annealed Sampling
Require: $s_\theta$, $\{\sigma_i\}_{i=1}^L$, $\gamma$, $\epsilon$, $\sigma_{L+1} = 0$
1: Initialize $x$
2: $\beta \leftarrow \sqrt{1 - (1 - \epsilon/\sigma_L^2)^2/\gamma^2}$
3: for $i \leftarrow 1$ to $L$ do
4:   $\alpha_i \leftarrow \epsilon\,\sigma_i^2/\sigma_L^2$
5:   Draw $z \sim \mathcal{N}(0, I)$
6:   $x \leftarrow x + \alpha_i s_\theta(x, \sigma_i) + \beta\sigma_{i+1}z$
return $x$
" }, { "heading": "2.3 EXPECTED DENOISED SAMPLE (EDS)", "text": "A little known fact from the Bayesian literature is that one can recover a denoised sample from the score function using the Empirical Bayes mean (Robbins, 1955; Miyasawa, 1961; Raphan and Simoncelli, 2011):
$$s^*(\tilde x, \sigma) = \frac{H^*(\tilde x, \sigma) - \tilde x}{\sigma^2}, \qquad (3)$$
where $H^*(\tilde x, \sigma) \triangleq \mathbb{E}_{x \sim q_\sigma(x|\tilde x)}[x]$ is the expected denoised sample given a noisy sample (or Empirical Bayes mean), conditioned on the noise level. A different way of reaching the same result is through the closed form of the optimal score function, as presented in Appendix D. The corresponding result for the unconditional score function is presented in Appendix E for completeness.
The EDS corresponds to the expected real image given a corrupted image; it can be thought of as what the score network believes to be the true image concealed within the noisy input. It has also been suggested that denoising the samples (i.e., taking the EDS) at the end of the Langevin sampling improves their quality (Saremi and Hyvarinen, 2019; Li et al., 2019; Kadkhodaie and Simoncelli, 2020). In Section 4, we provide further evidence that denoising the final Langevin sample brings it closer to the assumed data manifold. In particular, we show that the Fréchet Inception Distance (FID) consistently decreases (improves) after denoising. Finally, in Section 5, we build a hybrid training objective using the properties of the EDS discussed above.
There are interesting links to be made between ALS and the RED algorithm (Romano et al., 2017; Reehorst and Schniter, 2018). The RED algorithm attempts to find the maximum a posteriori probability (MAP) denoised sample (i.e., the most plausible real data) given a noisy sample. It does so by solving an optimization problem to obtain a sample close to the noisy sample for which the EDS is a fixed point (denoising the sample does not change it because it is a real sample). Thus, just like ALS, the RED algorithm generates plausible real data given a score network. However, this algorithm does not ensure that we sample from the distribution and obtain full mode coverage. Thus, ALS's key benefit is ensuring that we sample from the full support of the distribution." },
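A one-line sketch of recovering the EDS from a score network via Eq. 3, checked on a toy Gaussian for which the optimal score (and hence the Empirical Bayes mean) is known in closed form:

```python
import torch

def eds(score_net, x_tilde, sigma):
    """H(x_t, sigma) = x_t + sigma^2 * s(x_t, sigma), rearranging Eq. 3."""
    return x_tilde + sigma ** 2 * score_net(x_tilde, sigma)

# For x ~ N(0, 1) and x_t = x + sigma * z the optimal score is -x_t/(1+sigma^2),
# so the EDS must equal the posterior mean x_t / (1 + sigma^2).
sigma = 0.5
optimal_score = lambda x_tilde, s: -x_tilde / (1 + s ** 2)
x_tilde = torch.randn(4) * (1 + sigma ** 2) ** 0.5
print(eds(optimal_score, x_tilde, sigma), x_tilde / (1 + sigma ** 2))
```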
{ "heading": "3 CONSISTENT SCALING OF THE NOISE", "text": "In this section, we present inconsistencies in ALS relating to the noise scaling and introduce Consistent Annealed Sampling (CAS) as an alternative." }, { "heading": "3.1 INCONSISTENCIES IN ALS", "text": "One can think of the ALS algorithm as a sequential series of Langevin Dynamics (inner loop in Algorithm 1) for decreasing levels of noise (outer loop). If allowed an infinite number of steps $n_\sigma$, the sampling process will properly produce samples from the data distribution.
In ALS, the score network is conditioned on geometrically decreasing noise ($\sigma_i$). In the unconditional case, this corresponds to dividing the score network by the noise level (i.e., $s_\theta(\tilde x, \sigma_i) = s_\theta(\tilde x)/\sigma_i$). Thus, in both conditional and unconditional cases, we make the assumption that the noise of the sample at step $i$ will be of variance $\sigma_i^2$, an assumption upon which the quality of the estimation of the score depends. While choosing a geometric progression of noise levels seems like a reasonable (though arbitrary) schedule to follow, we show that ALS does not ensure such a schedule.
Assume we have the true score function $s^*$ and begin sampling using a real image with some added zero-centered Gaussian noise of standard deviation $\sigma_0 = 50$. In Figure 1a, we illustrate how the intensity of the noise in the sample evolves through ALS and CAS, our proposed sampling, for a given sampling step size $\epsilon$ and a geometric schedule in this idealized scenario. We note that, although a large $n_\sigma$ approaches the real geometric curve, it will only reach it at the limit ($n_\sigma \to \infty$ and $\epsilon \to 0$). Most importantly, Figure 1b highlights how even when the annealing process does converge, the progression of the noise is never truly geometric; we prove this formally in Proposition 1.
Proposition 1. Let $s^*$ be the optimal score function from Eq. 3. Following the sampling described in Algorithm 1, the variance of the noise component in the sample $x$ will remain greater than $\sigma_t^2$ at every step $t$.
The proof is presented in Appendix F. In particular, for $n_\sigma < \infty$, sampling has not fully converged and the remaining noise is carried over to the next iteration of Langevin Sampling. It also follows that for any $s_\theta$ different from the optimal $s^*$, the actual noise at every iteration is expected to be even higher than for the best possible score function $s^*$." }, { "heading": "3.2 ALGORITHM", "text": "We propose Consistent Annealed Sampling (CAS) as a sampling method that ensures the noise level will follow a prescribed schedule for any sampling step size $\epsilon$ and number of steps $L$. Algorithm 2 illustrates the process for a geometric schedule. Note that for a different schedule, $\beta$ will instead depend on the step $t$, as in the general case $\gamma_t$ is defined as $\sigma_{t+1}/\sigma_t$.
Proposition 2. Let $s^*$ be the optimal score function from Eq. 3. Following the sampling described in Algorithm 2, the variance of the noise component in the sample $x$ will consistently be equal to $\sigma_t^2$ at every step $t$.
The proof is presented in Appendix G. Importantly, Proposition 2 holds no matter how many steps $L$ we take to decrease the noise geometrically. For ALS, $n_\sigma$ corresponds to the number of steps per level of noise. It plays a similar role in CAS: we simply dilate the geometric series of noise levels used during training by a factor of $n_\sigma$, such that $L_{\text{sampling}} = (L_{\text{training}} - 1)n_\sigma + 1$. Note that the proposition only holds when the initial sample is a corrupted image (i.e., $x_0 = I + \sigma_0 z_0$). However, by defining $\sigma_0$ as the maximum Euclidean distance between all pairs of training data points (Song and Ermon, 2020), the noise becomes in practice much greater than the true image; sampling with pure noise initialization (i.e., $x_0 = \sigma_0 z_0$) becomes indistinguishable from sampling with data initialization."
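A compact PyTorch sketch of Algorithm 2 for the geometric schedule; the values of ε, L, and the toy score network are illustrative assumptions.

```python
import math
import torch

def consistent_annealed_sampling(score_net, sigmas, eps, shape):
    """Algorithm 2: alpha_i = eps * sigma_i^2 / sigma_L^2 and a single noise
    injection of scale beta * sigma_{i+1} per step (sigma_{L+1} = 0), so the
    noise in x follows the geometric schedule exactly (Proposition 2)."""
    gamma, sigma_L = sigmas[1] / sigmas[0], sigmas[-1]
    # beta is real-valued only if sigma_L^2 (1-gamma) <= eps <= sigma_L^2 (1+gamma).
    beta = (1 - (1 - eps / sigma_L ** 2) ** 2 / gamma ** 2) ** 0.5
    x = sigmas[0] * torch.randn(shape)  # pure-noise initialization
    for i, sigma in enumerate(sigmas):
        alpha = eps * sigma ** 2 / sigma_L ** 2
        sigma_next = sigmas[i + 1] if i + 1 < len(sigmas) else 0.0
        x = x + alpha * score_net(x, sigma) + beta * sigma_next * torch.randn(shape)
    return x

sigmas = torch.exp(torch.linspace(math.log(10.0), math.log(0.01), 500))
score = lambda x, s: -x / (1 + s ** 2)  # exact score when the data is N(0, 1)
x = consistent_annealed_sampling(score, sigmas, eps=5e-5, shape=(10000,))
# Mean near 0; std somewhat below 1 for this finite schedule. The bias
# shrinks as L grows and eps decreases jointly.
print(x.mean(), x.std())
```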
}, { "heading": "4 BENEFITS OF THE EDS ON SYNTHETIC DATA AND IMAGE GENERATION", "text": "As previously mentioned, it has been suggested that one can obtain better samples (closer to the assumed data manifold) by taking the EDS of the last Langevin sample. We provide further evidence of this with synthetic data and standard image datasets.\nIt can first be observed that the sampling steps correspond to an interpolation between the previous point and the EDS, followed by the addition of noise.\nProposition 3. Given a noise-conditional score function, the update rules from Algorithm 1 and Algorithm 2 are respectively equivalent to the following update rules:\nx← (1− η)x+ ηH(x, σi) + √ 2ησiz for z ∼ N (0, I) and η = σ2L x← (1− η)x+ ηH(x, σi) + βσi+1z\nThe demonstration is in Appendix H. This result is equally true for an unconditional score network, with the distinction that η would no longer be independent of σi but rather linearly proportional to it.\nIntuitively, this implies that the sampling steps slowly move the current sample towards a moving target (the EDS). If the sampling behaves appropriately, we expect the final sample x to be very close to the EDS, i.e., x ≈ H(x, σL). However, if the sampling step size is inappropriate, or if the EDS does not stabilize to a fixed point near the end of the sampling, these two quantities may be arbitrarily far from one another. As we will show, the FIDs from Song and Ermon (2020) suffer from such distance.\nFrom Proposition 3, we see that CAS shares some similarities with the algorithm by Kadkhodaie and Simoncelli (2020). While the weight we give to the denoiser (η) decreases geometrically (by its linearity in σ), their schedule appears to be much steeper. They also estimate the residual noise in their samples by the l2 norm instead of determining it through a schedule, as CAS strives to do. As a note, we had found weak evidence during development that estimating the residual noise worsened the FID.\nThe equivalence showed in Proposition 3 suggests instead to take the expected denoised sample at the end of the Langevin sampling as the final sample; this would be equivalent to the update rule x← H(x, σL) at the last step. Synthetic 2D examples shown in Figure 2 demonstrate the immediate benefits of this technique.\nWe train a score network on CIFAR-10 (Krizhevsky et al., 2009) and report the FID from both ALS and CAS as a function of the sampling step size and of denoising in Figure 3. The first observation to be made is just how critical denoising is to the FID score for ALS, even as its effect cannot be perceived by the human eye. For CAS, we note that the score remains small for a much wider range of sampling step sizes when denoising. Alternatively, the sampling step size must be very carefully tuned to obtain results close to the optimal.\nFigure 3 also shows that, with CAS, the FID of the final sample is approximately equal to the FID of the denoised samples for small sampling step sizes. Furthermore, we see a smaller gap in FID between denoised and non-denoised for larger sampling step sizes than ALS. This suggests that consistent sampling is resulting in the final sample being closer to the assumed data manifold (i.e., x ≈ Hθ(x, σL)). Interestingly, when Song and Ermon (2020) improved their score matching method, they could not explain why the FID of their new model did not improve even though the generated images looked better visually. 
To resolve that matter, they proposed the use of a new metric (Zhou et al., 2019) that did not have this issue. As shown in Figure 3, denoising resolves this mismatch." }, { "heading": "5 ADVERSARIAL FORMULATION", "text": "The score network is trained to recover an uncorrupted image from a noisy input by minimizing the $\ell_2$ distance between the two. However, it is well known from the image restoration literature that $\ell_2$ does not correlate well with human perception of image quality (Zhang et al., 2012; Zhao et al., 2016). One way to take advantage of the EDS would be to encourage the score network to produce an EDS that is more realistic from the perspective of a discriminator. Intuitively, this would incentivize the score network to produce more discernible features at inference time.
We propose to do so by training the score network to simultaneously minimize the score-matching loss function and maximize the probability of denoised samples being perceived as real by a discriminator. We use alternating gradient descent to sequentially train a discriminator for a determined number of steps at every score function update.
In our experiments, we selected the Least Squares GAN (LSGAN) (Mao et al., 2017) formulation as it performed best (see Appendix B for details). For an unconditional score network, the objective functions are as follows:
$$\min_\varphi\ \mathbb{E}_{p(x)}\big[(D_\varphi(x) - 1)^2\big] + \mathbb{E}_{p(\tilde x, x, \sigma)}\big[(D_\varphi(H_\theta(\tilde x, \sigma)) + 1)^2\big] \qquad (4)$$
$$\min_\theta\ \mathbb{E}_{p(\tilde x, x, \sigma)}\left[(D_\varphi(H_\theta(\tilde x, \sigma)) - 1)^2 + \frac{\lambda}{2}\left\|\sigma s_\theta(\tilde x, \sigma) + \frac{\tilde x - x}{\sigma}\right\|_2^2\right], \qquad (5)$$
where $H_\theta(\tilde x, \sigma) = s_\theta(\tilde x, \sigma)\sigma^2 + \tilde x$ is the EDS derived from the score network. Eq. 4 is the objective function of the LSGAN discriminator, while Eq. 5 is the adversarial objective function of the score network derived from Eq. 1 and from the LSGAN objective function.
We note the similarities between these objective functions and those of an LSGAN adversarial autoencoder (Makhzani et al., 2015; Tolstikhin et al., 2017; Tran et al., 2018), with the distinction of using a denoising autoencoder $H$ as opposed to a standard autoencoder. We can highlight this difference by reformulating Eq. 5 as:
$$\min_\theta\ \mathbb{E}_{p(\tilde x, x, \sigma)}\left[(D_\varphi(H_\theta(\tilde x, \sigma)) - 1)^2 + \frac{\lambda}{2\sigma^2}\big\|H_\theta(\tilde x, \sigma) - x\big\|_2^2\right]. \qquad (6)$$
As GANs favor quality over diversity, there is a concern that this hybrid objective function might decrease the diversity of samples produced by the ALS. In Section 6.1, we first study image generation improvements brought by this method and then address the diversity concerns with experiments on the 3-StackedMNIST (Metz et al., 2016) dataset in Section 6.2." },
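A minimal PyTorch sketch of one alternating step of the hybrid objective (Eqs. 4 and 6). The toy discriminator, EDS map, and shapes are assumptions for illustration, with H_θ(x̃, σ) = x̃ + σ² s_θ(x̃, σ) as above.

```python
import torch

def discriminator_objective(D, x_real, x_denoised):
    """Eq. 4 (LSGAN): push D toward 1 on real data, toward -1 on EDS samples."""
    return ((D(x_real) - 1) ** 2).mean() + ((D(x_denoised.detach()) + 1) ** 2).mean()

def score_objective(D, H, x, x_tilde, sigma, lam):
    """Eq. 6: adversarial term plus lambda/(2 sigma^2) * ||H(x_t, sigma) - x||^2."""
    h = H(x_tilde, sigma)
    adv = ((D(h) - 1) ** 2).mean()
    dsm = (h - x).flatten(1).pow(2).sum(1).div(2 * sigma.flatten() ** 2).mean()
    return adv + lam * dsm

D = lambda x: x.flatten(1).mean(1)                     # toy discriminator
H = lambda x_tilde, sigma: x_tilde / (1 + sigma ** 2)  # toy EDS map
x = torch.randn(8, 3, 8, 8)
sigma = torch.full((8, 1, 1, 1), 0.5)
x_tilde = x + sigma * torch.randn_like(x)
print(discriminator_objective(D, x, H(x_tilde, sigma)),
      score_objective(D, H, x, x_tilde, sigma, lam=1.0))
```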
{ "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 ABLATION STUDY", "text": "We ran experiments on CIFAR-10 (Krizhevsky et al., 2009) and LSUN-churches (Yu et al., 2015) with the score network architecture used by Song and Ermon (2020). We also ran similar experiments with an unconditional version of the network architecture by Ho et al. (2020), given that their approach is similar to Song and Ermon (2019) and they obtain very small FIDs. For the hybrid adversarial score matching approach, we used an unconditional BigGAN discriminator (Brock et al., 2018). We compared three factors in an ablation study: adversarial training, Consistent Annealed Sampling, and denoising.
Details on how the experiments were conducted are found in Appendix B. Unsuccessful experiments with large images are also discussed in Appendix C. See also Appendix I for a discussion pertaining to the use of the Inception Score (Heusel et al., 2017), a popular metric for generative models.
Results for CIFAR-10 and LSUN-churches with the Song and Ermon (2019) score network architecture are respectively shown in Tables 1 and 2. Results for CIFAR-10 with the Ho et al. (2020) score network architecture are shown in Table 3.
We always observe an improvement in FID from denoising and by increasing $n_\sigma$ from 1 to 5. We observe an improvement from using the adversarial approach with the Song and Ermon (2019) network architecture, but not on denoised samples with the Ho et al. (2020) network architecture. We hypothesize that this is a limitation of the architecture of the discriminator since, as far as we know, no variant of BigGAN achieves an FID smaller than 6. Nevertheless, it remains advantageous for simpler architectures, as shown in Tables 1 and 2. We observe that consistent sampling outperforms non-consistent sampling on the CIFAR-10 task at $n_\sigma = 1$, the quickest way to sample.
We calculated the FID of the non-consistent denoised models from 50k samples in order to compare our method with the recent work from Ho et al. (2020). We obtained a score of 3.65 for the non-adversarial method and 4.02 for the adversarial method on the CIFAR-10 task when sharing their architecture; these scores are close to their reported 3.17. Although not explicit in their approach, Ho et al. (2020) denoised their final sample. This suggests that taking the EDS and using an architecture akin to theirs were the two main reasons for outperforming Song and Ermon (2020). Of note, our method only trains the score network for 300k iterations, while Ho et al. (2020) trained their networks for more than 1 million iterations to achieve similar results." }, { "heading": "6.2 NON-ADVERSARIAL AND ADVERSARIAL SCORE NETWORKS HAVE EQUALLY HIGH DIVERSITY", "text": "To assess the diversity of generated samples, we evaluate our models on the 3-Stacked MNIST generation task (Metz et al., 2016) (128k images of 28x28), consisting of numbers from the MNIST dataset (LeCun et al., 1998) superimposed on 3 different channels. We trained non-adversarial and adversarial score networks in the same way as the other models. The results are shown in Table 4.
We see that each of the 1000 modes is covered, though the KL divergence is still inferior to PACGAN (Lin et al., 2018), meaning that the mode proportions are not perfectly uniform. Blindness to mode proportions is thought to be a fundamental limitation of score-based methods (Wenliang, 2020). Nevertheless, these results confirm a full mode coverage on a task where most GANs struggle and, most importantly, that using a hybrid objective does not hurt the diversity of the generated samples." }, { "heading": "7 CONCLUSION", "text": "We proposed Consistent Annealed Sampling as an alternative to Annealed Langevin Sampling, which ensures the expected geometric progression of the noise and brings the final samples closer to the data manifold. We showed how to extract the expected denoised sample and how to use it to further improve the final Langevin samples. We proposed a hybrid approach between GAN and score matching. With experiments on synthetic and standard image datasets, we showed that these approaches generally improved the quality and diversity of the generated samples.
We found equal diversity (coverage of all 1000 modes) for the adversarial and non-adversarial variants of the difficult StackedMNIST problem.
Since we also observed better performance (lower FIDs) in our other adversarial models trained on images, we conclude that making score matching adversarial increases the quality of the samples without decreasing their diversity. These findings imply that score matching performs better than most GANs and on par with state-of-the-art GANs. Furthermore, our results suggest that hybrid methods, combining multiple generative techniques, are a very promising direction to pursue.

As future work, these models should be scaled to larger batch sizes on high-resolution images, since GANs have been shown to produce outstanding high-resolution images at very large batch sizes (2048 or more). We also plan to further study the theoretical properties of CAS by considering its corresponding stochastic differential equation." } ]
null
ADVERSARIAL SCORE MATCHING AND IMPROVED SAMPLING FOR IMAGE GENERATION
SP:f61e427d087e7f8b176a518af6088bde2ab75167
[ "This paper proposes an approach based on Fourier transforms to predict ratings in collaborative filtering problems. The paper’s scope (“smooth reconstruction functions”) gets immediately narrowed down to Fourier transforms--it would be nice to provide some motivation for this choice over alternative smooth functions. The paper then clusters the users as a way to reduce the number of parameters in the model, given that the Fourier transform itself does not reduce it. As a further step, the clustering is replaced by a soft-clustering learned by a neural network. In the experiments, the RMSE of the rating prediction problem is worse than some baselines and better than others. " ]
The problem of predicting the ratings of a set of users for a set of items in a recommender system, based on partial knowledge of the ratings, is widely known as collaborative filtering. In this paper, we consider a mapping of the items into a vector space and study the prediction problem by assuming an underlying smooth preference function for each user; quantizing its value at a given item vector yields the associated rating. To estimate the preference functions, we implicitly cluster the users with similar ratings to form dominant types. Next, we associate each dominant type with a smooth preference function; i.e., the function values for items with nearby vectors shall be close to each other. The latter is accomplished by a rich representation learning in a so-called frequency domain. In this framework, we propose two approaches for learning user and item representations. First, we use an alternating optimization method in the spirit of k-means to cluster users and map items. We further make this approach less prone to overfitting by a boosting technique. Second, we present a feedforward neural network architecture consisting of interpretable layers which implicitly clusters the users. The performance of the method is evaluated on two benchmark datasets (ML-100k and ML-1M). Albeit simple, the method shows remarkable performance and opens an avenue for future research. All code is publicly available on GitLab.
[]
[ { "authors": [ "Rianne van den Berg", "Thomas N Kipf", "Max Welling" ], "title": "Graph convolutional matrix completion", "venue": "arXiv preprint arXiv:1706.02263,", "year": 2017 }, { "authors": [ "James Davidson", "Benjamin Liebald", "Junning Liu", "Palash Nandy", "Taylor Van Vleet", "Ullas Gargi", "Sujoy Gupta", "Yu He", "Mike Lambert", "Blake Livingston" ], "title": "The youtube video recommendation system", "venue": "In Proceedings of the fourth ACM conference on Recommender systems,", "year": 2010 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Neural network matrix factorization", "venue": "arXiv preprint arXiv:1511.06443,", "year": 2015 }, { "authors": [ "Carlos A Gomez-Uribe", "Neil Hunt" ], "title": "The netflix recommender system: Algorithms, business value, and innovation", "venue": "ACM Transactions on Management Information Systems (TMIS),", "year": 2015 }, { "authors": [ "F Maxwell Harper", "Joseph A Konstan" ], "title": "The movielens datasets: History and context", "venue": "Acm transactions on interactive intelligent systems (tiis),", "year": 2015 }, { "authors": [ "Jason Hartford", "Devon R Graham", "Kevin Leyton-Brown", "Siamak Ravanbakhsh" ], "title": "Deep models of interactions across sets", "venue": "arXiv preprint arXiv:1803.02879,", "year": 2018 }, { "authors": [ "Xiangnan He", "Lizi Liao", "Hanwang Zhang", "Liqiang Nie", "Xia Hu", "Tat-Seng Chua" ], "title": "Neural collaborative filtering", "venue": "In Proceedings of the 26th international conference on world wide web,", "year": 2017 }, { "authors": [ "Andriy Mnih", "Russ R Salakhutdinov" ], "title": "Probabilistic matrix factorization", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Federico Monti", "Michael Bronstein", "Xavier Bresson" ], "title": "Geometric matrix completion with recurrent multi-graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Nikhil Rao", "Hsiang-Fu Yu", "Pradeep K Ravikumar", "Inderjit S Dhillon" ], "title": "Collaborative filtering with graph information: Consistency and scalable methods", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Steffen Rendle", "Li Zhang", "Yehuda Koren" ], "title": "On the difficulty of evaluating baselines: A study on recommender systems", "venue": null, "year": 1905 }, { "authors": [ "Suvash Sedhain", "Aditya Krishna Menon", "Scott Sanner", "Lexing Xie" ], "title": "Autorec: Autoencoders meet collaborative filtering", "venue": "In Proceedings of the 24th international conference on World Wide Web,", "year": 2015 }, { "authors": [ "Sungyong Seo", "Jing Huang", "Hao Yang", "Yan Liu" ], "title": "Interpretable convolutional neural networks with dual local and global attention for review rating prediction", "venue": "In Proceedings of the Eleventh ACM Conference on Recommender Systems,", "year": 2017 }, { "authors": [ "Florian Strub", "Romaric Gaudel", "Jérémie Mary" ], "title": "Hybrid recommender system based on autoencoders", "venue": "In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems,", "year": 2016 }, { "authors": [ "Nguyen Xuan Vinh", "Julien Epps", "James Bailey" ], "title": "Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance", "venue": "The Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Yao Wu", "Christopher DuBois", "Alice X Zheng", 
"Martin Ester" ], "title": "Collaborative denoising autoencoders for top-n recommender systems", "venue": "In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining,", "year": 2016 }, { "authors": [ "Muhan Zhang", "Yixin Chen" ], "title": "Inductive matrix completion based on graph neural networks", "venue": "arXiv preprint arXiv:1904.12058,", "year": 2019 }, { "authors": [ "Shuai Zhang", "Lina Yao", "Aixin Sun", "Sen Wang", "Guodong Long", "Manqing Dong" ], "title": "Neurec: On nonlinear transformation for personalized ranking", "venue": "arXiv preprint arXiv:1805.03002,", "year": 2018 }, { "authors": [ "Shuai Zhang", "Lina Yao", "Aixin Sun", "Yi Tay" ], "title": "Deep learning based recommender system: A survey and new perspectives", "venue": "ACM Computing Surveys (CSUR),", "year": 2019 } ]
[ { "heading": null, "text": "The problem of predicting the rating of a set of users to a set of items in a recommender system based on partial knowledge of the ratings is widely known as collaborative filtering. In this paper, we consider a mapping of the items into a vector space and study the prediction problem by assuming an underlying smooth preference function for each user, the quantization at each given vector yields the associated rating. To estimate the preference functions, we implicitly cluster the users with similar ratings to form dominant types. Next, we associate each dominant type with a smooth preference function; i.e., the function values for items with nearby vectors shall be close to each other. The latter is accomplished by a rich representation learning in a so called frequency domain. In this framework, we propose two approaches for learning user and item representations. First, we use an alternating optimization method in the spirit of k-means to cluster users and map items. We further make this approach less prone to overfitting by a boosting technique. Second, we present a feedforward neural network architecture consisting of interpretable layers which implicitely clusters the users. The performance of the method is evaluated on two benchmark datasets (ML-100k and ML-1M). Albeit the method benefits from simplicity, it shows a remarkable performance and opens a venue for future research. All codes are publicly available on the GitLab." }, { "heading": "1 INTRODUCTION", "text": "Nowadays, recommender systems (RS) are among the most effective ways for large companies to attract more customers. A few statistics are sufficient to attract attention towards the importance of RS: 80 percent of watched movies on Netflix and 60 percent of video clicks on Youtube are linked with recommendations (Gomez-Uribe & Hunt, 2015; Davidson et al., 2010). However, the world of RS is not limited to video industry.\nIn general, recommender systems can be categorized into three groups (Zhang et al., 2019): collaborative filtering (CF), content-based RS, and hybrid RS depending on the used data type. In this paper, we focus on CF, which uses historical interactions to make recommendations. There might be some auxiliary information available to the CF algorithm (like the user personal information); however, a general CF method does not take such side information into account (Zhang & Chen, 2019). This includes our approach in this paper.\nRecently, deep learning has found its way to RS and specifically CF methods. Deep networks are able to learn non-linear representations with powerful optimization tools, and their efficient implementations have made then promising CF approaches. However, a quick look at some pervasive deep networks in RS (e.g., He et al. (2017) and Wu et al. (2016)) shows that the utilization of deep architectures is limited to shallow networks. Still, it is unclear why networks have not gone deeper in RS in contrast to other fields like computer vision (Zhang et al., 2019). We suppose that the fundamental reason that limits the application of a deeper structure is the absence of interpretability (look at Seo et al. (2017), for example). 
Here, interpretability can be defined in two ways (Zhang et al., 2019): first, users should be aware of the purpose behind a recommendation; second, the system operator should know how manipulating the system will affect the predictions (Zhang et al., 2018).

This paper addresses both issues by formulating the recommendation as a smooth reconstruction of user preferences. Particularly, our contributions are:

• The CF problem is formulated as the reconstruction of user preference functions under minimal assumptions.

• An alternating optimization method is proposed that effectively optimizes a non-convex loss function and extracts user and item representations. In this regard, effective clustering methods are proposed and tested.

• A feed-forward shallow architecture is introduced, which has interpretable layers and performs well in practice.

• Despite the simplicity and interpretability of the methods, their performance on benchmark datasets is remarkable." }, { "heading": "1.1 RELATED WORKS", "text": "The methods applied in CF are diverse and too numerous to list exhaustively. Below, we describe a few well-known methods that are most related to our work.

Multilayer perceptron based models. Natural extensions of matrix factorization (MF) methods (Mnih & Salakhutdinov, 2008) are Neural Collaborative Filtering (NCF) (He et al., 2017) and Neural Network Matrix Factorization (NNMF) (Dziugaite & Roy, 2015). Both methods extend the idea behind MF and use the outputs of two networks as the user and item representations; the prediction is made by the inner product of the two representations. Although our work has some similarity to these methods, we model users by functions and represent these functions in a so-called frequency domain; thus, user and item representations do not live in the same space.

AutoEncoder based models. AutoRec (Sedhain et al., 2015) and CFN (Strub et al., 2016) are well-known autoencoder (AE) structures that transform partial observations (user-based or item-based) into full row or column data. Our method differs from AE structures as our network uses item (user) representations and predicts user (item) ratings." }, { "heading": "2 SMOOTH RECONSTRUCTION FROM NON-UNIFORM SAMPLES", "text": "Rating as the output of the preference function. Most of the time, a finite set of features can characterize the users and items that constitute the recommendation problem. Although no two users or items are exactly the same, the number of characterizing features can be considerably small without losing much information. Assume that item i is characterized by the vector x_i ∈ X ⊂ R^d. We further assume that all users observe similar features of an item and that user u's ratings are determined by a preference function f_u : X → [c_min, c_max]. The recovery of a general preference function might need an indefinite number of samples, i.e., observed ratings. However, we do not expect user attitudes to change much with small changes in an item's features. E.g., if the price is the decisive factor in someone's preference, small changes in the price should not change the preference for this item significantly (see figure 1).

Reconstruction of bandlimited 1D signals. Let us start with the simplest case. Consider s[n], n = 0, 1, ..., N − 1, a 1D signal of length N. We say s has bandwidth M < N/2 if there is a representation ŝ[m], m = −M, −M+1, ..., M−1, M, that represents s as:
$$s[n] = \sum_{m=-M}^{M} \hat{s}[m]\, e^{j2\pi mn/N}. \tag{1}$$
Hence, 2M + 1 distinct samples of s suffice to compute ŝ.
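As a quick numerical illustration of this claim (a toy sketch; the bandwidth M, length N and sample locations are arbitrary assumptions), the coefficients ŝ can be recovered from 2M + 1 samples by solving the corresponding linear system:

```python
import numpy as np

# Toy check that 2M+1 samples determine a bandlimited signal (Eq. 1).
rng = np.random.default_rng(0)
N, M = 64, 5
s_hat_true = rng.standard_normal(2 * M + 1) + 1j * rng.standard_normal(2 * M + 1)
m = np.arange(-M, M + 1)

def signal(n):
    # s[n] = sum_m s_hat[m] * exp(j 2 pi m n / N)
    return np.exp(2j * np.pi * np.outer(n, m) / N) @ s_hat_true

n_samples = rng.choice(N, size=2 * M + 1, replace=False)   # 2M+1 distinct samples
E = np.exp(2j * np.pi * np.outer(n_samples, m) / N)        # (2M+1)x(2M+1) system matrix
s_hat_est = np.linalg.solve(E, signal(n_samples))
print(np.allclose(s_hat_est, s_hat_true))                  # True
```

The system matrix is invertible because the sample locations are distinct, which is exactly why 2M + 1 distinct samples are enough.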
For an analytical approach, it is useful to interpret equation 1 as a discretization of a continuous trigonometric expansion:
$$h(x) = \sum_{m=-M}^{M} a_m e^{j2\pi m x}, \quad a \in \mathbb{C}^{2M+1},\; x \in [0,1). \tag{2}$$

Mirroring. Smoothness is usually used to refer to signals bandlimited around the zero frequency, which can be represented by equation 1. However, we use "smooth finite-length signal" to refer to a real-valued finite-length signal that behaves intuitively smoothly on its non-zero domain. Figure 2 shows an example. The trigonometric functions in equation 2 cannot approximate such signals well, even if we shift and scale the domain to [0,1], because the original signal is not periodic. One possible solution that makes the trigonometric functions a good representative for the finite-length signal is mirroring. Figure 2 shows the shifted, scaled, and mirrored signals in 1D space.

Extension to multi-dimensional real-valued mirrored signals. For a real-valued mirrored s, equation 2 simplifies to cosine terms only. One can extend equation 2 to real-valued mirrored signals as:
$$h(x) = h(x_1,\dots,x_d) = \sum_{m_1=0}^{M}\cdots\sum_{m_d=0}^{M} A_{m_1,\dots,m_d} \cos\big(\pi(m_1 x_1 + \dots + m_d x_d)\big), \tag{3}$$
where x ∈ [0,1]^d and A is a d-dimensional real tensor. To simplify notation, we write m^T x for m_1 x_1 + ... + m_d x_d and a for the vectorized A. Starting from m_1 = ... = m_d = 0, all possible values of m can be stacked as the rows of a matrix C ∈ R^{(M+1)^d × d}, in the same order as they appear in the vectorization of A. We can then rewrite equation 3 with matrix operations:
$$h(x) = \sum_{k=1}^{(M+1)^d} a_k \cos\big(\pi\, C_{k,:}\, x\big). \tag{4}$$

Vandermonde matrix. Given r non-uniform samples in [0,1]^d, the Fourier coefficients a solve the linear system h(x_i) = s_i, i = 1, 2, ..., r. The Vandermonde matrix for this system is defined as:
$$V = \cos\big(\pi\, C\,[x_1, x_2, \dots, x_r]\big), \tag{5}$$
where cos(·) is applied element-wise and [...] denotes the stacking of column vectors. The linear system of equations can thus be written compactly as V^T a = s, where s is the column vector of the r observed s_i. In contrast to the 1D case, there is no simple theorem on the conditions under which a can be estimated correctly. Roughly speaking, the rank of V must exceed the number of unknowns, (M+1)^d; in other words, the number of samples r should be sufficiently larger than (M+1)^d.

Reconstruction of the preference function from the observed ratings. We can state the rating prediction problem as the reconstruction of the preference function f_u of each user given that user's observed ratings I_u. If we assign a d-dimensional characterization vector x_i to each item i, assumed to lie in X = [0,1]^d, we can estimate the Fourier coefficients of user u as a_u = (V_u^T)^† s_u. At the starting point, we do not know how the items are distributed in X, which means V_u will be inaccurate. Optimizing the reconstruction loss therefore yields fair characteristics for the items in X:
$$\min_{x_i,\, i \in \mathcal{I}} \sum_{u \in \mathcal{U}} \big\| V_u^T (V_u^T)^{\dagger} s_u - s_u \big\|^2. \tag{6}$$" }, { "heading": "3 LEARNING REPRESENTATIONS BY MINIMIZING RECONSTRUCTION LOSS", "text": "Minimizing equation 6, aside from the non-convexity of the cost function, implicitly involves solving V_u^T a_u = s_u, which can in general be an ill-conditioned system of linear equations, especially when user u has few recorded ratings.
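To make the objects in equations 5 and 6 concrete, here is a small sketch of building the cosine Vandermonde matrix and the per-user least-squares estimate a_u = (V_u^T)^† s_u. The dimensions d and M, the item vectors, and the ratings are all illustrative assumptions:

```python
import itertools
import numpy as np

d, M = 2, 3                                            # toy feature dimension and bandwidth
C = np.array(list(itertools.product(range(M + 1), repeat=d)))  # ((M+1)^d, d) frequency rows

def vandermonde(X):
    # Eq. 5: V = cos(pi C [x_1 ... x_r]); X holds one item vector per column.
    return np.cos(np.pi * C @ X)

rng = np.random.default_rng(0)
X_items = rng.random((d, 50))                          # item characteristics in [0,1)^d
ratings = rng.integers(1, 6, size=50).astype(float)    # one user's observed ratings

V_u = vandermonde(X_items)                             # ((M+1)^d, r)
a_u = np.linalg.pinv(V_u.T) @ ratings                  # a_u = (V_u^T)^dagger s_u
recon_loss = np.sum((V_u.T @ a_u - ratings) ** 2)      # this user's term in Eq. 6
```

With r = 50 samples and (M+1)^d = 16 unknowns, the system is overdetermined, which matches the requirement that r be sufficiently larger than (M+1)^d.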
To reliably estimate the Fourier coefficients a_u (user representations), we group similar users into a number of clusters and use a single representative per cluster (virtually increasing the number of available ratings). In addition, we consider a Tikhonov (L2) regularizer to improve the condition number. With this approach, we need to solve
$$\min_{\{x_i,\, i\in\mathcal{I}\},\, c}\; L = \min \sum_{u\in\mathcal{U}} \big\| V_{c(u)}^T a_{c(u)} - s_u \big\|^2 + \lambda \sum_{k\in\mathcal{C}} \| a_k \|^2, \quad \text{s.t. } 0 \le x_i < 1, \tag{7}$$
where c : U → C maps users to clusters, C is the set of clusters, and V_k is the Vandermonde matrix associated with cluster k (treating all the users in a cluster as a single super-user). Hence, V_k is a function of {x_i, i ∈ ∪_{u: c(u)=k} I_u}. The penalty parameter λ is tuned via cross-validation. Moreover, the Fourier coefficients a_k of cluster k are obtained by:
$$a_k = (V_k V_k^T + \lambda I)^{-1} V_k s_k, \tag{8}$$
where s_k is the vector of all observed ratings in cluster k. In the sequel, we propose two approaches for minimizing equation 7. In the first approach (Section 3.1), we alternately solve min over {x_i} of L and min over c of L; as the latter requires a combinatorial search, we introduce an approximate algorithm named k-representation (Section 3.1.1), inspired by the k-means technique. Each iteration of k-representation consists of assigning each user to the cluster with the lowest reconstruction loss and updating the cluster representatives. In the second approach (Section 3.2), we train a neural network to jointly characterize the items and cluster the users. For this, the loss function of equation 7 is modified to accommodate soft clustering." }, { "heading": "3.1 ALTERNATING OPTIMIZATION", "text": "3.1.1 k-REPRESENTATION FOR CLUSTERING THE USERS

The total loss in equation 7 can be divided into partial losses of the form
$$L_k = \| V_k^T a_k - s_k \|^2 \tag{9}$$
for each cluster k. We propose the k-representation method (Algorithm 1) to minimize the overall cost iteratively. We first randomly set the a_k, k ∈ C; then, each user is assigned to the cluster k for which its reconstruction loss (equation 7) is minimized. After dividing the users into clusters, the representative of each cluster is updated via equation 8, and we return to the assignment step. As with k-means, there is no theoretical guarantee that the method converges to the global optimizer; nevertheless, the overall loss decreases at each iteration. We evaluate the performance of the method in Appendix A.1 on both synthetic and real data.

Boosted k-representation. By introducing both the clustering and the L2 regularization in equation 8, we have improved the robustness of the inverse problem. However, increasing the number of clusters remains a potential issue when estimating the cluster representatives. Here, we propose to learn an ensemble of weak binary clusterings instead of learning all clusters together (Algorithm 2). The idea is to compute the residuals of the predicted ratings for each user and fit a new clustering to the residuals. Due to the linearity of the prediction, the final representation of each user is the sum of the weak representations.

Algorithm 1 k-representation
Input:
    item characteristics {x_i, i ∈ I}
    available ratings of each user {I_u, s_u, u ∈ U}
    number of clusters |C|
    initialization variance of cluster representations σ²
    L2 penalty parameter λ
Output:
    user clustering c : U → C
    cluster representatives {a_k, k ∈ C}
procedure k-REPRESENTATION
    init a_k ∼ N(0, σ²) for all k ∈ C
    calculate V_k from the x_i
    repeat
        for all u ∈ U do c(u) ← argmin_k ‖V_u^T a_k − s_u‖²
        for all k ∈ C do update a_k via equation 8
    until convergence
    return {a_k} and c
end procedure

Algorithm 2 boosted k-representation
Input: same as Algorithm 1
Output: user representatives {a_u, u ∈ U}
procedure BOOSTED k-REPRESENTATION
    for all u ∈ U do a_u ← 0, s_u^(0) ← s_u
    for l = 1 : ⌈log₂ |C|⌉ do
        {a_k^(l)}, c^(l) ← k-representation({s_u^(l−1)})
        for all u ∈ U do
            a_u^(l) ← a^(l)_{c^(l)(u)},  s_u^(l) ← s_u^(l−1) − V_u^T a_u^(l),  a_u ← a_u + a_u^(l)
    end for
end procedure" }, { "heading": "3.1.2 OPTIMIZING ITEM CHARACTERISTICS", "text": "The second part of the alternating optimization is to minimize equation 7 w.r.t. the item characteristics {x_i, i ∈ I}, subject to 0 ≤ x_i < 1. To simplify the equations, we rewrite the total loss in equation 7 with a small modification as:
$$L = \sum_{(u,i)\in O^{+}} \rho(l_{u,i}), \tag{10}$$
where l_{u,i} = |S_{u,i} − v_i^T a_u|² and O⁺ is the set of observed ratings. Here, ρ is a saturating function that reduces the effect of outliers. In equation 7 it is simply the identity; however, we choose ρ(y) = 2(√(1+y) − 1) to better bound unpredictable errors. We use the Trust Region Reflective (TRF) algorithm of the scipy Python package as the optimizer, which selects the update steps adaptively and without supervision. To facilitate the optimization, we need the gradient of l_{u,i} w.r.t. {x_i}. Clearly, ∇_{x_j} l_{u,i} is zero for all j ≠ i. For j = i and q = 1, ..., d (the dimension of X) we have:
$$\frac{\partial}{\partial x_{i,q}}\, l_{u,i} = 2\,(S_{u,i} − v_i^T a_u) \sum_{n=1}^{(M+1)^d} \pi\, a_{u,n} \sin(\pi C_{n,:} x_i)\, C_{n,q}. \tag{11}$$

Pre-search. Before using the TRF to optimize the loss, we make use of the current function evaluations to update the item characteristics. Suppose we are at iteration t and the values {x_i^{(t−1)}, V_i^{(t−1)}, a_i^{(t−1)}} from the previous iteration are available. We define
$$x_i^{(t−\frac12)} = x^{(t−1)}_{\operatorname{argmin}_{j\in\mathcal{I}} \| s_i − V_j^{(t−1)} a_i^{(t−1)} \|^2}. \tag{12}$$
Then, we use {x_i^{(t−1/2)}} as the input to the TRF and run a few iterations to obtain {x_i^{(t)}}. This pre-search makes the optimization process less prone to stagnating in local minima." }, { "heading": "3.2 RECONST-NET: A FEED-FORWARD SHALLOW NETWORK FOR PREFERENCE RECONSTRUCTION", "text": "Recent advances in deep learning have made neural networks great tools even for traditional, well-known algebraic computations. Specifically, neural networks can be optimized effectively with various methods, and efficient implementations speed up training. Further, useful techniques such as batch normalization and drop-out help to avoid overfitting. In this section, we reformulate equation 7 for soft clusters and design an architecture that learns items' characteristics and users' representations concurrently. First, we modify equation 7 for soft clustering. Let c : U → R^{|C|} be the assignment function that determines how much each user belongs to each cluster. We intentionally do not constrain the norm of c and let it be scaled appropriately for different users. The total reconstruction loss is:
$$L = \sum_{u\in\mathcal{U}} \Big\| \Big(\sum_{k\in\mathcal{C}} c_k(u)\, V_k^T a_k\Big) − s_u \Big\|^2 + \lambda \sum_{k\in\mathcal{C}} \| a_k \|^2. \tag{13}$$
Figure 3 shows an architecture inspired by equation 13. It consists of six layers (four hidden), three of which are trainable. The observed ratings are supplied per item. Each neuron at the input layer corresponds to an item, and the input data should be one-hot vectors. The next layer is the X-layer, a dense layer with d units.
We interpret the weights from unit i in the input layer to the X-layer as x_i. At the V-layer, the item's representation x_i is multiplied by C (equation 4) to form v_i; the V-layer has no trainable parameters. Next is the soft-clustering layer, a dense layer with n₁ neurons. We interpret the weights going from the V-layer to unit k of the soft-clustering layer as a_k, i.e., the representation of the k-th soft cluster. Finally, the output layer, or equivalently the User-layer, is a dense layer with |U| units; the weights going from the soft-clustering layer to unit u determine c(u). The drop-out layer (not depicted) after the V-layer plays an important role in preventing overfitting. Dropping out makes sense because observed ratings usually come with considerable uncertainty in real applications, i.e., the same user might rate the same item differently when asked to re-rate, and this is the nature of real data. The drop-out layer prevents overfitting by stopping the network from relying on a specific part of the items' characteristics.

One way to increase the capacity of the method and capture non-linear interactions is to let the user's representation be a non-linear function of the clusters' reconstructed ratings, i.e., to modify equation 13 by applying a non-linear function g to the clusters' reconstructed ratings before combining them; here g can be an arbitrary non-linear function. In figure 3, we propose g as two additional hidden layers (right panel) with tanh activations. We usually choose n₃ = n₁, equal to our expectation of the number of soft clusters in the data; one can still interpret the last hidden layer as the soft-clustering layer." }, { "heading": "3.3 COMBINING MULTI PREDICTORS", "text": "Combining user-based and item-based methods. So far, we have assumed that items are mapped to a vector space and users have preference functions (the user-based method), but there is no reason not to consider the reverse (the item-based method). A simple way of combining the user-based and item-based methods is to perform a linear regression from each method's output to the observed ratings; the validation part of the data is used to estimate the regression coefficients. We will see that combining the user-based and item-based methods significantly improves the prediction of test data.

Leveraging an ensemble of predictors. A more elaborate but effective way of combining is to leverage an ensemble of combined user-based and item-based methods. Suppose we have a predictor f^(t)(S) at iteration t that predicts the observed ratings S used for training the model. At each iteration, we compute the residuals of the predicted ratings and pass them to the next predictor: S^(t+1) = S^(t) − f^(t)(S^(t)). The next predictor, f^(t+1), uses S^(t+1) to train its model. The final predictor leveraged from f^(t), t = 1, 2, ..., T, uses the sum of all predictions: f(S) = Σ_{t=1}^{T} f^(t)(S)." }, { "heading": "4 EXPERIMENTS", "text": "To evaluate the proposed methods, we use three datasets: a synthetic dataset besides the well-known ML-100k and ML-1M datasets (Harper & Konstan, 2015). The details of each dataset can be found in Table 1. The synthetic data is created to assess our clustering methods; for this purpose, a number of cluster representatives are randomly chosen in the low-frequency domain (guaranteeing smoothness), and each user is then placed randomly around one of the representatives. The distance of the users to the associated representative (the within-cluster variance) is varied across tests.
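To accompany these clustering experiments, here is a compact sketch of one assignment/update iteration of k-representation (Algorithm 1, with the ridge update of equation 8). The data shapes, the ridge parameter λ, and the random initialization are illustrative assumptions, and for brevity every user is assumed to have rated all items:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_feat, n_clusters, lam = 100, 50, 10, 4, 0.1

V = rng.standard_normal((n_feat, n_items))       # stands in for cos(pi C X) of Eq. 5
S = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # full rating matrix
A = rng.standard_normal((n_clusters, n_feat))    # cluster representatives a_k

for _ in range(20):
    # Assignment step: each user joins the cluster with the lowest loss (Eq. 9).
    losses = ((S[:, None, :] - A[None] @ V) ** 2).sum(-1)     # (users, clusters)
    c = losses.argmin(axis=1)
    # Update step: ridge solution a_k = (V_k V_k^T + lam I)^{-1} V_k s_k (Eq. 8).
    for k in range(n_clusters):
        users_k = np.flatnonzero(c == k)
        if users_k.size == 0:
            continue                              # empty cluster: keep its old a_k
        V_k = np.tile(V, (1, users_k.size))       # super-user design matrix
        s_k = S[users_k].reshape(-1)              # stacked ratings of the cluster
        A[k] = np.linalg.solve(V_k @ V_k.T + lam * np.eye(n_feat), V_k @ s_k)
```

The loss decreases at each iteration, but, as with k-means, the final clustering depends on the initialization.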
See the Appendix for a detailed discussion.

The MovieLens datasets¹ ML-100k and ML-1M are two of the most common benchmarks for collaborative filtering. For ML-100k, we follow the pre-specified train-test split; i.e., 75%, 5%, and 20% of the total available data are used for training, validation, and testing, respectively. For ML-1M, these numbers are 90%, 5%, and 5%.

¹ https://grouplens.org/datasets/movielens/" }, { "heading": "4.1 PREDICTION EVALUATION", "text": "Training process. The training settings of both proposed methods, alternating optimization and RECONST-NET, are provided in Table 2 for each dataset. Figure 4 shows the RMSE on two different datasets during training. The validation set is used for parameter tuning, and the performance on test data is reported for both methods. Figure 4a clearly reveals the performance gain achieved by combining the user- and item-based techniques. Further, the stair-shaped decrease of the training loss (and validation loss) confirms the suitability of leveraging an ensemble of predictors.

Performance comparison. We further conduct experiments on the ML-100k and ML-1M datasets, ignoring the side information available in both. The performance comparison in Table 3 therefore only includes methods that do not take such side information into account. Although neither alternating optimization nor RECONST-NET records the best RMSE, both yield very good results despite their simplicity and interpretability." }, { "heading": "5 CONCLUSION", "text": "In this article, we formulated the rating prediction problem as a reconstruction problem under a smoothness assumption. The proposed methods are simple and interpretable, yet show significant performance compared to state-of-the-art methods. In particular, we interpreted the different layers of the designed network and validated this interpretation on synthetic data. The proposed architecture and the rich frequency-domain features can serve as a basis for future research on interpretable recommender systems." }, { "heading": "A APPENDIX", "text": "A.1 CLUSTERING EVALUATION

In Section 3.1.1, we proposed k-representation clustering and its boosted version. Here, we study their performance via experiments on synthetic data. We recall that these methods do not explicitly penalize mis-clustering; instead, they minimize the within-cluster reconstruction loss. As a result, we expect them to perform fairly well when the clusters are distinguishable. To measure the match between the identified clusters and the original ones, we employ the Adjusted Rand Index (ARI) (see Vinh et al. (2010)). For two clusterings c₁ and c₂ with the same domain U, let N be the contingency matrix whose (i, j) element is |{u : c₁(u) = i, c₂(u) = j}|. The ARI, defined from the elements, row sums, and column sums of N, intuitively measures the rate of agreement between the two clusterings for a randomly chosen u ∈ U. It takes the maximum value 1 for identical clusterings and values near 0 when the clusterings are fully random with respect to each other. As explained, the ARI has the advantage of comparing two clusterings even with different numbers of clusters.

To use the ARI metric, we need a hard clustering of the users. For this purpose, we associate each user in our User-layer (soft-clustering layer) in Figure 3 with the neuron (cluster) in the last hidden layer that has the largest absolute weight.
Although this technique violates the main goal of soft clustering, it provides a measure of clustering accuracy. In Figure 5, the performance of k-representation (1), boosted k-representation (2), and the modified results from the soft-clustering layer (Figure 3) are depicted for three scenarios. As expected, we obtain inferior results from the soft-clustering layer; however, as the ARI is above zero, the clustering of this layer is not irrelevant. The left plot in Figure 5 presents the ARI curves as a function of the discrimination index of the clusters (the ratio of between-cluster to within-cluster variances). We observe that the boosted k-representation works better at higher discrimination indices but loses its edge for cluttered clusters. The center plot of Figure 5 shows how the ARI changes with the density of the rating matrix (the ratio of observed to non-observed entries). While k-representation performs best at low densities, its performance drops once the density exceeds a threshold; this might be due to the regularization term. Finally, in the right plot of Figure 5, we vary the number of clusters in all methods, keeping the correct number of clusters fixed at 4. As these plots show, neither k-representation nor its boosted version dominates the other in all regimes." } ]
2,020
null
SP:97471b69a8e0ce6d2bbb202cc3f9cd786e77ddea
[ "The theoretical analysis is clearly stated in an well-organized way and the derived sparsity bound is reasonable. With FFNN and CNN, a theorem is given to show that the model is trainable only when the initialization on Edge of Chaos (EOC) and also provided a rescaling method to make the pruned NN into EOC regime. With Resnet, it proves the pruning satisfies the EOC condition by default and further provides re-parameterization method to tackle exploding gradients. The experiments well support theoretical results for both FFNN/CNN and resNet. " ]
Overparameterized Neural Networks (NN) display state-of-the-art performance. However, there is a growing need for smaller, energy-efficient, neural networks to be able to use machine learning applications on devices with limited computational resources. A popular approach consists of using pruning techniques. While these techniques have traditionally focused on pruning pre-trained NN (LeCun et al., 1990; Hassibi et al., 1993), recent work by Lee et al. (2018) has shown promising results when pruning at initialization. However, for Deep NNs, such procedures remain unsatisfactory as the resulting pruned networks can be difficult to train and, for instance, they do not prevent one layer from being fully pruned. In this paper, we provide a comprehensive theoretical analysis of Magnitude and Gradient based pruning at initialization and training of sparse architectures. This allows us to propose novel principled approaches which we validate experimentally on a variety of NN architectures.
[ { "affiliations": [], "name": "ROBUST PRUNING" }, { "affiliations": [], "name": "AT INITIALIZATION" }, { "affiliations": [], "name": "Soufiane Hayou" }, { "affiliations": [], "name": "Jean-Francois Ton" }, { "affiliations": [], "name": "Arnaud Doucet" } ]
[ { "authors": [ "J.M. Alvarez", "M. Salzmann" ], "title": "Compression-aware training of deep networks", "venue": "In 31st Conference in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "S. Arora", "S. Du", "W. Hu", "Z. Li", "R. Salakhutdinov", "R. Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In 33rd Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "M. Carreira-Perpiñán", "Y. Idelbayev", "June" ], "title": "Learning-compression algorithms for neural net pruning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "year": 2018 }, { "authors": [ "X. Dong", "S. Chen", "S. Pan" ], "title": "Learning to prune deep neural networks via layer-wise optimal brain surgeon", "venue": "In 31st Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "S. Du", "X. Zhai", "B. Poczos", "A. Singh" ], "title": "Gradient descent provably optimizes overparameterized neural networks", "venue": "In 7th International Conference on Learning Representations", "year": 2019 }, { "authors": [ "J. Frankle", "M. Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In 7th International Conference on Learning Representations", "year": 2019 }, { "authors": [ "J. Frankle", "G. Dziugaite", "D. Roy", "M. Carbin" ], "title": "Pruning neural networks at initialization: Why are we missing the mark? arXiv preprint arXiv:2009.08576", "venue": null, "year": 2020 }, { "authors": [ "G. Hardy", "J. Littlewood" ], "title": "Inequalities, Volume 2. Cambridge Mathematical Library", "venue": null, "year": 1952 }, { "authors": [ "B. Hassibi", "D. Stork", "W. Gregory" ], "title": "Optimal brain surgeon and general network pruning", "venue": "In IEEE International Conference on Neural Networks,", "year": 1993 }, { "authors": [ "S. Hayou", "E. Clerico", "B. He", "G. Deligiannidis", "A. Doucet", "J. Rousseau" ], "title": "Stable resnet", "venue": "In 24th International Conference on Artificial Intelligence and Statistics", "year": 2021 }, { "authors": [ "S. Hayou", "A. Doucet", "J. Rousseau" ], "title": "On the impact of the activation function on deep neural networks training", "venue": "In 36th International Conference on Machine Learning", "year": 2019 }, { "authors": [ "S. Hayou", "A. Doucet", "J. Rousseau" ], "title": "Mean-field behaviour of neural tangent kernel for deep neural networks. arXiv preprint arXiv:1905.13654", "venue": null, "year": 2020 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "G. Huang", "Z. Liu", "L. Maaten", "K. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "A. Jacot", "F. Gabriel", "C. Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In 32nd Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "A. Kolesnikov", "L. Beyer", "X. Zhai", "J. Puigcerver", "J. Yung", "S. Gelly", "N. Houlsby" ], "title": "Large scale learning of general visual representations for transfer", "venue": "arXiv preprint arXiv:1912.11370", "year": 2019 }, { "authors": [ "Y. LeCun", "J. Denker", "S. 
Solla" ], "title": "Optimal brain damage", "venue": "In Advances in Neural Information Processing Sstems,", "year": 1990 }, { "authors": [ "J. Lee", "Y. Bahri", "R. Novak", "S. Schoenholz", "J. Pennington", "J. Sohl-Dickstein" ], "title": "Deep neural networks as Gaussian processes", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "J. Lee", "L. Xiao", "S. Schoenholz", "Y. Bahri", "R. Novak", "J. Sohl-Dickstein", "J. Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In 33rd Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "N. Lee", "T. Ajanthan", "S. Gould", "P.H.S. Torr" ], "title": "A signal propagation perspective for pruning neural networks at initialization", "venue": "In 8th International Conference on Learning Representations", "year": 2020 }, { "authors": [ "N. Lee", "T. Ajanthan", "P.H. Torr" ], "title": "Snip: Single-shot network pruning based on connection sensitivity", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "H. Li", "A. Kadav", "I. Durdanovic", "H. Samet", "H. Graf" ], "title": "Pruning filters for efficient convnets", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "Y. Li", "S. Gu", "C. Mayer", "L.V. Gool", "R. Timofte" ], "title": "Group sparsity: The hinge between filter pruning and decomposition for network compression", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Y. Li", "S. Gu", "K. Zhang", "L. Van Gool", "R. Timofte" ], "title": "Dhp: Differentiable meta pruning via hypernetworks", "venue": "arXiv preprint arXiv:2003.13683", "year": 2020 }, { "authors": [ "T. Lillicrap", "D. Cownden", "D. Tweed", "C. Akerman" ], "title": "Random synaptic feedback weights support error backpropagation for deep learning", "venue": "Nature Communications", "year": 2016 }, { "authors": [ "Z. Liu", "H. Mu", "X. Zhang", "Z. Guo", "X. Yang", "K.-T. Cheng", "J. Sun" ], "title": "Metapruning: Meta learning for automatic neural network channel pruning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "C. Louizos", "M. Welling", "D. Kingma" ], "title": "Learning sparse neural networks through l0 regularization", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "A. Matthews", "J. Hron", "M. Rowland", "R. Turner", "Z. Ghahramani" ], "title": "Gaussian process behaviour in wide deep neural networks", "venue": "In 6th International Conference on Learning Representations", "year": 2018 }, { "authors": [ "M. Mozer", "P. Smolensky" ], "title": "Skeletonization: A technique for trimming the fat from a network via relevance assessment", "venue": "In Advances in Neural Information Processing Systems,", "year": 1989 }, { "authors": [ "R. Neal" ], "title": "Bayesian Learning for Neural Networks, Volume 118", "venue": null, "year": 1995 }, { "authors": [ "B. Neyshabur", "Z. Li", "S. Bhojanapalli", "Y. LeCun", "N. Srebro" ], "title": "The role of overparametrization in generalization of neural networks", "venue": "In 7th International Conference on Learning Representations", "year": 2019 }, { "authors": [ "Q. Nguyen", "M. 
Hein" ], "title": "Optimization landscape and expressivity of deep CNNs", "venue": "In 35th International Conference on Machine Learning", "year": 2018 }, { "authors": [ "J. Pečarić", "F. Proschan", "Y. Tong" ], "title": "Convex Functions, Partial Orderings, and Statistical Applications", "venue": null, "year": 1992 }, { "authors": [ "B. Poole", "S. Lahiri", "M. Raghu", "J. Sohl-Dickstein", "S. Ganguli" ], "title": "Exponential expressivity in deep neural networks through transient chaos", "venue": "In 30th Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "M. Puri", "S. Ralescu" ], "title": "Limit theorems for random central order statistics", "venue": "Lecture NotesMonograph Series", "year": 1986 }, { "authors": [ "S. Schoenholz", "J. Gilmer", "S. Ganguli", "J. Sohl-Dickstein" ], "title": "Deep information propagation", "venue": "In 5th International Conference on Learning Representations", "year": 2017 }, { "authors": [ "H. Tanaka", "D. Kunin", "D.L. Yamins", "S. Ganguli" ], "title": "Pruning neural networks without any data by iteratively conserving synaptic flow", "venue": "In 34th Conference on Neural Information Processing Systems", "year": 2020 }, { "authors": [ "R. Van Handel" ], "title": "Probability in High Dimension", "venue": "Princeton University. APC 550 Lecture Notes", "year": 2016 }, { "authors": [ "R. Von Mises" ], "title": "La distribution de la plus grande de n valeurs", "venue": "Selected Papers Volumen II, American Mathematical Society,", "year": 1936 }, { "authors": [ "C. Wang", "G. Zhang", "R. Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In 8th International Conference on Learning Representations", "year": 2020 }, { "authors": [ "L. Xiao", "Y. Bahri", "J. Sohl-Dickstein", "S.S. Schoenholz", "P. Pennington" ], "title": "Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks", "venue": "In 35th International Conference on Machine Learning", "year": 2018 }, { "authors": [ "G. Yang" ], "title": "Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation", "venue": "arXiv preprint arXiv:1902.04760", "year": 2019 }, { "authors": [ "G. Yang", "J. Pennington", "V. Rao", "J. Sohl-Dickstein", "S. S" ], "title": "A mean field theory of batch normalization", "venue": "Schoenholz", "year": 2019 }, { "authors": [ "G. Yang" ], "title": "Mean field residual networks: On the edge of chaos", "venue": "Schoenholz", "year": 2017 }, { "authors": [ "C. Zhang", "S. Bengio", "M. Hardt", "B. Recht", "O. Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations", "year": 2016 }, { "authors": [ "Matthews" ], "title": "i(.) by a Gaussian process was first proposed by Neal (1995) in the single layer case and has been recently extended to the multiple layer case by Lee et al", "venue": null, "year": 2018 }, { "authors": [ "Lee" ], "title": "Fφ is a function that only depends on φ. This provides a simple recursive formula for the computation of the kernel κ", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Overparameterized deep NNs have achieved state of the art (SOTA) performance in many tasks (Nguyen and Hein, 2018; Du et al., 2019; Zhang et al., 2016; Neyshabur et al., 2019). However, it is impractical to implement such models on small devices such as mobile phones. To address this problem, network pruning is widely used to reduce the time and space requirements both at training and test time. The main idea is to identify weights that do not contribute significantly to the model performance based on some criterion, and remove them from the NN. However, most pruning procedures currently available can only be applied after having trained the full NN (LeCun et al., 1990; Hassibi et al., 1993; Mozer and Smolensky, 1989; Dong et al., 2017) although methods that consider pruning the NN during training have become available. For example, Louizos et al. (2018) propose an algorithm which adds a L0 regularization on the weights to enforce sparsity while Carreira-Perpiñán and Idelbayev (2018); Alvarez and Salzmann (2017); Li et al. (2020) propose the inclusion of compression inside training steps. Other pruning variants consider training a secondary network that learns a pruning mask for a given architecture (Li et al. (2020); Liu et al. (2019)).\nRecently, Frankle and Carbin (2019) have introduced and validated experimentally the Lottery Ticket Hypothesis which conjectures the existence of a sparse subnetwork that achieves similar performance to the original NN. These empirical findings have motivated the development of pruning at initialization such as SNIP (Lee et al. (2018)) which demonstrated similar performance to classical pruning methods of pruning-after-training. Importantly, pruning at initialization never requires training the complete NN and is thus more memory efficient, allowing to train deep NN using limited computational resources. However, such techniques may suffer from different problems. In particular, nothing prevents such methods from pruning one whole layer of the NN, making it untrainable. More generally, it is typically difficult to train the resulting pruned NN (Li et al., 2018). To solve this situation, Lee et al. (2020) try to tackle this issue by enforcing dynamical isometry using orthogonal weights, while Wang et al. (2020) (GraSP) uses Hessian based pruning to preserve gradient flow. Other work by Tanaka et al. (2020) considers a data-agnostic iterative approach using the concept of synaptic flow in order to avoid the layer-collapse phenomenon (pruning a whole layer). In our work, we use principled scaling and re-parameterization to solve this issue, and show numerically that our algorithm achieves SOTA performance on CIFAR10, CIFAR100, TinyImageNet and ImageNet in some scenarios and remains competitive in others.\nIn this paper, we provide novel algorithms for Sensitivity-Based Pruning (SBP), i.e. pruning schemes that prune a weight W based on the magnitude of |W ∂L∂W | at initialization where L is the loss. Experimentally, compared to other available one-shot pruning schemes, these algorithms provide state-of the-art results (this might not be true in some regimes). Our work is motivated by a new theoretical analysis of gradient back-propagation relying on the mean-field approximation of deep NN (Hayou et al., 2019; Schoenholz et al., 2017; Poole et al., 2016; Yang and Schoenholz, 2017; Xiao et al., 2018; Lee et al., 2018; Matthews et al., 2018). 
Our contribution is threefold:

• For deep fully connected FFNN and CNN, it has been previously shown that only an initialization on the so-called Edge of Chaos (EOC) makes models trainable; see e.g. (Schoenholz et al., 2017; Hayou et al., 2019). For such models, we show that an EOC initialization is also necessary for SBP to be efficient. Outside this regime, one layer can be fully pruned. • For these models, pruning pushes the NN out of the EOC, making the resulting pruned model difficult to train. We introduce a simple rescaling trick to bring the pruned model back into the EOC regime, making the pruned NN easily trainable. • Unlike FFNN and CNN, we show that Resnets are better suited for pruning at initialization since they ‘live’ on the EOC by default (Yang and Schoenholz, 2017). However, they can suffer from exploding gradients, which we resolve by introducing a re-parameterization called ‘Stable Resnet’ (SR). The performance of the resulting SBP-SR pruning algorithm is illustrated in Table 1: SBP-SR allows for pruning up to 99.5% of ResNet104 on CIFAR10 while still retaining around 87% test accuracy.

The precise statements and proofs of the theoretical results are given in the Supplementary. Appendix H also includes the proof of a weak version of the Lottery Ticket Hypothesis (Frankle and Carbin, 2019), showing that, starting from a randomly initialized NN, there exists a subnetwork initialized on the EOC." }, { "heading": "2 SENSITIVITY PRUNING FOR FFNN/CNN AND THE RESCALING TRICK", "text": "" }, { "heading": "2.1 SETUP AND NOTATIONS", "text": "Let x be an input in R^d. A NN of depth L is defined by
$$y^l(x) = F_l(W^l, y^{l-1}(x)) + B^l, \quad 1 \le l \le L, \tag{1}$$
where y^l(x) is the vector of pre-activations, W^l and B^l are respectively the weights and biases of the l-th layer, and F_l is a mapping that defines the nature of the layer. The weights and biases are initialized i.i.d. with W^l ∼ N(0, σ_w²/v_l), where v_l is a scaling factor used to control the variance of y^l, and B^l ∼ N(0, σ_b²). Hereafter, M_l denotes the number of weights in the l-th layer, φ the activation function, and [m : n] := {m, m+1, ..., n} for m ≤ n. Two examples of such architectures are:
• Fully connected FFNN. For a FFNN of depth L and widths (N_l)_{0≤l≤L}, we have v_l = N_{l−1}, M_l = N_{l−1} N_l, and
$$y_i^1(x) = \sum_{j=1}^{d} W_{ij}^1 x_j + B_i^1, \qquad y_i^l(x) = \sum_{j=1}^{N_{l-1}} W_{ij}^l\, \phi(y_j^{l-1}(x)) + B_i^l \ \text{ for } l \ge 2. \tag{2}$$
• CNN. For a 1D CNN of depth L, numbers of channels (n_l)_{l≤L}, and numbers of neurons per channel (N_l)_{l≤L}, we have
$$y_{i,\alpha}^1(x) = \sum_{j=1}^{n_0} \sum_{\beta \in \ker_1} W_{i,j,\beta}^1\, x_{j,\alpha+\beta} + b_i^1, \qquad y_{i,\alpha}^l(x) = \sum_{j=1}^{n_{l-1}} \sum_{\beta \in \ker_l} W_{i,j,\beta}^l\, \phi(y_{j,\alpha+\beta}^{l-1}(x)) + b_i^l \ \text{ for } l \ge 2, \tag{3}$$
where i ∈ [1 : n_l] is the channel index, α ∈ [0 : N_l − 1] is the neuron location, ker_l = [−k_l : k_l] is the filter range, and 2k_l + 1 is the filter size. To simplify the analysis, we assume hereafter that N_l = N and k_l = k for all l. Here, we have v_l = n_{l−1}(2k + 1) and M_l = n_{l−1} n_l (2k + 1). We assume periodic boundary conditions, so y_{i,α}^l = y_{i,α+N}^l = y_{i,α−N}^l. The generalization to multi-dimensional convolutions is straightforward.

When no specific architecture is mentioned, (W_i^l)_{1≤i≤M_l} denotes the weights of the l-th layer. In practice, a pruning algorithm creates a binary mask δ over the weights to force the pruned weights to be zero. The neural network after pruning is given by
$$y^l(x) = F_l(\delta^l \circ W^l, y^{l-1}(x)) + B^l, \tag{4}$$
where ◦ is the Hadamard (i.e. element-wise) product. In this paper, we focus on pruning at initialization.
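To make this setup concrete, here is a minimal sketch of the masked forward pass of equations 2 and 4. The widths, the ReLU activation, the common EOC-style choice (σ_w², σ_b²) = (2, 0) for ReLU, and the random 50% mask are all illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
widths = [784, 300, 300, 10]                       # illustrative layer widths
sigma_w, sigma_b = np.sqrt(2.0), 0.0               # EOC-style init for ReLU
phi = lambda z: np.maximum(z, 0.0)

# W^l ~ N(0, sigma_w^2 / N_{l-1}) and B^l ~ N(0, sigma_b^2), as in Section 2.1.
Ws = [rng.normal(0.0, sigma_w / np.sqrt(n_in), size=(n_out, n_in))
      for n_in, n_out in zip(widths[:-1], widths[1:])]
Bs = [rng.normal(0.0, sigma_b, size=n_out) for n_out in widths[1:]]
masks = [rng.random(W.shape) > 0.5 for W in Ws]    # random 50%-sparsity mask, for illustration

def forward(x):
    # Eq. 4: pre-activations of the pruned network, y^l = (delta^l o W^l) h^{l-1} + B^l.
    h = x
    for l, (W, B, d) in enumerate(zip(Ws, Bs, masks)):
        y = (d * W) @ h + B
        h = y if l == len(Ws) - 1 else phi(y)      # Eq. 2: phi applied between layers
    return y

print(forward(rng.standard_normal(784)).shape)     # (10,)
```

In an actual pruning algorithm, the mask would of course come from one of the criteria described next rather than from a random draw.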
The mask is typically created from a vector g^l of the same dimension as W^l, obtained via a mapping of choice (see below); we then prune the network by keeping the weights that correspond to the top k values in the sequence (g_i^l)_{i,l}, where k is fixed by the sparsity we want to achieve. There are three popular types of criteria in the literature:

• Magnitude based pruning (MBP): we prune weights based on the magnitude |W|.

• Sensitivity based pruning (SBP): we prune weights based on the values of |W ∂L/∂W|, where L is the loss. This is motivated by the first-order approximation L_W ≈ L_{W=0} + W ∂L/∂W used in SNIP (Lee et al. (2018)).

• Hessian based pruning (HBP): we prune weights based on some function of the Hessian of the loss, as in GraSP (Wang et al., 2020).

In the remainder of the paper, we focus exclusively on SBP, while our analysis of MBP is given in Appendix E. We leave HBP for future work; however, we include empirical results with GraSP (Wang et al., 2020) in Section 4.

Hereafter, we denote by s the sparsity, i.e. the fraction of weights we want to prune. Let A_l be the set of indices of the weights in the l-th layer that are pruned, i.e. A_l = {i ∈ [1 : M_l] s.t. δ_i^l = 0}. We define the critical sparsity s_cr by
$$s_{cr} = \min\{\, s \in (0,1) \ \text{s.t.} \ \exists l,\ |A_l| = M_l \,\},$$
where |A_l| is the cardinality of A_l. Intuitively, s_cr is the maximal sparsity we can choose without fully pruning at least one layer. Since the weights are initialized randomly, s_cr is random; we therefore study the behaviour of the expected value E[s_cr], where, hereafter, all expectations are taken w.r.t. the random initial weights. This provides theoretical guidelines for pruning at initialization.

For all l ∈ [1 : L], we define α_l by v_l = α_l N, where N > 0, and ζ_l > 0 such that M_l = ζ_l N², where we recall that v_l is the scaling factor controlling the variance of y^l and M_l is the number of weights in the l-th layer. This notation assumes that, in each layer, the number of weights is quadratic in the number of neurons, which is satisfied by classical FFNN and CNN architectures." }, { "heading": "2.2 SENSITIVITY-BASED PRUNING (SBP)", "text": "SBP is a data-dependent pruning method that uses the data to compute the gradient by back-propagation at initialization (one-shot pruning). We randomly sample a batch and compute the gradients of the loss with respect to each weight. The mask is then defined by δ_i^l = 1(|W_i^l ∂L/∂W_i^l| ≥ t_s), where t_s = |W ∂L/∂W|^{(k_s)}, k_s = (1 − s) Σ_l M_l, and |W ∂L/∂W|^{(k_s)} is the k_s-th order statistic of the sequence (|W_i^l ∂L/∂W_i^l|)_{1≤l≤L, 1≤i≤M_l}.

However, this simple approach suffers from the well-known exploding/vanishing gradients problem, which makes the first/last few layers, respectively, susceptible to being completely pruned. We give a formal definition of this problem.

Definition 1 (Well-conditioned & ill-conditioned NN). Let m_l = E[|W_1^l ∂L/∂W_1^l|²] for l ∈ [1 : L]. We say that the NN is well-conditioned if there exist A, B > 0 such that for all L ≥ 1 and l ∈ [1 : L] we have A ≤ m_l/m_L ≤ B, and ill-conditioned otherwise.

Understanding the behaviour of gradients at initialization is thus crucial for SBP to be efficient. Using a mean-field approach, such an analysis has been carried out in (Schoenholz et al., 2017; Hayou et al., 2019; Xiao et al., 2018; Poole et al., 2016; Yang, 2019), where it has been shown that an initialization known as the EOC is beneficial for DNN training. The mean-field analysis of DNNs relies on two standard approximations that we will also use here.
Approximation 1 (Mean-Field Approximation). When N_l ≫ 1 for FFNN or n_l ≫ 1 for CNN, we use the approximation of infinitely wide NN. This means an infinite number of neurons per layer for fully connected layers and an infinite number of channels per layer for convolutional layers.
Approximation 2 (Gradient Independence). The weights used for forward propagation are independent from those used for back-propagation.

These two approximations are ubiquitous in the literature on the mean-field analysis of neural networks. They have been used to derive theoretical results on signal propagation (Schoenholz et al., 2017; Hayou et al., 2019; Poole et al., 2016; Yang, 2019; Yang and Schoenholz, 2017; Yang et al., 2019) and are also key tools in the derivation of the Neural Tangent Kernel (Jacot et al., 2018; Arora et al., 2019; Hayou et al., 2020). Approximation 1 simplifies the analysis of forward propagation as it allows the derivation of closed-form formulas for covariance propagation. Approximation 2 does the same for back-propagation. See Appendix A for a detailed discussion of these approximations. Throughout the paper, we provide numerical results that substantiate the theoretical results derived under these two approximations; these approximations lead to an excellent match between theory and experiments.

Edge of Chaos (EOC): For inputs x, x′, let c^l(x, x′) be the correlation between y^l(x) and y^l(x′). From (Schoenholz et al., 2017; Hayou et al., 2019), there exists a so-called correlation function f, depending on (σ_w, σ_b), such that c^{l+1}(x, x′) = f(c^l(x, x′)). Let χ(σ_b, σ_w) = f′(1). The EOC is the set of hyperparameters (σ_w, σ_b) satisfying χ(σ_b, σ_w) = 1. When χ(σ_b, σ_w) > 1, we are in the chaotic phase: the gradient explodes, c^l(x, x′) converges exponentially to some c < 1 for x ≠ x′, and the resulting output function is discontinuous everywhere. When χ(σ_b, σ_w) < 1, we are in the ordered phase, where c^l(x, x′) converges exponentially fast to 1 and the NN outputs constant functions. Initialization on the EOC allows for better information propagation (see the Supplementary for more details).

Hence, leveraging the above results, we show that an initialization outside the EOC leads to an ill-conditioned NN.
Theorem 1 (EOC Initialization is crucial for SBP). Consider a NN of type (2) or (3) (FFNN or CNN). Assume (σ_w, σ_b) are chosen in the ordered phase, i.e. χ(σ_b, σ_w) < 1. Then the NN is ill-conditioned. Moreover, we have
$$\mathbb{E}[s_{cr}] \le \frac{1}{L}\left(1 + \frac{\log(\kappa L N^2)}{\kappa}\right) + O\!\left(\frac{1}{\kappa^2 \sqrt{L N^2}}\right),$$
where κ = |log χ(σ_b, σ_w)|/8. If (σ_w, σ_b) are on the EOC, i.e. χ(σ_b, σ_w) = 1, then the NN is well-conditioned. In this case, κ = 0 and the above upper bound no longer holds.

The proof of Theorem 1 relies on the behaviour of the gradient norm at initialization. In the ordered phase, the gradient norm vanishes exponentially quickly as it back-propagates, resulting in an ill-conditioned network. We use another approximation to simplify the proof (Approximation 3 in the Supplementary), but the result holds without it, albeit with somewhat different constants. Theorem 1 shows that the upper bound decreases the farther χ(σ_b, σ_w) is from 1, i.e. the farther the initialization is from the EOC. For a constant-width FFNN with L = 100, N = 100 and κ = 0.2, the theoretical upper bound is E[s_cr] ≲ 27%, while we obtain E[s_cr] ≈ 22% over 10 simulations.
A similar result can be obtained when the NN is initialized in the chaotic phase; in this case too, the NN is ill-conditioned. To illustrate these results, Figure 1 shows the impact of the initialization with sparsity $s = 70\%$. The dark area in Figure 1(b) corresponds to layers that are fully pruned in the chaotic phase due to exploding gradients. With an EOC initialization, Figure 1(a) shows that pruned weights are well distributed across the NN, ensuring that no layer is fully pruned.

2.3 TRAINING PRUNED NETWORKS USING THE RESCALING TRICK

We have shown above that an initialization on the EOC is crucial for SBP. However, we have not yet addressed the key problem of training the resulting pruned NN, which can be very challenging in practice (Li et al., 2018), especially for deep NN.

Consider as an example a FFNN architecture. After pruning, we have for an input $x$
$$\hat{y}^l_i(x) = \sum_{j=1}^{N_{l-1}} W^l_{ij}\,\delta^l_{ij}\,\phi(\hat{y}^{l-1}_j(x)) + B^l_i, \quad l \geq 2, \qquad (5)$$
where $\delta$ is the pruning mask. While the original NN initialized on the EOC satisfied $c^{l+1}(x,x') = f(c^l(x,x'))$ with $f'(1) = \chi(\sigma_b,\sigma_w) = 1$, the pruned architecture leads to $\hat{c}^{l+1}(x,x') = f_{pruned}(\hat{c}^l(x,x'))$ with $f'_{pruned}(1) \neq 1$; hence pruning destroys the EOC. Consequently, the pruned NN will be difficult to train (Schoenholz et al., 2017; Hayou et al., 2019), especially if it is deep. We therefore propose to bring the pruned NN back onto the EOC. This approach consists of rescaling the weights obtained after SBP in each layer by factors that depend on the pruned architecture itself.

Proposition 1 (Rescaling Trick). Consider a NN of type (2) or (3) (FFNN or CNN) initialized on the EOC. Then, after pruning, the pruned NN is no longer initialized on the EOC. However, the rescaled pruned NN
$$y^l(x) = \mathcal{F}(\rho^l \circ \delta^l \circ W^l,\, y^{l-1}(x)) + B^l, \quad l \geq 1, \qquad (6)$$
where
$$\rho^l_{ij} = \big(\mathbb{E}[N_{l-1}(W^l_{i1})^2\delta^l_{i1}]\big)^{-\frac{1}{2}} \text{ for FFNN}, \qquad \rho^l_{i,j,\beta} = \big(\mathbb{E}[n_{l-1}(W^l_{i,1,\beta})^2\delta^l_{i,1,\beta}]\big)^{-\frac{1}{2}} \text{ for CNN}, \qquad (7)$$
is initialized on the EOC. (The scaling is constant across $j$.)

The scaling factors in equation (7) are easily approximated using the weights kept after pruning. Algorithm 1 (see Appendix I) details a practical implementation of this rescaling technique for FFNN; a short sketch is also given below. We illustrate experimentally the benefits of this approach in Section 4.
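The following sketch applies the rescaling of Proposition 1 to a pruned fully connected network, using the mask dictionary produced by the SBP sketch above. The helper name is ours, and the expectation in equation (7) is replaced by its empirical counterpart over the layer's weights; this is a simplified stand-in for Algorithm 1, not a verbatim copy of it.

# Rescaling trick for a pruned FFNN (sketch).
import torch

@torch.no_grad()
def rescale_pruned_ffnn(model, masks):
    for name, p in model.named_parameters():
        if name not in masks:
            continue
        mask = masks[name]
        fan_in = p.shape[1]                      # N_{l-1} in equation (7)
        # Empirical estimate of E[N_{l-1} (W^l_{i1})^2 delta^l_{i1}],
        # averaged over all entries of the layer.
        second_moment = fan_in * (p ** 2 * mask).mean()
        rho = second_moment.rsqrt()              # rho^l of equation (7)
        p.mul_(mask).mul_(rho)                   # prune, then rescale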
3 SENSITIVITY-BASED PRUNING FOR STABLE RESIDUAL NETWORKS

Resnets and their variants (He et al., 2015; Huang et al., 2017) are currently the best performing models on various classification tasks (CIFAR10, CIFAR100, ImageNet, etc. (Kolesnikov et al., 2019)). Understanding Resnet pruning at initialization is therefore of crucial interest. Yang and Schoenholz (2017) showed that Resnets naturally 'live' on the EOC. Using this result, we show that Resnets are actually better suited to SBP than FFNN and CNN. However, Resnets suffer from an exploding gradient problem (Yang and Schoenholz, 2017), which might affect the performance of SBP. We address this issue by introducing a new Resnet parameterization. Let a standard Resnet architecture be given by
$$y^1(x) = \mathcal{F}(W^1, x), \qquad y^l(x) = y^{l-1}(x) + \mathcal{F}(W^l, y^{l-1}), \quad l \geq 2, \qquad (8)$$
where $\mathcal{F}$ defines the blocks of the Resnet. Hereafter, we assume that $\mathcal{F}$ is either of the form (2) or (3) (FFNN or CNN).

The next theorem shows that Resnets are well-conditioned independently of the initialization and are thus well suited to pruning at initialization.

Theorem 2 (Resnets are well-conditioned). Consider a Resnet with either fully connected or convolutional layers and ReLU activation function. Then, for all $\sigma_w > 0$, the Resnet is well-conditioned. Moreover, for all $l \in \{1, \dots, L\}$, we have $m_l = \Theta\big((1 + \frac{\sigma_w^2}{2})^L\big)$.

The above theorem proves that Resnets are always well-conditioned. However, taking a closer look at $m_l$, the second moment of the pruning criterion (Definition 1), we see that it grows exponentially in the number of layers $L$. This could lead to a higher variance across pruned networks, and hence to high-variance test accuracy. To this end, we propose a Resnet parameterization which we call Stable Resnet. Stable Resnets prevent the second moment from growing exponentially, as shown below.

Proposition 2 (Stable Resnet). Consider the following Resnet parameterization
$$y^l(x) = y^{l-1}(x) + \frac{1}{\sqrt{L}}\,\mathcal{F}(W^l, y^{l-1}), \quad l \geq 2. \qquad (9)$$
Then the NN is well-conditioned for all $\sigma_w > 0$. Moreover, for all $l \leq L$ we have $m_l = \Theta(L^{-1})$.

In Proposition 2, $L$ is not the number of layers but the number of blocks; for example, ResNet32 has 15 blocks and 32 layers, hence $L = 15$. Figure 2 shows the percentage of weights in each layer kept after pruning ResNet32 and Stable ResNet32 at initialization. The jumps correspond to boundaries between sections of ResNet32 and are caused by max-pooling. Within each section, Stable Resnet has a more uniform distribution of kept weights than standard Resnet. In Section 4 we show that this leads to better performance of Stable Resnet compared to standard Resnet. Further theoretical and experimental results for Stable Resnets are presented in (Hayou et al., 2021).

In the next proposition, we establish that, unlike FFNN or CNN, we do not need to rescale a pruned Resnet for it to be trainable, as it lives naturally on the EOC both before and after pruning.

Proposition 3 (Resnets live on the EOC even after pruning). Consider a residual NN with blocks of type FFNN or CNN. Then, after pruning, the pruned residual NN is still initialized on the EOC.

4 EXPERIMENTS

In this section, we illustrate empirically the theoretical results obtained in the previous sections. We validate the results on MNIST, CIFAR10, CIFAR100 and Tiny ImageNet.

4.1 INITIALIZATION AND RESCALING

According to Theorem 1, an EOC initialization is necessary for the network to be well-conditioned. We train FFNN with tanh activation on MNIST, varying the depth $L \in \{2, 20, 40, 60, 80, 100\}$ and the sparsity $s \in \{10\%, 20\%, \dots, 90\%\}$. We use SGD with batch size 100 and learning rate $10^{-3}$, which we found to be optimal using a grid search on an exponential scale of 10. Figure 3 shows the test accuracy after 10k iterations for 3 different initialization schemes: Rescaled EOC, EOC, and Ordered. In the ordered phase, the model is untrainable when we choose sparsity $s > 40\%$ and depth $L > 60$, as one layer is then fully pruned. With an EOC initialization, the set of $(s, L)$ for which the NN is trainable becomes larger. However, the model remains untrainable for highly sparse deep networks, as the sparse NN is no longer initialized on the EOC (see Proposition 1). As predicted by Proposition 1, after applying the rescaling trick to bring the pruned NN back onto the EOC, the pruned NN can be trained appropriately.

4.2 RESNET AND STABLE RESNET

Although Resnets are adapted to SBP (i.e. they are always well-conditioned for all $\sigma_w > 0$), Theorem 2 shows that the magnitude of the pruning criterion grows exponentially w.r.t. the depth $L$. To resolve this problem we introduced Stable Resnet, whose parameterization is sketched below.
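The sketch below implements the Stable Resnet parameterization of equation (9): each residual branch is scaled by $1/\sqrt{L}$, with $L$ the number of residual blocks. The class name is ours, and the branch $\mathcal{F}$ is a placeholder for any FFNN or CNN block.

# Stable Resnet block (sketch of equation (9)).
import math
import torch.nn as nn

class StableResidualBlock(nn.Module):
    def __init__(self, branch: nn.Module, num_blocks: int):
        super().__init__()
        self.branch = branch                 # F(W^l, .) in equation (9)
        self.scale = 1.0 / math.sqrt(num_blocks)

    def forward(self, x):
        return x + self.scale * self.branch(x)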
We call our pruning algorithm for ResNet SBP-SR (SBP with Stable Resnet). By Proposition 2, we expect SBP-SR to outperform other methods for deep Resnets. Table 2 shows test accuracies for ResNet32, ResNet50 and ResNet104 with sparsities $s \in \{90\%, 95\%, 98\%\}$ on CIFAR10 and CIFAR100. For all our experiments, we use a setup similar to (Wang et al., 2020): we train with SGD for 160 epochs on CIFAR10 and 250 epochs on CIFAR100, with an initial learning rate of 0.1, decayed by a factor of 10 at 1/2 and 3/4 of the total number of epochs. In addition, we run all experiments 3 times to obtain more stable and reliable test accuracies. As in (Wang et al., 2020), we adopt Resnet architectures in which the number of filters in each convolutional layer is doubled. As a baseline, we include pruning results with the classical OBD pruning algorithm (LeCun et al., 1990) for ResNet32 (train → prune → repeat). We compare our results against other algorithms that prune at initialization: SNIP (Lee et al., 2018), an SBP algorithm; GraSP (Wang et al., 2020), a Hessian based pruning algorithm; and SynFlow (Tanaka et al., 2020), an iterative data-agnostic pruning algorithm. As depth increases, SBP-SR starts to outperform the other algorithms that prune at initialization; with ResNet104, SBP-SR outperforms all of them on both CIFAR10 and CIFAR100. Furthermore, using GraSP on Stable Resnet did not improve on GraSP with standard Resnet, as our Stable Resnet analysis only applies to gradient based pruning; a similar analysis of Hessian based pruning could lead to analogous trainability improvements, which we leave for future work.

To confirm these results, we also test SBP-SR against other pruning algorithms on Tiny ImageNet, training the models for 300 epochs to make sure all algorithms converge. Table 3 shows test accuracies for SBP-SR, SNIP, GraSP and SynFlow for $s \in \{85\%, 90\%, 95\%\}$. Although SynFlow competes with or outperforms GraSP in many cases, SBP-SR has a clear advantage over SynFlow and the other algorithms, especially for deep networks, as illustrated with ResNet104.

Additional results on the ImageNet dataset are provided in Appendix F.

4.3 RESCALING TRICK AND CNNS

The theoretical analysis of Section 2 is valid for vanilla CNN, i.e. CNN without pooling layers. With pooling layers, the theory of signal propagation applies to the sections between successive pooling layers; each of these sections can be seen as a vanilla CNN. This applies to standard CNN architectures such as VGG. As a toy example, Table 4 shows the test accuracy of a pruned V-CNN with sparsity $s = 50\%$ on MNIST. As with the FFNN results in Figure 3, the combination of the EOC initialization and the rescaling trick allows the pruning of deep V-CNN (depth 100) while preserving trainability.

However, V-CNN is a toy example that is generally not used in practice. Standard CNN architectures such as VGG are popular among practitioners since they achieve state-of-the-art accuracy on many tasks. Table 5 shows test accuracies of SNIP, SynFlow, and our EOC initialization with the rescaling trick for VGG16 on CIFAR10; our results are close to those reported by Frankle et al. (2020), and the three algorithms perform similarly. From a theoretical point of view, our rescaling trick applies to vanilla CNNs without pooling layers; hence, adding pooling layers might cause a deterioration.
However, we know that the signal propagation theory applies to the vanilla blocks inside VGG (i.e. the sequences of convolutional layers between two successive pooling layers), and the larger those vanilla blocks are, the better our rescaling trick performs. We leverage this observation by training a modified version of VGG, called 3xVGG16, which has the same number of pooling layers as VGG16 and 3 times the number of convolutional layers inside each vanilla block. The numerical results in Table 5 show that the EOC initialization with the rescaling trick then outperforms the other algorithms, which confirms our hypothesis. However, 3xVGG16 is not a standard architecture and does not improve much on the test accuracy of VGG16. An adaptation of the rescaling trick to standard VGG architectures would be of great value and is left for future work.

Summary of numerical results. Table 6 summarizes our numerical results. The letter 'C' refers to 'competition' between algorithms in a given setting, indicating that no clear winner was found, while a dash means that no experiment was run in this setting. We observe that our algorithm SBP-SR consistently outperforms other algorithms in a variety of settings.

5 CONCLUSION

In this paper, we have formulated principled guidelines for SBP at initialization. For FFNN and CNN, we have shown that an initialization on the EOC is necessary, followed by the application of a simple rescaling trick to train the pruned network. For Resnets, the situation is markedly different: no specific initialization is needed, but Resnets in their original form suffer from an exploding gradient problem. We propose an alternative Resnet parameterization, called Stable Resnet, which allows for more stable pruning. Our theoretical results have been validated by extensive experiments on MNIST, CIFAR10, CIFAR100, Tiny ImageNet and ImageNet. Compared to other available one-shot pruning algorithms, we achieve state-of-the-art results in many scenarios.

A DISCUSSION ABOUT APPROXIMATIONS 1 AND 2

A.1 APPROXIMATION 1: INFINITE WIDTH APPROXIMATION

FeedForward Neural Network

Consider a randomly initialized FFNN of depth $L$, widths $(N_l)_{1 \leq l \leq L}$, weights $W^l_{ij} \overset{iid}{\sim} \mathcal{N}(0, \frac{\sigma_w^2}{N_{l-1}})$ and bias $B^l_i \overset{iid}{\sim} \mathcal{N}(0, \sigma_b^2)$, where $\mathcal{N}(\mu, \sigma^2)$ denotes the normal distribution with mean $\mu$ and variance $\sigma^2$. For an input $x \in \mathbb{R}^d$, the propagation of this input through the network is given by
$$y^1_i(x) = \sum_{j=1}^{d} W^1_{ij}\, x_j + B^1_i, \qquad (10)$$
$$y^l_i(x) = \sum_{j=1}^{N_{l-1}} W^l_{ij}\,\phi(y^{l-1}_j(x)) + B^l_i, \quad l \geq 2, \qquad (11)$$
where $\phi : \mathbb{R} \to \mathbb{R}$ is the activation function. When we take the limit $N_{l-1} \to \infty$, the Central Limit Theorem implies that $y^l_i(x)$ is a Gaussian variable for any input $x$. This infinite-width approximation results in an error of order $O(1/\sqrt{N_{l-1}})$ (standard Monte Carlo error). More generally, the approximation of the random process $y^l_i(\cdot)$ by a Gaussian process was first proposed by Neal (1995) in the single-layer case and has recently been extended to the multi-layer case by Lee et al. (2018) and Matthews et al. (2018). We recall here the expressions of the limiting Gaussian process kernels. For any input $x \in \mathbb{R}^d$, $\mathbb{E}[y^l_i(x)] = 0$, so that for any inputs $x, x' \in \mathbb{R}^d$
$$\kappa^l(x,x') = \mathbb{E}[y^l_i(x)\,y^l_i(x')] = \sigma_b^2 + \sigma_w^2\,\mathbb{E}[\phi(y^{l-1}_i(x))\,\phi(y^{l-1}_i(x'))] = \sigma_b^2 + \sigma_w^2\,F_\phi\big(\kappa^{l-1}(x,x),\, \kappa^{l-1}(x,x'),\, \kappa^{l-1}(x',x')\big),$$
where $F_\phi$ is a function that only depends on $\phi$.
This provides a simple recursive formula for the computation of the kernel κl; see, e.g., Lee et al. (2018) for more details." }, { "heading": "Convolutional Neural Networks", "text": "Similar to the FFNN case, the infinite width approximation with 1D CNN (introduced in the main paper) yields a recursion for the kernel. However, the infinite width here means infinite number of channels, and results in an error O(1/√nl−1). The kernel in this case depends on the choice of the neurons in the channel and is given by\nκlα,α′(x, x ′) = E[yli,α(x)yli,α′(x′)] = σ2b + σ2w 2k + 1 ∑ β∈ker E[φ(yl−11,α+β(x))φ(y l−1 1,α′+β(x ′))]\nso that\nκlα,α′(x, x ′) = σ2b + σ2w 2k + 1 ∑ β∈ker Fφ(κ l−1 α+β,α′+β(x, x), κ l−1 α+β,α′+β(x, x ′), κl−1α+β,α′+β(x ′, x′)).\nThe convolutional kernel κlα,α′ has the ‘self-averaging’ property; i.e. it is an average over the kernels corresponding to different combination of neurons in the previous layer. However, it is easy to simplify the analysis in this case by studying the average kernel per channel defined by κ̂l = 1N2 ∑ α,α′ κ l α,α′ . Indeed, by summing terms in the previous equation and using the fact that we use circular padding, we obtain\nκ̂l(x, x′) = σ2b + σ 2 w\n1\nN2 ∑ α,α′ Fφ(κ l−1 α,α′(x, x), κ l−1 α,α′(x, x ′), κl−1α,α′(x ′, x′)).\nThis expression is similar in nature to that of FFNN. We will use this observation in the proofs.\nNote that our analysis only requires the approximation that, in the infinite width limit, for any two inputs x, x′, the variables yli(x) and y l i(x ′) are Gaussian with covariance κl(x, x′) for FFNN, and\nyli,α(x) and y l i,α′(x ′) are Gaussian with covariance κlα,α′(x, x ′) for CNN. We do not need the much stronger approximation that the process yli(x) (y l i,α(x) for CNN) is a Gaussian process." }, { "heading": "Residual Neural Networks", "text": "The infinite width limit approximation for ResNet yields similar results with an additional residual terms. It is straighforward to see that, in the case a ResNet with FFNN-type layers, we have that\nκl(x, x′) = κl−1(x, x′) + σ2b + σ 2 wFφ(κ l−1(x, x), κl−1(x, x′), κl−1(x′, x′)),\nwhereas for ResNet with CNN-type layers, we have that\nκlα,α′(x, x ′) = κl−1α,α′(x, x ′) + σ2b\n+ σ2w\n2k + 1 ∑ β∈ker Fφ(κ l−1 α+β,α′+β(x, x), κ l−1 α+β,α′+β(x, x ′), κl−1α+β,α′+β(x ′, x′))." }, { "heading": "A.2 APPROXIMATION 2: GRADIENT INDEPENDENCE", "text": "For gradient back-propagation, an essential assumption in prior literature in Mean-Field analysis of DNNs is that of the gradient independence which is similar in nature to the practice of feedback alignment (Lillicrap et al., 2016). This approximation allows for derivation of recursive formulas for gradient back-propagation, and it has been extensively used in literature and verified empirically; see references below.\nGradient Covariance back-propagation: this approximation was used to derive analytical formulas for gradient covariance back-propagation in (Hayou et al., 2019; Schoenholz et al., 2017; Yang and Schoenholz, 2017; Lee et al., 2018; Poole et al., 2016; Xiao et al., 2018; Yang, 2019). It was shown empirically through simulations that it is an excellent approximation for FFNN in Schoenholz et al. (2017), for Resnets in Yang and Schoenholz (2017) and for CNN in Xiao et al. (2018).\nNeural Tangent Kernel (NTK): this approximation was implicitly used by Jacot et al. (2018) to derive the recursive formula of the infinite width Neural Tangent Kernel (See Jacot et al. (2018), Appendix A.1). 
The authors found that this approximation yields an excellent match with the exact NTK. It was also exploited later in (Arora et al., 2019; Hayou et al., 2020) to derive the infinite-width NTK for different architectures. The difference between the infinite-width NTK $\Theta$ and the empirical (exact) NTK $\hat{\Theta}$ was studied in Lee et al. (2019), where the authors showed that $\|\Theta - \hat{\Theta}\|_F = O(N^{-1})$, where $N$ is the width of the NN.

More precisely, we use the approximation that, for wide neural networks, the weights used for forward propagation are independent from those used for back-propagation. When used to compute gradient covariances and the Neural Tangent Kernel, this approximation was proven to give the exact computation for standard architectures such as FFNN, CNN and ResNets without BatchNorm in Yang (2019) (Section D.5). Even with BatchNorm, Yang et al. (2019) found that the gradient independence approximation matches empirical results.

This approximation can alternatively be formulated as an assumption instead of an approximation, as in Yang and Schoenholz (2017). Assumption 1 (Gradient Independence): the gradients are computed using an i.i.d. copy of the weights used for forward propagation.

B PRELIMINARY RESULTS

Let $x$ be an input in $\mathbb{R}^d$. In its general form, a neural network of depth $L$ is given by the following set of forward propagation equations
$$y^l(x) = \mathcal{F}_l(W^l, y^{l-1}(x)) + B^l, \quad 1 \leq l \leq L, \qquad (12)$$
where $y^l(x)$ is the vector of pre-activations, and $W^l$ and $B^l$ are respectively the weights and bias of the $l$th layer. $\mathcal{F}_l$ is a mapping that defines the nature of the layer. The weights and bias are initialized with $W^l \overset{iid}{\sim} \mathcal{N}(0, \frac{\sigma_w^2}{v_l})$, where $v_l$ is a scaling factor used to control the variance of $y^l$, and $B^l \overset{iid}{\sim} \mathcal{N}(0, \sigma_b^2)$. Hereafter, we denote by $M_l$ the number of weights in the $l$th layer, by $\phi$ the activation function, and by $[n:m]$ the set of integers $\{n, n+1, \dots, m\}$ for $n \leq m$. Two examples of such architectures are:

• Fully-connected FeedForward Neural Network (FFNN). For a fully connected feedforward neural network of depth $L$ and widths $(N_l)_{0 \leq l \leq L}$, the forward propagation of the input through the network is given by
$$y^1_i(x) = \sum_{j=1}^{d} W^1_{ij}\, x_j + B^1_i, \qquad y^l_i(x) = \sum_{j=1}^{N_{l-1}} W^l_{ij}\,\phi(y^{l-1}_j(x)) + B^l_i, \quad l \geq 2. \qquad (13)$$
Here, we have $v_l = N_{l-1}$ and $M_l = N_{l-1} N_l$.

• Convolutional Neural Network (CNN/ConvNet). For a 1D convolutional neural network of depth $L$, numbers of channels $(n_l)_{l \leq L}$ and numbers of neurons per channel $(N_l)_{l \leq L}$, we have
$$y^1_{i,\alpha}(x) = \sum_{j=1}^{n_0} \sum_{\beta \in ker_1} W^1_{i,j,\beta}\, x_{j,\alpha+\beta} + b^1_i, \qquad y^l_{i,\alpha}(x) = \sum_{j=1}^{n_{l-1}} \sum_{\beta \in ker_l} W^l_{i,j,\beta}\,\phi(y^{l-1}_{j,\alpha+\beta}(x)) + b^l_i, \quad l \geq 2, \qquad (14)$$
where $i \in [1:n_l]$ is the channel index, $\alpha \in [0:N_l-1]$ is the neuron location, $ker_l = [-k_l : k_l]$ is the filter range and $2k_l + 1$ is the filter size. To simplify the analysis, we assume hereafter that $N_l = N$ and $k_l = k$ for all $l$. Here, we have $v_l = n_{l-1}(2k+1)$ and $M_l = n_{l-1} n_l (2k+1)$. We assume periodic boundary conditions, so that $y^l_{i,\alpha} = y^l_{i,\alpha+N} = y^l_{i,\alpha-N}$. Generalization to multidimensional convolutions is straightforward.
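Below is a minimal sketch of the 1D convolutional propagation of equation (14) with circular (periodic) padding, so that $y_{i,\alpha} = y_{i,\alpha+N}$. The helper names and example sizes are ours; the initialization uses variance $\sigma_w^2 / v_l$ with $v_l = n_{l-1}(2k+1)$, as above.

# 1D convolutional layer with periodic boundary conditions (sketch).
import math
import torch
import torch.nn.functional as F

def init_conv_weight(n_out, n_in, k, sigma_w):
    v = n_in * (2 * k + 1)                       # v_l = n_{l-1}(2k + 1)
    return torch.randn(n_out, n_in, 2 * k + 1) * sigma_w / math.sqrt(v)

def conv1d_circular(y_prev, weight, bias, k):
    """y_prev: (n_{l-1}, N) activations; returns (n_l, N) pre-activations."""
    y = F.pad(y_prev.unsqueeze(0), (k, k), mode="circular")
    return F.conv1d(y, weight, bias).squeeze(0)

n, N, k, sigma_w = 64, 32, 1, math.sqrt(2.0)     # illustrative sizes
y = torch.randn(n, N)                            # stands in for phi(y^{l-1})
w, b = init_conv_weight(n, n, k, sigma_w), torch.zeros(n)
print(conv1d_circular(y, w, b, k).shape)         # torch.Size([64, 32])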
Notation: Hereafter, for FFNN layers, we denote by $q^l(x)$ the variance of $y^l_1(x)$ (the choice of the index 1 is not crucial since, under the mean-field approximation, the random variables $(y^l_i(x))_{i \in [1:N_l]}$ are iid Gaussian variables). We denote by $q^l(x,x')$ the covariance between $y^l_1(x)$ and $y^l_1(x')$, and by $c^l(x,x')$ the corresponding correlation. For gradient back-propagation, given a loss function $L$, we denote by $\tilde{q}^l(x,x')$ the gradient covariance, defined by $\tilde{q}^l(x,x') = \mathbb{E}\big[\frac{\partial L}{\partial y^l_1}(x)\frac{\partial L}{\partial y^l_1}(x')\big]$; similarly, $\tilde{q}^l(x)$ denotes the gradient variance at point $x$. For CNN layers, we use similar notation across channels. More precisely, we denote by $q^l_\alpha(x)$ the variance of $y^l_{1,\alpha}(x)$ (the choice of the index 1 is again not crucial since, under the mean-field approximation, the random variables $(y^l_{i,\alpha}(x))_i$ are iid Gaussian variables). We denote by $q^l_{\alpha,\alpha'}(x,x')$ the covariance between $y^l_{1,\alpha}(x)$ and $y^l_{1,\alpha'}(x')$, and by $c^l_{\alpha,\alpha'}(x,x')$ the corresponding correlation. As in the FFNN case, we define the gradient covariance by $\tilde{q}^l_{\alpha,\alpha'}(x,x') = \mathbb{E}\big[\frac{\partial L}{\partial y^l_{1,\alpha}}(x)\frac{\partial L}{\partial y^l_{1,\alpha'}}(x')\big]$.

B.1 WARMUP: SOME RESULTS FROM THE MEAN-FIELD THEORY OF DNNS

We start by recalling some results from the mean-field theory of deep NNs.

B.1.1 COVARIANCE PROPAGATION

Covariance propagation for FFNN: In Section A.1, we presented the recursive formula for covariance propagation in a FFNN, derived using the Central Limit Theorem. More precisely, for two inputs $x, x' \in \mathbb{R}^d$, we have
$$q^l(x,x') = \sigma_b^2 + \sigma_w^2\,\mathbb{E}[\phi(y^{l-1}_i(x))\,\phi(y^{l-1}_i(x'))].$$
This can be rewritten as
$$q^l(x,x') = \sigma_b^2 + \sigma_w^2\,\mathbb{E}\Big[\phi\big(\sqrt{q^l(x)}\,Z_1\big)\,\phi\big(\sqrt{q^l(x')}\,(c^{l-1}Z_1 + \sqrt{1-(c^{l-1})^2}\,Z_2)\big)\Big],$$
where $c^{l-1} := c^{l-1}(x,x')$ and $Z_1, Z_2$ are iid standard Gaussian variables. With the ReLU activation function, we have
$$q^l(x,x') = \sigma_b^2 + \frac{\sigma_w^2}{2}\sqrt{q^l(x)}\sqrt{q^l(x')}\,f(c^{l-1}),$$
where $f$ is the ReLU correlation function, given by (Hayou et al. (2019))
$$f(c) = \frac{1}{\pi}\big(c\arcsin c + \sqrt{1-c^2}\big) + \frac{1}{2}c.$$

Covariance propagation for CNN: Similarly to the FFNN case, it is straightforward to derive a recursive formula for the covariance; here, however, the independence is across channels rather than neurons. A simple calculation yields
$$q^l_{\alpha,\alpha'}(x,x') = \mathbb{E}[y^l_{i,\alpha}(x)\,y^l_{i,\alpha'}(x')] = \sigma_b^2 + \frac{\sigma_w^2}{2k+1}\sum_{\beta\in ker}\mathbb{E}[\phi(y^{l-1}_{1,\alpha+\beta}(x))\,\phi(y^{l-1}_{1,\alpha'+\beta}(x'))].$$
With the ReLU activation function, this becomes
$$q^l_{\alpha,\alpha'}(x,x') = \sigma_b^2 + \frac{\sigma_w^2}{2(2k+1)}\sum_{\beta\in ker}\sqrt{q^l_{\alpha+\beta}(x)}\sqrt{q^l_{\alpha'+\beta}(x')}\,f(c^{l-1}_{\alpha+\beta,\alpha'+\beta}(x,x')).$$

Covariance propagation for ResNet with ReLU: This case is similar to the non-residual one, up to an additional residual term in the recursive formula. For ResNet with FFNN layers, we have
$$q^l(x,x') = q^{l-1}(x,x') + \sigma_b^2 + \frac{\sigma_w^2}{2}\sqrt{q^l(x)}\sqrt{q^l(x')}\,f(c^{l-1}),$$
and for ResNet with CNN layers,
$$q^l_{\alpha,\alpha'}(x,x') = q^{l-1}_{\alpha,\alpha'}(x,x') + \sigma_b^2 + \frac{\sigma_w^2}{2(2k+1)}\sum_{\beta\in ker}\sqrt{q^l_{\alpha+\beta}(x)}\sqrt{q^l_{\alpha'+\beta}(x')}\,f(c^{l-1}_{\alpha+\beta,\alpha'+\beta}(x,x')).$$

B.1.2 GRADIENT COVARIANCE BACK-PROPAGATION

Gradient covariance back-propagation for FFNN: Let $L$ be the loss function and $x$ an input. The back-propagation of the gradient is given by the set of equations
$$\frac{\partial L}{\partial y^l_i} = \phi'(y^l_i)\sum_{j=1}^{N_{l+1}}\frac{\partial L}{\partial y^{l+1}_j}\,W^{l+1}_{ji}.$$
Using the approximation that the weights used for forward propagation are independent from those used in back-propagation, we have, as in Schoenholz et al. (2017),
$$\tilde{q}^l(x) = \tilde{q}^{l+1}(x)\,\frac{N_{l+1}}{N_l}\,\chi(q^l(x)), \qquad \text{where } \chi(q^l(x)) = \sigma_w^2\,\mathbb{E}[\phi'(\sqrt{q^l(x)}\,Z)^2].$$
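As a sanity check of the ReLU formulas above, the following Monte Carlo snippet verifies the Gaussian integral $\mathbb{E}[\phi(\sqrt{q}Z_1)\,\phi(\sqrt{q}(cZ_1 + \sqrt{1-c^2}Z_2))] = \frac{q}{2}f(c)$ underlying the covariance recursion; the values of $q$ and $c$ are arbitrary, and the snippet is purely illustrative.

# Monte Carlo check of the ReLU covariance identity (illustrative).
import math
import numpy as np

rng = np.random.default_rng(0)
q, c, n = 1.7, 0.4, 2_000_000
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
u = np.maximum(math.sqrt(q) * z1, 0.0)
v = np.maximum(math.sqrt(q) * (c * z1 + math.sqrt(1 - c * c) * z2), 0.0)
f_c = (c * math.asin(c) + math.sqrt(1 - c * c)) / math.pi + 0.5 * c
print("Monte Carlo:", (u * v).mean(), "  closed form:", 0.5 * q * f_c)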
}, { "heading": "Gradient Covariance back-propagation for CNN:", "text": "Similar to the FFNN case, we have that\n∂L ∂W li,j,β = ∑ α ∂L ∂yli,α φ(yl−1j,α+β)\nand ∂L ∂yli,α = n∑ j=1 ∑ β∈ker ∂L ∂yl+1j,α−β W l+1i,j,βφ ′(yli,α).\nUsing the approximation of Gradient independence and averaging over the number of channels (using CLT) we have that\nE[ ∂L ∂yli,α\n2 ] = σ2wE[φ′(\n√ qlα(x)Z) 2]\n2k + 1\n∑ β∈ker E[ ∂L ∂yl+1i,α−β 2 ].\nWe can get similar recursion to that of the FFNN case by summing over α and using the periodic boundary condition, this yields∑\nα\nE[ ∂L ∂yli,α\n2 ] = χ(qlα(x)) ∑ α E[ ∂L ∂yl+1i,α 2 ]." }, { "heading": "B.1.3 EDGE OF CHAOS (EOC)", "text": "Let x ∈ Rd be an input. The convergence of ql(x) as l increases has been studied by Schoenholz et al. (2017) and Hayou et al. (2019). In particular, under weak regularity conditions, it is proven that ql(x) converges to a point q(σb, σw) > 0 independent of x as l → ∞. The asymptotic behaviour of the correlations cl(x, x′) between yl(x) and yl(x′) for any two inputs x and x′ is also driven by (σb, σw): the dynamics of cl is controlled by a function f i.e. cl+1 = f(cl) called the correlation function. The authors define the EOC as the set of parameters (σb, σw) such that σ2wE[φ′( √ q(σb, σw)Z)\n2] = 1 where Z ∼ N (0, 1). Similarly the Ordered, resp. Chaotic, phase is defined by σ2wE[φ′( √ q(σb, σw)Z) 2] < 1, resp. σ2wE[φ′( √ q(σb, σw)Z)\n2] > 1. On the Ordered phase, the gradient will vanish as it backpropagates through the network, and the correlation cl(x, x′) converges exponentially to 1. Hence the output function becomes constant (hence the name ’Ordered phase’). On the Chaotic phase, the gradient explodes and the correlation converges exponentially to some limiting value c < 1 which results in the output function being discontinuous everywhere (hence the ’Chaotic’ phase name). On the EOC, the second moment of the gradient remains constant throughout the backpropagation and the correlation converges to 1 at a sub-exponential rate, which allows deeper information propagation. Hereafter, f will always refer to the correlation function." }, { "heading": "B.1.4 SOME RESULTS FROM THE MEAN-FIELD THEORY OF DEEP FFNNS", "text": "Let ∈ (0, 1) and B = {(x, x′)Rd : c1(x, x′) < 1− } (For now B is defined only for FFNN). Using Approximation 1, the following results have been derived by Schoenholz et al. (2017) and Hayou et al. (2019):\n• There exist q, λ > 0 such that supx∈Rd |ql − q| ≤ e−λl. • On the Ordered phase, there exists γ > 0 such that supx,x′∈Rd |cl(x, x′)− 1| ≤ e−γl. • On the Chaotic phase, For all ∈ (0, 1) there exist γ > 0 and c < 1 such that\nsup(x,x′)∈B |c l(x, x′)− c| ≤ e−γl.\n• For ReLU network on the EOC, we have\nf(x) = x→1−\nx+ 2 √ 2\n3π (1− x)3/2 +O((1− x)5/2).\n• In general, we have\nf(x) = σ2b + σ 2 wE[φ(\n√ qZ1)φ( √ qZ(x))]\nq , (15)\nwhere Z(x) = xZ1 + √\n1− x2Z2 and Z1, Z2 are iid standard Gaussian variables. • On the EOC, we have f ′(1) = 1 • On the Ordered, resp. Chaotic, phase we have that f ′(1) < 1, resp. f ′(1) > 1. • For non-linear activation functions, f is strictly convex and f(1) = 1. • f is increasing on [−1, 1].\n• On the Ordered phase and EOC, f has one fixed point which is 1. On the chaotic phase, f has two fixed points: 1 which is unstable, and c ∈ (0, 1) which is a stable fixed point. • On the Ordered/Chaotic phase, the correlation between gradients computed with different\ninputs converges exponentially to 0 as we back-progapagate the gradients.\nSimilar results exist for CNN. Xiao et al. 
(2018) show that, similarly to the FFNN case, there exists q such that qlα(x) converges exponentially to q for all x, α, and studied the limiting behaviour of correlation between neurons at the same channel clα,α′(x, x) (same input x). These correlations describe how features are correlated for the same input. However, they do not capture the behaviour of these features for different inputs (i.e. clα,α′(x, x\n′) where x 6= x′). We establish this result in the next section." }, { "heading": "B.2 CORRELATION BEHAVIOUR IN CNN IN THE LIMIT OF LARGE DEPTH", "text": "Appendix Lemma 1 (Asymptotic behaviour of the correlation in CNN with smooth activation functions). We consider a 1D CNN. Let (σb, σw) ∈ (R+)2 and x 6= x′ be two inputs ∈ Rd. If (σb, σw) are either on the Ordered or Chaotic phase, then there exists β > 0 such that\nsup α,α′ |clα,α′(x, x′)− c| = O(e−βl),\nwhere c = 1 if (σb, σw) is in the Ordered phase, and c ∈ (0, 1) if (σb, σw) is in the Chaotic phase.\nProof. Let x 6= x′ be two inputs and α, α′ two nodes in the same channel i. From Section B.1, we have that\nqlα,α′(x, x ′) = E[yli,α(x)yli,α′(x′)] = σ2w 2k + 1 ∑ β∈ker E[φ(yl−11,α+β(x))φ(y l−1 1,α′+β(x ′))] + σ2b .\nThis yields\nclα,α′(x, x ′) =\n1\n2k + 1 ∑ β∈ker f(cl−1α+β,α′+β(x, x ′)),\nwhere f is the correlation function. We prove the result in the Ordered phase, the proof in the Chaotic phase is similar. Let (σb, σw) be in the Ordered phase and clm = minα,α′ c l α,a′(x, x\n′). Using the fact that f is non-decreasing (section B.1), we have that clα,α′(x, x ′) ≥ 12k+1 ∑ β∈ker c l−1 α+β,α′+β(x, x\n′)) ≥ f(cl−1m ). Taking the min again over α, α′, we have clm ≥ f(cl−1m ), therefore clm is non-decreasing and converges to a stable fixed point of f . By the convexity of f , the limit is 1 (in the Chaotic phase, f has two fixed point, a stable point c1 < 1 and c2 = 1 unstable). Moreover, the convergence is exponential using the fact that 0 < f ′(1) < 1. We conclude using the fact that supα,α′ |clα,α′(x, x′)− 1| = 1− clm." }, { "heading": "C PROOFS FOR SECTION 2 : SBP FOR FFNN/CNN AND THE RESCALING TRICK", "text": "In this section, we prove Theorem 1 and Proposition 1. Before proving Theorem 1, we state the degeneracy approximation.\nApproximation 3 (Degeneracy on the Ordered phase). On the Ordered phase, the correlation cl and the variance ql converge exponentially quickly to their limiting values 1 and q respectively. The degeneracy approximation for FFNN states that\n• ∀x 6= x′, cl(x, x′) ≈ 1\n• ∀x, ql(x) ≈ q" }, { "heading": "For CNN,", "text": "• ∀x 6= x′, α, α′, clα,α′(x, x′) ≈ 1\n• ∀x, qlα(x) ≈ q\nThe degeneracy approximation is essential in the proof of Theorem 1 as it allows us to avoid many unnecessary complications. However, the results holds without this approximation although the constants may be a bit different. Theorem 1 (Initialization is crucial for SBP). We consider a FFNN (2) or a CNN (3). Assume (σw, σb) are chosen on the ordered, i.e. χ(σb, σw) < 1, then the NN is ill-conditioned. Moreover, we have\nE[scr] ≤ 1\nL\n( 1 + log(κLN2)\nκ\n) +O ( 1\nκ2 √ LN2\n) ,\nwhere κ = | logχ(σb, σw)|/8. If (σw, σb) are on the EOC, i.e. χ(σb, σw) = 1, then the NN is well-conditioned. In this case, κ = 0 and the above upper bound no longer holds.\nProof. We prove the result using Approximation 3.\n1. Case 1 : Fully connected Feedforward Neural Networks To simplify the notation, we assume that Nl = N and Ml = N2 (i.e. αl = 1 and ζl = 1) for all l. We prove the result for the Ordered phase, the proof for the Chaotic phase is similar. 
Let L0 1, ∈ (0, 1 − 1L0 ), L ≥ L0 and x ∈ ( 1 L + , 1). With sparsity x, we keep\nkx = b(1− x)LN2c weights. We have\nP(scr ≤ x) ≥ P(max i,j |W 1ij | ∣∣ ∂L ∂W 1ij ∣∣ < t(kx)) where t(kx) is the kthx order statistic of the sequence {|W lij | ∣∣ ∂L ∂W lij\n∣∣, l > 0, (i, j) ∈ [1 : N ]2}. We have\n∂L ∂W lij = 1 |D| ∑ x∈D ∂L ∂yli(x) ∂yli(x) ∂W lij\n= 1 |D| ∑ x∈D ∂L ∂yli(x) φ(yl−1j (x)).\nOn the Ordered phase, the variance ql(x) and the correlation cl(x, x′) converge exponentially to their limiting values q, 1 (Section B.1). Under the degeneracy Approximation 3, we have\n• ∀x 6= x′, cl(x, x′) ≈ 1 • ∀x, ql(x) ≈ q\nLet q̃l(x) = E[ ∂L ∂yli(x) 2 ] (the choice of i is not important since (yli(x))i are iid ). Using these approximations, we have that yli(x) = y l i(x ′) almost surely for all x, x′. Thus\nE [ ∂L ∂W lij 2] = E[φ( √ qZ)2]q̃l(x),\nwhere x is an input. The choice of x is not important in our approximation. From Section B.1.2, we have\nq̃lx = q̃ l+1 x Nl+1 Nl χ.\nThen we obtain q̃lx =\nNL Nl q̃Lxχ L−l = q̃Lxχ L−l,\nwhere χ = σ2wE[φ( √ qZ)2] as we have assumed Nl = N . Using this result, we have\nE [ ∂L ∂W lij 2] = A χL−l,\nwhere A = E[φ(√qZ)2]q̃Lx for an input x. Recall that by definition, one has χ < 1 on the Ordered phase.\nIn the general case, i.e. without the degeneracy approximation on cl and ql, we can prove that\nE [ ∂L ∂W lij 2] = Θ(χL−l)\nwhich suffices for the rest of the proof. However, the proof of this result requires many unnecessary complications that do not add any intuitive value to the proof.\nIn the general case where the widths are different, q̃l will also scale as χL−l up to a different constant. Now we want to lower bound the probability\nP(max i,j |W 1ij | ∣∣ ∂L ∂W 1ij ∣∣ < t(kx)). Let t(kx) be the kthx order statistic of the sequence {|W lij | ∣∣ ∂L ∂W lij\n∣∣, l > 1 + L, (i, j) ∈ [1 : N ]2}. It is clear that t(kx) > t(kx) , therefore\nP(max i,j |W 1ij | ∣∣ ∂L ∂W 1ij ∣∣ < t(kx)) ≥ P(max i,j |W 1ij | ∣∣ ∂L ∂W 1ij ∣∣ < t(kx) ). Using Markov’s inequality, we have that\nP( ∣∣ ∂L ∂W 1ij\n∣∣ ≥ α) ≤ E [∣∣ ∂L ∂W 1ij ∣∣2] α2 . (16)\nNote that Var(χ l−L 2 ∣∣ ∂L ∂W lij ∣∣) = A. In general, the random variables χ l−L2 ∣∣ ∂L ∂W lij ∣∣ have a density f lij for all l > 1 + L, (i, j) ∈ [1 : N ]2, such that f lij(0) 6= 0. Therefore, there exists a constant λ such that for x small enough,\nP(χ l−L 2 ∣∣ ∂L ∂W lij ∣∣ ≥ x) ≥ 1− λx. By selecting x = χ (1− /2)L−1 2 , we obtain\nχ l−L 2 × x ≤ χ (1+ L)−L 2 χ (1− /2)L−1 2 = χ L/2.\nTherefore, for L large enough, and all l > 1 + L, (i, j) ∈ [1 : Nl]× [1 : Nl−1], we have\nP( ∣∣ ∂L ∂W lij ∣∣ ≥ χ (1− /2)L−12 ) ≥ 1− λ χ l−( L/2+1)2 ≥ 1− λ χ L/2. Now choosing α = χ (1− /4)L−1 2 in inequality (16) yields\nP( ∣∣ ∂L ∂W 1ij ∣∣ ≥ χ (1− /4)L−12 ) ≥ 1−A χ L/4. Since we do not know the exact distribution of the gradients, the trick is to bound them using the previous concentration inequalities. We define the event B := {∀(i, j) ∈ [1 : N ]× [1 : d], ∣∣ ∂L ∂W 1ij ∣∣ ≤ χ (1− /4)L−12 } ∩ {∀l > 1 + L, (i, j) ∈ [1 : N ]2, ∣∣ ∂L ∂W lij\n∣∣ ≥ χ (1− /2)L−12 }. We have\nP(max i,j |W 1ij | ∣∣ ∂L ∂W 1ij ∣∣ < t(kx) ) ≥ P(max i,j |W 1ij | ∣∣ ∂L ∂W 1ij ∣∣ < t(kx) ∣∣B)P(B).\nBut, by conditioning on the event B, we also have\nP(max i,j |W 1ij | ∣∣ ∂L ∂W 1ij ∣∣ < t(kx) ∣∣B) ≥ P(max i,j |W 1ij | < χ− L/8t′ (kx)),\nwhere t′ (kx) is the kthx order statistic of the sequence {|W lij |, l > 1 + L, (i, j) ∈ [1 : N ]2}.\nNow, as in the proof of Proposition 4 in Appendix E (MBP section), define xζ,γL = min{y ∈ (0, 1) : ∀x > y, γLQx > Q\n1−(1−x)γ 2−ζ L }, where γL = χ− L/8. 
Since limζ→2 xζ,γL = 0,\nthen there exists ζ < 2 such that xζ ,γL = + 1 L .\nAs L grows, t′ (kx) converges to the quantile of order x− 1− . Therefore,\nP(max i,j |W 1ij | < χ− L/8t′ (kx)) ≥ P(max i,j |W 1ij | < Q 1−(1− x− 1− ) γ 2−ζ L ) +O( 1√ LN2 )\n≥ 1−N2(x− 1− )γ 2−ζ L +O( 1√ LN2 ).\nUsing the above concentration inequalities on the gradient, we obtain\nP(B) ≥ (1−A χ L/4)N 2 (1− λ χ L/2)LN 2 .\nTherefore there exists a constant η > 0 independent of such that\nP(B) ≥ 1− ηLN2χ L/4. Hence, we obtain\nP(scr ≥ x) ≤ N2( x− 1− )γ 2−ζ L + ηLN2χ L/4 +O( 1√ LN2 ).\nIntegration of the previous inequality yields\nE[scr] ≤ + 1\nL +\nN2\n1 + γ2−ζ L + ηLN2χ L/4 +O( 1√ LN2 ).\nNow let κ = | log(χ)|8 and set = log(κLN2) κL . By the definition of xζ , we have\nγLQxζ ,γL = Q1−(1−xζ ,γL ) γ 2−ζ L .\nFor the left hand side, we have\nγLQxζ ,γL ∼ αγL log(κLN2)\nκL\nwhere α > 0 is the derivative at 0 of the function x→ Qx. Since γL = κLN2, we have γLQxζ ,γL ∼ αN 2 log(κLN2)\nWhich diverges as L goes to infinity. In particular this proves that the right hand side diverges and therefore we have that (1− xζ ,γL)γ 2−ζ L converges to 0 as L goes to infinity. Using the asymptotic equivalent of the right hand side as L → ∞, we have Q\n1−(1−xζ ,γL ) γ 2−ζ L\n∼ √ −2 log((1− xζ ,γL)γ 2−ζ L ) = γ 1−ζ /2 L √ −2 log(1− xζ ,γL).\nTherefore, we obtain\nQ 1−(1−xζ ,γL ) γ 2−ζ L\n∼ γ1−ζ /2L\n√ 2 log(κLN2)\nκL .\nCombining this result to the fact that γLQxζ ,γL ∼ αγL log(κLN2) κL we obtain\nγ−ζ L ∼ β log(κLN2)\nκL ,\nwhere β is a positive constant. This yields\nE[scr] ≤ log(κLN2)\nκL +\n1 L +\nµ\nκLN2 log(κLN2) (1 + o(1)) + η\n1 κ2LN2 +O( 1√ LN2 )\n= 1\nL (1 +\nlog(κLN2)\nκ ) +O( 1 κ2 √ LN2 ),\nwhere κ = | log(χ)|8 and µ is a constant.\n2. Case 2 : Convolutional Neural Networks The proof for CNNs in similar to that of FFNN once we prove that\nE [ ∂L ∂W li,j,β 2] = A χL−l\nwhere A is a constant. We have that\n∂L ∂W li,j,β = ∑ α ∂L ∂yli,α φ(yl−1j,α+β)\nand ∂L ∂yli,α = n∑ j=1 ∑ β∈ker ∂L ∂yl+1j,α−β W l+1i,j,βφ ′(yli,α).\nUsing the approximation of Gradient independence and averaging over the number of channels (using CLT) we have that\nE[ ∂L ∂yli,α\n2 ] = σ2wE[φ′(\n√ qZ)2]\n2k + 1\n∑ β∈ker E[ ∂L ∂yl+1i,α−β 2 ].\nSumming over α and using the periodic boundary condition, this yields∑ α E[ ∂L ∂yli,α 2 ] = χ ∑ α E[ ∂L ∂yl+1i,α 2 ].\nHere also, on the Ordered phase, the variance ql and the correlation cl converge exponentially to their limiting values q and 1 respectively. As for FFNN, we use the degeneracy approximation that states\n• ∀x 6= x′, α, α′, clα,α′(x, x′) ≈ 1, • ∀x, qlα(x) ≈ q.\nUsing these approximations, we have\nE [ ∂L ∂W li,j,β 2] = E[φ( √ qZ)2]q̃l(x),\nwhere q̃l(x) = ∑ α E[\n∂L ∂yli,α(x) 2 ] for an input x. The choice of x is not important in our\napproximation.\nFrom the analysis above, we have\nq̃l(x) = q̃L(x)χL−l,\nso we conclude that\nE [ ∂L ∂W li,j,β 2] = A χL−l\nwhere A = E[φ(√qZ)2]q̃L(x).\nAfter pruning, the network is usually ‘deep’ in the Ordered phase in the sense that χ = f ′(1) 1. To re-place it on the Edge of Chaos, we use the Rescaling Trick.\nProposition 1 (Rescaling Trick). Consider a NN of the form (2) or (3) (FFNN or CNN) initialized on the EOC. Then, after pruning, the sparse network is not initialized on the EOC. However, the rescaled sparse network\nyl(x) = F(ρl ◦ δl ◦W l, yl−1(x)) +Bl, for l ≥ 1, (17)\nwhere\n• ρlij = 1√E[Nl−1(W li1)2δli1] for FFNN of the form (2),\n• ρli,j,β = 1√E[nl−1(W li,1,β)2δli,1,β ] for CNN of the form (3), is initialized on the EOC.\nProof. 
For two inputs x, x′, the forward propagation of the covariance is given by\nq̂l(x, x′) = E[yli(x)yli(x′)]\n= E[ Nl−1∑ j,k W lijW l ikδ l ijδ l ikφ(ŷ l−1 j (x))φ(ŷ l−1 j (x ′))] + σ2b .\nWe have\n∂L ∂W lij = 1 |D| ∑ x∈D ∂L ∂yli(x) ∂yli(x) ∂W lij\n= 1 |D| ∑ x∈D ∂L ∂yli(x) φ(yl−1j (x)).\nUnder the assumption that the weights used for forward propagation are independent from the weights used for back-propagation, W lij and\n∂L ∂yli(x) are independent for all x ∈ D. We also have that W lij and φ(yl−1j (x)) are independent for all x ∈ D. Therefore, W lij and ∂L∂W lij are independent for all l, i, j. This yields\nq̂l(x, x′) = σ2wαlE[φ(ŷ l−1 1 (x))φ(ŷ l−1 1 (x ′))] + σ2b ,\nwhere αl = E[Nl−1(W l11)2δl11] (the choice of i, j does not matter because they are iid). Unless we do not prune any weights from the lth layer, we have that αl < 1. These dynamics are the same as a FFNN with the variance of the weights given by σ̂2w = σ 2 wαl.\nSince the EOC equation is given by σ2wE[φ′( √ qZ)2] = 1, with the new variance, it is clear that\nσ̂2wE[φ′( √ q̂Z)2] 6= 1 in general. Hence, the network is no longer on the EOC and this could be problematic for training. With the rescaling, this becomes\nq̂l(x, x′) = σ2wρ 2 l αlE[φ(ỹ l−1 1 (x))φ(ỹ l−1 1 (x ′))] + σ2b\n= σ2wE[φ(ỹ l−1 1 (x))φ(ỹ l−1 1 (x ′))] + σ2b .\nTherefore, the new variance after re-scaling is σ̃2w = σ 2 w, and the limiting variance q̃ = q remains also\nunchanged since the dynamics are the same. Therefore σ̃2wE[φ′( √ q̃Z)2] = σ2wE[φ′( √ qZ)2] = 1. Thus, the re-scaled network is initialized on the EOC. The proof is similar for CNNs." }, { "heading": "D PROOF FOR SECTION 3 : SBP FOR STABLE RESIDUAL NETWORKS", "text": "Theorem 2 (Resnet is well-conditioned). Consider a Resnet with either Fully Connected or Convolutional layers and ReLU activation function. Then for all σw > 0, the Resnet is well-conditioned. Moreover, for all l ∈ {1, ..., L},ml = Θ((1 + σ 2 w\n2 ) L).\nProof. Let us start with the case of a Resnet with Fully Connected layers. we have that\n∂L ∂W lij = 1 |D| ∑ x∈D ∂L ∂yli(x) ∂yli(x) ∂W lij\n= 1 |D| ∑ x∈D ∂L ∂yli(x) φ(yl−1j (x))\nand the backpropagation of the gradient is given by the set of equations\n∂L ∂yli = ∂L ∂yl+1i + φ′(yli) Nl+1∑ j=1 ∂L ∂yl+1j W l+1ji .\nRecall that ql(x) = E[yli(x)2] and q̃l(x, x′) = E[ ∂L∂yli(x) ∂L ∂yli(x ′) ] for some inputs x, x′. We have that\nql(x) = E[yl−1i (x) 2] + σ2wE[φ(y l−1 1 ) 2] = (1 + σ2w 2 )ql−1(x),\nand q̃l(x, x′) = (1 + σ2wE[φ′(yli(x))φ′(yli(x′))])q̃l+1(x, x′).\nWe also have\nE[ ∂L ∂W lij\n2\n] = 1 |D|2 ∑ x,x′ tlx,x′ ,\nwhere tlx,x′ = q̃ l(x, x′)\n√ ql(x)ql(x′)f(cl−1(x, x′)) and f is defined in the preliminary results (Eq\n15). Let k ∈ {1, 2, ..., L} be fixed. We compare the terms tlx,x′ for l = k and l = L. The ratio between the two terms is given by (after simplification)\ntkx,x′ tLx,x′ =\n∏L−1 l=k (1 + σ2w 2 f ′(cl(x, x′)))\n(1 + σ2w 2 ) L−k\nf(ck−1(x, x′)) f(cL−1(x, x′)) .\nWe have that f ′(cl(x, x)) = f ′(1) = 1. A Taylor expansion of f near 1 yields f ′(cl(x, x′)) = 1 − l−1 + o(l−1) and f(cl(x, x)) = 1 − sl−2 + o(l−2) (see Hayou et al. (2019) for more details).\nTherefore, there exist two constants A,B > 0 such that A < ∏L−1 l=k (1+ σ2w 2 f ′(cl(x,x′)))\n(1+ σ2w 2 )\nL−k < B for all L\nand k ∈ {1, 2, ..., L}. 
This yields\nA ≤ E[ ∂L ∂W lij 2 ]\nE[ ∂L ∂WLij\n2 ] ≤ B,\nwhich concludes the proof.\nFor Resnet with convolutional layers, we have\n∂L ∂W li,j,β = 1 |D| ∑ x∈D ∑ α ∂L ∂yli,α(x) φ(yl−1j,α+β(x))\nand ∂L ∂yli,α = ∂L ∂yl+1i,α + n∑ j=1 ∑ β∈ker ∂L ∂yl+1j,α−β W l+1i,j,βφ ′(yli,α).\nRecall the notation q̃lα,α′(x, x ′) = E[ ∂L ∂yli,α(x) ∂L ∂yl i,α′ (x ′) ]. Using the hypothesis of independence of forward and backward weights and averaging over the number of channels (using CLT), we have\nq̃lα,α′(x, x ′) = q̃l+1α,α′(x, x\n′) + σ2wf ′(clα,α′(x, x ′))\n2(2k + 1)\n∑ β q̃l+1α+β,α′+β(x, x ′).\nLet Kl = ((q̃lα,α+β(x, x ′))α∈[0:N−1])β∈[0:N−1] be a vector in RN\n2\n. Writing this previous equation in matrix form, we obtain\nKl = (I + σ2wf ′(clα,α′(x, x ′))\n2(2k + 1) U)Kl+1\nand\nE[ ∂L\n∂W li,j,β\n2\n] = 1 |D|2 ∑\nx,x′∈D ∑ α,α′ tlα,α′(x, x ′),\nwhere tlα,α′(x, x ′) = q̃lα,α′(x, x ′) √ qlα+β(x)q l α′+β(x ′)f(cl−1α+β,α′+β(x, x ′)). Since we have f ′(clα,α′(x, x ′))→ 1, then by fixing l and letting L goes to infinity, it follows that\nKl ∼L→∞ (1 + σ2w 2 )L−le1e T 1 KL\nand, from Lemma 2, we know that√ qlα+β(x)q l α′+β(x ′) = (1 + σ2w 2 )l−1 √ q0,xq0,x′ .\nTherefore, for a fixed k < L, we have tkα,α′(x, x ′) ∼ (1 + σ\n2 w\n2 ) L−1f(ck−1α+β,α′+β(x, x ′))(eT1 KL) =\nΘ(tLα,α′(x, x ′)). This concludes the proof.\nProposition 2 (Stable Resnet). Consider the following Resnet parameterization\nyl(x) = yl−1(x) + 1√ L F(W l, yl−1), for l ≥ 2, (18)\nthen the network is well-conditioned for all choices of σw > 0. Moreover, for all l ∈ {1, ..., L} we have ml = Θ(L−1).\nProof. The proof is similar to that of Theorem 2 with minor differences. Let us start with the case of a Resnet with fully connected layers, we have\n∂L ∂W lij = 1 |D| √ L ∑ x∈D ∂L ∂yli(x) ∂yli(x) ∂W lij\n= 1\n|D| √ L ∑ x∈D ∂L ∂yli(x) φ(yl−1j (x))\nand the backpropagation of the gradient is given by\n∂L ∂yli = ∂L ∂yl+1i + 1√ L φ′(yli) Nl+1∑ j=1 ∂L ∂yl+1j W l+1ji .\nRecall that ql(x) = E[yli(x)2] and q̃l(x, x′) = E[ ∂L∂yli(x) ∂L ∂yli(x ′) ] for some inputs x, x′. We have\nql(x) = E[yl−1i (x) 2] + σ2w L E[φ(yl−11 (x)) 2] = (1 + σ2w 2L )ql−1(x)\nand\nq̃l(x, x′) = (1 + σ2w L E[φ′(yli(x))φ′(yli(x′))])q̃l+1(x, x′).\nWe also have\nE[ ∂L ∂W lij\n2\n] = 1 L|D|2 ∑ x,x′ tlx,x′ ,\nwhere tlx,x′ = q̃ l(x, x′)\n√ ql(x)ql(x′)f(cl−1(x, x′)) and f is defined in the preliminary results (Eq.\n15). Let k ∈ {1, 2, ..., L} be fixed. We compare the terms tlx,x′ for l = k and l = L. The ratio between the two terms is given after simplification by\ntkx,x′ tLx,x′ =\n∏L−1 l=k (1 + σ2w 2L f ′(cl(x, x′)))\n(1 + σ2w 2L ) L−k\nf(ck−1(x, x′)) f(cL−1(x, x′)) .\nAs in the proof of Theorem 2, we have that f ′(cl(x, x)) = 1, f ′(cl(x, x′)) = 1 − l−1 + o(l−1) and f(cl(x, x)) = 1 − sl−2 + o(l−2). Therefore, there exist two constants A,B > 0 such that\nA < ∏L−1 l=k (1+ σ2w 2L f ′(cl(x,x′)))\n(1+ σ2w 2L )\nL−k < B for all L and k ∈ {1, 2, ..., L}. This yields\nA ≤ E[ ∂L ∂W lij 2 ]\nE[ ∂L ∂WLij\n2 ] ≤ B.\nMoreover, since (1+ σ 2 w 2L ) L → eσ2w/2, thenml = Θ(1) for all l ∈ {1, ..., L}. This concludes the proof.\nFor Resnet with convolutional layers, the proof is similar. With the scaling, we have\n∂L ∂W li,j,β = 1√ L|D| ∑ x∈D ∑ α ∂L ∂yli,α(x) φ(yl−1j,α+β(x))\nand ∂L ∂yli,α = ∂L ∂yl+1i,α + 1√ L n∑ j=1 ∑ β∈ker ∂L ∂yl+1j,α−β W l+1i,j,βφ ′(yli,α).\nLet q̃lα,α′(x, x ′) = E[ ∂L ∂yli,α(x) ∂L ∂yl i,α′ (x ′) ]. 
Using the hypothesis of independence of forward and backward weights and averaging over the number of channels (using CLT) we have\nq̃lα,α′(x, x ′) = q̃l+1α,α′(x, x\n′) + σ2wf ′(clα,α′(x, x ′))\n2(2k + 1)L\n∑ β q̃l+1α+β,α′+β(x, x ′).\nLet Kl = ((q̃lα,α+β(x, x ′))α∈[0:N−1])β∈[0:N−1] is a vector in RN\n2\n. Writing this previous equation in matrix form, we have\nKl = (I + σ2wf ′(clα,α′(x, x ′))\n2(2k + 1)L U)Kl+1,\nand\nE[ ∂L\n∂W li,j,β\n2\n] = 1 L|D|2 ∑\nx,x′∈D ∑ α,α′ tlα,α′(x, x ′),\nwhere tlα,α′(x, x ′) = q̃lα,α′(x, x ′) √ qlα+β(x)q l α′+β(x ′)f(cl−1α+β,α′+β(x, x ′)). Since we have f ′(clα,α′(x, x ′))→ 1, then by fixing l and letting L goes to infinity, we obtain\nKl ∼L→∞ (1 + σ2w 2L )L−le1e T 1 KL\nand we know from Appendix Lemma 2 (using αβ = σ2w 2L for all β) that√\nqlα+β(x)q l α′+β(x ′) = (1 + σ2w 2L )l−1 √ q0,xq0,x′ .\nTherefore, for a fixed k < L, we have tkα,α′(x, x ′) ∼ (1 + σ\n2 w\n2L ) L−1f(ck−1α+β,α′+β(x, x ′))(eT1 KL) =\nΘ(tLα,α′(x, x ′)) which proves that the stable Resnet is well conditioned. Moreover, since (1 + σ2w 2L ) L−1 → eσ2w/2, then ml = Θ(L−1) for all l.\nIn the next Lemma, we study the asymptotic behaviour of the variance qlα. We show that, as l→∞, a phenomenon of self averaging shows that qlα becomes independent of α. Appendix Lemma 2. Let x ∈ Rd. Assume the sequence (al,α)l,α is given by the recursive formula\nal,α = al−1,α + ∑ β∈ker λβal−1,α+β\nwhere λβ > 0 for all β. Then, there exists ζ > 0 such that for all x ∈ Rd and α, al,α(x) = (1 + ∑ β αβ) la0 +O((1 + ∑ β αβ) le−ζl)),\nwhere a0 is a constant and the O is uniform in α.\nProof. Recall that\nal,α = al−1,α + ∑ β∈ker λβal−1,α+β .\nWe rewrite this expression in a matrix form\nAl = UAl−1,\nwhere Al = (al,α)α is a vector in RN and U is the is the convolution matrix. As an example, for k = 1, U given by\nU = 1 + λ0 λ1 0 ... 0 λ−1 λ−1 1 + λ0 λ1 0 . . . 0 0 λ−1 1 + λ0 λ1 . . . 0 0 0 λ−1 1 + λ0 . . . 0\n. . . . . . . . . . . . λ1 0 . . . 0 λ−1 1 + λ0\n .\nU is a circulant symmetric matrix with eigenvalues b1 > b2 ≥ b3... ≥ bN . The largest eigenvalue of U is given by b1 = 1 + ∑ β λβ and its equivalent eigenspace is generated by the vector e1 = 1√ N (1, 1, ..., 1) ∈ RN . This yields\nb−l1 U l = e1e T 1 +O(e −ζl),\nwhere ζ = log( b1b2 ). Using this result, we obtain\nb−l1 Al = (b −l 1 U l)A0 = e1e T 1 A0 +O(e −ζl).\nThis concludes the proof.\nUnlike FFNN or CNN, we do not need to rescale the pruned network. The next proposition establishes that a Resnet lives on the EOC in the sense that the correlation between yli(x) and y l i(x ′) converges to 1 at a sub-exponential O(l−2) rate. Proposition 3 (Resnet live on the EOC even after pruning). Let x 6= x′ be two inputs. The following statments hold\n1. For Resnet with Fully Connected layers, let ĉl(x, x′) be the correlation between ŷli(x) and ŷli(x ′) after pruning the network. Then we have\n1− ĉl(x, x′) ∼ κ l2 ,\nwhere κ > 0 is a constant.\n2. For Resnet with Convolutional layers, let ĉl(x, x′) = ∑ α,α′ E[y l 1,α(x)y l 1,α′ (x\n′)]∑ α,α′ √ qlα(x) √ qlα ′(x′) be an ‘average’\ncorrelation after pruning the network. Then we have\n1− ĉl(x, x′) & l−2.\nProof. 1. Let x and x′ be two inputs. The covariance of ŷli(x) and ŷ l i(x ′) is given by\nq̂l(x, x′) = q̂l−1(x, x′) + αE(Z1,Z2)∼N (0,Ql−1)[φ(Z1)φ(Z2)] where Ql−1 = [ q̂l−1(x) q̂l−1(x, x′) q̂l−1(x, x′) q̂l−1(x′) ] and α = E[Nl−1W l112δl11].\nConsequently, we have q̂l(x) = (1 + α2 )q̂ l−1(x). 
Therefore, we obtain\nĉl(x, x′) = 1\n1 + λ ĉl−1(x, x′) +\nλ\n1 + λ f(ĉl−1(x, x′)),\nwhere λ = α2 and f(x) = 2E[φ(Z1)φ(xZ1 + √\n1− x2Z2)] and Z1 and Z2 are iid standard normal variables.\nUsing the fact that f is increasing (Section B.1), it is easy to see that ĉl(x, x′) → 1. Let ζl = 1 − ĉl(x, x′). Moreover, using a Taylor expansion of f near 1 (Section B.1) f(x) =\nx→1− x+ β(1− x)3/2 +O((1− x)5/2), it follows that\nζl = ζl−1 − ηζ3/2l−1 +O(ζ 5/2 l−1),\nwhere η = λβ1+λ . Now using the asymptotic expansion of ζ −1/2 l given by\nζ −1/2 l = ζ −1/2 l−1 +\nη 2 +O(ζl−1),\nthis yields ζ−1/2l ∼ l→∞ η 2 l. We conclude that 1− ĉ l(x, x′) ∼ 4η2l2 .\n2. Let x be an input. Recall the forward propagation of a pruned 1D CNN\nyli,α(x) = y l−1 i,α (x) + c∑ j=1 ∑ β∈ker δli,j.βW l i,j,βφ(y l−1 j,α+β(x)) + b l i.\nUnlike FFNN, neurons in the same channel are correlated since we use the same filters for all of them. Let x, x′ be two inputs and α, α′ two nodes in the same channel i. Using the Central Limit Theorem in the limit of large nl (number of channels), we have\nE[yli,α(x)yli,α′(x′)] = E[y l−1 i,α (x)y l−1 i,α′(x\n′)]+ 1\n2k + 1 ∑ β∈ker αβE[φ(yl−11,α+β(x))φ(y l−1 1,α′+β(x ′))],\nwhere αβ = E[δli,1.βW li,1,β2nl−1].\nLet qlα(x) = E[yl1,α(x)2]. The choice of the channel is not important since for a given α, neurons (yli,α(x))i∈[c] are iid. Using the previous formula, we have\nqlα(x) = q l−1 α (x) +\n1\n2k + 1 ∑ β∈ker αβE[φ(yl−11,α+β(x)) 2]\n= ql−1α (x) + 1\n2k + 1 ∑ β∈ker αβ ql−1α+β(x) 2 .\nTherefore, letting ql(x) = 1N ∑ α∈[N ] q l α(x) and σ = ∑ β αβ 2k+1 , we obtain\nql(x) = ql−1(x) + 1\n2k + 1 ∑ β∈ker αβ ∑ α∈[n] ql−1α+β(x) 2\n= (1 + σ\n2 )ql−1(x) = (1 +\nσ 2 )l−1q1(x),\nwhere we have used the periodicity ql−1α = q l−1 α−N = q l−1 α+N . Moreover, we have minα q l α(x) ≥ (1 + σ2 ) minα q l−1 α (x) ≥ (1 + σ2 ) l−1 minα q 1 α(x).\nThe convolutional structure makes it hard to analyse the correlation between the values of a neurons for two different inputs. Xiao et al. (2018) studied the correlation between the values of two neurons in the same channel for the same input. Although this could capture the propagation of the input structure (say how different pixels propagate together) inside the network, it does not provide any information on how different structures from different inputs propagate. To resolve this situation, we study the ’average’ correlation per channel defined as\ncl(x, x′) = ∑ α,α′ E[yl1,α(x)yl1,α′(x′)]∑ α,α′ √ qlα(x) √ qlα ′(x′) ,\nfor any two inputs x 6= x′. We also define c̆l(x, x′) by\nc̆l(x, x′) = 1 N2 ∑ α,α′ E[yl1,α(x)yl1,α′(x′)]√\n1 N ∑ α q l α(x) √ 1 N ∑ α q l α(x ′) .\nUsing the concavity of the square root function, we have√ 1\nN ∑ α qlα(x)\n√ 1\nN ∑ α qlα(x ′) =\n√ 1\nN2 ∑ α,α′ qlα(x)q l α(x ′)\n≥ 1 N2 ∑ α,α′ √ qlα(x) √ qlα(x ′)\n≥ 1 N2 ∑ α,α′ |E[yl1,α(x)yl1,α′(x′)]|.\nThis yields c̆l(x, x′) ≤ cl(x, x′) ≤ 1. Using Appendix Lemma 2 twice with al,α = qlα(x), al,α = q l α(x ′), and λβ = αβ 2(2k+1) , there exists ζ > 0 such that\ncl(x, x′) = c̆l(x, x′)(1 +O(e−ζl)). (19)\nThis result shows that the limiting behaviour of cl(x, x′) is equivalent to that of c̆l(x, x′) up to an exponentially small factor. We study hereafter the behaviour of c̆l(x, x′) and use this result to conclude. 
Recall that\nE[yli,α(x)yli,α′(x′)] = E[y l−1 i,α (x)y l−1 i,α′(x\n′)]+ 1\n2k + 1 ∑ β∈ker αβE[φ(yl−11,α+β(x))φ(y l−1 1,α′+β(x ′))].\nTherefore,∑ α,α′ E[yl1,α(x)yl1,α′(x′)]\n= ∑ α,α′ E[yl−11,α (x)y l−1 1,α′(x ′)] + 1 2k + 1 ∑ α,α′ ∑ β∈ker αβE[φ(yl−11,α+β(x))φ(y l−1 1,α′+β(x ′))]\n= ∑ α,α′ E[yl−11,α (x)y l−1 1,α′(x ′)] + σ ∑ α,α′ E[φ(yl−11,α (x))φ(y l−1 1,α′(x ′))]\n= ∑ α,α′ E[yl−11,α (x)y l−1 1,α′(x ′)] + σ 2 ∑ α,α′ √ ql−1α (x) √ ql−1α ′(x′)f(c l−1 α,α′(x, x ′)),\nwhere f is the correlation function of ReLU.\nLet us first prove that c̆l(x, x′) converges to 1. Using the fact that f(z) ≥ z for all z ∈ (0, 1) (Section B.1), we have that\n∑ α,α′ E[yl1,α(x)yl1,α′(x′)] ≥ ∑ α,α′ E[yl−11,α (x)y l−1 1,α′(x ′)] + σ 2 ∑ α,α′ √ ql−1α (x) √ ql−1α ′(x′)c l−1 α,α′(x, x ′)\n= ∑ α,α′ E[yl−11,α (x)y l−1 1,α′(x ′)] + σ 2 ∑ α,α′ E[yl−11,α (x)y l−1 1,α′(x ′)]\n= (1 + σ\n2 )E[yl−11,α (x)y l−1 1,α′(x ′)].\nCombining this result with the fact that ∑ α q l α(x) = (1 + σ 2 ) ∑ α q l−1 α (x), we have c̆l(x, x′) ≥ c̆l−1(x, x′). Therefore c̆l(x, x′) is non-decreasing and converges to a limiting point c. Let us prove that c = 1. By contradiction, assume the limit c < 1. Using equation (19), we have that c\nl(x,x′) c̆l(x,x′) converge to 1 as l goes to infinity. This yields cl(x, x′) → c. Therefore, there exists α0, α′0 and a constant δ < 1 such that for all l, c\nl α0,α′0 (x, x′) ≤ δ < 1. Knowing that f is strongly convex and that f ′(1) = 1, we have that f(clα0,α′0(x, x\n′)) ≥ clα0,α′0 (x, x′) + f(δ)− δ. Therefore,\nc̆l(x, x′) ≥ c̆l−1(x, x′) + σ 2\n√ ql−1α0 (x)q\nl−1 α′0 (x′) N2 √ ql(x) √ ql(x′) (f(δ)− δ)\n≥ c̆l−1(x, x′) + σ 2\n√ minα q1α(x) minα′ q 1 α′(x ′)\nN2 √ q1(x) √ q1(x′)\n(f(δ)− δ).\nBy taking the limit l→∞, we find that c ≥ c+ σ 2\n√ minα q1α(x) minα′ q 1 α′ (x ′)\nN2 √ q1(x) √ q1(x′) (f(δ)− δ). This cannot be true since f(δ) > δ. Thus we conclude that c = 1.\nNow we study the asymptotic convergence rate. From Section B.1, we have that\nf(x) = x→1−\nx+ 2 √ 2\n3π (1− x)3/2 +O((1− x)5/2).\nTherefore, there exists κ > 0 such that, close to 1− we have that\nf(x) ≤ x+ κ(1− x)3/2.\nUsing this result, we can upper bound cl(x, x′)\nc̆l(x, x′) ≤ c̆l−1(x, x′) + κ ∑ α,α′ 1 N2\n√ ql−1α (x) √ ql−1α′ (x\n′)√ ql(x) √ ql(x′) (1− clα,α′(x, x′))3/2.\nTo get a polynomial convergence rate, we should have an upper bound of the form c̆l ≤ c̆l−1 + ζ(1− c̆l−1)1+ (see below). However, the function x3/2 is convex, so the sum cannot be upper-bounded directly using Jensen’s inequality. We use here instead (Pečarić et al., 1992, Theorem 1) which states that for any x1, x2, ...xn > 0 and s > r > 0, we have(∑\ni\nxsi )1/s < (∑\ni\nxri )1/r . (20)\nLet zlα,α′ = 1 N2\n√ ql−1α (x) √ ql−1 α′ (x\n′)√ ql(x) √ ql(x′)\n, we have∑ α,α′ zlα,α′(1− clα,α′(x, x′))3/2 ≤ ζl ∑ α,α′ [zlα,α′(1− clα,α′(x, x′))]3/2,\nwhere ζl = maxα,α′ 1zl α,α′ 1/2 . Using the inequality (20) with s = 3/2 and r = 1, we have∑ α,α′ [zlα,α′(1− clα,α′(x, x′))]3/2 ≤ ( ∑ α,α′ zlα,α′(1− clα,α′(x, x′)))3/2\n= ( ∑ α,α′ zlα,α′ − c̆l(x, x′)))3/2.\nMoreover, using the concavity of the square root function, we have ∑ α,α′ z l α,α′ ≤ 1. This yields\nc̆l(x, x′) ≤ c̆l−1(x, x′) + ζ(1− c̆l−1(x, x′))3/2, where ζ is constant. Letting γl = 1 − c̆l(x, x′), we can conclude using the following inequality (we had an equality in the case of FFNN)\nγl ≥ γl−1 − ζγ3/2l−1 which leads to\nγ −1/2 l ≤ γ −1/2 l−1 (1− ζγ 1/2 l−1) −1/2 = γ −1/2 l−1 +\nζ 2 + o(1).\nHence we have γl & l −2.\nUsing this result combined with (19) again, we conclude that\n1− cl(x, x′) & l−2." 
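As an illustrative numerical check of the rate in Proposition 3 (part 1), the snippet below iterates the pruned residual recursion $\hat{c}^l = \frac{1}{1+\lambda}\hat{c}^{l-1} + \frac{\lambda}{1+\lambda}f(\hat{c}^{l-1})$ and monitors $l^2(1-\hat{c}^l)$, which should stabilize, reflecting the $\kappa/l^2$ decay. The value of $\lambda = \alpha/2$ is arbitrary here.

# Numerical check of the 1/l^2 correlation decay for pruned ResNets.
import math

def f(c):   # ReLU correlation function (Section B.1)
    return (c * math.asin(c) + math.sqrt(1.0 - c * c)) / math.pi + 0.5 * c

lam, c = 0.3, 0.1
for l in range(1, 10001):
    c = (c + lam * f(c)) / (1.0 + lam)
    if l in (100, 1000, 10000):
        print(l, (1.0 - c) * l * l)    # roughly constant for large l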
}, { "heading": "E THEORETICAL ANALYSIS OF MAGNITUDE BASED PRUNING (MBP)", "text": "In this section, we provide a theoretical analysis of MBP. The two approximations from Appendix A are not used here.\nMBP is a data independent pruning algorithm (zero-shot pruning). The mask is given by\nδli = { 1 if |W li | ≥ ts, 0 if |W li | < ts,\nwhere ts is a threshold that depends on the sparsity s. By defining ks = (1− s) ∑ lMl, ts is given by ts = |W |(ks) where |W |(ks) is the kths order statistic of the network weights (|W li |)1≤l≤L,1≤i≤Ml (|W |(1) > |W |(2) > ...). With MBP, changing σw does not impact the distribution of the resulting sparse architecture since it is a common factor for all the weights. However, in the case of different scaling factors vl, the variances σ 2 w\nvl used to initialize the weights vary across layers. This gives potentially the erroneous\nintuition that the layer with the smallest variance will be highly likely fully pruned before others as we increase the sparsity s. This is wrong in general since layers with small variances might have more weights compared to other layers. However, we can prove a similar result by considering the limit of large depth with fixed widths.\nProposition 4 (MBP in the large depth limit). Assume N is fixed and there exists l0 ∈ [|1, L|] such that αl0 > αl for all l 6= l0. Let Qx be the xth quantile of |X| where X\niid∼ N (0, 1) and γ = minl 6=l0 αl0 αl\n. For ∈ (0, 2), define x ,γ = inf{y ∈ (0, 1) : ∀x > y, γQx > Q1−(1−x)γ2− } and x ,γ =∞ for the null set. Then, for all ∈ (0, 2), x ,γ is finite and there exists a constant ν > 0 such that\nE[scr] ≤ inf ∈(0,2)\n{x ,γ + ζl0N\n2\n1 + γ2− (1− x ,γ)1+γ\n2− }+O( 1√\nLN2 ).\nProposition 4 gives an upper bound on E[scr] in the large depth limit. The upper bound is easy to approximate numerically. Table 7 compares the theoretical upper bound in Proposition 4 to the empirical value of E[scr] over 10 simulations for a FFNN with depth L = 100, N = 100, α1 = γ and α2 = α3 = · · · = αL = 1. Our experiments reveal that this bound can be tight.\nProof. Let x ∈ (0, 1) and kx = (1− x)ΓLN2, where ΓL = ∑ l 6=l0 ζl. We have\nP(scr ≤ x) ≥ P(max i |W l0i | < |W | (kx)),\nwhere |W |(kx) is the kthx order statistic of the sequence {|W li |, l 6= l0, i ∈ [1 : Ml]}; i.e |W |(1) > |W |(2) > ... > |W |(kx).\nLet (Xi)i∈[1:Ml0 ] and (Zi)i∈[1:ΓLN2] be two sequences of iid standard normal variables. It is easy to see that\nP(max i,j |W l0ij | < |W | (kx)) ≥ P(max i |Xi| < γ|Z|(kx))\nwhere γ = minl 6=l0 αl0 αl .\nMoreover, we have the following result from the theory of order statistics, which is a weak version of Theorem 3.1. in Puri and Ralescu (1986)\nAppendix Lemma 3. Let X1, X2, ..., Xn be iid random variables with a cdf F . Assume F is differentiable and let p ∈ (0, 1) and let Qp be the order p quantile of the distribution F , i.e. F (Qp) = p. Then we have\n√ n(X(pn) −Qp)F ′(Qp)σ−1p →\nD N (0, 1),\nwhere the convergence is in distribution and σp = p(1− p).\nUsing this result, we obtain\nP(max i |Xi| < γ|Z|(kx)) = P(max i |Xi| < γQx) +O( 1√ LN2 ),\nwhere Qx is the x quantile of the folded standard normal distribution.\nThe next result shows that x ,γ is finite for all ∈ (0, 2). Appendix Lemma 4. Let γ > 1. For all ∈ (0, 2), there exists x ∈ (0, 1) such that, for all x > x , γQx > Q1−(1−x)γ2− .\nProof. Let > 0, and recall the asymptotic equivalent of Q1−x given by Q1−x ∼x→0 √ −2 log(x)\nTherefore, γQxQ 1−(1−x)γ2−\n∼x→1 √ γ > 1. Hence x exists and is finite.\nLet > 0. 
Using Appendix Lemma 4, there exists x_ε > 0 such that, for all x > x_ε,

P(max_i |X_i| < γ Q_x) ≥ P(max_i |X_i| < Q_{1−(1−x)^{γ^{2−ε}}})
= (1 − (1 − x)^{γ^{2−ε}})^{ζ_{l_0} N²}
≥ 1 − ζ_{l_0} N² (1 − x)^{γ^{2−ε}},

where we have used the inequality (1 − t)^z ≥ 1 − zt for all (t, z) ∈ [0, 1] × (1, ∞).

Using the last result, we have

P(s_cr ≥ x) ≤ ζ_{l_0} N² (1 − x)^{γ^{2−ε}} + O(1/√(LN²)).

Now we have

E[s_cr] = ∫₀¹ P(s_cr ≥ x) dx
≤ x_ε + ∫_{x_ε}^1 P(s_cr ≥ x) dx
≤ x_ε + (ζ_{l_0} N² / (1 + γ^{2−ε})) (1 − x_ε)^{γ^{2−ε}+1} + O(1/√(LN²)).

This is true for all ε ∈ (0, 2), and the additional term O(1/√(LN²)) does not depend on ε. Therefore there exists a constant ν ∈ R such that, for all ε,

E[s_cr] ≤ x_ε + (ζ_{l_0} N² / (1 + γ^{2−ε})) (1 − x_ε)^{γ^{2−ε}+1} + ν/√(LN²).

We conclude by taking the infimum over ε.

Another interesting aspect of MBP arises when the depth is fixed and the width goes to infinity. The next result gives a lower bound on the probability of pruning at least one full layer.

Proposition 5 (MBP in the large width limit). Assume there exists l_0 ∈ [1 : L] such that α_{l_0} > α_l (i.e., v_{l_0} > v_l) for all l ≠ l_0, and let s_0 = M_{l_0} / Σ_l M_l. For some sparsity s, let PR_{l_0}(s) be the event that layer l_0 is fully pruned before the other layers, i.e.,

PR_{l_0}(s) = {|A_{l_0}| = M_{l_0}} ∩ (∩_{l≠l_0} {|A_l| < M_l}),

and let PR_{l_0} = ∪_{s∈(s_0, s_max)} PR_{l_0}(s) be the event that there exists a sparsity s such that layer l_0 is fully pruned before the other layers. Then, we have

P(PR_{l_0}) ≥ 1 − Lπ² / (4(γ − 1)² log(N)²) + o(1/log(N)²),

where γ = min_{k≠l_0} α_{l_0}/α_k.

Proposition 5 shows that, when the width is not the same for all layers, MBP will result in one layer being fully pruned with a probability that converges to 1 as the width goes to infinity. The larger the ratio γ (the ratio of widths between the largest and the second largest layers), the faster this probability goes to 1.

The intuition behind Proposition 5 comes from a result in Extreme Value Theory stated in Appendix Lemma 6. Indeed, the problem of pruning one whole layer before the others is essentially a problem of maxima: we prune the whole layer l_0 before the others if and only if max_i |W^{l_0}_i| < min_{l≠l_0} max_i |W^l_i|. The expected value of the maximum of n iid standard Gaussian variables is known to scale as √(log n) for large n; see e.g. Van Handel (2016).

The proof of Proposition 5 relies on the following two auxiliary results.

Appendix Lemma 5 (Rearrangement inequality (Hardy et al., 1952)). Let f, g : R → R₊ be functions which are either both non-decreasing or both non-increasing, and let X be a random variable. Then E[f(X)g(X)] ≥ E[f(X)]E[g(X)].

Appendix Lemma 6 (Von Mises (1936)). Let (X_i)_{1≤i≤n} be iid random variables with common density f and cumulative distribution function F. Assume lim_{x→F^{-1}(1)} (d/dx)((1 − F(x))/f(x)) = 0. Then lim_{n→∞} P(max_i X_i ≤ a_n x + b_n) = G(x), where G is the Gumbel cumulative distribution function and the sequences a_n and b_n are given by b_n = F^{-1}(1 − 1/n) and a_n = 1/(n f(b_n)).

We are now in a position to prove Proposition 5.

Proof. Assume there exists l_0 ∈ [1 : L] such that α_{l_0} > α_l for all l ≠ l_0. The trick is to see that

PR_{l_0} = {∀k ≠ l_0, max_i |W^{l_0}_i| < max_i |W^k_i|}.

Let us prove that

P(PR_{l_0}) ≥ Π_{k≠l_0} P(max_i |W^{l_0}_i| < max_i |W^k_i|).

Let X = max_i |W^{l_0}_i|.
We have that

P(PR_{l_0}) = E[Π_{k≠l_0} P(X < max_i |W^k_i| | X)].

Using the rearrangement inequality presented in Appendix Lemma 5 with the functions f_k(x) = P(x < max_i |W^k_i|), which are all non-increasing in x, we obtain

P(PR_{l_0}) ≥ Π_{k≠l_0} E[P(X < max_i |W^k_i| | X)] = Π_{k≠l_0} P(max_i |W^{l_0}_i| < max_i |W^k_i|).

In order to deal with the probability P(max_i |W^{l_0}_i| < max_i |W^k_i|), we use Appendix Lemma 6, a result from Extreme Value Theory which provides the description of the law of max_i X_i needed in our analysis. In our case, we want to characterise the behaviour of max_i |X_i| where the X_i are iid Gaussian random variables. Let Ψ and ψ be the cdf and density of a standard Gaussian variable X. The cdf of |X| is given by F = 2Ψ − 1 and its density by f = 2ψ on the positive real line. Thus (1 − F)/f = (1 − Ψ)/ψ, and it is sufficient to verify the condition of Appendix Lemma 6 for the standard Gaussian distribution. We have

lim_{x→F^{-1}(1)} (d/dx)((1 − Ψ(x))/ψ(x)) = lim_{x→F^{-1}(1)} x(1 − Ψ(x))/ψ(x) − 1 = 0,

where we have used the fact that 1 − Ψ(x) ∼ ψ(x)/x in the large x limit.

Let us now find the values of a_n and b_n. In the large x limit, we have

1 − F(x) = 2 ∫_x^∞ e^{−t²/2}/√(2π) dt = √(2/π) e^{−x²/2} (1/x − 1/x³ + o(1/x³)).

Therefore, one has

log(1 − F(x)) ∼ −x²/2.

This yields

b_n = F^{-1}(1 − 1/n) ∼ √(2 log n).

Using the same asymptotic expansion of 1 − F(x), we can obtain a more precise approximation of b_n:

b_n = √(2 log n) (1 − log(log n)/(4 log n) + (1/2) log(π/4)/(2 log n) − log(log n)/(8 (log n)²) + o(log(log n)/(log n)²)).

Now let us find an approximation for a_n. We have

f(b_n) = 2ψ(b_n) ∼ √(2 log n)/n.

Therefore, it follows that

a_n = 1/(n f(b_n)) ∼ 1/√(2 log n).

We use these results to lower bound the probability P(max_i |W^{l_0}_i| < max_i |W^k_i|). We have

P(max_i |W^{l_0}_i| ≥ max_i |W^k_i|) = P(max_i |X_i| ≥ γ_k max_i |Y_i|),

where γ_k = α_{l_0}/α_k and the (X_i) and (Y_i) are standard Gaussian random variables. Note that γ_k > 1. Let A_N = max_i |X_i| and B_N = max_i |Y_i|. We have

P(A_N ≥ γ_k B_N) = P(A_N − γ_k B_N − E[A_N − γ_k B_N] ≥ γ_k E[B_N] − E[A_N])
≤ E[(A_N − γ_k B_N − E[A_N − γ_k B_N])²] / (γ_k E[B_N] − E[A_N])²
∼_{N→∞} π² / (4(γ_k − 1)² log(N)²).

We conclude that for large N,

P(PR_{l_0}) ≥ 1 − Lπ² / (4(γ − 1)² log(N)²) + o(1/log(N)²),

where γ = min_{k≠l_0} α_{l_0}/α_k.

F IMAGENET EXPERIMENTS

To validate our results on large-scale datasets, we prune ResNet50 using SNIP, GraSP, SynFlow and our algorithm SBP-SR, and train the pruned networks on ImageNet. We train each pruned model for 90 epochs with SGD. Training starts with a learning rate of 0.1, which drops by a factor of 10 at epochs 30, 60, and 80. We report in Table 8 the Top-1 test accuracy for different sparsities. Our algorithm SBP-SR has a clear advantage over the other algorithms. We are currently running extensive simulations on ImageNet to confirm these results." }, { "heading": "G ADDITIONAL EXPERIMENTS", "text": "In Table 10, we present additional experiments with varying ResNet architectures (ResNet32/50) and sparsities (up to 99.9%), with ReLU and Tanh activation functions on CIFAR10. We see that, overall, our proposed Stable ResNet performs better than standard ResNets.

In addition, we also plot the remaining weights for each layer, to better understand the different pruning strategies as well as why some of the ResNets with Tanh activation functions are untrainable.
Furthermore, we added additional MNIST experiments with different activation functions (ELU, Tanh), and note that our rescaled version allows us to prune significantly more for deeper networks." }, { "heading": "H ON THE LOTTERY TICKET HYPOTHESIS", "text": "The Lottery Ticket Hypothesis (LTH) (Frankle and Carbin, 2019) states that “randomly initialized networks contain subnetworks that when trained in isolation reach test accuracy comparable to the original network”. We have shown so far that pruning a NN initialized on the EOC will output sparse NNs that can be trained after rescaling. Conversely, if we initialize a random NN with any hyperparameters (σ_w, σ_b), then intuitively, we can prune this network in a way that ensures that the pruned NN is on the EOC. This would theoretically make the sparse architecture trainable. We formalize this intuition as follows.

Weak Lottery Ticket Hypothesis (WLTH): For any randomly initialized network, there exists a subnetwork that is initialized on the Edge of Chaos.

In the next theorem, we prove that the WLTH is true for FFNN and CNN architectures that are initialized with a Gaussian distribution.

Theorem 3. Consider a FFNN or CNN with layers initialized with variance σ²_w > 0 for the weights and variance σ²_b for the bias. Let σ_{w,EOC} be the value of σ_w such that (σ_{w,EOC}, σ_b) ∈ EOC. Then, for all σ_w > σ_{w,EOC}, there exists a subnetwork that is initialized on the EOC. Therefore WLTH is true.

The idea behind the proof of Theorem 3 is that, by removing a fraction of the weights from each layer, we change the covariance structure in the next layer. By doing so in a precise way, we can find a subnetwork that is initialized on the EOC.

We prove a slightly more general result than the one stated.

Theorem 4 (Winning Tickets on the Edge of Chaos). Consider a neural network with layers initialized with variances σ_{w,l} ∈ R₊ for each layer and variance σ_b > 0 for the bias. We define σ_{w,EOC} to be the value of σ_w such that (σ_{w,EOC}, σ_b) ∈ EOC. Then, for all sequences (σ_{w,l})_l such that σ_{w,l} > σ_{w,EOC} for all l, there exists a distribution of subnetworks initialized on the Edge of Chaos.

Proof. We prove the result for FFNN; the proof for CNN is similar. Let x, x' be two inputs. For all l, let (δ^l)_{ij} be a collection of Bernoulli variables with probability p_l. The forward propagation of the covariance is given by

q̂^l(x, x') = E[y^l_i(x) y^l_i(x')] = E[Σ_{j,k}^{N_{l-1}} W^l_{ij} W^l_{ik} δ^l_{ij} δ^l_{ik} φ(ŷ^{l-1}_j(x)) φ(ŷ^{l-1}_k(x'))] + σ²_b.

This yields

q̂^l(x, x') = σ²_{w,l} p_l E[φ(ŷ^{l-1}_1(x)) φ(ŷ^{l-1}_1(x'))] + σ²_b.

By choosing p_l = σ²_{w,EOC} / σ²_{w,l}, this becomes

q̂^l(x, x') = σ²_{w,EOC} E[φ(ŷ^{l-1}_1(x)) φ(ŷ^{l-1}_1(x'))] + σ²_b.

Therefore, the new variance after pruning with the Bernoulli mask δ is σ̃²_w = σ²_{w,EOC}. Thus, the subnetwork defined by δ is initialized on the EOC. The distribution of these subnetworks is directly linked to the distribution of δ. We can see this result as layer-wise pruning, i.e., pruning each layer separately. The proof is similar for CNNs.

Theorem 3 is a special case of the previous result where the variances σ_{w,l} are the same for all layers." }, { "heading": "I ALGORITHM FOR SECTION 2.3", "text": "Algorithm 1 Rescaling trick for FFNN
Input: Pruned network, size m
for l = 1 to L do
  for i = 1 to N_l do
    α^l_i ← Σ_{j=1}^{N_{l-1}} (W^l_{ij})² δ^l_{ij}
    ρ^l_{ij} ← 1/√(α^l_i) for all j
  end for
end for" } ]
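As a companion to Algorithm 1, the following is a minimal numpy sketch of the rescaling trick (our own illustration of the pseudocode above, not code released with the paper); `weights` and `masks` are hypothetical lists holding the per-layer arrays W^l (shape N_l × N_{l-1}) and δ^l.

```python
import numpy as np

def rescaling_trick(weights, masks):
    """Algorithm 1: alpha_i^l = sum_j (W_ij^l)^2 * delta_ij^l ; rho_ij^l = 1/sqrt(alpha_i^l)."""
    rhos = []
    for W, delta in zip(weights, masks):
        alpha = (W ** 2 * delta).sum(axis=1)  # alpha_i^l, one value per output unit i
        # assumption: a fully pruned unit (alpha_i = 0) gets rho = 0 rather than dividing by zero
        rho_i = np.where(alpha > 0, 1.0 / np.sqrt(np.maximum(alpha, 1e-30)), 0.0)
        rhos.append(np.broadcast_to(rho_i[:, None], W.shape).copy())  # rho_ij^l = rho_i^l for all j
    return rhos
```

We read the pseudocode as renormalizing the surviving incoming weights of each unit i by the root of their total squared magnitude, which is what the sketch computes.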
2021
null
SP:934bf46c7ff0d3a3b1f0b75e48235dd0c902558c
[ "This paper study the fundamental relationship between adversarial transferability and knowledge transferability. Theoretical analysis is conducted, revealing that adversarial transferability can indicate knowledge transferability. In this procedure, two quantities are formally defined to measure adversarial transferability from different aspects. Furthermore, empirical evaluation in three different transfer learning scenarios on diverse datasets are carried out, showing a strong positive correlation between the adversarial transferability and knowledge transferability." ]
Despite the immense success that deep neural networks (DNNs) have achieved, adversarial examples, which are perturbed inputs that aim to mislead DNNs to make mistakes, have recently led to great concerns. On the other hand, adversarial examples exhibit interesting phenomena, such as adversarial transferability. DNNs also exhibit knowledge transfer, which is critical to improving learning efficiency and learning in domains that lack high-quality training data. To uncover the fundamental connections between these phenomena, we investigate and give an affirmative answer to the question: does adversarial transferability indicate knowledge transferability? We theoretically analyze the relationship between adversarial transferability and knowledge transferability, and outline easily checkable sufficient conditions that identify when adversarial transferability indicates knowledge transferability. In particular, we show that composition with an affine function is sufficient to reduce the difference between the two models when they possess high adversarial transferability. Furthermore, we provide empirical evaluation for different transfer learning scenarios on diverse datasets, showing a strong positive correlation between the adversarial transferability and knowledge transferability, thus illustrating that our theoretical insights are predictive of practice.
[]
[ { "authors": [ "Alessandro Achille", "Michael Lam", "Rahul Tewari", "Avinash Ravichandran", "Subhransu Maji", "Charless C Fowlkes", "Stefano Soatto", "Pietro Perona" ], "title": "Task2vec: Task embedding for meta-learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Q. Cao", "L. Shen", "W. Xie", "O.M. Parkhi", "A. Zisserman" ], "title": "Vggface2: A dataset for recognising faces across pose and age", "venue": "In International Conference on Automatic Face and Gesture Recognition,", "year": 2018 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Ambra Demontis", "Marco Melis", "Maura Pintor", "Matthew Jagielski", "Battista Biggio", "Alina Oprea", "Cristina Nita-Rotaru", "Fabio Roli" ], "title": "Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks", "venue": "In 28th {USENIX} Security Symposium ({USENIX} Security", "year": 2019 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Niannan Xue", "Stefanos Zafeiriou" ], "title": "Arcface: Additive angular margin loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Tom Diethe", "Tom Borchert", "Eno Thereska", "Borja Balle", "Neil Lawrence" ], "title": "Continual learning in practice", "venue": "arXiv preprint arXiv:1903.05202,", "year": 2019 }, { "authors": [ "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Evading defenses to transferable adversarial examples by translation-invariant attacks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Leon A Gatys", "Alexander S Ecker", "Matthias Bethge" ], "title": "Image style transfer using convolutional neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ross Girshick" ], "title": "Fast r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Gary B. Huang", "Manu Ramesh", "Tamara Berg", "Erik Learned-Miller" ], "title": "Labeled faces in the wild: A database for studying face recognition in unconstrained environments", "venue": "Technical Report 07-49,", "year": 2007 }, { "authors": [ "Minyoung Huh", "Pulkit Agrawal", "Alexei A Efros" ], "title": "What makes imagenet good for transfer learning", "venue": "arXiv preprint arXiv:1608.08614,", "year": 2016 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Zero-shot recognition with unreliable attributes", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Di Jin", "Zhijing Jin", "Joey Tianyi Zhou", "Peter Szolovits" ], "title": "Is bert really robust? 
natural language attack on text classification and entailment", "venue": "arXiv preprint arXiv:1907.11932,", "year": 1932 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Seong Joon Oh", "Mario Fritz", "Bernt Schiele" ], "title": "Adversarial image perturbation for privacy protection– a game theory perspective", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Jiman Kim", "Chanjong Park" ], "title": "End-to-end ego lane estimation based on sequential transfer learning for self-driving cars", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "arXiv preprint arXiv:1611.02770,", "year": 2016 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Large-scale celebfaces attributes (celeba) dataset", "venue": "Retrieved August,", "year": 2018 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Mingsheng Long", "Yue Cao", "Jianmin Wang", "Michael Jordan" ], "title": "Learning transferable features with deep adaptation networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "arXiv preprint arXiv:1801.02613,", "year": 2018 }, { "authors": [ "Ana I Maqueda", "Antonio Loquercio", "Guillermo Gallego", "Narciso García", "Davide Scaramuzza" ], "title": "Event-based vision meets deep learning on steering prediction for self-driving cars", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Stylianos Moschoglou", "Athanasios Papaioannou", "Christos Sagonas", "Jiankang Deng", "Irene Kotsia", "Stefanos Zafeiriou" ], "title": "Agedb: the first manually collected, in-the-wild age database", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop,", "year": 2017 }, { "authors": [ "Muhammad Muzammal Naseer", "Salman H Khan", "Muhammad Haris Khan", "Fahad Shahbaz Khan", "Fatih Porikli" ], "title": "Cross-domain transferability of adversarial perturbations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nicolas 
Papernot", "Patrick McDaniel", "Ian Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "arXiv preprint arXiv:1605.07277,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Bernardino Romera-Paredes", "Philip Torr" ], "title": "An embarrassingly simple approach to zero-shot learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Olga Russakovsky", "Li Fei-Fei" ], "title": "Attribute learning in large-scale datasets", "venue": "In European Conference on Computer Vision,", "year": 2010 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "J.C. Cheng" ], "title": "Frontal to profile face verification in the wild", "venue": "In IEEE Conference on Applications of Computer Vision,", "year": 2016 }, { "authors": [ "Chuen-Kai Shie", "Chung-Hisang Chuang", "Chun-Nan Chou", "Meng-Hsi Wu", "Edward Y Chang" ], "title": "Transfer representation learning for medical image analysis", "venue": "In 2015 37th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC),", "year": 2015 }, { "authors": [ "Yosuke Shinya", "Edgar Simo-Serra", "Taiji Suzuki" ], "title": "Understanding the effects of pre-training for object detectors via eigenspectrum", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Trevor Standley", "Amir R Zamir", "Dawn Chen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Which tasks should be learned together in multi-task learning", "venue": null, "year": 1905 }, { "authors": [ "Florian Tramèr", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "The space of transferable adversarial examples", "venue": "arXiv preprint arXiv:1704.03453,", "year": 2017 }, { "authors": [ "Annegreet Van Opbroek", "M Arfan Ikram", "Meike W Vernooij", "Marleen De Bruijne" ], "title": "Transfer learning improves supervised image segmentation across imaging protocols", "venue": "IEEE transactions on medical imaging,", "year": 2014 }, { "authors": [ "Weiyue Wang", "Naiyan Wang", "Xiaomin Wu", "Suya You", "Ulrich Neumann" ], "title": "Self-paced crossmodality transfer learning for efficient road segmentation", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Zirui Wang", "Zihang Dai", "Barnabás Póczos", "Jaime Carbonell" ], "title": "Characterizing and avoiding negative transfer", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Michael Wurm", "Thomas Stark", "Xiao Xiang Zhu", "Matthias Weigand", "Hannes Taubenböck" ], "title": "Semantic 
segmentation of slums in satellite images using transfer learning on fully convolutional neural networks", "venue": "ISPRS journal of photogrammetry and remote sensing,", "year": 2019 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan L Yuille" ], "title": "Improving transferability of adversarial examples with input diversity", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ruijia Xu", "Guanbin Li", "Jihan Yang", "Liang Lin" ], "title": "Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Amir R Zamir", "Alexander Sax", "William Shen", "Leonidas J Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling task transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Tianyue Zheng", "Weihong Deng", "Jiani Hu" ], "title": "Cross-age LFW: A database for studying cross-age face recognition in unconstrained environments", "venue": "CoRR, abs/1708.08197,", "year": 2017 }, { "authors": [ "Wen Zhou", "Xin Hou", "Yongjun Chen", "Mengyun Tang", "Xiangqi Huang", "Xiang Gan", "Yong Yang" ], "title": "Transferable adversarial perturbations", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zamir" ], "title": "Suppose we have N tasks in the pool, a tournament matrix MT for each task T is constructed, where the element of the matrix mi,j represents what percentages of adversarial examples generated from the ith task transfers better to task T than the ones of the jth task (untargeted attack success rate is used here)", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "Despite the immense success that deep neural networks (DNNs) have achieved, adversarial examples, which are perturbed inputs that aim to mislead DNNs to make mistakes, have recently led to great concerns. On the other hand, adversarial examples exhibit interesting phenomena, such as adversarial transferability. DNNs also exhibit knowledge transfer, which is critical to improving learning efficiency and learning in domains that lack high-quality training data. To uncover the fundamental connections between these phenomena, we investigate and give an affirmative answer to the question: does adversarial transferability indicate knowledge transferability? We theoretically analyze the relationship between adversarial transferability and knowledge transferability, and outline easily checkable sufficient conditions that identify when adversarial transferability indicates knowledge transferability. In particular, we show that composition with an affine function is sufficient to reduce the difference between the two models when they possess high adversarial transferability. Furthermore, we provide empirical evaluation for different transfer learning scenarios on diverse datasets, showing a strong positive correlation between the adversarial transferability and knowledge transferability, thus illustrating that our theoretical insights are predictive of practice." }, { "heading": "1 INTRODUCTION", "text": "Knowledge transferability and adversarial transferability are two fundamental properties when a learned model transfers to other domains. Knowledge transferability, also known as learning transferability, has attracted extensive studies in machine learning. Long before it was formally defined, the computer vision community has exploited it to perform important visual manipulations (Johnson et al., 2016), such as style transfer and super-resolution, where pretrained VGG networks (Simonyan & Zisserman, 2014) are utilized to encode images into semantically meaningful features. After the release of ImageNet (Russakovsky et al., 2015), pretrained ImageNet models (e.g., on TensorFlow Hub or PyTorch-Hub) has quickly become the default option for the transfer source, because of its broad coverage of visual concepts and compatibility with various visual tasks (Huh et al., 2016). Adversarial transferability, on the other hand, is a phenomenon that adversarial examples can not only attack the model they are generated against, but also affect other models (Goodfellow et al., 2014; Papernot et al., 2016). Thus, adversarial transferability is extensively exploited to inspire black-box attacks (Ilyas et al., 2018; Liu et al., 2016). Many theoretical analyses have been conducted to establish sufficient conditions of adversarial transferability (Demontis et al., 2019; Ma et al., 2018).\nKnowledge transferability and adversarial transferability both reveal some nature of machine learning models and the corresponding data distributions. Particularly, the relation between these two phenomena interests us the most. We begin by showing that adversarial transferability can indicate knowledge transferability. This tie can potentially provide a similarity measure between data distributions, an identifier of important features focused by a complex model, and an affinity map between complicated tasks. 
Thus, we believe our results have further implications in model interpretability and verification, fairness, robust and efficient transfer learning, and etc.\nTo the best of our knowledge, this is the first work studying the fundamental relationship between adversarial transferability and knowledge transferability both theoretically and empirically. Our main contributions are as follows.\n• We formally define two quantities, τ1 and τ2, to measure adversarial transferability from different aspects, which enables in-depth understanding of adversarial transferability from a geometric point of view in the feature representation space. • We derive an upper bound for knowledge transferability with respect to adversarial transfer-\nability. We rigorously depict their underlying relation and show that adversarial transferability can indicate knowledge transferability. • We conduct thorough controlled experiments for diverse knowledge transfer scenarios (e.g.\nknowledge transfer among data distributions, attributes, and tasks) on benchmark datasets including STL-10, CIFAR-10, CelebA, Taskonomy-data, and four language datasets. Our empirical results show strong positive correlation between adversarial and knowledge transferability, which validates our theoretical prediction." }, { "heading": "2 RELATED WORK", "text": "Knowledge transferability has been widely applied in scenarios where the available data for certain domain is limited, and has achieved great success (Van Opbroek et al., 2014; Wurm et al., 2019; Wang et al., 2017; Kim & Park, 2017; Maqueda et al., 2018; Devlin et al., 2018). Several studies have been conducted to understand the factors that affect knowledge transferability (Yosinski et al., 2014; Long et al., 2015b; Wang et al., 2019; Xu et al., 2019; Shinya et al., 2019). Empirical observations show that the correlation between learning tasks (Achille et al., 2019; Zamir et al., 2018), the similarity of model architectures, and data distribution are all correlated with different knowledge transfer effects.\nAdversarial Transferability has been observed by several works (Papernot et al., 2016; Goodfellow et al., 2014; Joon Oh et al., 2017). Since the early work, a lot of studies have been conducted, aiming to further understand the phenomenon and design more transferable adversarial attacks. Regardless of the threat model, a lot of attack methods have been proposed to boost adversarial transferability (Zhou et al., 2018; Demontis et al., 2019; Dong et al., 2019; Xie et al., 2019). Naseer et al. (2019) propose to produce adversarial examples that transfer cross-domain via a generative adversarial network. In addition to the efficacy, efficiency (Ilyas et al., 2018) and practicality (Papernot et al., 2017) are also optimized. Beyond the above empirical studies, there is some work dedicated to analyzing this phenomenon, showing different conditions that may enhance adversarial transferability (Athalye et al., 2018; Tramèr et al., 2017; Ma et al., 2018; Demontis et al., 2019). Building upon these observations, it is clear that there exist certain connections between adversarial transferability and other knowledge transfer scenarios, and here we aim to provide the first theoretic justification to verify it and design systematic empirical studies to measure such correlation." }, { "heading": "3 ADVERSARIAL TRANSFERABILITY VS. KNOWLEDGE TRANSFERABILITY", "text": "In this section, we establish connections between adversarial examples and knowledge transferability rigorously. 
We first formally state the problem studied in this section. Then, we move on to subsection 3.1 to introduce two metrics that encode information about adversarial attacks. Finally, we present our theoretical results about the relationship between adversarial and knowledge transferability in subsection 3.2.\nNotations. We use blackboard bold to denote sets, e.g., R. We use calligraphy to denote distributions, e.g., D. The support of a distribution D is denoted as supp(D). We use bold lower case letters to denote vectors, e.g., x ∈ Rn. We use bold uppercase letter to denote a matrix, e.g.,A. We useA† to denote the Moore–Penrose inverse of matrix A. We use ◦ to denote the composition of functions, i.e., g ◦ f(x) = g(f(x)). We use ‖ · ‖2 to denote Euclidean norm induced by standard inner product 〈·, ·〉. Given a function f , we use f(x) to denote its evaluated value at x, and we use f to represent this function in function space. We use 〈·, ·〉D to denote inner product induced by distribution D, i.e., 〈f1, f2〉D = Ex∼D〈f1(x), f2(x)〉. Accordingly, we use ‖ · ‖D to denote a norm induced by inner product 〈·, ·〉D, i.e., ‖f‖D = √ 〈f, f〉D. For a matrix function F : supp(D) → Rd×m, we\ndefine its L2(D)-norm in accordance with matrix 2-norm as ‖F‖D,2 = √ Ex∼D‖F (x)‖22. We define projection operator proj(·, r) to project a matrix to a hyperball of spectral norm radius r, i.e.,\nproj(A, r) = { A, if ‖A‖2 ≤ r rA/‖A‖2 if ‖A‖2 > r .\nSetting. Assume we are given a target problem defined by data distribution x ∼ D, where x ∈ Rn, and y : Rn → Rd represent the ground truth labeling function. As a first try, a reference model fT : Rn → Rd trained on the target dataset is obtained through optimizing over a function class fT ∈ FT . Now suppose we have a source model fS : Rn → Rm pretrained on source data, and we are curious how would fS transfer to the target data D? Knowledge transferability. Given a trainable function g : Rm → Rd, where g ∈ G is from a small function class for efficiency purpose, we care about whether fS can achieve low loss L(·; y,D), e.g., mean squared error, after stacking with a trainable function g comparing with fT , i.e.,\nmin g∈G\nL(g ◦ fS ; y,D) compare with L(fT ; y,D).\nClearly, the solution to this optimization problem depends on the choice of G. Observing that in practice it is common to stack and fine-tune a linear layer given a pretrained feature extractor, we consider the class of affine functions. Formally, the problem that is studied in our theory is stated as follows. Problem 1. Given a reference model fT trained on target distribution D, and a source model fS pre-trained on source data. Can we predict the best possible performance of the composite function g◦fS onD, where g is from a bounded affine function class, given adversarial transferability between fS and fT ?" }, { "heading": "3.1 ADVERSARIAL TRANSFERABILITY", "text": "We use the `2-norm to characterize the effectiveness of an attack. Definition 1 (Virtual Adversarial Attack (Miyato et al., 2018)). Given a model f : Rn → Rd, the attack on point x within -ball is defined as argmax‖δ‖≤ ‖f(x)− f(x+ δ)‖2. As this is intractable in practice, we consider the use of the tangent function to approximate the difference:\nδf, (x) = arg max ‖δ‖≤\n‖∇f(x)>δ‖2,\nwhere ∇f(x) ∈ Rn×d is the Jacobian matrix. The will be dropped in clear context or when it is irrelevant.\nTo provide a quantitative view of adversarial transferability, we define two metrics τ1 and τ2. 
Both the metrics are in the range of [0, 1], where higher values indicate more adversarial transferability. Definition 2 (Adversarial Transferability (Angle)). Given two function f1, f2, we assume they have the same input dimension, and may have different output dimensions. The Adversarial Transferability (Angle) of f1 and f2 at point x is defined as the squared cosine value of the angle between the two attacks, i.e.,\nτ1(x) = 〈δf1(x), δf2(x)〉2\n‖δf1(x)‖22 · ‖δf2(x)‖22 .\nWe denote its expected value as τ1 = Ex∼D[τ1(x)].\nIntuitively, τ1 characterizes the similarity of the two attacks. The higher the cosine similarity, the better they can be attacked together. Noting that we are suggesting to use the square of their cosine values, which means that cosine value being either 1 or−1 has the same indication of high knowledge transferability. This is because fine-tuning the last layer can rectify such difference by changing the sign of the last linear layer. However, it is not sufficient to fully characterize how good fS will perform only knowing the angle of two attack directions. For example, it is not difficult to construct two functions with highest τ1 = 1, but not transferable with affine functions. Moreover, it is also oberserved in our experiments that only τ1 is not sufficient.\nTherefore, in addition to the information of attacks δf captured by τ1, we also need information about deviation of a function given attacks. We denote the deviation of a function f , given attack δ(x), as f(x+ δ(x))− f(x), and we define its approximation as\n∆f,δ(x) = ∇f(x)>δ(x). (1) Accordingly, we define another metric to answer the following question: applying f1’s adversarial attacks on both the models, how much can the deviation of their function value be aligned by affine transformations?\nDefinition 3 (Adversarial Transferability (Deviation)). Given two functions f1, f2 with the same input dimensions and potentially different output dimensions, the Adversarial Transferability (Deviation) of adversarial attacks from f1 to f2 given data distribution D is defined as\nτf1→f22 = 〈2∆f2,δf1 −A∆f1,δf1 ,A∆f1,δf1 〉D\n‖∆f2,δf1‖ 2 D\n,\nwhereA is a constant matrix defined as\nA = proj(Ex∼D[∆f2,δf1 (x)∆f1,δf1 (x) >] ( Ex∼D[∆f1,δf1 (x)∆f1,δf1 (x) >] )† , ‖∆f2,δf1‖D ‖∆f1,δf1‖D ).\nWe note that A is the best linear map trying to align the two deviations (∆f2,δf1 and ∆f1,δf1 ) in the function space. It serves as a guess on the best linear map to align f1 and f2, using only the information from adversarial attacks. To have better sense of τ2 and the relationships with other quantities, we present an example for visual illustration in Figure 1. Note that high τ2 does not necessarily require ∆f1,δf1 and ∆f2,δf1 to be similar, but they can be well aligned by the constant linear transformationA. We refer to the proof of Proposition 1 at section B in appendix for detailed explanation of τ2. Proposition 1. Both τ1 and τ2 are in [0, 1]." }, { "heading": "3.2 ADVERSARIAL TRANSFERABILITY INDICATES KNOWLEDGE TRANSFERABILITY", "text": "In this subsection, we will provide our theoretical results. First, to have a better intuition, we will show a special case where the theorems are simplified, i.e., where fS and fT are both Rn → R. Then, we present the general case where fS and fT are multi-dimensional. Note that their output dimensions are not necessarily the same.\nWhen fS and fT are both Rn → R, the τ1 and τ2 come out in a surprisingly elegant form. 
Let us show what the two metrics are, to gain further intuition on what τ1 and τ2 characterize.

First, let us see what the attack is in this case. As the function f has one-dimensional output, its gradient is a vector ∇f ∈ R^n. Thus,

δ_{f,ε}(x) = argmax_{‖δ‖≤ε} ‖∇f(x)^⊤ δ‖₂ = ε ∇f(x)/‖∇f(x)‖₂

is simply the gradient with its scale normalized. Then, τ1 becomes

τ1(x) = 〈∇fS(x), ∇fT(x)〉² / (‖∇fS(x)‖²₂ · ‖∇fT(x)‖²₂),

which is the squared cosine of the angle between the two gradients. For τ2, the matrix A degenerates to a scalar constant, which makes τ2 simpler as well, i.e.,

A = 〈∆_{fT,δ_{fS}}, ∆_{fS,δ_{fS}}〉_D / ‖∆_{fS,δ_{fS}}‖²_D, and τ2^{fS→fT} = 〈∆_{fS,δ_{fS}}, ∆_{fT,δ_{fS}}〉²_D / (‖∆_{fS,δ_{fS}}‖²_D · ‖∆_{fT,δ_{fS}}‖²_D).

We can see that in this case τ2 is, interestingly, of the same form as the first metric τ1. We will simply write τ2 for τ2^{fS→fT} afterwards.

Accordingly, when fS and fT are both R^n → R, the result also comes out in an elegant form. In this case, adversarial attacks reflect all the information in the gradients of the two models, enabling τ1 and τ2 to encode all the information we need to prove the following theorem.

Theorem 1. For two functions fS and fT that are both R^n → R, there is an affine function g : R → R such that

‖∇fT − ∇(g ◦ fS)‖²_D = E_{x∼D}[(1 − τ1(x) τ2) ‖∇fT(x)‖²₂],

where g(x) = Ax + Const. Moreover, though not necessary, if we assume that fT is L-Lipschitz continuous, i.e., ‖∇fT(x)‖₂ ≤ L for all x ∈ supp(D), we have a more elegant statement:

‖∇fT − ∇(g ◦ fS)‖²_D ≤ (1 − τ1 τ2) L².

The theorem suggests that, if the adversarial transferability is high, there exists an affine transformation with bounded norm such that g ◦ fS is close to fT. As an intuition for the proof, the difference between the two gradients can be decomposed into the angle between them, which is characterized by τ1, and the difference in their norms, which is characterized by τ2.

As for the general case, we consider output dimensions of both functions that are multi-dimensional and not necessarily the same. In this scenario, adversarial attacks correspond to the largest singular value of the Jacobian matrix. Therefore, we need to introduce the following definition to capture the information that is not revealed by adversarial attacks.

Definition 4 (Singular Value Ratio). For any function f, the Singular Value Ratio of the function gradient at x is defined as λ_f(x) = σ₂(x)/σ₁(x), where σ₁(x) and σ₂(x) are the largest and the second largest singular values (in absolute value) of ∇f(x), respectively. In addition, we define the worst-case singular value ratio as λ_f = max_{x∈supp(D)} λ_f(x).

Theorem 2. For two functions fS : R^n → R^m and fT : R^n → R^d, assuming that fT is L-Lipschitz continuous, i.e., ‖∇fT(x)‖₂ ≤ L for all x ∈ supp(D), there is an affine function g : R^m → R^d such that

‖∇fT − ∇(g ◦ fS)‖²_D ≤ 5L² ((1 − τ1τ2) + (1 − τ1)(1 − τ2) λ²_{fT} + (λ_{fT} + λ_{fS})²),

where g is defined as g(z) = Az + Const.

We note that this theorem also has a statement offering a tighter bound, in which we do not assume Lipschitz continuity; the full version of this theorem is provided in the appendix. Theorem 2 suggests that large τ1 and τ2 indicate a potentially small difference between the gradients of the target model and the transferred model. Based on this, intuitively, given the right constant shift, a minimal difference in gradients implies a minimal difference in function values, which should result in bounded loss.
Indeed, we prove in Theorem 3 that the squared loss of the transferred model g ◦ fS is bounded in terms of the loss of fT and their gradient difference, assuming β-smoothness of both functions.

Definition 5 (β-smoothness). A function f is β-smooth if for all x, y,

‖∇f(x) − ∇f(y)‖₂ ≤ β ‖x − y‖₂.

For the target data distribution D and its ground truth labeling function y, the mean squared loss of the transferred model is E_{x∼D}‖g ◦ fS(x) − y(x)‖²₂ = ‖g ◦ fS − y‖²_D. Therefore, the following theorem presents an upper bound on the mean squared loss of the transferred model.

Theorem 3. Without loss of generality, we assume ‖x‖₂ ≤ 1 for all x ∈ supp(D). Consider functions fS : R^n → R^m, fT : R^n → R^d, and an affine function g : R^m → R^d as suggested by Theorem 1 or Theorem 2, with the constant set so that g(fS(0)) = fT(0). If both fT and fS are β-smooth, then

‖g ◦ fS − y‖²_D ≤ ( ‖fT − y‖_D + ‖∇fT − ∇g ◦ fS‖_D + (1 + ‖∇fT‖_{D,2}/‖∇fS‖_{D,2}) β )²." }, { "heading": "3.3 PRACTICAL MEASUREMENT OF ADVERSARIAL TRANSFERABILITY", "text": "Existing studies have shown that similar models share high adversarial transferability (Liu et al., 2016; Papernot et al., 2016; Tramèr et al., 2017). In previous work, it is common to use the cross adversarial loss as an indication of adversarial transferability, e.g., the loss of fT under attacks generated on fS. It is intuitive to consider that the higher the cross adversarial loss, the higher the adversarial transferability. However, this measure has a drawback compared with the τ1 and τ2 defined in this work.

Definition 6 (Cross Adversarial Loss). Given a loss function ℓ_T(·, y) on the target domain, where y is the ground truth, the adversarial loss of fT with attack δ_{fS} generated against the source model fS is

L_adv(fT, δ_{fS}; y, D) = E_{x∼D} ℓ_T(fT(x + δ_{fS}(x)), y(x)).

The cross adversarial loss depends on the choice of loss function, the output dimension, etc. Thus, it can be incomparable when we want to test adversarial transferability among different fT, unlike τ1 and τ2, which always lie in [0, 1]. To investigate the relationship between the adversarial loss and the adversarial transferability we defined, we show in the following proposition that the cross adversarial loss behaves similarly to τ1. In the next section, we verify the theoretical predictions through thorough experiments.

Proposition 2. If ℓ_T is the mean squared loss and fT achieves zero loss on D, then the adversarial loss defined in Definition 6 is approximately upper and lower bounded by

L_adv(fT, δ_{fS}; y, D) ≥ ε² E_{x∼D}[τ1(x) ‖∇fT(x)‖²₂] + O(ε³),
L_adv(fT, δ_{fS}; y, D) ≤ ε² E_{x∼D}[(λ²_{fT} + (1 − λ²_{fT}) τ1(x)) ‖∇fT(x)‖²₂] + O(ε³),

where O(ε³) denotes a cubic error term." }, { "heading": "4 EXPERIMENTAL EVALUATION", "text": "The empirical evaluation of the relationship between adversarial transferability and knowledge transferability consists of four different sets of experiments. First, we present a set of synthetic experiments that verify our theoretical study, and then we present our empirical study on real-world datasets with models widely used in practice, described in three knowledge transfer scenarios. Details regarding the three scenarios are elaborated below, and all training details are deferred to the Appendix.

Knowledge-transfer among data distributions is the most common setting of transfer learning. It transfers the knowledge of a model trained on one data domain to other data domains. For instance, Shie et al.
(2015) manage to use pre-trained ImageNet representations to achieve state-of-the-art accuracy on medical data analysis. The relation between adversarial and knowledge transferability can not only determine the best pretrained models to use, but also detect distribution shifts, which is crucial for learning agents deployed in a continual setting (Diethe et al., 2019). Knowledge-transfer among attributes is a popular method to handle zero-shot and few-shot learning (Jayaraman & Grauman, 2014; Romera-Paredes & Torr, 2015). It transfers the knowledge learned from the attributes of the source problem to a new target problem (Russakovsky & Fei-Fei, 2010). The relation between adversarial and knowledge transferability can be used as a probe for deployed classification models to verify the attributes that their decisions are based on. This has profound implications for fairness and interpretability. Knowledge-transfer among tasks is widely applied across various vision tasks, such as super-resolution (Johnson et al., 2016), style transfer (Gatys et al., 2016), and semantic and instance segmentation (Girshick, 2015; He et al., 2017; Long et al., 2015a). It involves transferring the knowledge the model gains by learning one task to another, novel task. The relation between adversarial and knowledge transferability, as in many recent works (Achille et al., 2019; Standley et al., 2019; Zamir et al., 2018), can be used to chart the affinity map between tasks, aiming to guide potential transfer." }, { "heading": "4.1 SYNTHETIC EXPERIMENT ON RADIAL BASIS FUNCTIONS REGRESSION", "text": "In the synthetic experiment, we compute quantities that are otherwise inefficient to compute, in order to verify our theoretical results. We also try different settings to see how other factors affect the results.

Models. Both the source model fS and the target model fT are one-hidden-layer neural networks with sigmoid activation.

Overall Steps. First, we sample D = {(x_i, y_i)}_{i=1}^N from a distribution (details later), where x is n-dimensional, y is d-dimensional, and there are N samples. Then we train a target model fT on D. Denoting the weights of fT as W, we randomly sample a direction V, where each entry of V is sampled from U(−0.5, 0.5), and choose a scale t ∈ [0, 1]. To derive the source model, we perturb the target model as W' := W + tV, and define the source model fS to be a one-hidden-layer neural network with weights W'. Then, we compute each of the quantities we care about, including τ1, τ2, the cross adversarial loss (Definition 6), the upper bound in Theorem 2 on the difference of gradients, etc.

Note that we report the cross adversarial loss normalized by the model's own adversarial loss, defined as α = ‖∆_{fT,δ_{fS}}‖²_D / ‖∆_{fT,δ_{fT}}‖²_D ≈ L_adv(fT, δ_{fS}; y, D) / L_adv(fT, δ_{fT}; y, D) when fT achieves low error. Note that α ∈ [0, 1]. Finally, we fine-tune the last layer of fS and obtain the true transferred loss.

Dataset. Denote a radial basis function as φ_i(x) = e^{−‖x−µ_i‖²₂/(2σ_i)²}. We set the target ground truth function to be the sum of M = 100 basis functions, f = Σ_{i=1}^M φ_i, where each entry of the parameters is sampled once from U(−0.5, 0.5). We set the dimension of x to 30 and the dimension of y to 10. We generate N = 1200 samples of x from a Gaussian mixture formed by three Gaussians with different centers but the same covariance matrix Σ = I. The centers are sampled randomly from U(−0.5, 0.5)^n. We use the ground truth regressor f to derive the corresponding y for each x. That is, we want our neural networks to approximate f on the Gaussian mixture.
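As a concrete reference, here is a minimal numpy sketch of this data-generation procedure (our own illustration; since the text leaves implicit how the scalar sum of basis functions produces the d = 10 outputs, we assume one independent sum of M radial basis functions per output dimension).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, M, N = 30, 10, 100, 1200

# RBF parameters; every entry drawn once from U(-0.5, 0.5)
mu = rng.uniform(-0.5, 0.5, (d, M, n))      # centers mu_i (assumption: one set per output dim)
sig = rng.uniform(-0.5, 0.5, (d, M))        # scales sigma_i (phi_i vanishes numerically if sigma_i ~ 0)

# inputs: mixture of three unit-covariance Gaussians with random centers
centers = rng.uniform(-0.5, 0.5, (3, n))
X = centers[rng.integers(0, 3, N)] + rng.normal(size=(N, n))

# squared distances ||x - mu_i||^2, computed without a (N, d, M, n) intermediate
dist2 = ((X ** 2).sum(1)[:, None, None]
         - 2 * np.einsum('an,dmn->adm', X, mu)
         + (mu ** 2).sum(-1))               # shape (N, d, M)
Y = np.exp(-dist2 / (2 * sig) ** 2).sum(-1) # f = sum_i phi_i evaluated at each sample, shape (N, d)
```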
Results. We present two sets of experiments in Figure 2. Correlations between the adversarial transferabilities (τ1, τ2, α) and the knowledge transferability (transferred loss) are observed. The upper bound on the difference of gradients (Theorem 2) basically tracks its true value. Although the absolute value of the upper bound on the transferred loss (Theorem 3) can be large compared to the true transferred loss, their trends are similar. We note that the large difference in absolute value is due to the use of β-smoothness, which considers the worst-case scenario. It is also observed that τ1 tracks the normalized cross adversarial loss α, as Proposition 2 suggests." }, { "heading": "4.2 ADVERSARIAL TRANSFERABILITY INDICATES KNOWLEDGE-TRANSFER AMONG DATA DISTRIBUTIONS", "text": "In this experiment, we show that the closer the source data distribution is to the target data distribution, the more adversarially transferable the source model is to the reference model; accordingly, we observe that the source model is more knowledge-transferable to the target dataset. We demonstrate this on both the image and natural language domains. Dataset. Image: 5 source datasets (5 source models) are constructed based on CIFAR-10 (Hinton et al., 2012) and a single target dataset (1 reference model) based on STL-10 (Coates et al., 2011). Each of the source datasets consists of 4 classes from CIFAR-10, and the target dataset also consists of 4 classes from STL-10. Natural Language: We select 4 diverse natural language datasets: AG's News (AG), Fake News Detection (Fake), IMDB, and Yelp Polarity (Yelp). We pick IMDB as the target and the rest as sources. Adversarial Transferability. Image: We take 1000 images (STL-10) from the target dataset and generate 1000 adversarial examples on each of the five source models. We run a 10-step PGD L∞ attack with ε = 0.1. Then we measure the effectiveness of the adversarial examples by the cross-entropy loss on the reference model. Natural Language: We take 100 sample sentences from the target dataset (IMDB) and generate adversarial sentences on each of the source models (AG, Fake, Yelp) with TextFooler (Jin et al., 2019). The ratio of changed words is constrained to be less than or equal to 0.1. Then, we measure their adversarial transferability against the reference model (IMDB). Knowledge Transferability. To measure knowledge transferability, we fine-tune a new linear layer on the target dataset to replace the last layer of each source model, yielding the corresponding transferred models. Then we measure the performance of the transferred models on the target dataset via standard accuracy and cross-entropy loss. Results. From Figure 4.2, it is clear that the source model with the highest adversarial transferability yields the transferred model with the highest transferred accuracy. This phenomenon is prominent in both the image and natural language domains. The results in Figure 4.2 (b) also support the implication of our theory that τ1 alone is not sufficient for indicating knowledge transferability." }, { "heading": "4.3 ADVERSARIAL TRANSFERABILITY INDICATING KNOWLEDGE-TRANSFER AMONG ATTRIBUTES", "text": "In addition to data distributions, we validate our theory along another dimension: attributes.
This experiment suggests that the more adversarially transferable the source model of certain attributes is to the reference model, the better the model performs on the target task of learning the target attributes. Dataset. CelebA (Liu et al., 2018) consists of 202,599 face images of 10,177 identities. A reference facial recognition model is trained on these identities. Each image also comes with 40 binary attributes, on which we train 40 source models. Our goal is to test whether source models trained on source attributes can transfer to perform facial recognition. Adversarial Transferability. We sample 1000 images from CelebA and perform a virtual adversarial attack as described in Section 3 on each of the 40 attribute classifiers. Then we measure the adversarial transfer effectiveness of these adversarial examples on the reference facial recognition model. Knowledge Transferability. To fairly assess knowledge transferability, we test the 40 transferred models on 7 well-known facial recognition benchmarks: LFW (Huang et al., 2007), CFP-FF, CFP-FP (S. Sengupta, 2016), AgeDB (Moschoglou et al., 2017), CALFW, CPLFW (Zheng et al., 2017) and VGG2-FP (Cao et al., 2018). We report the average classification accuracy on the target datasets. Result. In Table 1, we list the top-5 attribute source models with the highest adversarial transferability and the performance of their transferred models on the 7 target facial recognition benchmarks. We observe that the attribute \"Young\" has the highest adversarial transferability; as a result, it also achieves the highest average classification performance across the 7 benchmarks." }, { "heading": "4.4 ADVERSARIAL TRANSFERABILITY INDICATING KNOWLEDGE-TRANSFER AMONG TASKS", "text": "" }, { "heading": "5 CONCLUSION", "text": "We theoretically analyze the relationship between adversarial transferability and knowledge transferability, along with thorough experimental justification in diverse scenarios. Both our theoretical and empirical results show that adversarial transferability can indicate knowledge transferability, revealing important properties of machine learning models. We hope our discovery can inspire and facilitate further investigations, including in model interpretability, fairness, and robust and efficient transfer learning." }, { "heading": "A DISCUSSION ABOUT VALIDNESS OF THE NOTATIONS", "text": "Before proving our theory, it is necessary to show that our mathematical tools are indeed valid. It is easy to verify that 〈·, ·〉_D is a valid inner product inherited from the standard Euclidean inner product. Therefore, the norm ‖·‖_D induced by this inner product is also a valid norm. What does not come directly is the validity of the norm ‖·‖_{D,2}, in particular whether it satisfies the triangle inequality. Recall that, for a function with matrix output F : supp(D) → R^{d×m}, its L²(D)-norm in accordance with the matrix 2-norm is defined as

‖F‖_{D,2} = √(E_{x∼D}‖F(x)‖²₂).

For two functions F and G, both supp(D) → R^{d×m}, we can verify that the norm ‖·‖_{D,2} satisfies the triangle inequality as follows. Applying the triangle inequality of the spectral norm, and with some algebraic manipulation, it holds that

‖F + G‖_{D,2} = √(E_{x∼D}‖F(x) + G(x)‖²₂)
≤ √(E_{x∼D}(‖F(x)‖₂ + ‖G(x)‖₂)²)
= √(E_{x∼D}‖F(x)‖²₂ + E_{x∼D}‖G(x)‖²₂ + 2E_{x∼D}‖F(x)‖₂‖G(x)‖₂)
= √(‖F‖²_{D,2} + ‖G‖²_{D,2} + 2E_{x∼D}‖F(x)‖₂‖G(x)‖₂). (2)

Applying the Cauchy–Schwarz inequality, we can see that

E_{x∼D}‖F(x)‖₂‖G(x)‖₂ ≤ √(E_{x∼D}‖F(x)‖²₂ · E_{x∼D}‖G(x)‖²₂) = ‖F‖_{D,2} · ‖G‖_{D,2}.

Plugging this into (2) completes the proof, i.e.,

(2) ≤ √(‖F‖²_{D,2} + ‖G‖²_{D,2} + 2‖F‖_{D,2} · ‖G‖_{D,2}) = √((‖F‖_{D,2} + ‖G‖_{D,2})²) = ‖F‖_{D,2} + ‖G‖_{D,2}." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "Proposition 1 (Restated). Both τ1 and τ2 are in [0, 1].

Proof. We are to prove that τ1 and τ2 are both in the range [0, 1]. As τ1 is a squared cosine, it is trivial that τ1 ∈ [0, 1]. Therefore, we will focus on τ2 in the following. Recall that the τ2 from f1 to f2 is defined as

τ2^{f1→f2} = 〈2∆_{f2,δ_{f1}} − A∆_{f1,δ_{f1}}, A∆_{f1,δ_{f1}}〉_D / ‖∆_{f2,δ_{f1}}‖²_D,

where A is a constant matrix defined as

A = proj( E_{x∼D}[∆_{f2,δ_{f1}}(x) ∆_{f1,δ_{f1}}(x)^⊤] (E_{x∼D}[∆_{f1,δ_{f1}}(x) ∆_{f1,δ_{f1}}(x)^⊤])†, ‖∆_{f2,δ_{f1}}‖_D / ‖∆_{f1,δ_{f1}}‖_D ).

For notational convenience, we will simply write τ2 for τ2^{f1→f2} in this proof.

τ2 characterizes how similar the changes in the function values of f1 : R^n → R^m and f2 : R^n → R^d are, in the sense of being linearly transformable, given an attack generated on f1. That is, it is associated with the function

h(B) = ‖∆_{f2,δ_{f1}} − B∆_{f1,δ_{f1}}‖²_D = E_{x∼D} ‖∆_{f2,δ_{f1}}(x) − B∆_{f1,δ_{f1}}(x)‖²₂,

where ∆_{f1,δ_{f1}}(x) ∈ R^m, ∆_{f2,δ_{f1}}(x) ∈ R^d, and B ∈ R^{d×m}.

As ‖∆_{f2,δ_{f1}}(x) − B∆_{f1,δ_{f1}}(x)‖²₂ is convex with respect to B, its expectation, i.e., h(B), is also convex.
Plugging this into (2) would complete the proof, i.e.,\n(2) ≤ √ ‖F‖2D,2 + ‖G‖2D,2 + 2‖F‖D,2 · ‖G‖D,2\n= √ (‖F‖D,2 + ‖G‖D,2)2\n= ‖F‖D,2 + ‖G‖D,2." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "Proposition 1 (Restated). Both τ1 and τ2 are in [0, 1].\nProof. We are to prove that τ1 and τ2 are both in the range of [0, 1]. As τ1 is squared cosine, it is trivial that τ1 ∈ [0, 1]. Therefore, we will focus on τ2 in the following. Recall that the τ2 from f1 to f2 is defined as\nτf1→f22 = 〈2∆f2,δf1 −A∆f1,δf1 ,A∆f1,δf1 〉D\n‖∆f2,δf1‖ 2 D\n,\nwhereA is a constant matrix defined as A = proj(Ex∼D[∆f2,δf1 (x)∆f1,δf1 (x) >] ( Ex∼D[∆f1,δf1 (x)∆f1,δf1 (x) >] )† , ‖∆f2,δf1 ‖D ‖∆f1,δf1 ‖D ).\nFor notation convenience, we will simply use τ2 to denote τ f1→f2 2 in this proof.\nτ2 characterizes how similar are the changes in both the function values of f1 : Rn → Rm and f2 : Rn → Rd in the sense of linear transformable, given attack generated on f1. That is being said, it is associated to the function below, i.e,\nh(B) = ∥∥∆f2,δf1 −B∆f1,δf1∥∥2D = Ex∼D ∥∥∆f2,δf1 (x)−B∆f1,δf1 (x)∥∥22 ,\nwhere ∆f1,δf1 ∈ R m, ∆f2,δf1 ∈ R d, andB ∈ Rd×m.\nAs ∥∥∆f2,δf1 (x)−B∆f1,δf1 (x)∥∥22 is convex with respect to B, its expectation, i.e. h(B), is also convex.\nTherefore, h(B) it achieves global minima when ∂h∂B = 0.\n∂h ∂B = Ex∼D ∂ ∂B (∥∥∆f2,δf1 (x)−B∆f1,δf1 (x)∥∥22) = 2Ex∼D [( B∆f1,δf1 (x)−∆f2,δf1 (x) ) ∆f1,δf1 (x)\n>] = 2Ex∼D [ B∆f1,δf1 (x)∆f1,δf1 (x) > −∆f2,δf1 (x)∆f1,δf1 (x) >]\n= 2BEx∼D [ ∆f1,δf1 (x)∆f1,δf1 (x) >]− 2Ex∼D [∆f2,δf1 (x)∆f1,δf1 (x)>] . Letting ∂h∂B = 0, and denoting the solution asB ∗, we have\nB∗ = Ex∼D [ ∆f2,δf1 (x)∆f1,δf1 (x) >] (Ex∼D [∆f1,δf1 (x)∆f1,δf1 (x)>])† . Noting thatA = proj(B∗,\n‖∆f2,δf1 ‖D ‖∆f1,δf1 ‖D ) is scaledB∗, we denoteA = ψB∗, where ψ a scaling factor\ndepending onB∗ and ‖∆f2,δf1 ‖D ‖∆f1,δf1 ‖D . According to the definition of the projection operator, we can see that 0 < ψ ≤ 1. ReplacingB byA we have,\nh(A) = ∥∥∆f2,δf1 −A∆f1,δf1∥∥2D = 〈∆f2,δf1 −A∆f1,δf1 ,∆f2,δf1 −A∆f1,δf1 〉D\n= ∥∥∆f2,δf1∥∥2D − 〈2∆f2,δf1 −A∆f1,δf1 ,A∆f1,δf1 〉D\n= (1− τ2) ∥∥∆f2,δf1∥∥2D .\nIt is obvious that h(A) = ∥∥∆f2,δf1 −A∆f1,δf1∥∥2D ≥ 0, thus we have τ2 ≤ 1.\nAs for the lower bound for τ2, we will need to use properties ofB. DenotingO as an all-zero matrix, it holds that\nh(B∗) = min B {h(B)} ≤ h(O). (3)\nFor A = ψB∗, according to the convexity of h(·) and the fact that ψ ∈ [0, 1], we can see the following, i.e.,\nh(A) = h(ψB∗) = h(ψB∗ + (1− ψ)O) ≤ ψh(B∗) + (1− ψ)h(O).\nApplying (3) to the above, we can see that\nh(A) ≤ h(O).\nNoting that h(A) = (1− τ2) ∥∥∆f2,δf1∥∥2D and h(O) = ∥∥∆f2,δf1∥∥2D, the above inequality suggests that\n(1− τ2) ∥∥∆f2,δf1∥∥2D ≤ ∥∥∆f2,δf1∥∥2D ,\n0 ≤ τ2.\nTherefore, τ2 is upper bounded by 1 and lower bounded by 0." }, { "heading": "C PROOF OF THEOREM 1", "text": "Before actually proving the theorem, let us have a look at what τ1 and τ2 are in the case where fS and fT are both Rn → R. In this case, both τ1 and τ2 come out in an elegant form. Let us show what the two metrics are to have further intuition on what τ1 and τ2 characterize.\nFirst, let us see what the attack is in this case. As function f has one-dimensional output, its gradient is a vector∇f ∈ Rn. 
Thus,\nδf, (x) = arg max ‖δ‖≤ ‖∇f(x)>δ‖2 = ∇f(x) ‖∇f(x)‖2\nThen, the τ1 becomes\nτ1(x) = 〈∇fS(x),∇fT (x)〉2\n‖∇fS(x)‖22 · ‖∇fT (x)‖22 which is the squared cosine (angle) between two gradients.\nFor τ2, the matrixA degenerates to a scalar constant\nA = 〈∆fT ,δfS ,∆fS ,δfS 〉D ‖∆fS ,δfS ‖ 2 D ,\nand the second metric becomes\nτfS→fT2 = 〈∆fS ,δfS ,∆fT ,δfS 〉 2 D\n‖∆fS ,δfS ‖ 2 D · ‖∆fT ,δfS ‖ 2 D\nWe can see, it is interestingly in the same form of the first metric τ1. We will simply use τ2 to denote τfS→fT2 afterwards. Theorem 1 (Restated). For two functions fS and fT that both are Rn → R, there is an affine function g : R→ R, so that\n‖∇fT −∇(g ◦ fS)‖2D = Ex∼D [ (1− τ1(x)τ2)‖∇fT (x)‖22 ] ,\nwhere g is defined as g(x) = Ax+ Const.\nMoreover, if assuming that fT is L-Lipschitz continuous, i.e., ‖∇fT (x)‖2 ≤ L for ∀x ∈ supp(D), we can have a more elegant statement:\n‖∇fT −∇(g ◦ fS)‖2D ≤ (1− τ1τ2)L2.\nProof. In the case where g is a one-dimensional affine function, we write is as g(z) = Az+ b, where A is defined in the definition of τ2 (Definition 3). In this case, it enjoys a simple form of\nA = 〈∆fT ,δfS ,∆fS ,δfS 〉D ‖∆fS ,δfS ‖ 2 D .\nThen, we can see that\n‖∇fT −∇(g ◦ fS)‖2D = ‖∇fT −A∇fS‖2D = Ex∼D [ ‖∇fT (x)−A∇fS(x)‖22 ] . (4)\nTo continue, we split ∇fT as two terms, i.e., one on the direction on ∇fS and one orthogonal to ∇fS . Denoting φ(x) as the angle between∇fT (x) and ∇fS(x) in Euclidean space, we have\n∇fT (x) = cos(φ(x)) ‖∇fT (x)‖2 ‖∇fS(x)‖2 ∇fS(x) +∇fT (x)− cos(φ(x)) ‖∇fT (x)‖2 ‖∇fS(x)‖2 ∇fS(x)\n= cos(φ(x)) ‖∇fT (x)‖2 ‖∇fS(x)‖2 ∇fS(x) + v(x), (5)\nwhere we denote v(x) = ∇fT (x)− cos(φ(x))‖∇fT (x)‖2‖∇fS(x)‖2∇fS(x) for notation convenience.\nWe can see that v(x) is orthogonal to ∇fS(x), thus ‖v(x)‖2 = √\n1− cos2(φ(x))‖∇fT (x)‖2. Recall that actually τ1(x) = cos2(φ(x)), it can be written as ‖v(x)‖2 = √ 1− τ1(x)‖∇fT (x)‖2.\nThen, plugging (5) into (4) we have (4) = Ex∼D [ ‖ cos(φ(x))‖∇fT (x)‖2\n‖∇fS(x)‖2 ∇fS(x) + v(x)−A∇fS(x)‖22 ] = Ex∼D [∥∥∥∥(cos(φ(x))‖∇fT (x)‖2‖∇fS(x)‖2 −A ) ∇fS(x) + v(x) ∥∥∥∥2 2 ]\n= Ex∼D [∥∥∥∥(cos(φ(x))‖∇fT (x)‖2‖∇fS(x)‖2 −A ) ∇fS(x) ∥∥∥∥2 2 + ‖v(x)‖22 ]\n= Ex∼D [∥∥∥∥(cos(φ(x))‖∇fT (x)‖2‖∇fS(x)‖2 −A ) ∇fS(x) ∥∥∥∥2 2 + (1− τ1(x))‖∇fT (x)‖22 ]\n= Ex∼D [∥∥∥∥(cos(φ(x))‖∇fT (x)‖2‖∇fS(x)‖2 −A ) ∇fS(x) ∥∥∥∥2 2 ] + Ex∼D(1− τ1(x))‖∇fT (x)‖22\n= Ex∼D [( cos(φ(x))\n‖∇fT (x)‖2 ‖∇fS(x)‖2\n−A )2 ‖∇fS(x)‖22 ] + Ex∼D(1− τ1(x))‖∇fT (x)‖22.\n(6)\nNow let us deal with the first term by plugging in\nA = 〈∆fT ,δfS ,∆fS ,δfS 〉D ‖∆fS ,δfS ‖ 2 D ,\nwhere ∆fT ,δfS (x) = cos(φ(x))‖∇fT (x)‖2 and ∆fS ,δfS (x) = ‖∇fS(x)‖2, and we have\nEx∼D (\ncos(φ(x)) ‖∇fT (x)‖2 ‖∇fS(x)‖2\n−A )2 ‖∇fS(x)‖22\n= Ex∼D (cos(φ(x))‖∇fT (x)‖2 −A ‖∇fS(x)‖2) 2\n= 1\n2 Ex∼D ( ∆fT ,δfS (x)−A∆fS ,δfS (x) )2 = 1\n2 Ex∼D ( ∆fT ,δfS (x) 2 +A2∆fS ,δfS (x) 2 − 2A∆fT ,δfS (x)∆fS ,δfS (x) ) = 1\n2 (∥∥∥∆fT ,δfS ∥∥∥2D +A2 ∥∥∥∆fS ,δfS ∥∥∥2D − 2A〈∆fT ,δfS ,∆fS ,δfS 〉D )\n= 1\n2 (∥∥∥∆fT ,δfS ∥∥∥2D + 〈∆fT ,δfS ,∆fS ,δfS 〉 2 D\n‖∆fS ,δfS ‖ 2 D\n− 2 〈∆fT ,δfS ,∆fS ,δfS 〉 2 D\n‖∆fS ,δfS ‖ 2 D\n)\n= ∥∥∥∆fT ,δfS ∥∥∥2D 2 ( 1− 〈∆fT ,δfS ,∆fS ,δfS 〉 2 D\n‖∆fS ,δfS ‖ 2 D · ‖∆fT ,δfS ‖ 2 D ) = (1− τ2)Ex∼D [ cos2(x)‖∇fT (x)‖22\n] = (1− τ2)Ex∼D [ τ1(x)‖∇fT (x)‖22 ] . (7)\nPlugging (7) into (6), we finally have\n‖∇fT −∇(g ◦ fS)‖2D = (1− τ2)Ex∼D [ τ1(x)‖∇fT (x)‖22 ] + Ex∼D(1− τ1(x))‖∇fT (x)‖22\n= Ex∼D [ (1− τ2τ1(x))‖∇fT (x)‖22 ] ≤ (1− τ1τ2)L2,\nwhich completes the proof." }, { "heading": "D PROOF OF THEOREM 2", "text": "Theorem 2 (Restated). 
For two functions fS : Rn → Rm, and fT : Rn → Rd, there is an affine function g : Rm → Rd, so that\n‖∇fT −∇(g ◦ fS)‖2D ≤ 5Ex∼D ( (1− τ1(x)τ2) + (1− τ1(x))(1− τ2)λfT (x)2 ) ‖∇fT (x)‖22\n+ (λfT (x) + λfS (x)) 2 ‖∇fS(x)‖ 2 2\n‖∇fS‖2D,2 ‖∇fT ‖2D,2 , where g is defined as g(z) = Az +Const.\nMoreover, if assuming that fT is L-Lipschitz continuous, i.e., ‖∇fT (x)‖2 ≤ L for ∀x ∼ supp(D), and considering the worst-case singular value ratio λ, we can have a more elegant statement:\n‖∇fT −∇(g ◦ fS)‖2D ≤ ( (1− τ1τ2) + (1− τ1)(1− τ2)λ2fT + (λfT + λfS ) 2 ) 5L2.\nProof. Recall that the matrixA is defined in Definition 3, i.e.,\nA = proj(Ex∼D[∆fT ,δfS (x)∆fS ,δfS (x) >] ( Ex∼D[∆fS ,δfS (x)∆fS ,δfS (x) >] )† , ‖∆fT ,δfS ‖D ‖∆fS ,δfS ‖D ),\nand we can see\n‖∇fT −∇(g ◦ fS)‖2D,2 = ‖∇f>T −∇(g ◦ fS)>‖2D,2 = ‖∇f>T −A∇f>S ‖2D,2 = Ex∼D‖∇fT (x)> −A∇fS(x)>‖22 = Ex∼D max\n‖t‖2=1 ‖∇fT (x)>t−A∇fS(x)>t‖22, (8)\nwhere the last equality is due to the definition of matrix spectral norm.\nDenoting∇f> as either the Jacobian matrix∇f>T or∇f>S , Singular Value Decomposition (SVD) suggests that ∇f(x)> = UΣV >, where Σ is a diagonal matrix containing all singular values ordered by their absolute values. Let σ1, · · · , σn denote ordered singular values. Nothing that the number of singular values that are non-zero may be less than n, so we fill the empty with zeros, such that each of them have corresponding singular vectors, i.e., the column vectors v1, · · · ,vn in V . That is being said, ∀i ∈ [n], we have\n‖∇f(x)>vi‖2 = |σi|.\nLet θi and vi denote the singular values and vectors for ∇fS(x)>. Noting that {vi}ni=1 define a orthonormal basis for Rn, we can represent\nt = n∑ i=1 θivi, (9)\nwhere ∑n i=1 θ 2 i = 1.\nAs adversarial attack is about the largest eigenvalue of the gradient, plugging (9) into (8), we can split it into two parts, i.e.,\n(8) = Ex∼D max ‖t‖2=1 ∥∥∥∥∥∇fT (x)> ( n∑ i=1 θivi ) −A∇fS(x)> ( n∑ i=1 θivi )∥∥∥∥∥ 2\n2\n= Ex∼D max ‖t‖2=1 ∥∥∥∥∥∥∥ ∇fT (x)> (θ1v1)−A∇fS(x)> (θ1v1) +∇fT (x)> ( n∑ i=2 θivi ) −A∇fS(x)> ( n∑ i=2 θivi )∥∥∥∥∥∥∥ 2\n2\n. (10)\nDenoting u = ∑n i=2 θivi, we can see this vector is orthogonal to v1. Let us denote v ′ 1 as the singular vector with the biggest absolute singular value of ∇fT (x)>, parallel with attack δfT . Now we split\nu = u1 + u2 into two terms, where u1 is parallel to v′1, and u2 is orthogonal to u1. As u1 is in the orthogonal space to v1 while parallel with v′1, it is bounded by the sine value of the angle between v1 and v′1, i.e., √ 1− τ1(x). Hence, noting that u is part of the unit vector t,\n‖u1‖2 ≤ √ 1− τ1(x)‖u‖2 ≤ √ 1− τ1(x). (11)\nPlugging u in (10), we have\n(10) = Ex∼D max ‖t‖2=1 ∥∥∥∥∥∇fT (x)> (θ1v1)−A∇fS(x)> (θ1v1)+∇fT (x)> (u1 + u2)−A∇fS(x)>u ∥∥∥∥∥ 2\n2\n≤ Ex∼D max ‖t‖2=1 ∥∥∇fT (x)> (θ1v1)−A∇fS(x)> (θ1v1)∥∥2︸ ︷︷ ︸ X1 + ∥∥∇fT (x)>u1∥∥2︸ ︷︷ ︸\nX2\n+ ∥∥∇fT (x)>u2 −A∇fS(x)>u∥∥2︸ ︷︷ ︸\nX3\n 2 , (12)\nwhere the inequality is due to triangle inequality.\nThere are three terms we have to deal with, i.e., X1, X2 and X3. Regarding the first term, v1 in X1 aligns with the attack δfS (x), which we have known through adversarial attack. The second term X2 is trivially bounded by (11). Although adversarial attacks tell us nothing about X3, it can be bounded by the second largest singular values.\nLet us first deal with two easiest, i.e., X2 and X3. 
Applying (11) on X2 directly, we have X2 = ‖∇fT (x)>‖2 · ‖u1‖2 ≤ √ 1− τ1(x)‖∇fT (x)>‖2.\nFor X3, noting that u2 is orthogonal to v′1, and u is orthogonal to v1, we can see that u2 has no components of the largest absolute singular vector of ∇fT (x)>, and u has no components of the largest absolute singular vector of∇fT (x)>. Therefore,\nX3 ≤ ∥∥∇fT (x)>u2∥∥2 + ∥∥A∇fS(x)>u∥∥2\n≤ σfT ,2(x) ‖u2‖2 + σfS ,2(x) ‖A‖2 ‖u‖2 = λfT (x) ∥∥∇fT (x)>∥∥2 ‖u2‖2 + λfS (x)∥∥∇fS(x)>∥∥2 ‖A‖2 ‖u‖2 ≤ λfT (x)\n∥∥∇fT (x)>∥∥2 + λfS (x)∥∥∇fS(x)>∥∥2 ‖A‖2 , where the first inequality is due to triangle inequality, the second inequity is done by the attributes of singular values, and the definition of matrix 2-norm. The equality is done simply by applying the definition of singular values ratio (Definition 4), and the third inequality is due to the fact that ‖u2‖2 ≤ ‖u‖2 ≤ 1. Before dealing with X1, let us simplify (12) by relax the square of summed terms to sum of squared terms, as the following.\n(12) = Ex∼D max ‖t‖2=1 (X1 +X2 +X3) 2\n= Ex∼D max ‖t‖2=1 X21 +X 2 2 +X 2 3 + 2X1X2 + 2X2X3 + 2X1X3\n≤ Ex∼D max ‖t‖2=1 X21 +X 2 2 +X 2 3 + 2 max{X21 , X22}+ 2 max{X22 , X23}+ 2 max{X21 , X23}\n≤ Ex∼D max ‖t‖2=1 X21 +X 2 2 +X 2 3 + 2(X 2 1 +X 2 2 ) + 2(X 2 2 +X 2 3 ) + 2(X 2 1 +X 2 3 )\n= Ex∼D max ‖t‖2=1 5(X21 +X 2 2 +X 2 3 ). (13)\nWe note that this relaxation is not necessary, but simply for the simplicity of the final results without breaking what our theory suggests.\nBring what we we have about X2 and X3, and noting that θ1 ≤ 1 depends on t, we can drop the max operation by\n(13) = Ex∼D max ‖t‖2=1 5(X21 +X 2 2 +X 2 3 )\n= Ex∼D max ‖t‖2=1\n5( ∥∥∇fT (x)> (θ1v1)−A∇fS(x)> (θ1v1)∥∥22 +X22 +X23 )\n≤ 5Ex∼D ∥∥∇fT (x)>v1 −A∇fS(x)>v1∥∥22 + (1− τ1(x)) ‖∇fT (x)‖22 + ( (λfT (x) + λfS (x)) ∥∥∇fS(x)>∥∥2 ‖A‖2)2 . (14)\nNow, let us deal with the first term. As v1 is a unit vector and is in fact the direction of fS(x)’s adversarial attack, we can write δfS , (x) = v1. Hence,\nEx∼D ∥∥∇fT (x)>v1 −A∇fS(x)>v1∥∥22\n= Ex∼D 1 2 ∥∥∇fT (x)>δfS , (x)−A∇fS(x)>δfS , (x)∥∥22\n= Ex∼D 1\n2 ∥∥∥∆fT ,δfS (x)−A∆fS ,δfS (x)∥∥∥22 , (15) where the last equality is derived by applying the definition of ∆(x), i.e., equation (1). Note that we omit the in δfS , for notation simplicity.\nThe matrix A is deigned to minimize (15), as shown in the proof of Proposition 1. Expanding the term we have\n(15) = 1\n2 Ex∼D [∥∥∥∆fT ,δfS (x)∥∥∥22 + ∥∥∥A∆fS ,δfS (x)∥∥∥22 − 2〈∆fT ,δfS (x),A∆fS ,δfS (x)〉 ]\n= 1\n2 (∥∥∥∆fT ,δfS ∥∥∥2D + ∥∥∥A∆fS ,δfS ∥∥∥2D − 2〈∆fT ,δfS ,A∆fS ,δfS 〉D )\n= ∥∥∥∆fT ,δfS ∥∥∥2D 2 (1− τ2)\n= (1− τ2)Ex∼D ∥∥∇fT (x)>v1∥∥22 . (16)\nRecall that v1 is a unit vector aligns the direction of δfS , and we have used v ′ 1 to denote a unit vector that aligns the direction of δfT . 
As τ1 tells us about the angle between the two, let us split v1 into to orthogonal vectors, i.e., v1 = √ τ1(x)v ′ 1 + √ 1− τ1(x)v′1,⊥, where v′1,⊥ is a unit vector that is orthogonal to v′1.\nPlugging this into (16) we have\n(16) = (1− τ2)Ex∼D ∥∥∥∇fT (x)>(√τ1(x)v′1 +√1− τ1(x)v′1,⊥)∥∥∥2\n2 = (1− τ2)Ex∼D [∥∥∥∇fT (x)>√τ1(x)v′1∥∥∥2 2 + ∥∥∥∇fT (x)>√1− τ1(x)v′1,⊥∥∥∥2 2 ] = (1− τ2)Ex∼D [ τ1(x)\n∥∥∇fT (x)>∥∥22 + (1− τ1(x))λfT (x)2 ∥∥∇fT (x)>∥∥22] , where the second equality is due to the image of v′1 and v ′ 1,⊥ after linear transformation∇fT (x)> are orthogonal, which can be easily observed through SVD.\nPlugging this in (14), and with some regular algebra manipulation, finally we have\n(14) = 5Ex∼D (1− τ2) [ τ1(x) ∥∥∇fT (x)>∥∥22 + (1− τ1(x))λfT (x)2 ∥∥∇fT (x)>∥∥22] +(1− τ1(x)) ‖∇fT (x)‖22 + (λfT (x) + λfS (x)) 2 ∥∥∇fS(x)>∥∥22 ‖A‖22 \n= 5Ex∼D (1− τ1(x)τ2) ∥∥∇fT (x)>∥∥22 +(1− τ1(x))(1− τ2)λfT (x)2\n∥∥∇fT (x)>∥∥22 + (λfT (x) + λfS (x)) 2 ∥∥∇fS(x)>∥∥22 ‖A‖22 . (17) Recall thatA is from a norm-restricted matrix space, i.e., theA is scaled so that its spectral norm is no greater than ‖∆fT ,δfS ‖D ‖∆fS,δfS ‖D , thus\n‖A‖22 ≤ ‖∆fT ,δfS ‖ 2 D ‖∆fS ,δfS ‖ 2 D ≤ ‖∆fT ,δfT ‖ 2 D ‖∆fS ,δfS ‖ 2 D\n= Ex∼D‖∆fT ,δfT (x)‖ 2 2\nEx∼D‖∆fS ,δfS (x)‖ 2 2 = Ex∼D‖∇f>T (x)‖22 Ex∼D‖∇f>S (x)‖22\n= ‖∇f>T ‖2D,2 ‖∇f>S ‖2D,2 . (18)\nHence, plugging the above inequality to (17), the first statement of the theorem is proven, i.e.,\n(17) ≤ 5Ex∼D (1− τ1(x)τ2) ∥∥∇fT (x)>∥∥22 +(1− τ1(x))(1− τ2)λ2fT ∥∥∇fT (x)>∥∥22 + (λfT (x) + λfS (x))\n2 ∥∥∇fS(x)>∥∥22 ‖∇f>T ‖2D,2‖∇f>S ‖2D,2 . (19) To see the second statement of the theorem, we assume fT is L-Lipschitz continuous, i.e., ‖∇fT (x)‖2 ≤ L for ∀x ∈ supp(D), and considering the worst-case singular value ratio λ = maxx∈supp(D) for either fS , fT , we can continue as\n(19) ≤ 5 Ex∼D [ (1− τ1(x)τ2) ∥∥∇fT (x)>∥∥22] +Ex∼D [ (1− τ1(x))(1− τ2)λ2fT ∥∥∇fT (x)>∥∥22] +Ex∼D [ (λfT + λfS ) 2 ∥∥∇fS(x)>∥∥22 ‖∇f>T ‖2D,2‖∇f>S ‖2D,2 ] \n= 5 Ex∼D [ (1− τ1(x)τ2) ∥∥∇fT (x)>∥∥22] +Ex∼D [ (1− τ1(x))(1− τ2)λ2fT\n∥∥∇fT (x)>∥∥22] + (λfT + λfS ) 2 ‖∇f>T ‖2D,2 = 5Ex∼D ( (1− τ1(x)τ2) + (1− τ1(x))(1− τ2)λ2fT + (λfT + λfS ) 2 )∥∥∇fT (x)>∥∥22\n≤ Ex∼D ( (1− τ1(x)τ2) + (1− τ1(x))(1− τ2)λ2fT + (λfT + λfS ) 2 ) 5L2\n= ( (1− τ1τ2) + (1− τ1)(1− τ2)λ2fT + (λfT + λfS ) 2 ) 5L2,\nwhere the first inequality is due to the definition of worst-case singular value ratio, the last inequality is by Lipschitz condition, and the last equality is done be simply applying the definition of τ1." }, { "heading": "E PROOF OF THEOREM 3", "text": "The idea for proving Theorem 3 is straight-forward: bounded gradients difference implies bounded function difference, and then bounded function difference implies bounded loss difference.\nTo begin with, let us prove the following lemma. Lemma 1. Without loss of generality we assume ‖x‖2 ≤ 1 for ∀x ∈ supp(D). Consider functions fS : Rn → Rm, fT : Rn → Rd, and an affine function g : Rm → Rd, suggested by Theorem 1 or Theorem 2, such that g(fS(0)) = fT (0), if both fT , fS are β-smooth in {x | ‖x‖ ≤ 1}, we have\n‖fT − g ◦ fS‖D ≤ ‖∇fT −∇(g ◦ fS)‖D,2 + (\n1 + ‖∇fT ‖D,2 ‖∇fS‖D,2\n) β.\nProof. Let us denote v(x) = fT (x)− g ◦ fS(x), and we can show the smoothness of v(·). As g(·) is an affine function satisfying g(fS(0)) = fT (0), it can be denoted as g(z) = A(z − fS(0)) + fT (0), whereA is a matrix suggested by Theorem 1 or Theorem 2. 
Therefore, denoting B1 = {x | ‖x‖ ≤ 1} as a unit ball, for ∀x,y ∈ B1 it holds that ‖∇v(x)−∇v(y)‖2 = ∥∥∇v(x)> −∇v(y)>∥∥ 2\n= ∥∥∇fT (x)> −∇fT (y)> −A(∇fS(x)> −∇fS(y)>)∥∥2\n≤ ∥∥∇fT (x)> −∇fT (y)>∥∥2 + ∥∥A(∇fS(x)> −∇fS(y)>)∥∥2\n≤ ∥∥∇fT (x)> −∇fT (y)>∥∥2 + ‖A‖2 ∥∥∇fS(x)> −∇fS(y)>∥∥2 , (20)\nwhere the last second inequality is due to triangle inequality, and the last inequality is by the property of spectral norm.\nApplying the β-smoothness of fS and fT , and noting that ‖A‖2 ≤ ‖∇fT ‖D,2‖∇fS‖D,2 as shown in (18), we can continue as\n(20) ≤ β ‖x− y‖2 + ‖A‖2 β ‖x− y‖2 ≤ β ‖x− y‖2 + ‖∇fT ‖D,2 ‖∇fS‖D,2 β ‖x− y‖2\n= ( 1 + ‖∇fT ‖D,2 ‖∇fS‖D,2 ) β ‖x− y‖2 ,\nwhich suggests that v(·) is (\n1 + ‖∇fT ‖D,2 ‖∇fS‖D,2\n) β-smooth.\nWe are ready to prove the lemma now. Applying the mean value theorem, for ∀x ∈ B1, we have v(x)− v(0) = ∇v(ξx)>x,\nwhere ξ ∈ (0, 1) is a scalar number. Subtracting∇v(x)>x on both sides give v(x)− v(0)−∇v(x)>x = (∇v(ξx)−∇v(x))>x∥∥v(x)− v(0)−∇v(x)>x∥∥ 2 = ∥∥(∇v(ξx)−∇v(x))>x∥∥\n2∥∥v(x)− v(0)−∇v(x)>x∥∥ 2 ≤ ‖(∇v(ξx)−∇v(x))‖2 ‖x‖2 .\nLet us denote β1 = (\n1 + ‖∇fT ‖D,2 ‖∇fS‖D,2\n) β for notation convenience, and apply the definition of smooth-\nness: ‖v(x)− v(0)−∇v(x)>x‖2 ≤ β1(1− ξ)‖x‖22 ≤ β1. (21) Noting that v(0) = 0 and applying the triangle inequality, we have\n‖v(x)− v(0)−∇v(x)>x‖2 ≥ ‖v(x)‖2 − ‖∇v(x)>x‖2 ≥ ‖v(x)‖2 − ‖∇v(x)>‖2 Plugging it into (21), we have\n‖v(x)‖2 ≤ β1 + ‖∇v(x)>‖2 ‖v(x)‖22 ≤ β21 + ‖∇v(x)>‖22 + 2β1‖∇v(x)>‖2\nEx∼D‖v(x)‖22 ≤ β21 + Ex∼D‖∇v(x)>‖22 + 2β1Ex∼D‖∇v(x)>‖2 Ex∼D‖v(x)‖22 ≤ β21 + Ex∼D‖∇v(x)‖22 + 2β1Ex∼D‖∇v(x)‖2\n‖v‖2D ≤ β21 + ‖∇v‖2D,2 + 2β1Ex∼D‖∇v(x)‖2\nApplying Jensen’s inequality to the last term, we get ‖v‖2D ≤ β21 + ‖∇v‖2D,2 + 2β1 √ Ex∼D‖∇v(x)‖22\n= β21 + ‖∇v‖2D,2 + 2β1 √ ‖∇v‖2D,2 = β 2 1 + ‖∇v‖2D,2 + 2β1‖∇v‖D,2\n= (‖∇v‖D,2 + β1)2 Plugging β1 = (\n1 + ‖∇fT ‖D,2 ‖∇fS‖D,2\n) β and v = fT − g ◦ fS into the above inequality completes the\nproof.\nWith the above lemma, it is easy to show the mean squared loss on the transferred model is also bounded. Theorem 3 (Restated). Without loss of generality we assume ‖x‖2 ≤ 1 for ∀x ∈ supp(D). Consider functions fS : Rn → Rm, fT : Rn → Rd, and an affine function g : Rm → Rd, suggested by Theorem 1 or Theorem 2, such that g(fS(0)) = fT (0). If both fT , fS are β-smooth, then\n‖g ◦ fS − y‖2D ≤ ( ‖fT − y‖D + ‖∇fT −∇g ◦ fS‖D,2 + ( 1 + ‖∇fT ‖D,2 ‖∇fS‖D,2 ) β )2\nProof. Let us denote β1 = (\n1 + ‖∇fT ‖D,2 ‖∇fS‖D,2\n) β, and according to Lemma 1 we can see\n‖fT − g ◦ fS‖D ≤ ‖∇fT −∇(g ◦ fS)‖D,2 + β1 (22)\nApplying a standard algebra manipulation to the left hand side, and then applying triangle inequality, we have\n‖fT − g ◦ fS‖D = ‖fT − y + y − g ◦ fS‖D ≥ ‖y − g ◦ fS‖D − ‖fT − y‖D.\nPlugging this directly into (22), it holds that\n‖y − g ◦ fS‖D − ‖fT − y‖D ≤ ‖∇fT −∇(g ◦ fS)‖D,2 + β1 ‖y − g ◦ fS‖D ≤ ‖fT − y‖D + ‖∇fT −∇(g ◦ fS)‖D,2 + β1\nReplacing β1 by (\n1 + ‖∇fT ‖D,2 ‖∇fS‖D,2\n) β and taking the square, we can see Theorem 3 is proven." }, { "heading": "F PROOF OF PROPOSITION 2", "text": "Proposition 2 (Restated). If `T is mean squared loss and fT achieves zero loss on D, then the adversarial loss defined in Definition 6 is approximately upper and lower bounded by\nLadv(fT , δfS ; y,D) ≥ 2Ex∼D [ τ1(x) ‖∇fT (x)‖22 ] +O( 3),\nLadv(fT , δfS ; y,D) ≤ 2Ex∼D [( λ2fT + (1− λ 2 fT )τ1(x) ) ‖∇fT (x)‖22 ] +O( 3),\nwhere O( 3) denotes a cubic error term.\nProof. 
Recall that the empirical adversarial transferability is defined as a loss\nLadv(fT , δfS , ; y,D) = Ex∼D `T (fT (x+ δfS , (x)), y(x)).\nAs `T is mean squared loss, and fT achieves zero loss, i.e., fT = y, we have\nLadv(fT , δfS , ; y,D) = Ex∼D ‖fT (x+ δfS , (x))− y(x)‖ 2 2\n= Ex∼D ‖fT (x+ δfS , (x))− fT (x)‖ 2 2 .\nDenoting δfS , (x) = δfS ,1(x), and define an auxiliary function h as\nh(t) = fT (x+ tδfS ,1(x))− fT (x),\nwe can see that ‖fT (x+ δfS , (x))− fT (x)‖ 2 2 = ‖h( )‖ 2 2.\nWe can then apply Taylor expansion to approximate h( ) with a second order error term O( 2), i.e.,\nh( ) = ∂h\n∂t\n∣∣ t=0 +O( 2) = ∇fT (x)>δfS ,1 +O( 2).\nTherefore, assuming that ‖∇fT (x)‖2 is bounded for x ∈ supp(D), we have\n‖fT (x+ δfS , (x))− fT (x)‖ 2 2 = ‖h( )‖ 2 2 = 2 ∥∥∇fT (x)>δfS ,1(x)∥∥22 +O( 3), (23)\nwhere we have omit higher order error term, i.e., O( 4). Next, let us deal with the term ∥∥∇fT (x)>δfS ,1(x)∥∥22. Same us the technique we use in the proof of Theorem 2, we split δfS ,1(x) = v1 + v2, where v1 aligns the direction of δfT ,1(x), and v2 is orthogonal to v1. Noting that τ1(x) is the squared cosine of the angle between δfS ,1(x) and δfT ,1(x), we can see that\n‖v1‖22 = τ1(x) ‖δfS ,1(x)‖ 2 2 = τ1(x), ‖v2‖22 = (1− τ1(x)) ‖δfS ,1(x)‖ 2 2 = (1− τ1(x)).\nTherefore, we can continue as∥∥∇fT (x)>δfS ,1(x)∥∥22 = ∥∥∇fT (x)>(v1 + v2)∥∥22 = ∥∥∇fT (x)>v1∥∥22 + ∥∥∇fT (x)>v2∥∥22\n= τ1(x) ‖∇fT (x)‖22 + ∥∥∇fT (x)>v2∥∥22 , (24)\nwhere the second equality is because that v1 is corresponding to the largest singular value of ∇fT (x)>, and v2 is orthogonal to v1. Next, we derive the lower bound and upper bound for (24). The lower bounded can be derived as\nτ1(x) ‖∇fT (x)‖22 + ∥∥∇fT (x)>v2∥∥22 ≥ τ1(x) ‖∇fT (x)‖22 ,\nand the upper bounded can be derived as\nτ1(x) ‖∇fT (x)‖22 + ∥∥∇fT (x)>v2∥∥22 ≤ τ1(x) ‖∇fT (x)‖22 + λfT (x)2 ‖∇fT (x)‖22 ‖v2‖22\n= τ1(x) ‖∇fT (x)‖22 + λfT (x) 2 ‖∇fT (x)‖22 (1− τ1(x)) ≤ τ1(x) ‖∇fT (x)‖22 + λ 2 fT ‖∇fT (x)‖ 2 2 (1− τ1(x))\n= ( λ2fT + (1− λ 2 fT )τ1(x) ) ‖∇fT (x)‖22 ,\nwhere λfT (x) is the singular value ratio of fT at x, and λfT is the maximal singular value of fT .\nApplying the lower and upper bound to (23), we finally have\n‖fT (x+ δfS , (x))− fT (x)‖ 2 2 ≥ 2τ1(x) ‖∇fT (x)‖22 +O( 3), ‖fT (x+ δfS , (x))− fT (x)‖ 2 2 ≤ 2 ( λ2fT + (1− λ 2 fT )τ1(x) ) ‖∇fT (x)‖22 +O( 3). (25)\nNoting that\nLadv(fT , δfS , ; y,D) = Ex∼D ‖fT (x+ δfS , (x))− fT (x)‖ 2 2 ,\nwe can see that taking expectation to (25) completes the proof." }, { "heading": "G EXPERIMENT DETAILS", "text": "All experiments are conducted on 4 RTX 2080 Ti GPUs and in python3 Ubuntu 16.04 environment.\nG.1 ATTACK METHODS\nPGD Attack is generated iteratively: denote step size as ξ, the source model as fS , and the loss function on the source problem. `S(·, ·). We initialize x0 to be uniformly sampled from the -ball B (x) of radius centered as instance x, and then generate the adversarial instance iteratively: at step t we compute xt+1 = xt + ξ · sign(∇xt`S(fS(xt), fS(x))). Denoting the adversarial example at instance x using PGD on source model fS as PGDfS (x), we measure the adversarial loss from fS to fT based on the loss `T (·, y) of fT on target data D given attacks generated on fS , i.e.,\nLT (fT ◦ PGDfS ; y,D) = Ex∼D `T (fT (PGDfS (x)), y(x)).\nTextFooler iteratively replaces words in target sentences by looking up similar words in the dictionary. It pauses when the predicted label is changed or runs out of the attack budget. 
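As a concrete reference for the PGD procedure described above, here is a minimal PyTorch sketch (our own illustration, not the authors' released code; the step size `step_size` and function names are assumptions, and the loss follows the `l_S(f_S(x_t), f_S(x))` formulation):

```python
import torch

def pgd_linf(model, x, loss_fn, eps=0.1, steps=10, step_size=0.025):
    """L-infinity PGD as described in G.1: start from a uniform point in the
    eps-ball around x and ascend the loss against the clean prediction."""
    with torch.no_grad():
        target = model(x)                          # f_S(x), the clean output
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), target)       # l_S(f_S(x_t), f_S(x))
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step_size * grad.sign()    # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the eps-ball
    return x_adv.detach()
```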
We modify TextFooler such that it pauses when the percentage of changed words reaches 10%.

G.2 ADVERSARIAL TRANSFERABILITY INDICATES KNOWLEDGE-TRANSFER AMONG DATA DISTRIBUTIONS

Details of Dataset construction For the image domain, we divide the classes of the original datasets into two categories, animals (bird, cat, deer, dog) and transportation vehicles (airplane, automobile, ship, truck). Each of the source datasets consists of a different percentage of animals and transportation vehicles, while the target dataset contains only transportation vehicles, which is meant to control the closeness of the two data distributions. Details of Model Training Image: we train five source models on the five source datasets, from 0% animals to 100% animals, and one reference model on STL-10, with identical architectures and hyperparameters. We use the SGD optimizer and standard cross-entropy loss with learning rate 0.1, momentum 0.9, and weight decay 10−4. Each model is trained for 300 epochs. Natural Language: we fine-tune a BERT model on each of the datasets with Adam and learning rate 0.0003 for 100 epochs. For transferred models, we run Adam with a smaller learning rate of 0.0001 for 3 epochs.

G.3 ADVERSARIAL TRANSFERABILITY INDICATING KNOWLEDGE-TRANSFER AMONG ATTRIBUTES

Details of Model Training We train 40 binary source classifiers on each of the 40 attributes of CelebA with ResNet18 (He et al., 2016). All the classifiers are trained with the Adadelta optimizer with a learning rate of 1.0 for 14 epochs. We also train a facial recognition model as a reference model on CelebA with 10,177 identities using ResNet18 as the controlled experiment. The reference facial recognition model is optimized with SGD and initial learning rate 0.1 on the ArcFace (Deng et al., 2019) with focal loss (Lin et al., 2017) for 125 epochs. For each source model, we construct a transferred model by stripping off the last layers and attaching a facial recognition head without parameters. Then we use the 40 transferred models to evaluate the knowledge transferability on 7 facial recognition benchmarks.

G.4 ADVERSARIAL TRANSFERABILITY INDICATING KNOWLEDGE-TRANSFER AMONG TASKS

Details of Model Training We use 15 pretrained models released in the task bank (Zamir et al., 2018) as the source models. Each source model consists of two parts, an encoder and a decoder. The encoder is a modified ResNet50 without pooling, homogeneous across all tasks, whereas the decoder is customized to suit the output of each task. When measuring the adversarial transferability, we use each source model as a reference model and compute the transferability matrix as described below.

Adversarial Transferability Matrix (ATM) is used here to measure the adversarial transferability between multiple tasks, modified from the Affinity Matrix in (Zamir et al., 2018). When determining similarity among tasks, it is hard to compare directly and fairly, since each task uses a different loss function, and these losses are usually on very different scales. To solve this problem, we take the same ordinal normalization approach as Zamir et al. (2018). 
Suppose we have N tasks in the pool; a tournament matrix MT is constructed for each task T, where the element mi,j of the matrix represents the percentage of adversarial examples generated on the i-th task that transfer better to task T than those of the j-th task (the untargeted attack success rate is used here).

Then we take the principal eigenvectors of the N tournament matrices and stack them together to build the N × N adversarial transferability matrix. To generate the corresponding “task categories” for comparison, we sample 1000 images from the public dataset and perform a virtual adversarial attack on each of the 15 source models. Adversarial perturbations with L∞-norm budgets of 0.03 and 0.06 are used, and we run a 10-step PGD-based attack for efficiency. Then we measure these adversarial examples’ effectiveness on each of the 15 tasks by the corresponding loss functions. After we obtain the 15×15 ATM, we take the columns of this matrix as features for each task and perform agglomerative clustering to obtain the Task Similarity Tree." } ]
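To make the ATM construction concrete, here is a minimal NumPy/SciPy sketch (our own illustration, not the authors' code; `tournament` is a placeholder for the pairwise win percentages described above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def principal_eigenvector(M: np.ndarray) -> np.ndarray:
    # Eigenvector associated with the largest-magnitude eigenvalue.
    vals, vecs = np.linalg.eig(M)
    v = np.real(vecs[:, np.argmax(np.abs(vals))])
    return v / np.linalg.norm(v)

N = 15
rng = np.random.default_rng(0)
# tournament[T][i, j]: percentage of adversarial examples from task i that
# transfer to task T better than those from task j (random placeholder here).
tournament = [rng.uniform(size=(N, N)) for _ in range(N)]

# Stack the N principal eigenvectors column-wise into the N x N ATM.
atm = np.stack([principal_eigenvector(M) for M in tournament], axis=1)

# Columns are per-task features; agglomerative clustering yields the tree.
task_similarity_tree = linkage(atm.T, method="average")
```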
2020
DOES ADVERSARIAL TRANSFERABILITY INDICATE KNOWLEDGE TRANSFERABILITY?
SP:4a6f5bb1d0f72df5782a09a1ffc5e19504010e36
[ "This work proposes an effective modification of language model token-level distribution during the training which prevents some forms of degeneration such as repetitions and dullness. The approach is based on the idea of encouraging the model to use tokens which were not observed in the previous context so far. In other words, this method changes softmax distribution such that unseen/novel tokens is being rescaled with a given hyper-parameter $\\gamma$ (eq.4). Authors conduct several experiments using different tasks such as open-ended generation, image captioning and abstractive text summarization. As a result they confirm substantial improvement over the standard mle training and **token-level** unlikelihood training. In addition to analysis of their method, authors discuss a potential issue of unlikelihood training criterion and how their approach avoids this issue." ]
Advanced large-scale neural language models have led to significant success in many natural language generation tasks. However, the most commonly used training objective, Maximum Likelihood Estimation (MLE), has been shown to be problematic, where the trained model prefers using dull and repetitive phrases. In this work, we introduce ScaleGrad, a modification straight to the gradient of the loss function, to remedy the degeneration issues of the standard MLE objective. By directly maneuvering the gradient information, ScaleGrad makes the model learn to use novel tokens during training. Empirical results show the effectiveness of our method not only in open-ended generation, but also in directed generation. With the simplicity in architecture, our method can serve as a general training objective that is applicable to most of the neural text generation tasks.
[]
[ { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot", "venue": "learners. arXiv,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Greg Durrett", "Taylor Berg-Kirkpatrick", "Dan Klein" ], "title": "Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Angela Fan", "Mike Lewis", "Yann Dauphin" ], "title": "Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Kilem Li Gwet" ], "title": "Computing inter-rater reliability and its variance in the presence of high agreement", "venue": "British Journal of Mathematical and Statistical Psychology,", "year": 2008 }, { "authors": [ "Karl Moritz Hermann", "Tomas Kocisky", "Edward Grefenstette", "Lasse Espeholt", "Will Kay", "Mustafa Suleyman", "Phil Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Li Du", "Maxwell Forbes", "Yejin Choi" ], "title": "The curious case of neural text degeneration", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Philipp Koehn", "Rebecca Knowles" ], "title": "Six challenges for neural machine translation", "venue": "In Proceedings of the First Workshop on Neural Machine Translation,", "year": 2017 }, { "authors": [ "Wouter Kool", "Herke Van Hoof", "Max Welling" ], "title": "Stochastic beams and where to find them: The Gumbel-top-k trick for sampling sequences without replacement", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jonathan Krause", "Justin Johnson", "Ranjay Krishna", "Li Fei-Fei" ], "title": "A hierarchical approach for generating descriptive image paragraphs", "venue": "In Computer Vision and Patterm Recognition", "year": 2017 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Ves Stoyanov", "Luke Zettlemoyer" ], "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, 2019", "venue": null, "year": 2019 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "In Proc. ACL workshop on Text Summarization Branches Out, pp", "year": 2004 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies,", "year": 2011 }, { "authors": [ "Mitchell P. 
Marcus", "Beatrice Santorini", "Mary Ann Marcinkiewicz" ], "title": "Building a large annotated corpus of English: The Penn Treebank", "venue": "Computational Linguistics,", "year": 1993 }, { "authors": [ "Luke Melas-Kyriazi", "Alexander Rush", "George Han" ], "title": "Training for diversity in image paragraph captioning", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Germany", "August" ], "title": "Association for Computational Linguistics", "venue": "doi: 10.18653/v1/K16-1028. URL https://www.aclweb.org/anthology/K16-1028.", "year": 2016 }, { "authors": [ "Romain Paulus", "Caiming Xiong", "Richard Socher" ], "title": "A deep reinforced model for abstractive summarization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "Open-AI Blog,", "year": 2019 }, { "authors": [ "Ehud Reiter", "Robert Dale" ], "title": "Building Natural Language Generation Systems", "venue": null, "year": 2000 }, { "authors": [ "Abigail See", "Peter J. Liu", "Christopher D. Manning" ], "title": "Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725, Berlin, Germany, August 2016", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://www.aclweb. org/anthology/P16-1162", "year": 2016 }, { "authors": [ "Felix Stahlberg", "Bill Byrne" ], "title": "On NMT search errors and model errors: Cat got your tongue", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Zhaopeng Tu", "Zhengdong Lu", "Yang Liu", "Xiaohua Liu", "Hang Li" ], "title": "Modeling coverage for neural machine translation", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Ramakrishna Vedantam", "C Lawrence Zitnick", "Devi Parikh" ], "title": "Cider: Consensus-based image description evaluation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Q. Wang", "A.B. 
Chan" ], "title": "Describing like humans: On diversity in image captioning", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Sean Welleck", "Ilia Kulikov", "Stephen Roller", "Emily Dinan", "Kyunghyun Cho", "Jason Weston" ], "title": "Neural text generation with unlikelihood training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "R.J. Williams", "D. Zipser" ], "title": "A learning algorithm for continually running fully recurrent neural networks", "venue": "Neural Computation,", "year": 1989 } ]
[ { "heading": "1 INTRODUCTION", "text": "Text generation has been one of the most important research problems in natural language processing (NLP) (Reiter & Dale, 2000). Thanks to the advances in neural architectures, models are now capable of generating texts that are of better quality than before (Brown et al., 2020). However, despite the countless efforts that have been made to improve neural architectures, models trained with the standard Maximum Likelihood Estimation (MLE) objective are known to prefer generating dull and highly repetitive texts. For instance, in open-ended generation tasks, such as story continuation or open dialogue generation, it has been observed that even with large pre-trained models, e.g., GPT-2 (Radford et al., 2019), high frequency tokens largely dominate the generation (Welleck et al., 2020; Holtzman et al., 2020). The same observation has been reported in directed generation tasks such as text summarization (Nallapati et al., 2016; See et al., 2017), image captioning (Melas-Kyriazi et al., 2018; Wang & Chan, 2019) and machine translation (Tu et al., 2016; Stahlberg & Byrne, 2019).\nThe methods introduced to solve the aforementioned issues with neural text generation can be primarily categorized into two groups: (i) training based methods, which include incorporating auxiliary losses (See et al., 2017; Welleck et al., 2020) and coverage vector (See et al., 2017; Tu et al., 2016); (ii) decoding based methods, such as stochastic beam search (Kool et al., 2019), top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020).\nThough decoding based methods, in particular nucleus and top-k sampling, perform well in practice in open-ended generation tasks, significantly reducing degeneration problem, they do not address the fundamental issue that the token-level probabilities produced by the neural model are problematic (Welleck et al., 2020). In addition, our experiments demonstrate that sampling methods also fail to generate high-quality texts in directed generation tasks such as abstractive text summarization.\nIn this work, based on the known observation that the model trained with MLE objective tends to generate repititive tokens or phrases, we introduce a novel method called ScaleGrad for neural text generation training, by directly maneuvering the gradients to make the model learn to use novel tokens during training. Our method lies in the training based group, which aims to address the fundamental modeling problem, that is, the token-level distribution predicted by the model.\nWe conduct extensive experiments with different neural architectures including LSTM (Hochreiter & Schmidhuber, 1997) and Transformer (Vaswani et al., 2017) across different tasks in openedended and directed text generation. Through extensive analysis we demonstrate that ScaleGrad consistently improves the generation quality according to both human evaluation and automatic\nmetrics. Compared to other training based methods, ScaleGrad is architecturally simpler and easier to fit into current neural models (§3.2), while possessing a wider applicability to different tasks compared to decoding based methods (§4.2 and §5.2)." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 NEURAL TEXT GENERATION", "text": "The NLP tasks involving text generation can be broadly categorized into two types: directed generation and open-ended generation (Holtzman et al., 2020). In the former case, the output text can be seen as a constrained transformation of the input. 
Examples include text summarization, machine translation, and image captioning. In the latter case, the input context only provides a certain degree of constraint, such that the model is allowed to generate the following texts with a considerable degree of freedom. Story/text continuation and dialogue generation fall in this category. Neural models frame text generation tasks as some form of conditional language modeling, which is typically trained to maximize the log likelihood (equivalently, minimize the negative log likelihood) of the training data. The Maximum Likelihood Estimation or MLE objective for an input-output pair (x, y) can be expressed as follows.

L_MLE = − Σ_{t=1}^{T} log p_θ(y_t | y_{<t}, x)   (1)

where θ denotes the model parameters, T is the length of the output sequence y, and x is the task-specific input condition, e.g., the source document in summarization, the image in image captioning, the conversation history in dialogue generation, and ∅ in text continuation. Teacher Forcing (Williams & Zipser, 1989), where the current step's target token is passed as the next input to the decoder rather than the predicted token, is usually used to train neural text generation models for faster convergence.

Degeneration Degeneration has been a key problem in neural text generation models for open-ended tasks, where the model generates texts that are repetitive, overly generic (dull), incoherent, or gibberish. It can happen at different levels of granularity – token, phrase, sentence and paragraph. The problem has not been mitigated even with large-scale pre-trained models like GPT-2 Large (Radford et al., 2019; Holtzman et al., 2020). Degeneration has also been observed in directed generation tasks, even though the output in these tasks is confined by the input. For instance, in text summarization, most of the advanced models such as BertSum (Liu & Lapata, 2019), BART (Lewis et al., 2019) and ProphetNet (Yan et al., 2020) make use of tri-gram blocking (Paulus et al., 2018) within beam search to remove duplicate trigrams during decoding, which improves the generation quality in terms of automatic metrics. This implies that even with the involvement of large-scale pretrained models, degeneration still exists. Similar issues have been reported in machine translation (Koehn & Knowles, 2017; Stahlberg & Byrne, 2019) and image-description generation (Melas-Kyriazi et al., 2018; Wang & Chan, 2019)." }, { "heading": "2.2 COMBATING NEURAL TEXT DEGENERATION", "text": "Out of the methods proposed to tackle neural text degeneration, top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020) stand out as representatives of decoding based methods, and unlikelihood training (Welleck et al., 2020) as a representative training based method. During each decoding step, nucleus and top-k sampling use different functions to filter the candidate tokens, thus reshaping the probability distribution and sampling the next token from the new distribution instead of maximizing the actual likelihood. The randomness brought by these sampling methods reduces duplicate tokens in the output.
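For concreteness, a minimal sketch of the two filtering schemes (our own illustration, not tied to any particular codebase; the tensor shapes and names are assumptions):

```python
import torch

def top_k_filter(logits: torch.Tensor, k: int) -> torch.Tensor:
    # Keep only the k highest-scoring tokens; mask the rest to -inf.
    cutoff = torch.topk(logits, k).values[-1]
    return logits.masked_fill(logits < cutoff, float("-inf"))

def nucleus_filter(logits: torch.Tensor, p: float) -> torch.Tensor:
    # Keep the smallest prefix of tokens whose cumulative probability >= p.
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    remove = cum > p
    remove[1:] = remove[:-1].clone()   # shift right so the boundary token is kept
    remove[0] = False                  # always keep the most likely token
    mask = torch.empty_like(remove)
    mask[sorted_idx] = remove          # map the mask back to vocabulary order
    return logits.masked_fill(mask, float("-inf"))

# Sampling the next token from the filtered distribution:
logits = torch.randn(50257)            # e.g., GPT-2 vocabulary size
probs = torch.softmax(nucleus_filter(logits, p=0.9), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```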
However, decoding strategies alone do not solve the underlying modeling problem with MLE, as pointed out by Welleck et al. (2020). Our analysis in §5.2 also reveals that sampling methods fail to generate high-quality texts in directed generation tasks. To address the issue with MLE, neural unlikelihood (UL) training has been proposed. During training, at each decoding step t, UL adds an auxiliary loss to the original cross-entropy loss as follows.

L^t = L^t_MLE + L^t_UL = − log p_θ(y_t | y_{<t}) − α · Σ_{c∈C^t} log(1 − p_θ(c | y_{<t}))   (2)

where α is a hyper-parameter and C^t is the set of negative tokens at step t, which is constructed from the previous context tokens that are not the current token, C^t = {y_1, . . . , y_{t−1}} \ {y_t}. The auxiliary UL loss penalizes high probabilities of negative tokens, thus implicitly reducing the probability assigned to repetitive tokens. UL training targets the underlying modeling problem, which accords with our goal. Therefore, we mainly compare our method with UL training.¹ In addition, we discuss how our method differs from UL training from a gradient perspective in §5.4.

¹Note that we focus on comparing our work with token-level UL in this work." }, { "heading": "3 METHODOLOGY: LEARNING TO USE NOVEL TOKENS", "text": "Training a text generation model with the MLE objective treats each token in the gold (ground truth) sequence equally. With this approach, the model exhibits a tendency to generate repetitive tokens/phrases during inference. To mitigate this degeneration problem, we argue that the model should focus on learning to use novel tokens, rather than treating all the tokens equally.

Formally, let y = (y_1, . . . , y_t, . . . , y_T) be the ground-truth token sequence that the model is learning to generate in an auto-regressive manner, one token at a time. At time step t, we define a token ỹ_i^t in the vocabulary V as a novel token if ỹ_i^t has not been generated before, i.e., ỹ_i^t ∉ {y_1, . . . , y_{t−1}}. By this definition, we have a set of novel tokens S^t_novel ⊆ V at each decoding step t in training, which shrinks over time as new tokens are generated (or observed) in the ground-truth sequence (see Appendix B for an illustration). Note that the shrinking set of novel tokens is equivalent to the negative tokens in UL, except that it may contain the current target token y_t if it was observed before. To encourage the model to focus on learning to use novel tokens, we propose an architecturally simple method that can fit into most auto-regressive generation models. Our method, requiring no carefully designed components, goes straight to the gradient analysis of the loss function." }, { "heading": "3.1 GRADIENT INFORMATION IN MLE TRAINING", "text": "Let us first consider the gradient analysis of the model trained with MLE. Let o^t denote the pre-softmax scores (i.e., logits) over the vocabulary at time step t, where o_i^t is the score for the token with index i. Similarly, let p_k^t = [softmax(o^t)]_k represent the probability of the ground truth token with index k in the vocabulary. The partial derivative of the MLE objective (Eq. 1) at time step t with respect to the logit o_i^t can be shown as (omitting t and the 'MLE' subscript for simplicity):

∇_{o_i} L = (∂L/∂p_k) · (∂p_k/∂o_i) = p_i − 1(i = k)   (3)

where p_i = [softmax(o)]_i (the derivation is given in Appendix A). Specifically, the gradient of the loss w.r.t. the ground truth token logit o_k is (p_k − 1), and for any other token logit o_i it is p_i. As the gradient-based optimization proceeds, the gradient converges to ε, a number that is close enough to 0. Another interpretation is that the gradient of the loss is supposed to be close to 0 around a (local) minimum. Therefore, to reach the minimum point, or to make the gradient close to 0, the model tries to reduce the probability p_i of non-ground truth tokens and increase the probability p_k of the ground truth token in MLE training.
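As a quick sanity check of Eq. 3 (our own sketch, not part of the paper), one can verify numerically that the gradient of the cross-entropy loss w.r.t. the logits equals softmax(o) minus the one-hot ground truth:

```python
import torch
import torch.nn.functional as F

vocab_size, k = 10, 3                    # k: index of the ground truth token
o = torch.randn(vocab_size, requires_grad=True)  # logits at one decoding step
loss = F.cross_entropy(o.unsqueeze(0), torch.tensor([k]))
loss.backward()

expected = torch.softmax(o.detach(), dim=-1)
expected[k] -= 1.0                       # p_i - 1(i = k), as in Eq. 3
print(torch.allclose(o.grad, expected, atol=1e-6))  # True
```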
From Eq. 3, it is clear that the gradient that every token o_i in the vocabulary receives is directly related to its generation probability p_i. Therefore, we hypothesize that directly manipulating the generation probabilities of tokens, thereby controlling their gradients, can help us achieve our goal, which is to train the model so that it is encouraged to use novel tokens." }, { "heading": "3.2 OUR METHOD: SCALEGRAD", "text": "To encourage the model to learn to use novel tokens for generation, we can control the gradient to force the model to either increase the probability of novel tokens or decrease the probability of non-novel tokens. Based on this basic idea, we propose an effective training method, keeping it in the simplest form. Specifically, at each decoding step of training, we re-normalize the softmax output (the probability distribution over the vocabulary) in a way such that the model is informed of the current set of novel tokens and encouraged to use them. Assuming that p̃^t is the softmax output at step t and S^t_novel is the corresponding set of novel tokens at that step according to our definition, we re-compute the probability distribution as follows (again omitting t for notational simplicity):

p_i = { γ · p̃_i / Σ_{j=1}^{|V|} p_j ,  if i ∈ S_novel ;   p̃_i / Σ_{j=1}^{|V|} p_j ,  otherwise }   (4)

where γ ∈ (0, 1) is the only hyper-parameter in our method, which controls to what degree we want to encourage the model to focus on novel tokens; a smaller value of γ incurs a more aggressive push to use novel tokens. The effect of this change is that we directly modify the generation probability (after re-normalization) of the novel tokens with a factor of λ_i, such that p_i = λ_i · p̃_i for i ∈ S_novel with λ_i ∈ (0, 1). Similarly, we have p_i = α_i · p̃_i for i ∉ S_novel with α_i > 1.² Consequently, assuming that the ground truth token is indexed with k, the gradient for each token is changed to:

∇_{o_i} L = p_i − 1(i = k) = { λ_i · p̃_i − 1,  if i = k and i ∈ S_novel ;   α_i · p̃_i − 1,  if i = k and i ∉ S_novel ;   λ_i · p̃_i,  if i ≠ k and i ∈ S_novel ;   α_i · p̃_i,  if i ≠ k and i ∉ S_novel }   (5)

²Since α_i · p̃_i and λ_i · p̃_i are new re-normalized probabilities, they are both naturally bounded in [0, 1].

We now discuss why these changes encourage the model to use novel tokens. As mentioned, during training the model tries to decrease the gradient norm to 0 to reach a local minimum. First, for a ground truth token (i.e., i = k), if it is also a novel token, the gradient norm |λ_i · p̃_i − 1| is pushed away from 0, so that the model has to learn to increase the probability p̃_i further to reduce the gradient norm; if it is not a novel token, |α_i · p̃_i − 1| is pushed slightly closer to 0, which still makes the model learn to predict the ground truth, but with relatively lower strength. For non-ground truth tokens (i.e., i ≠ k), when the token is not a novel token, |α_i · p̃_i| increases the gradient norm, so that the model learns to assign a much lower probability p̃_i to reduce it. Similarly, when the token is novel but not a ground truth token, the resulting gradient norm |λ_i · p̃_i| becomes smaller, for which the model only moderately learns to decrease the probability p̃_i to reduce the norm further.
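A minimal PyTorch sketch of this re-normalization (our own illustration of Eq. 4, not the authors' released implementation; the loss is then the usual NLL on the rescaled distribution, and names such as `scalegrad_nll` are assumptions):

```python
import torch

def scalegrad_nll(logits, target, novel_mask, gamma=0.5, eps=1e-10):
    """logits: (vocab,); target: scalar index; novel_mask: (vocab,) bool,
    True for tokens not yet observed in the ground-truth prefix."""
    probs = torch.softmax(logits, dim=-1)
    scaled = torch.where(novel_mask, gamma * probs, probs)  # scale novel tokens
    new_probs = scaled / scaled.sum()                       # re-normalize (Eq. 4)
    return -torch.log(new_probs[target] + eps)

# During training the novel set shrinks as ground-truth tokens are observed:
vocab = 10
logits = torch.randn(vocab, requires_grad=True)
seen = {2, 5}                                # tokens already in the prefix
novel_mask = torch.tensor([i not in seen for i in range(vocab)])
loss = scalegrad_nll(logits, target=7, novel_mask=novel_mask, gamma=0.5)
loss.backward()                              # gradients follow Eq. 5's scaling
```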
While ScaleGrad is derived from the gradient analysis of neural generation models (supervised training), it shares some commonalities with policy gradient methods in Reinforcement Learning, in the sense that both operate by scaling the gradient based on different needs – learning to gain more reward in policy gradient and learning to generate novel tokens in ScaleGrad (Appendix C draws this connection). Also note that the notion of the novel token set can be adapted for different purposes. For example, one can define it to be a set of important tokens (e.g., based on TF-IDF scores) to promote extractiveness or factual correctness in summarization. We leave such explorations for future work." }, { "heading": "4 EXPERIMENTS", "text": "We showcase the performance of ScaleGrad in both open-ended and directed generation tasks. To verify the effectiveness of our approach, for all the experiments below, we use exactly the same hyper-parameters (except for method-specific ones) and setup as the corresponding baseline unless stated otherwise. All the experimental details, such as model hyper-parameters, training and dataset settings regarding reproducibility, can be found in Appendix G. For qualitative assessments, we show examples of generated texts in Table 4 and more in Appendix L." }, { "heading": "4.1 OPEN-ENDED GENERATION", "text": "Setup We consider language modeling and text auto-completion, where we compare the performance of the model trained with ScaleGrad against the models trained with MLE and unlikelihood (UL) training (Welleck et al., 2020), which was recently introduced to mitigate degeneration in open-ended tasks. We follow the same setup as Welleck et al. (2020). Specifically, we fine-tune the pre-trained GPT-2 (Radford et al., 2019) on Wikitext-103 (Merity et al., 2017). The maximum sequence length is fixed to 300 tokens for all the models. Each model is trained for a maximum of 35k iterations and evaluated based on the perplexity on the validation set after every 1k iterations. We report language modeling results on the testset for each model selected according to the perplexity on the validation
Results From the results in Table 1, we notice that in language modeling, the model trained with ScaleGrad (SG) yields a token distribution that is much closer to human, while maintaining a lower perplexity. In particular, compared to the best baseline, SG achieves 1%, 2%, 4% lower repetitions in Rep/16, Rep/32 and Rep/128, respectively, while having 11% lower perplexity. It also uses more unique tokens compared to others (e.g., 3% more compared to UL training). Overall, our method significantly improves the token-level distribution and keeps a high generation quality. In autocompletion, from the quantitative perspective, SG produces texts with much fewer repetitive n-grams compared to MLE and UL. It uses nearly 5.5k more unique words compared to the MLE baseline. Human evaluation We have conducted a user study to verify the quality of generated texts. The study is conducted for two pairs of systems (SG vs. UL, SG vs. MLE). For each pair, we randomly choose the same 100 prefixes for the systems to produce their own continuations and ask two native speakers of English to judge which text is the better continuation of the given prefix in terms of their relevance to the prefix, grammaticality and readability. More details can be found in Appendix D.\nFrom the results in Table 2, we can observe that the texts produced by the models trained with ScaleGrad (SG) are preferred by the human users in most of the cases, i.e., 84.0% and 70.5%, respectively. We also compute the percentage agreement and chance-correlated Gwet’s AC1/gamma coefficient (Gwet, 2008) as inter-user agreement to verify the reliability of the study (details in Appendix D). We see that the agreements are substantial in both measures. Generalizability We further verify the generalizability of SG by evaluating the WikiText-103 finetuned model on PTB (Marcus et al., 1993) and IMDB (Maas et al., 2011) LM datasets. From the results in Appendix F.2, we see that SG generalizes well to these domains. We also show that SG consistently improves degeneration with varied decoding lengths (Appendix F.3).\nTable 2: Human evaluation results for autocompletion. % Agr. is the percent agreement and AC1 denotes Gwet’s AC1/gamma coefficient. Winners are marked in bold.\nWin Rate % Agr. AC1 SG vs MLE 84.0% 84.0% 0.78 SG vs UL 70.5% 79.0% 0.64\nTable 3: Results for image paragraph captioning.\nModels CIDEr MLE w/o 3-block 10.51 UL w/o 3-block (α=0.5) 14.65 SG w/o 3-block (γ=0.5) 19.42 MLE w/ 3-block 22.77 UL w/ 3-block (α=0.5) 22.25 SG w/ 3-block (γ=0.5) 24.62\nTable 4: An example of greedy generated continuations for the same prefix.\nPrefix: You’s patriotism, and Mei’s reflections on the quotidian are a few examples. More broadly, Du Fu’s work in transforming the lushi from mere word play into “a vehicle for serious poetic utterance”\nMLE: is a major contribution to the genre. The first of the three major works of the genre, was written in the early 1920s, and was the first of the three to be published in English. It was published in the United States in the United States, and was translated into French in the 1930s. The work was the first of the three\nUL: is a major influence on the modern Chinese novel. The modern Chinese novel is often characterized as being “literary in nature”, but it is not limited to literary genres. The modern Chinese novel has been described as “a literary genre that has been influenced by the Western literary tradition”. 
The modern Chinese novel has been described as “a literary genre that has been influenced by the Western literary tradition”\nSG: is a powerful example of how the Chinese literary tradition can be used to explore themes of nationalism and self determination. In addition to his own poetry, Du Fu wrote several works of prose. The first was a collection of short stories called “The Story of the Three Sisters”, which was published in 1885. It was followed by a series of novels, including the first two volumes of the Song dynasty epic poem “The Three Sisters”" }, { "heading": "4.2 DIRECTED GENERATION", "text": "For directed generation, we consider two tasks: image paragraph captioning and text summarization." }, { "heading": "4.2.1 IMAGE PARAGRAPH CAPTIONING", "text": "Setup We use the captioning model proposed by Melas-Kyriazi et al. (2018) as the baseline, which comprises a CNN encoder that is pre-trained for object detection and a 1-layer LSTM decoder. The models are trained and evaluated on the paragraph captioning dataset, Visual Genome (Krause et al., 2017). We train the model with SG and compare it to the ones trained with MLE and UL. The performance is measured by CIDEr (Vedantam et al., 2015), which computes TF-IDF weighted ngram overlaps between the model generated captions and the reference captions. We follow MelasKyriazi et al. (2018) to apply greedy inference since beam search did not yield any further gain.\nResults Table 3 shows the CIDEr scores for different training methods on Visual Genome testset with and without tri-gram blocking (Paulus et al., 2018) during inference. Without tri-gram blocking, MLE produces texts that are full of repetitive phrases (see Appendix L for examples), which leads to a low CIDEr score. When UL or SG is incorporated, the performance has been notably improved from 10.51 to 14.65 and 19.42, respectively.3 When tri-gram blocking is applied, our method is still capable of yielding 1.85 point improvement. This is because SG further improves the token-level degeneration on top of tri-gram blocking. In contrast, the model trained with UL has a slightly worse CIDEr score compared to the MLE baseline. We analyze n-gram level degeneration further in §5.2." }, { "heading": "4.2.2 ABSTRACTIVE TEXT SUMMARIZATION", "text": "Setup We use the abstractive summarization model BertSum (Liu & Lapata, 2019) as our baseline, which adopts a Transformer architecture to take advantage of pre-trained BERT (Devlin et al., 2019) as the encoder. At the first stage, the encoder is trained with an extractive summarization objective (binary classification for sentence selection). At the second stage, it initializes the decoder randomly and (re)trains the entire encoder-decoder model with an abstrac-\ntive (or generative) objective. For our experiments, we take the encoder that was trained at the first stage and train the entire (abstractive) model with different training methods (MLE, UL and SG) using the default training setup on two benchmark datasets: CNN/DM (Hermann et al., 2015; Nallapati et al., 2016) and NYT50 (Durrett et al., 2016). During inference, length normalization (Wu et al., 2016), tri-gram blocking and beam search (beam size = 5) are used as in (Liu & Lapata, 2019).\n3Although UL was originally proposed for open-ended generation, it is applicable to directed generation. We did the same scale of hyper-parameter searching for UL. 
Details can be seen in Appendix E.\nWe evaluate the performance of the models with the standard F1-based ROUGE (Lin, 2004) scores (R-1, R-2, R-L) and a model-based evaluation, MoverScore (Zhao et al., 2019), which computes the Word Mover's Distance (WMD) between the reference summary and the generated summary based on the representations from BERT. We report the 1-gram MoverScore (WMD-1), which has been shown to have higher correlation with human judgments than other metrics (Zhao et al., 2019). Results From Table 5, we notice that on CNN/DM, the model trained with SG outperforms the models trained with MLE and UL when measured by ROUGE. In WMD-1, UL yields similar performance to ours. Both SG and UL further improve over the MLE baseline. The improvements imply that token-level degeneration may still exist even when tri-gram blocking is applied. On NYT50, UL underperforms MLE, while our method improves in all measures. We discuss the possible reason why UL underperforms from a gradient perspective in §5.4." }, { "heading": "5 ANALYSIS", "text": "In this section, we perform a series of analyses to gain more insights about our method." }, { "heading": "5.1 OPEN-ENDED GENERATION", "text": "Compatibility with decoding strategies One advantage of our method is that it is compatible with decoding-based methods. One can choose different decoding strategies based on the specific needs. Table 6 provides the results of different decoding strategies used along with our SG training for text auto-completion (results for more variations are in Appendix H). We observe that beam search, even with a larger beam size, is not effective in mitigating the degeneration issue, which accords with the observation in (Holtzman et al., 2020). As expected, stochastic decoding methods, i.e., top-k and nucleus (top-p) sampling, help to further reduce repetition. This sets a good example of combining training and decoding strategies for the task at hand." }, { "heading": "5.2 DIRECTED GENERATION", "text": "Comparison with stochastic decoding Although top-p and top-k sampling have been proven successful in open-ended generation, to our knowledge, none has tested them in directed generation tasks. In order to see if they could lead to the same improvements as ScaleGrad, we conduct additional experiments with the BertSum summarization model, whose underlying language model is more mature, due to the involvement of BERT, compared to the image paragraph captioning model. For the interested readers, we also provide the results of stochastic decoding on image paragraph captioning in Appendix I.\nTable 7 shows the performance of BertSum trained with MLE on the NYT50 testset when stochastic decoding is applied during inference. Since ROUGE-1 measures the exact 1-gram overlaps between reference and generated summaries, it may not be sufficient to evaluate the performance of stochastic decoding methods, which may generate more diverse output while conveying the same meaning. Therefore, we also report the MoverScore, which is capable of considering the semantic similarity rather than just n-gram overlaps. However, both the ROUGE and MoverScore results in Table 7 lead to the conclusion that stochastic decoding methods significantly lower the performance compared to the standard beam search. This implies that they may not be a good fit for directed generation tasks. 
In contrast, our method possesses a wider applicability in mitigating degeneration issues.\nn-gram degeneration To investigate further how SG minimizes degeneration and helps to improve the performance in automatic measures, we compute the n-gram repetition ratios of the outputs from the image captioning model (Melas-Kyriazi et al., 2018) and report the numbers in Table 8.4 Compared to humans, the MLE baseline has significantly higher repetitions, thus having the lowest CIDEr score (Table 3). With SG, the model yields a much better repetition ratio, which explains the notable performance boost in CIDEr. Tri-gram blocking resolves the issue of 3- or higher n-gram degeneration in a hard-coded way, improving CIDEr significantly. However, the token and 2-gram repetitions still remain high in MLE with tri-gram blocking, leaving room for improvement. When both tri-gram blocking and SG are applied, the generated texts have the lowest and most human-like repetitions." }, { "heading": "5.3 HYPER-PARAMETER SENSITIVITY", "text": "Towards better usage and understanding of ScaleGrad, we show how the key metrics in language modeling change with the hyper-parameter γ in Figure 1. As discussed, a smaller value of γ incurs a stronger push to use novel tokens, giving higher perplexity and more unique tokens. In general, γ can be chosen based on the performance of the baseline model. If the baseline produces many repetitive tokens/phrases (e.g., image paragraph captioning experiments), a smaller value of γ should be used. Conversely, in tasks with less degeneration (e.g., summarization experiments), a larger γ can be used to further reduce the unigram- and bigram-level degeneration without affecting the perplexity much." }, { "heading": "5.4 DISCUSSION ON THE UNLIKELIHOOD TRAINING FROM A GRADIENT PERSPECTIVE", "text": "Experimental results in the directed generation tasks empirically reveal that unlikelihood (UL) training could not bring about improvements consistently. In this section, we analyze UL from the perspective of its gradients and contrast this with ours. For UL, the gradient of the total loss (Eq. 2) with a single negative token w.r.t. the logit $o_i$ is:\n$$\nabla_{o_i}\mathcal{L} = m_i \cdot p_i - \mathbb{1}(i = k), \quad \text{where } m_i = \begin{cases} 1 - \alpha\frac{p_{neg}}{1 - p_{neg}} & \text{if } i \neq i_{neg} \\ 1 + \alpha & \text{if } i = i_{neg} \end{cases} \tag{6}$$\nwhere $p_i = [\mathrm{softmax}(o)]_i$, $p_{neg}$ is the probability of the negative-candidate token with index $i_{neg}$, and $\mathbb{1}(i = k)$ is the indicator function with $k$ being the index of the ground truth token (see the original paper for derivation). From our previous discussion in §3.1, we know that as the gradient-based optimization proceeds, the gradient converges to a small value that is close enough to 0. Therefore, with a preset hyper-parameter, the probability of the ground truth token $p_k$ should always increase as the gradient norm of the loss w.r.t. its logit (i.e., $|\nabla_{o_k}\mathcal{L}|$) decreases, whether the ground truth token is repetitive (negative) or not. Should this not be the case, i.e., should the generation probability of the ground truth token $p_k$ decrease as the gradient norm $|\nabla_{o_k}\mathcal{L}|$ decreases, the model will fail to learn to predict the ground truth tokens correctly (since $p_k$ has decreased), which in turn hurts the generation quality.\n4Since Melas-Kyriazi et al. (2018) used a soft tri-gram blocking, some of the duplicate tri-grams still remain.\nSince the ground truth is always a non-negative token by definition (i.e., $i = k \neq i_{neg}$), the gradient norm from Eq. 6 is $|\nabla_{o_k}\mathcal{L}| = |\mu_k \cdot p_k - 1|$ where $\mu_k = 1 - \alpha\frac{p_{neg}}{1 - p_{neg}}$. 
We see that when $p_{neg} > \frac{1}{\alpha + 1}$ (e.g., when $\alpha = 1$ and $p_{neg} > 0.5$), $\mu_k$ becomes negative, and the gradient norm becomes $|\nabla_{o_k}\mathcal{L}| = \left| -|\mu_k| \cdot p_k - 1 \right| = |\mu_k| \cdot p_k + 1$. In this case, $p_k$ decreases as the gradient norm decreases, which contradicts the optimization principle we mentioned earlier. To be more specific, in order to decrease the gradient norm as the training proceeds, the model will have to reduce the value of $p_k$, which goes against the goal of learning. Thus, UL becomes less effective in such special cases (subject to the choice of the value of α). In contrast, the gradient analysis in Eq. 5 shows that ScaleGrad does not have such properties in learning to predict ground truth tokens. (A numerical check of this gradient behaviour is sketched after Appendix F.1 below.) In our earlier exploration, we modeled the novel tokens as an auxiliary loss, which has similar properties to UL (Appendix J)." }, { "heading": "6 CONCLUSION", "text": "We have introduced a novel training method, called ScaleGrad, which directly modifies the gradient of the standard MLE objective to remedy text degeneration issues. The improvements, verified by both automatic metrics and human evaluation against the baselines in extensive experiments across different tasks in open-ended and directed generation and different architectures (i.e., LSTM and Transformer), demonstrate the effectiveness and generalizability of our method. Further analysis shows that ScaleGrad yields token distributions that are much closer to those of human-written texts compared to the baselines. Our method provides a good alternative to current training strategies." }, { "heading": "A DERIVATIONS", "text": "Derivation of the gradient of the loss w.r.t. the logit We follow the same notation as in the main paper. At time step $t$, assume that the pre-softmax scores (i.e., logits) are denoted as $o^t$ over the vocabulary $V$, where $o^t_i$ denotes the score for the token with index $i$ in the vocabulary. Similarly, we have $p^t_i = [\mathrm{softmax}(o^t)]_i$. Let $k$ denote the index of the ground truth token at step $t$.\nThe cross entropy loss at step $t$ is given as (we omit $t$ for notational simplicity):\n$$\mathcal{L} = -\sum_i y_i \log p_i \tag{7}$$\nwhere $y_i = 1$ if $i = k$, otherwise $y_i = 0$. Thus the loss function can be rewritten as:\n$$\mathcal{L} = -\log p_k = -\log\left(\frac{e^{o_k}}{\sum_j e^{o_j}}\right) = \log\left(\sum_j e^{o_j}\right) - o_k \tag{8}$$\nTherefore, we can derive the partial derivative of the loss w.r.t. the logit $o_i$ as follows.\n$$\nabla_{o_i}\mathcal{L} = \nabla_{o_i}\log\left(\sum_j e^{o_j}\right) - \nabla_{o_i} o_k = \frac{1}{\sum_j e^{o_j}} \cdot \nabla_{o_i}\left(\sum_j e^{o_j}\right) - \mathbb{1}(i = k) = \frac{e^{o_i}}{\sum_j e^{o_j}} - \mathbb{1}(i = k) = p_i - \mathbb{1}(i = k) \tag{9}$$" }, { "heading": "B NOVEL TOKEN SET ILLUSTRATION", "text": "Figure 2 shows an example of how the novel token set changes when the model is learning to predict the sentence “people who are interested ..”. At the beginning, the novel token set $S_{novel}$ is equivalent to the vocabulary $V$. The size of the novel token set shrinks as the decoding proceeds." }, { "heading": "C CONNECTION WITH POLICY GRADIENT OBJECTIVE IN REINFORCEMENT LEARNING", "text": "The text generation agent can also be trained with a policy gradient method with the objective of maximizing the expected reward (or minimizing the expected negative reward) per time-step.\n$$\mathcal{L}^t_{RL} = -\mathbb{E}_{y^t_i \sim \pi_\theta}\, r(y^t_i) = -\sum_{y^t_i \in V} p_\theta(y^t_i | y_{<t}, x)\, r(y^t_i) \tag{10}$$\nwhere $r(y^t_i)$ is the reward for token $y^t_i$ sampled from the vocabulary $V$ (i.e., the action space) using the current policy $\pi_\theta = p_\theta(y_t | y_{<t}, x)$, and $x$ is the input text. The policy gradient w.r.t. 
the logit $o_m$ can be expressed as follows (omitting the superscript $t$).\n$$\nabla_{o_m}\mathcal{L}_{RL} = -\sum_{y_i \in V} r(y_i)\,\nabla_{o_m}\log p_\theta(y_i | y_{<t}, x) = \sum_{y_i \in V} r(y_i)\,(p_m - \mathbb{1}(m = i)) \tag{11}$$\nUnder the reinforcement learning setup, the (sampled) tokens with higher rewards will be “pushed up”, or increased in probability, while tokens resulting in lower rewards will be suppressed. From the perspective of gradient analysis, Eq. 11 shows that a higher reward leads to a larger value of the gradient norm $|r(y_i)(p_m - \mathbb{1}(m = i))|$, which in turn forces the model to learn to assign a higher probability $p_m$ to the sampled token (i.e., $m = i$) in order to reduce the norm $|r(y_i)(p_m - 1)|$. Meanwhile, the model also learns to assign lower probabilities $p_m$ to other tokens in the vocabulary (i.e., $m \neq i$) to reduce the norm $|r(y_i)p_m|$. In this specific example, reinforcement learning essentially works by scaling the gradient based on the reward for each sampled token, while our method (Eq. 5) scales the gradient for each token based on the information of the novel tokens. Both methods share the same fundamental idea that we can train the model to serve specific needs by scaling the gradient." }, { "heading": "D HUMAN EVALUATION DETAILS", "text": "We conduct the human evaluation for two pairs of systems, i.e., SG vs. MLE and SG vs. UL. For each pair, the models generate their own continuations based on the same 100 randomly chosen prefixes. Two native speakers of English are then asked to evaluate the generated texts independently. During the study, users are instructed to judge which generated text is a better continuation of the prefix based on the overall quality (e.g., readability, relevance to the prefix, grammar, and fluency).\nThe Win Rate in Table 2 is calculated as the total number of times that the two users prefer the texts produced by the winner divided by the total number of cases in the evaluation (2 × 100 = 200). To get a reliable human study, we also compute the percentage agreement and the chance-corrected measure, Gwet’s AC1/gamma coefficient (Gwet, 2008), as the inter-rater agreement. Gwet’s AC1/gamma coefficient overcomes the issue where traditional measures, such as Cohen’s Kappa, are not robust to skewed distributions of rankings. Figure 3 shows the interface for the human evaluation study." }, { "heading": "E HYPER-PARAMETER SEARCH DOMAIN FOR DIRECTED GENERATION", "text": "In the experiments with the directed generation tasks, we conduct the same scale of hyper-parameter search for unlikelihood training (UL) as for our proposed ScaleGrad (SG) on the validation set. Specifically, for the hyper-parameter in length normalization (beam search decoding), we use β ∈ {0.0, 0.5, 1.0, 1.5, 2.0} for text summarization and β ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} for image paragraph captioning. For the model-specific hyper-parameters, α in UL is chosen from {0.5, 1.0}5 and γ in SG is chosen from {0.5, 0.8}.\n5In open-ended generation, α = 1 is recommended by the authors, while in our initial exploration for directed generation, we tried other values and found that these two reduce degeneration to reasonably diverse degrees." }, { "heading": "F EXPERIMENTAL RESULTS ON OPEN-ENDED GENERATION", "text": "" }, { "heading": "F.1 FULL EXPERIMENTAL RESULTS ON WIKITEXT-103", "text": "We present the full experimental results on WikiText-103 (Merity et al., 2017) for open-ended generation in Table 9. All the numbers are averaged over 3 runs with different random seeds and shown together with standard deviations." 
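As referenced in §5.4, the closed-form UL gradient in Eq. 6 and the sign flip of $\mu_k$ can be checked numerically. The following is a minimal sketch, assuming PyTorch; the toy setup and variable names are ours for illustration, not the authors' released code.

```python
# Numerical check of Eq. 6 (UL gradient) and the mu_k sign flip from Sec. 5.4.
import torch

V, k, i_neg, alpha = 5, 0, 3, 1.0          # vocab size, target index, negative index
o = torch.randn(V, requires_grad=True)      # toy logits
p = torch.softmax(o, dim=0)

# UL total loss at one step: cross entropy + unlikelihood term for one negative token
loss = -torch.log(p[k]) - alpha * torch.log(1.0 - p[i_neg])
loss.backward()

with torch.no_grad():
    # Closed-form gradient m_i * p_i - 1(i = k), with m_i as defined in Eq. 6
    fill = (1.0 - alpha * p[i_neg] / (1.0 - p[i_neg])).item()
    m = torch.full((V,), fill)
    m[i_neg] = 1.0 + alpha
    g = m * p
    g[k] -= 1.0

print(torch.allclose(o.grad, g, atol=1e-5))   # True: autograd matches Eq. 6
# When p_neg > 1/(alpha + 1), mu_k = m[k] turns negative: the degenerate case above.
print(p[i_neg].item() > 1.0 / (alpha + 1.0), m[k].item() < 0)
```

Running this with different random seeds confirms that whenever the first printed condition holds, $\mu_k$ is negative and the norm $|\nabla_{o_k}\mathcal{L}|$ grows with $p_k$, matching the analysis in §5.4.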
}, { "heading": "F.2 ON GENERALIZABILITY OF SCALEGRAD", "text": "To further verify the generalizability (i.e., different datasets and domains) of our method in openended generation, apart from WikiText-103 (Merity et al., 2017), we evaluate the models on two other language modeling datasets: Penn TreeBank or PTB (Marcus et al., 1993) and IMBD (Maas et al., 2011). In particular, after fine-tuning GPT-2 with different training strategies (MLE, SG and Ul) on WikiText-103 training data, we test the language modeling and auto-completion performance with the same setting described in §4.1. For PTB, we use the standard testset, while for IMDB, we randomly sample 500 movie reviews from the dataset.\nTable 10 shows the experimental results on the PTB testset, from which we can see that SG consistently improves over the MLE baseline in degeneration while possessing an acceptable increase in perplexity, and it outperforms UL consistently.\nIn Table 11, we show the experimental results on IMDB movie reviews and observe similar performance trending as in the experiment on PTB testset. From the two experiments, we can draw the conclusion that our method, SG, is capable of generalizing well to different datasets and domains. Examples of generated text for auto completion task can be found in Appendix L." }, { "heading": "F.3 AUTO COMPLETION WITH DIFFERENT DECODING LENGTHS", "text": "In Figure 4, we show the Rep-1 of generated text from the auto completion task with the constraints in different decoding (continuation) lengths. We observe that compared to MLE counterpart, SG yields consistent improvements on Rep-1, or token-level degeneration, regardless the different decoding lengths, which again verifies the effectiveness and generalizability of our method." }, { "heading": "G EXPERIMENTAL DETAILS", "text": "In this section, we present the details of the datasets used in our experiments as well as the necessary experimental setup. All the experiments were conducted with a single GPU on our machine (CPU: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz; GPU: NVIDIA RTX 2080Ti).\nFor each task in our experiments, we use the same model architecture and train it with different objectives (i.e., MLE, ScaleGrad and unlikelihood). The hyper-parameters that are used for different training objectives in the same task are exactly same, except for the ones described in Appendix E. We list the key hyper-parameters in this section. Though they may not be exhaustive, all the hyperparameters are clearly presented in our source code. In addition, all the hyper-parameters that are not listed in this section remain unchanged from their corresponding default setup." }, { "heading": "G.1 OPEN-ENDED GENERATION", "text": "Dataset The WikiText-103 (Merity et al., 2017) is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The training, validation and test sets contain 104m, 218k and 245k tokens, respectively.\nExperiments For all the experiments, we use the same setup and the same hyper-parameters as listed in Table 12, except for the method-specific hyper-parameters. We load the GPT-2 medium and fine-tune it on WikiText-103 with a maximum of 35k iterations and select the model based on the validation perplexity." }, { "heading": "G.2 SUMMARIZATION", "text": "Dataset We use CNN/DM (Hermann et al., 2015; Nallapati et al., 2016) and NYT50 (Durrett et al., 2016) in our experiments for text summarization. Table 13 shows the dataset statistics in details." 
}, { "heading": "Dataset Training Size Validation Size Test Size", "text": "Experiments The models are taken from (Liu & Lapata, 2019) and we train the models for the abstractive summarization with MLE, unlikelihood training and ScaleGrad on CNN/DM and NYT50. We list the hyper-parameters that we used in Table 14.\nG.3 IMAGE PARAGRAPH GENERATION\nDataset We use the image paragraph captioning corpus Visual Genome dataset, introduced by Krause et al. (2017). The dataset contains 14,575 training, 2,487 validation, and 2,489 testing images. The average length of description paragraph is 67.50 tokens.\nExperiments We follow the same experimental setup as in (Melas-Kyriazi et al., 2018). We train the model with different objectives and choose the model for testing based on the validation loss. During generation, tri-gram blocking and length-normalization are applied. Hyper-parameters that are used in our experiments are listed in Table 15." }, { "heading": "H EXPERIMENTAL RESULTS OF DIFFERENT DECODING STRATEGIES FOR AUTO-COMPLETION.", "text": "" }, { "heading": "Approaches Rep-1 Rep-2 Rep-3 uniq-w", "text": "Table 16 shows the results for the auto-completion task when we train the model with ScaleGrad and infer with different decoding strategies." }, { "heading": "I STOCHASTIC DECODING FOR IMAGE PARAGRAPH CAPTIONING", "text": "We apply different stochastic decoding strategies for the MLE baseline on image paragraph captioning and report the results in Table 17. The experimental results demonstrate that stochastic decoding strategies do not work well in directed generation tasks, which is consitent with our findings in summarizaiton experiments." }, { "heading": "Models CIDEr", "text": "" }, { "heading": "J NEURAL NOVEL LIKELIHOOD TRAINING", "text": "In our earlier exploration, we experimented with a regularization loss based on the novel tokens, which is similar to UL. We can call it novel likelihood (NL). The total loss at time step t can be expressed as follows.\nLt = LtMLE + LtNL = − log pθ(yt|y<t)− α · ∑ c∈Ct log pθ(c|y<t) (12)\nwhere α is a hyper-parameter and Ct is the set of novel tokens at time step t, which is the same as in ScaleGrad (§3.2), i.e., Ct = V \\ {y1, . . . , yt−1} with V being the vocabulary. The NL loss LNL boosts the probabilities of novel tokens. In earlier empirical evaluation on language model, it yielded similar performance as UL. We thus also analyze the method from a gradient perspective. According to Eq. 12, it is easy to show that at time step t, the gradient of the overall loss w.r.t. the logit oi for one single novel token is (omitting t for simplicity):\n∇oiL = pi − 1(i = k) + α · (pi − 1(i = in)) = (1 + α) · pi − 1(i = k)− α · 1(i = in) (13)\nwhere pi = [softmax(o)]i, in is the index of the novel token and k is the index of the target token. We can see that when the target token is not a novel token, i.e., i = k 6= in, the gradient norm becomes |∇oiL| = |(1 + α) · pi − 1|. When pi > 11+α , the norm decreases as pi increases, which contradicts the gradient optimization principle. Thus, the NL loss has the similar property as the UL loss." }, { "heading": "K HYPER-PARAMETER SENSITIVITY", "text": "To fully present the sensitivity of Rep/l to the hyper-parameter, we further show how the Rep/l (i.e., l=16, 32 and 128) change with γ in Figure 5." 
}, { "heading": "L EXAMPLES", "text": "In the following, we show the examples of generated texts in three tasks: auto-completion (Table 18 and Table 19), image paragraph captioning (Table 20 and Table 21) and text summarization (Table 22, Table 23, Table 24 and Table 25). In addition, Table 26 and Table 27 show the example of auto completion on PTB testset and movie reviews from IMDB dataset." }, { "heading": "Prefix", "text": "Robert Boulter is an English film, television and theatre actor. He had a guest starring role on the television series The Bill in 2000. This was followed by a starring" }, { "heading": "MLE", "text": "role in the film The Last of the Mohicans in 2002. He has appeared in the television series The X Factor, The X Factor Australia, The X Factor New Zealand, The X Factor South Africa, The X Factor New Zealand, The X Factor South Africa, The X Factor New Zealand, The X Factor New Zealand, The X Factor New Zealand, The X Factor New Zealand, The X Factor New Zealand, The X Factor New Zealand, The X Factor New Zealand, The X Factor New Zealand" }, { "heading": "Unlikelihood", "text": "role in the film The Last of the Mohicans, which was released in 2002. He also appeared in the television series The X Factor, where he played the role of a DJ. In 2005, he starred in the film The Last of the Mohicans, which was released in 2006. He also appeared in the television series The X Factor, where he played the role of a DJ. In 2008, he starred in the television series The X Factor, where he played the role" }, { "heading": "ScaleGrad", "text": "role in the BBC’s comedy series The Office in 2002. In 2003, he appeared in the BBC’s crime drama series The Yard, playing a police officer who is caught up in a murder investigation. He also appeared in the BBC’s crime drama series The X Factor, playing a man who is accused of murdering his wife. In 2004, he starred in the BBC’s comedy series The Office, playing a fictional version of himself. He has appeared in several" }, { "heading": "Prefix", "text": "You’s patriotism, and Mei’s reflections on the quotidian are a few examples. More broadly, Du Fu’s work in transforming the lushi from mere word play into “a vehicle for serious poetic utterance”" }, { "heading": "MLE", "text": "is a major contribution to the genre. The first of the three major works of the genre, was written in the early 1920s, and was the first of the three to be published in English. It was published in the United States in the United States, and was translated into French in the 1930s. The work was the first of the three" }, { "heading": "Unlikelihood", "text": "is a major influence on the modern Chinese novel. The modern Chinese novel is often characterized as being “literary in nature”, but it is not limited to literary genres. The modern Chinese novel has been described as “a literary genre that has been influenced by the Western literary tradition”. The modern Chinese novel has been described as “a literary genre that has been influenced by the Western literary tradition”" }, { "heading": "ScaleGrad", "text": "is a powerful example of how the Chinese literary tradition can be used to explore themes of nationalism and self determination. In addition to his own poetry, Du Fu wrote several works of prose. The first was a collection of short stories called “The Story of the Three Sisters”, which was published in 1885. 
It was followed by a series of novels, including the first two volumes of the Song dynasty epic poem “The Three Sisters”" }, { "heading": "MLE w/o tri-gram blocking", "text": "There are two horses in a field. The horses are brown and white. The horses are brown and white. The horses are brown and white. The horses are brown and white. The horses are brown and white. The horses are brown and white. The horses are brown. The horses are brown. The horses are brown. The horses are brown. The horses are brown. The horses are brown. The horses are brown. The horses are brown. The horses are brown . The horses are brown. The horses are brown. The horses are brown. The horses are brown. The horses are brown. The horses are brown." }, { "heading": "MLE w/ tri-gram blocking", "text": "There are two horses in a field. The horses are brown and white. The horse on the right is brown and brown. The horse is brown. The other horse is white. The grass is green and green. The sky is very and green. There are trees in the background. The trees are green and healthy. The horses are black. The sun is shining brightly." }, { "heading": "ScaleGrad w/ tri-gram blocking", "text": "Two horses are standing in a field. The field is covered with green grass and weeds. The horse is brown and white. The horses are brown and white. There is a large tree behind the horses. The sky is blue and clear. There are many trees in the background. The grass is green. The sun is shining brightly. The sky is mostly cloudy." }, { "heading": "MLE w/o tri-gram blocking", "text": "A train is on the tracks. The train is yellow and yellow. The train is red and white . The train is red and white. The train is red and white . The train is on the train. The train is yellow and yellow. The train is blue and white . The train is on the tracks. The train is yellow and yellow . The train is blue and white. The train is blue and white. The train is on the tracks . The train is yellow. The train is blue. The train is blue. The train is blue. The train is blue. The train is blue. The train is blue. The train is blue. The train is blue. The train is blue. The train is blue." }, { "heading": "MLE w/ tri-gram blocking", "text": "A train is on the tracks. The train is yellow and yellow. The train has a yellow stripe on the front. The front of the train is red. The sky is blue. There are many windows on the train. There is a train on the platform. The platform is made of metal. The is a platform on the side of the train. The are many tracks on the train. There are wires on the ground. There is a building behind the train tracks. There is a large building behind the train." }, { "heading": "ScaleGrad w/ tri-gram blocking", "text": "A train is on the tracks. There are two sets of tracks next to the train. The train is white and yellow. There is a large white building behind the trains." }, { "heading": "MLE", "text": "swiss sen dick marty reports that central intelligence agency operates secret prisons run by american in poland and romania from 2003 to 2006. says prison were operated exclusively by americans in poland and." }, { "heading": "Unlikelihood", "text": "swiss sen dick marty reports that secret prisons run by central intelligence agency in eastern europe, with information he says is gleaned from anonymous intelligence agents. report is prepared by swiss senator investigating cia operations for council of europe, 46 - nation rights group. scathing report says prison were operated exclusively by americans in poland and romania from 2003 to 2006." 
}, { "heading": "ScaleGrad", "text": "dick marty, swiss senator investigating cia operations for council of europe, gives bleak description of secret prisons run by central intelligence agency in eastern europe, with information he says is gleaned from anonymous intelligence agents. report says prisons were operated exclusively by americans in poland and romania from 2003 to 2006." }, { "heading": "MLE", "text": "the 31-year-old has been slammed for ‘promoting the cruel and unnecessary whaling industry’. the 31-year-old boasted of eating minke whale in norway, to the horror of environmental campaigners who say she is helping promote an ‘unimaginably cruel’ trade. the hunting and importation of whale meat is banned in britain , as it is in most parts of the world." }, { "heading": "Unlikelihood", "text": "the 31-year-old boasted of eating minke whale at juvet landscape hotel in norway. the 31-year-old has been slammed for ‘promoting the cruel and unnecessary whaling industry ’¡q¿environmental campaigners say she is helping promote an ‘ unimaginably cruel ’ trade." }, { "heading": "ScaleGrad", "text": "duchess of cambridge’s brother-in-law has led a personal crusade against poaching and protecting wildlife. pippa middleton boasted of eating minke whale in norway, conservation group said she is helping promote ‘promoting the cruel and unnecessary whaling industry’." }, { "heading": "MLE", "text": "protesters are calling for president dilma rousseff to be impeached. rousseff’s supporters call for the president to be impeachment¡q¿they say there hasn’t been any evidence she was involved in the corruption scandal." }, { "heading": "Unlikelihood", "text": "protesters are calling for president dilma rousseff to be impeached. there is a number of issues at play. one of the biggest is an investigation into a multimillion-dollar kickback scheme at the state-run oil company petrobras." }, { "heading": "ScaleGrad", "text": "president dilma rousseff is facing an economic downturn and a massive bribery scandal. rousseff has defended her right to protest and acknowledged the need to clean up corruption at petrobras." }, { "heading": "MLE", "text": "new york city plant - o - ramachan pine, which grew among dinosaur 200 million years ago, is named after dr david noble, who rappelled into deep gorge in blue mountains of wollemi national park, ten miles west of sydney, australia. photos. map. pine was discovered until 1994, when bush delay and national parks officer david noble tried rappelled into woods of wollemi, which he had never seen before. scientists had evidence of wollemi pine’s existence, but only from fossils." }, { "heading": "Unlikelihood", "text": "nicholas d kristof op-ed column on plight of wollemi pine, one of oldest on earth who grew among dinosaur 200 million years ago. notes that it was not discovered until 1994, when bush rescue officer david noble rappelled into deep gorge in blue mountains of sydney, australia, and came back up with branch of tall tree that he never saw before." }, { "heading": "ScaleGrad", "text": "nicholas d kristof op-ed column on new species of wollemi pine, which grew among dinosaur 200 million years ago and was discovered until 1994, when bush delay and national parks officer david noble rappelled into deep gorge in blue mountains of wollemi national park, 120 miles west of sydney, australia, and discovered new species, wollemia nobilis, named after noble." 
}, { "heading": "Prefix", "text": "last resort who were criticized after the 1987 crash – once again couldn’t handle the selling pressure. Big investment banks refused to step up to the plate to support the beleaguered floor traders by buying big blocks of stock, traders say. Heavy selling" }, { "heading": "MLE", "text": "pressure was also felt by the Wall Street banks. The Wall Street banks were forced to sell more stock to hedge funds and other investors, and to buy more stock to hedge against losses. The Wall Street banks were forced to sell more stock to hedge funds and other investors, and to buy more stock to hedge against losses. The Wall Street banks were forced to sell more stock to hedge funds and other investors, and to buy more stock to hedge against losses." }, { "heading": "Unlikelihood", "text": "pressure was also felt by the banks themselves. In 1987, the New York Federal Reserve Bank, which had been the largest bank in the United States since the 1930s, was forced to sell $1.5 billion worth of stock. The bank’s stock price fell to $0.001, and it was forced to close its New York branch. The bank’s stock price fell to $0.001, and it was forced to close its New" }, { "heading": "ScaleGrad", "text": "by the big banks led to a sharp drop in prices, and the market collapsed. In 1992, the Federal Reserve Bank of New York issued a report on the financial crisis, which concluded that the Fed’s actions had not been sufficient to prevent the collapse of the housing market. The report recommended that the Fed increase its balance sheet by $1.5 trillion (equivalent to about $2.5 trillion today), but this was not implemented." }, { "heading": "Prefix", "text": "One of the other reviewers has mentioned that after watching just 1 Oz episode you’ll be hooked. They are right, as this is exactly what happened with me. The first thing that struck me about Oz was its brutality and unflinching scenes of violence." }, { "heading": "MLE", "text": "I was not expecting anything like that in a show about a family. I was expecting a show about a family, but I was expecting something more. I was expecting a show about a family, but I was expecting something more. I was expecting a show about a family, but I was expecting something more. I was expecting a show about a family, but I was expecting something more. I was expecting a show about a family, but I was expecting something more. I was expecting a show" }, { "heading": "Unlikelihood", "text": "I was immediately hooked. The violence was so graphic that I was completely taken aback by how it was handled. I was also struck by the fact that the show was so well-written that it was almost impossible to not be entertained. The second thing that struck me about Oz was its ability to make me laugh. I was so taken aback by the violence that I couldn ’t help but laugh. The violence was so graphic that I was completely taken aback by how it" }, { "heading": "ScaleGrad", "text": "It was also very funny, but not funny enough to make me laugh out loud. The second thing that struck me was the fact that the show’s characters were so well developed that they were able to carry their own weight without being annoying or annoying. In its original broadcast, “Oz” was watched by 9.5 million viewers, according to Nielsen ratings. It was the highest rated" } ]
2020
null
SP:2062ab9c65e0d10e5d6d0112aaeaca208f131afd
[ "In this paper, the authors augment the instance-level self-supervised learning with cluster-aware learning mechanism during the training procedure. Specifically, for each training batch, the authors project the instances into a clustering space and then utilize a cluster-aware contrastive loss to push the augmented samples from the same instance to belong to the same cluster, otherwise for different instances. To ensure the clustering not to collapse into a single or a few cluster to find the trivial solutions, the authors further add a penalization item keep the entropy of clustering assignment be uniform to some extent. The experimental results demonstrate that the proposed method can improve the representation learning performance over SoTA methods on several datasets, while also outperforms the previous methods on clustering task. Further ablation studies show that the loss is effective to ensure the learned representation more discriminative and clusterable." ]
Learning visual representations using large-scale unlabelled images is a holy grail for most computer vision tasks. Recent contrastive learning methods have focused on encouraging the learned visual representations to be linearly separable among the individual items regardless of their semantic similarity; however, this could lead to a sub-optimal solution if a given downstream task is related to non-discriminative ones such as cluster analysis and information retrieval. In this work, we propose an advanced approach to consider the instance semantics in an unsupervised environment by simultaneously i) Contrasting batch-wise Cluster assignment features and ii) Bootstrapping INstance representations without considering negatives, referred to as C2BIN. Specifically, instances in a mini-batch are appropriately assigned to distinct clusters, each of which aims to capture apparent similarity among instances. Moreover, we introduce a multi-scale clustering technique, showing positive effects on the representations by capturing multi-scale semantics. Empirically, our method achieves comparable or better performance than both representation learning and clustering baselines on various benchmark datasets: CIFAR-10, CIFAR-100, and STL-10.
[]
[ { "authors": [ "YM Asano", "C Rupprecht", "A Vedaldi" ], "title": "Self-labelling via simultaneous clustering and representation learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hyojin Bahng", "Sanghyuk Chun", "Sangdoo Yun", "Jaegul Choo", "Seong Joon Oh" ], "title": "Learning de-biased representations with biased representations", "venue": null, "year": 1910 }, { "authors": [ "Yoshua Bengio", "Pascal Lamblin", "Dan Popovici", "Hugo Larochelle" ], "title": "Greedy layer-wise training of deep networks. In Advances in neural information processing", "venue": null, "year": 2007 }, { "authors": [ "Deng Cai", "Xiaofei He", "Xuanhui Wang", "Hujun Bao", "Jiawei Han" ], "title": "Locality preserving nonnegative matrix factorization", "venue": "In IJCAI,", "year": 2009 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster", "venue": null, "year": 2020 }, { "authors": [ "Jianlong Chang", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Deep adaptive image clustering", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jianlong Chang", "Yiwen Guo", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Deep discriminative clustering analysis", "venue": "Proceedings of the IEEE international conference on computer vision,", "year": 2019 }, { "authors": [ "Ting Chen", "Lala Li" ], "title": "Intriguing properties of contrastive losses", "venue": "arXiv preprint arXiv:2011.02803,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wouter Gansbeke", "Simon Vandenhende", "Stamatios Georgoulis", "Marc Proesmans", "Luc Van Gool" ], "title": "Scan: Learning to classify images without labels", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "K Chidananda Gowda", "G Krishna" ], "title": "Agglomerative clustering using the concept of mutual nearest neighbourhood", "venue": "Pattern recognition,", "year": 1978 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila 
Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Philip Haeusser", "Johannes Plapp", "Vladimir Golkov", "Elie Aljalbout", "Daniel Cremers" ], "title": "Associative deep clustering: Training a classification network with no labels", "venue": "In German Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Jiabo Huang", "Shaogang Gong", "Xiatian Zhu" ], "title": "Deep semantic clustering by partition confidence maximisation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "arXiv preprint arXiv:2004.11362,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Kimin Lee", "Sukmin Yun", "Kibok Lee", "Honglak Lee", "Bo Li", "Jinwoo Shin" ], "title": "Robust inference via generative classifiers for handling noisy labels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yucen Luo", "Jun Zhu", "Mengxi Li", "Yong Ren", "Bo Zhang" ], "title": "Smooth neighbors on teacher graphs for semi-supervised learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research, 9:2579–2605,", "year": 2008 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural 
information processing systems,", "year": 2017 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol", "Léon Bottou" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of machine learning research,", "year": 2010 }, { "authors": [ "Tongzhou Wang", "Phillip Isola" ], "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "venue": "arXiv preprint arXiv:2005.10242,", "year": 2020 }, { "authors": [ "Jianlong Wu", "Keyu Long", "Fei Wang", "Chen Qian", "Cheng Li", "Zhouchen Lin", "Hongbin Zha" ], "title": "Deep comprehensive correlation mining for image clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Weili Wu", "Hui Xiong", "Shashi Shekhar" ], "title": "Clustering and information retrieval, volume 11", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Jianwei Yang", "Devi Parikh", "Dhruv Batra" ], "title": "Joint unsupervised learning of deep representations and image clusters", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Matthew D Zeiler", "Dilip Krishnan", "Graham W Taylor", "Rob Fergus" ], "title": "Deconvolutional networks", "venue": "IEEE Computer Society Conference on computer vision and pattern recognition,", "year": 2010 }, { "authors": [ "Zelnik-Manor", "Lihi", "Pietro Perona" ], "title": "Self-tuning spectral clustering", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Xiaohang Zhan", "Jiahao Xie", "Ziwei Liu", "Yew-Soon Ong", "Chen Change Loy" ], "title": "Online deep clustering for unsupervised representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Split-brain autoencoders: Unsupervised learning by cross-channel prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Huang" ], "title": "Full comparison with unsupervised representation models for clustering benchmark datasets. The results of previous methods are taken from Ji et al", "venue": "SCAN (Gansbeke et al.,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning to extract generalized representations from a high-dimensional image is essential in solving various down-stream tasks in computer vision. Though a supervised learning framework has shown to be useful in learning discriminative representations for pre-training the model, expensive labeling cost makes it practically infeasible in a large-scale dataset. Moreover, relying on the human-annotated labels tends to cause several issues such as class imbalance (Cui et al., 2019), noisy labels (Lee et al., 2019), and biased datasets (Bahng et al., 2019). To address these issues, self-supervised visual representation learning, which does not require any given labels, has emerged as an alternative training framework, being actively studied to find a proper training objective.\nRecently, self-supervised approaches with contrastive learning (Wu et al., 2018; Chen et al., 2020a; He et al., 2020) have rapidly narrowed the performance gap with supervised pre-training in various vision tasks. The contrastive method aims to learn invariant mapping (Hadsell et al., 2006) and instance discrimination. Intuitively, two augmented views of the same instance are mapped to the same latent space while different instances are pushed away. However, aforementioned instance discrimination does not consider the semantic similarities of the representations (e.g., same class), even pushing away the relevant instances. This affects the learned representations to exhibit uniformly distributed characteristics, proven by the previous works (Wang & Isola, 2020; Chen & Li, 2020).\nWe point out that this uniformly distributed characteristic over instances can be a fundamental limitation against improving the learned representation quality. For instance, consider the representations illustrated in Fig. 1. It indicates a simple case where linearly separable representations do not always guarantee that they can\nbe properly clustered, which is not appropriate for non-discriminative downstream tasks such as information retrieval, density estimation, and cluster analysis (Wu et al., 2013). In response, we start this work by asking: How can we learn the representations to be properly clustered even without the class labels?\nIn this work, we propose a self-supervised training framework that makes the learned representations not only linearly separable but also properly clustered, as illustrated in Fig. 2. To mitigate the uniformly distributed constraint while preserving the invariant mapping, we replace the instance discrimination with an instance alignment problem, pulling the augmented views from the same instance without pushing away the views from the different images. However, learning the invariant mapping without discrimination can easily fall into a trivial solution that maps all the individual instances to a single point. To alleviate this shortcoming, we adopt a bootstrapping strategy from Grill et al. (2020), utilizing the Siamese network, and a momentum update strategy (He et al., 2020).\nIn parallel, to properly cluster the semantically related instances, we are motivated to design additional cluster branch. This branch aims to group the relevant representations by softly assigning the instances to each cluster. Since each of cluster assignments needs to be discriminative, we employ the contrastive loss to the assigned probability distribution over the clusters with a simple entropy-based regularization. 
In the meantime, we construct the cluster branch with a multi-scale clustering strategy where each head deals with a different number of clusters (Lin et al., 2017). Since there exist various granularities of semantic information in images, this helps the model to effectively capture the diverse levels of semantics, as analyzed in Section 4.5.\nIn summary, our contributions are threefold, as follows:\n• We propose a novel self-supervised framework which contrasts the clusters while bootstrapping the instances, attaining both linearly separable and clusterable representations.\n• We present a novel cluster branch with a multi-scale strategy which effectively captures the different levels of semantics in images.\n• Our method empirically achieves state-of-the-art results on CIFAR-10, CIFAR-100, and STL-10 representation learning benchmarks, for both classification and clustering tasks." }, { "heading": "2 RELATED WORK", "text": "Our work is closely related to the unsupervised visual representation learning and unsupervised image clustering literature. Although the two have slightly different viewpoints of the problem, they are essentially similar in terms of their goal of finding good representations from unlabelled datasets.\nInstance-level discrimination utilizes the image index as supervision because it is a unique signal in the unsupervised environment. NPID (Wu et al., 2018) first attempts to convert class-wise classification into the extreme of instance-wise discrimination by using external memory banks. MoCo (He et al., 2020) replaces the memory bank by introducing a momentum encoder that memorizes knowledge learned from the previous mini-batches. SimCLR (Chen et al., 2020a) presents that it is crucial for representation quality to combine data augmentations using a pretext head after the encoder. Although recent studies show promising results on benchmark datasets, the instance-wise contrastive learning approach has a critical limitation in that it pushes away representations from different images even if the images have similar semantics, e.g., belonging to the same class.\nCluster-level bootstrapping is an alternative paradigm based on the idea that enhancing the initial bias of the networks can be useful in obtaining discriminative power in visual representations, since convolutional neural networks work well at capturing local patterns (Caron et al., 2018). In the case of using pseudo-labels, K-means (Caron et al., 2018) or optimal transport (Asano et al., 2019; Caron et al., 2020) are commonly adopted for clustering. On the other hand, soft clustering methods have also been actively studied to allow flexible cluster boundaries (Ji et al., 2019; Huang et al., 2020). Recently, a 2-stage training paradigm has been proposed to construct the cluster structure initialized from the representations learned by instance discrimination (Gansbeke et al., 2020)." }, { "heading": "3 METHOD", "text": "Our work is motivated by an observation from SupCLR (Khosla et al., 2020), which additionally pulls together the representations from different instances by using groundtruth labels. However, directly applying this idea in an unsupervised environment with pseudo-labels is challenging, because small false-positive errors at the initial step can gradually spread out, degrading the quality of the final representations.\nInstead, the main idea of our approach is to avoid pushing away those instances that are close enough to each other. 
To validate this idea, we conducted a toy experiment in which a pulling force is applied only to the two augmented views of the same image, while images within the same class (identified by using the groundtruth label) are not pushed away. We found that its classification accuracy increases by over 5% on the STL-10 dataset compared to that of SimCLR (Chen et al., 2020a). Inspired by this experiment, we design our model (i) not to push away relevant instances, with our instance-alignment loss (Section 3.2), while (ii) discriminating the representations in a cluster-wise manner (Sections 3.3-3.4)." }, { "heading": "3.1 PRELIMINARIES", "text": "As shown in Fig. 3, we adopt stochastic data augmentation algorithms (Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; Caron et al., 2020) to generate two different augmented views $x'_i$ and $x''_i$ of the same image $x_i \sim X = \{x_1, x_2, ..., x_N\}$ where $N$ is the number of unlabelled images. Inspired by Luo et al. (2018); Grill et al. (2020), C2BIN consists of an instance predictor $P^a(\cdot)$, cluster predictors $P^{c,k}(\cdot)$, and two Siamese networks called the runner $E_\theta(\cdot)$ and the follower $E_\phi(\cdot)$, respectively. The runner $E_\theta$ is rapidly updated to find the optimal parameters $\theta^*$ over the search space, while the follower $E_\phi$ generates the target representations for $E_\theta$. $E_\theta$ is composed of two neural functions: an encoder $F_\theta(\cdot)$ and an instance projector $G^a_\theta(\cdot)$, and vice versa for the follower $E_\phi$. To bootstrap the instance-level alignment, $E_\theta$, $E_\phi$, and $P^a$ are used. Afterwards, $F_\theta$ and $P^{c,k}$ are utilized to contrast the cluster-wise features." }, { "heading": "3.2 BOOTSTRAPPING LOSS OF INSTANCE REPRESENTATIONS", "text": "Given an image $x \sim X$, we can obtain two augmented views $x' = t'(x)$ and $x'' = t''(x)$ where $t'$ and $t''$ are sampled from a set of stochastic data augmentations $T$ as mentioned above. Even though the augmented views are distorted, they should contain similar semantics, and the learned representations should be closely aligned in the latent space. For training, we forward $x''$ through the follower $E_\phi$ to obtain target representations at an instance level; the runner $E_\theta$ aims to make the embedding vector of $x'$ closer to them. That is, we first extract image representations $r = F_\theta(x') \in \mathbb{R}^{d_r}$ where $d_r$ is the number of dimensions of our representations. Afterwards, we introduce a pretext-specific instance-wise projector $G^a_\theta(\cdot)$ and then obtain pretext embedding vectors $z^a = G^a_\theta(r) \in \mathbb{R}^{1 \times d_a}$; the target pretext vectors $\hat{z}^a$ can be obtained using the same procedure with $E_\phi$. Motivated by Grill et al. (2020), we calculate our alignment loss as the cosine distance:\n$$\mathcal{L}_{align} = 1 - \frac{P^a(z^a) \cdot \hat{z}^a}{\|P^a(z^a)\|_2 \|\hat{z}^a\|_2} \tag{1}$$\nwhere $P^a(z^a), \hat{z}^a \in \mathbb{R}^{1 \times d_a}$ and we adopt the number of dimensions of the projected features $d_a$ as in Chen et al. (2020a;c)." }, { "heading": "3.3 CONTRASTIVE LOSS OF BATCH-WISE CLUSTER ASSIGNMENTS", "text": "Our high-level motivation for this branch is that an image feature $r$ can be represented as a combination of cluster features capturing local patterns. However, grouping similar images conflicts with the instance-level invariant mapping; therefore, we introduce an additional branch which contains the cluster predictor $P^{c,k}(\cdot)$ after the encoder $F_\theta(\cdot)$. The cluster predictor $P^{c,k}$ is a linear function which takes $r_i$ as input and transforms it into a $K$-dimensional output vector. 
Therefore, zci = P c,k(ri) represents the degree of confidence that the i-th image representation ri belongs to each of the K cluster features, i.e.,\nzci = [zci,1, zci,2, ..., zci,K] ∈ R1×K, (2)\nwhere zci indicates the cluster membership distribution of the given image xi. Since we sample n items for training, Zc ∈ Rn×K is the set of membership distributions of the given mini-batch. Now we define the batch-wise cluster assignment vectors (BCAs) ck as\nck = Zc:,k = [zc1,k, ..., zcn,k]ᵀ ∈ Rn×1, (3)\nwhich indicates how strongly the k-th cluster is activated by the images in the mini-batch. Although ck will dynamically change as a new mini-batch is given, the same cluster feature computed from differently augmented views of the same image should remain similar, while different cluster features should be pushed apart to capture diverse patterns. To this end, we simply utilize a contrastive loss between the BCAs:\nLbcaclust = (1/K) ∑Ki=1 −log( exp(c′i · c′′i/τ) / ∑Kj=1 1[j≠i] exp(c′i · c′′j/τ) ), (4)\nwhere τ indicates a temperature value. The vectors c′ and c′′ are the outputs of P c,k applied after the encoder Fθ to x′ and x′′, respectively.\nUnfortunately, most clustering-based methods suffer from degenerate solutions in which the majority of items are allocated to a few clusters, especially in an unsupervised environment. To mitigate this issue, we first compute the mass of assignment to the k-th cluster as sk = ∑ni=1 ck(i), where ck(i) denotes each element of ck. Afterwards, we encourage ri to be stochastically activated for as diverse a set of cluster features as possible by maximizing the entropy of s. To this end, we formulate the cluster loss function as\nLclust = Lbcaclust − λentH(s), (5)\nwhere H is the entropy function H(s) = −∑Ki=1 si log si and λent is the weight of the regularization term." }, { "heading": "3.4 MULTI-SCALE CLUSTERING STRATEGY", "text": "Multi-head strategies have often been used in prior research (Vaswani et al., 2017; Asano et al., 2019), leveraging an ensembling effect. Extending this idea, we propose a multi-scale clustering strategy for our task. Although contrasting the BCAs encourages our model to capture various aspects of local patterns, the performance may be sensitive to the number of clusters k. To address this issue, we introduce a set of cluster branches, each with a different number of cluster assignments at its scale. To this end, we reformulate Lclust as\nLclust = ∑k∈K Lkclust. (6)\nIn this work, we use various values of k, e.g., K = {32, 64, 128}." }, { "heading": "3.5 TOTAL OBJECTIVE", "text": "Finally, our total objective function is written as\nLtotal = Lalign + λclustLclust. (7)\nThe parameters of the follower Eφ gradually reflect those of the runner via\nφ ← γφ + (1 − γ)θ, (8)\nwhere γ denotes a momentum factor." }, { "heading": "4 EXPERIMENTS", "text": "This section presents the experimental evaluation of C2BIN on standard benchmark datasets, including CIFAR-10, CIFAR-100, STL-10, and ImageNet, which are commonly adopted in both the self-supervised representation learning and unsupervised image clustering literature.\nIn Sections 4.1-4.3, we compare C2BIN with several representation learning methods and unsupervised clustering methods to verify that our model can yield both linearly separable and clusterable representations. Afterwards, Section 4.4 studies the robustness of C2BIN in a class-imbalanced setting. Lastly, Section 4.5 presents an ablation study for an in-depth analysis of our model's behaviour."
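Before presenting the results, a compact sketch of the cluster-branch losses and momentum update of Eqs. (3)-(8) may be helpful; the temperature, λent, and γ values shown are illustrative assumptions rather than the paper's tuned settings, and we assume the memberships zc are softmax-normalized.

```python
import torch

def cluster_loss(zc1, zc2, tau=0.5, lam_ent=1.0):
    # zc1, zc2: [n, K] soft cluster memberships for the two augmented views.
    c1, c2 = zc1.t(), zc2.t()                 # rows are the BCAs of Eq. (3), [K, n]
    sim = (c1 @ c2.t()) / tau                 # pairwise BCA similarities, [K, K]
    pos = sim.diag()                          # c'_i . c''_i / tau
    off_diag = ~torch.eye(sim.shape[0], dtype=torch.bool)
    neg = (sim.exp() * off_diag).sum(dim=1)   # sum over j != i, as in Eq. (4)
    l_bca = (neg.log() - pos).mean()          # Eq. (4)
    s = zc1.sum(dim=0)                        # per-cluster assignment mass
    s = s / s.sum()
    entropy = -(s * (s + 1e-8).log()).sum()
    return l_bca - lam_ent * entropy          # Eq. (5)

def ema_update(follower, runner, gamma=0.99):
    # Eq. (8): the follower slowly tracks the runner's parameters.
    with torch.no_grad():
        for pf, pr in zip(follower.parameters(), runner.parameters()):
            pf.mul_(gamma).add_((1.0 - gamma) * pr)
```

Eq. (6) then simply sums cluster_loss over heads with K ∈ {32, 64, 128}, and Eq. (7) adds the result to Lalign with weight λclust.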
}, { "heading": "4.1 REPRESENTATION LEARNING TASKS ON UNIFIED SETUP", "text": "Experimental setup. Because previous studies each use their own experimental settings in terms of datasets and backbone architectures, we prepared a unified experimental setup for fair comparison, as follows. We first employ the ResNet-18 architecture as the backbone, following Wu et al. (2018). We used three standard benchmark datasets (CIFAR-10, CIFAR-100, and STL-10) for this experiment. All baselines are trained using identical data augmentation techniques, as in Chen et al. (2020a;c). Further training details can be found in Appendix A.1.\nEvaluation metrics. We adopt three standard evaluation metrics: the linear evaluation protocol (LP) (Zhang et al., 2017) and k-nearest-neighbour (kNN) classifiers with k = 5 and k = 200. For the LP, we follow the recipe of Grill et al. (2020), where we report the best evaluation score over five differently initialized learning rates. For kNN, we follow the settings used in Wu et al. (2018) and the implementation of Asano et al. (2019).\nAs baseline methods, we choose SimCLR, MoCo v2, and BYOL, which serve as state-of-the-art methods for instance-wise contrastive learning, momentum-based contrastive learning, and instance-wise bootstrapping, respectively. As seen in Tab. 1, C2BIN consistently outperforms all baselines across all benchmark datasets. In the case of CIFAR-100, the kNN accuracy of C2BIN improves significantly over the baselines, while its LP scores consistently increase as well. We conjecture that this is because C2BIN is well suited to learning the hierarchical structure of a dataset such as CIFAR-100." }, { "heading": "4.2 REPRESENTATION LEARNING TASKS ON LARGE SCALE BENCHMARK", "text": "Experimental setup. To compare our method with concurrent and state-of-the-art works on a large-scale dataset, we evaluate our method on ImageNet with a ResNet-50 backbone. For fair comparison, most of the results are taken from experiments in which models are trained for 200 epochs with a batch size of 256. All baselines are trained with the identical data augmentation techniques introduced in Chen et al. (2020a). Further training details can be found in Appendix A.2.\nThough C2BIN shows competitive performance compared to the baselines, MoCo v2 (Chen et al., 2020c) outperforms C2BIN on this large-scale dataset, which contradicts the findings in Section 4.1. Since C2BIN utilizes batch-wise clustering techniques to learn the cluster structure, it brings instability to the training process when the required number of clusters is large relative to the batch size. Still, our method slightly outperforms the state-of-the-art instance bootstrapping method, BYOL (Grill et al., 2020). This result implies that simultaneously learning the cluster structure does not hurt the discriminative power of the instance-wise representation learning method." }, { "heading": "4.3 IMAGE CLUSTERING TASKS", "text": "This section compares our approach to the baselines on the unsupervised image clustering task.\nExperimental setup. For a fair comparison, we keep most of the implementation details identical to Ji et al. (2019); Huang et al. (2020), except that we exclude the use of the Sobel filter. We use an architecture similar to ResNet-34, with a 2-layer MLP for both the instance projector Gaθ(·) and
For the clustering branch, three cluster heads are used as K = {10, 40, 160} for CIFAR-10 and STL-10, and K = {20, 40, 160} for CIFAR-100. Further training details can be found in Appendix B.1.\nEvaluation metrics. Three standard clustering performance metrics are used for evaluation: (a) Normalized Mutual Information (NMI) measures the normalized mutual dependence between the predicted labels and the ground-truth labels. (b) Accuracy (ACC) is measured by assigning dominant class labels to each cluster and take the average precision. (c) Adjusted Rand Index (ARI) measures how many samples are assigned properly to different clusters. All the evaluation metrics range between 0 and 1, where the higher score indicates better performance.\nAs shown in Table 3, C2BIN outperforms the state-of-the-art clustering performance in all datasets by a significant margin, showing its capability of grouping the semantically related instances to distinct clusters. Moreover, C2BIN is shown to be robust, given that the averaged performance over five random trials even surpasses the best results from the previous literature." }, { "heading": "4.4 CLASS-IMBALANCED EXPERIMENTS", "text": "Unlike the standard benchmark datasets we used, it is often the case that the real-world image dataset is severely imbalanced in terms of its underlying class distribution. Therefore, we conducted additional experiments in a class-imbalanced environment, following the experimental design proposed in Cui et al. (2019).\nFig. 4 demonstrates the classification accuracy degradation in an imbalanced setting. The balanced rate indicates the relative ratio of the largest to the smallest classes. We find that the performance of PICA, the clustering-based method, significantly decreases compared to other baselines as the class imbalance gets apparent. In the case of imbalanced CIFAR-100, SimCLR, which contrasts\nall instances within the mini-batch, is shown to get degraded faster than BYOL, which does not consider the relationship between other instances. On the other hand, the accuracy degradation of C2BIN is shown to be minimal for both CIFAR-10 and CIFAR-100, possibly due to our alignment loss (Section 3.2)." }, { "heading": "4.5 DISCUSSIONS", "text": "Qualitative analysis on the learned representations.\nFor the test data items in STL-10 dataset, we embed their high-dimensional representations obtained by our method and SimCLR in a 2-D space using t-SNE(van der Maaten & Hinton (2008)). As shown in Fig. 5, the representations learned from our model show a clearer cluster structure than SimCLR(Chen et al. (2020a)), as training proceeds.\nTo understand the characteristics of both instance-wise alignment and cluster-wise discrimination in a straightforward manner, we conduct the image retrieval experiment. As shown in Fig. 6, our method outperforms two proposed baselines from both perspectives. Since SimCLR only focuses on instance-wise discrimination, it fails to retrieve with a larger value of k with a given query image (e.g., airplane). Likewise, PICA lacks the alignment capability in an instance-wise manner, resulting in poor performance with a lower value of k in contrast to C2BIN. This is also corroborated by the quantitative results in Appendix (Fig. 8).\nAblation study. To further verify whether our loss terms are complementary to each other, we perform an ablation study on STL-10 dataset. 
As we can observe in Tables 4(b) and 4(c), a simple integration of a clustering method into instance-wise bootstrapping (Table 4(a)) can degrade the representation quality unless an appropriate level of granularity is provided. Similar to the results of Asano et al. (2019), using multiple clustering heads with a specific number of clusters (Tables 4(d) and (e)) is a more effective strategy than a single-head method. Furthermore, our proposed multi-scale clustering strategy (Table 4(f)) achieves the best performance, since it allows the model to capture diverse semantic information at different levels. This result justifies our motivation to utilize a clustering strategy in a multi-scale manner.\nVisual analysis of the multi-scale clustering strategy. We also provide a visual analysis of the multi-scale clustering strategy. Each scale represents different semantic information, as shown in the Appendix (Figs. 9, 10, and 11). Combining the semantic differences across scales prevents our model from being tied to a specific number of cluster assignments." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we proposed a novel approach that improves existing representation learning with unsupervised image clustering. By integrating the advantages of both lines of work, we present an advanced self-supervised framework that simultaneously learns cluster features as well as image representations by contrasting clusters while bootstrapping instances. Moreover, in order to capture diverse semantic information, we suggest a multi-scale clustering strategy. We also conduct ablation studies to validate the complementary effects of our proposed loss functions." }, { "heading": "A REPRESENTATION LEARNING EXPERIMENTS", "text": "A.1 IMPLEMENTATION DETAILS FOR UNIFIED SETTING\nA.2 IMPLEMENTATION DETAILS FOR THE LARGE-SCALE SETTING\nA.3 IMPACT STUDY FOR CHOICE OF K\nAlthough the effectiveness of the multi-scale clustering technique is briefly described in Section 4.5, this section studies performance changes according to the choice of the set K.\nTable 9 shows how C2BIN's performance on the linear evaluation protocol (LP) degrades on the STL-10 dataset when the combination of K is changed. In Table 9, (a) is the best score reported in the main paper and can be used as a pivot for comparison. For the rest, similar to Section 4.5, we divide the experiments into three groups. First, (b)-(d) correspond to attaching a single cluster head with an arbitrarily selected size. Unfortunately, this does not improve performance and can even dramatically degrade the representation quality. We conjecture that attaching a single cluster head after the backbone network makes the representation quality sensitive to the head size. The second group, (e)-(g), corresponds to multiple but single-scale cluster heads. Although this yields a slight improvement over the previous case, it is not sufficiently complementary in our setting. We believe the effect of multi-branch clustering is small here because each cluster head can capture patterns similar to the others. Lastly, (h)-(i) correspond to our multiple, multi-scale clustering strategy, showing robust performance with respect to the combination of K as long as the elements of K are assigned at different scales.
We conjecture that the effect of multi-task learning is maximized because a single representation vector must be informative enough to support all of the cluster heads, which range from abstract to detailed." }, { "heading": "B UNSUPERVISED CLUSTERING EXPERIMENTS", "text": "B.1 IMPLEMENTATION DETAILS\nB.2 CLUSTERING QUALITY COMPARISON\nB.3 QUALITATIVE EXAMPLES IN IMAGE RETRIEVAL TASKS\nB.4 VISUALIZATION OF CLUSTERS" } ]
2020
null
SP:b47032cd0c8bf0189504e1c6562b058ba8f0e8ae
[ "The paper studies generalization under distribution shift, and tries to answer the question: why do ERM-based classifiers learn to rely on \"spurious\" features? They present a class of distributions called \"easy-to-learn\" that rules out several explanations given in recent work and isolates the spurious correlation phenomenon in the simplest possible setting. Even on \"easy-to-learn\" distributions, linear models obtained from ERM use spurious features owing to either the dynamics of gradient descent trained on separable data (very slow convergence to the max-margin classifier) or a certain geometric skew in the data." ]
Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training time, resulting in poor accuracy during test-time. In this work, we identify the fundamental factors that give rise to this behavior, by explaining why models fail this way even in easy-to-learn tasks where one would expect these models to succeed. In particular, through a theoretical study of gradient-descenttrained linear classifiers on some easy-to-learn tasks, we uncover two complementary failure modes. These modes arise from how spurious correlations induce two kinds of skews in the data: one geometric in nature, and another, statistical in nature. Finally, we construct natural modifications of image classification datasets to understand when these failure modes can arise in practice. We also design experiments to isolate the two failure modes when training modern neural networks on these datasets.1
[ { "affiliations": [], "name": "Vaishnavh Nagarajan" }, { "affiliations": [], "name": "Anders Andreassen" }, { "affiliations": [], "name": "Behnam Neyshabur" } ]
[ { "authors": [ "Isabela Albuquerque", "João Monteiro", "Mohammad Darvishi", "Tiago H. Falk", "Ioannis Mitliagkas" ], "title": "Generalizing to unseen domains via distribution matching, 2020", "venue": null, "year": 2020 }, { "authors": [ "Martı́n Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization. 2019", "venue": "URL http://arxiv.org/abs/1907.02893", "year": 1907 }, { "authors": [ "Devansh Arpit", "Stanislaw Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron C. Courville", "Yoshua Bengio", "Simon Lacoste-Julien" ], "title": "A closer look at memorization in deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nader Asadi", "Mehrdad Hosseinzadeh", "Mahdi Eftekhari" ], "title": "Towards shape biased unsupervised representation learning for domain generalization", "venue": null, "year": 1909 }, { "authors": [ "Sara Beery", "Grant Van Horn", "Pietro Perona" ], "title": "Recognition in terra incognita", "venue": "Computer Vision - ECCV", "year": 2018 }, { "authors": [ "Gilles Blanchard", "Gyemin Lee", "Clayton Scott" ], "title": "Generalizing from several related classification tasks to a new unlabeled sample", "venue": "In Advances in Neural Information Processing Systems", "year": 2011 }, { "authors": [ "Remi Tachet des Combes", "Mohammad Pezeshki", "Samira Shabanian", "Aaron C. Courville", "Yoshua Bengio" ], "title": "On the learning dynamics of deep neural networks. 2018", "venue": "URL http://arxiv.org/ abs/1809.06848", "year": 2018 }, { "authors": [ "Lucas Dixon", "John Li", "Jeffrey Sorensen", "Nithum Thain", "Lucy Vasserman" ], "title": "Measuring and mitigating unintended bias in text classification", "venue": "In Proceedings of the 2018 AAAI/ACM Conference on AI,", "year": 2018 }, { "authors": [ "Jeremy Elson", "John (JD) Douceur", "Jon Howell", "Jared Saul" ], "title": "Asirra: A captcha that exploits interest-aligned manual image categorization", "venue": "In Proceedings of 14th ACM Conference on Computer and Communications Security (CCS). Association for Computing Machinery,", "year": 2007 }, { "authors": [ "Chen Fang", "Ye Xu", "Daniel N. Rockmore" ], "title": "Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias", "venue": "In IEEE International Conference on Computer Vision,", "year": 2013 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor S. Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "Robert Geirhos", "Jörn-Henrik Jacobsen", "Claudio Michaelis", "Richard S. Zemel", "Wieland Brendel", "Matthias Bethge", "Felix A. Wichmann" ], "title": "Shortcut learning in deep neural networks", "venue": "URL https://arxiv.org/abs/2004.07780", "year": 2004 }, { "authors": [ "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "In search of lost domain generalization", "venue": null, "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas G. 
Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Katherine L. Hermann", "Andrew K. Lampinen" ], "title": "What shapes feature representations? exploring datasets, architectures, and training", "venue": "In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ziwei Ji", "Matus Telgarsky" ], "title": "Risk and parameter convergence of logistic regression", "venue": null, "year": 2018 }, { "authors": [ "Dimitris Kalimeris", "Gal Kaplun", "Preetum Nakkiran", "Benjamin L. Edelman", "Tristan Yang", "Boaz Barak", "Haofeng Zhang" ], "title": "SGD on neural networks learns functions of increasing complexity", "venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Fereshte Khani", "Percy Liang" ], "title": "Feature noise induces loss discrepancy across groups", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Fereshte Khani", "Percy Liang" ], "title": "Removing spurious features can hurt accuracy and affect groups disproportionately", "venue": "In FAccT", "year": 2021 }, { "authors": [ "David Krueger", "Ethan Caballero", "Jörn-Henrik Jacobsen", "Amy Zhang", "Jonathan Binas", "Rémi Le Priol", "Aaron C. Courville" ], "title": "Out-of-distribution generalization via risk extrapolation (rex)", "venue": null, "year": 2003 }, { "authors": [ "Da Li", "Yongxin Yang", "Yi-Zhe Song", "Timothy M. Hospedales" ], "title": "Learning to generalize: Metalearning for domain generalization", "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ya Li", "Xinmei Tian", "Mingming Gong", "Yajing Liu", "Tongliang Liu", "Kun Zhang", "Dacheng Tao" ], "title": "Deep domain generalization via conditional invariant adversarial networks", "venue": "In Computer Vision ECCV 2018 - 15th European Conference,", "year": 2018 }, { "authors": [ "Tom McCoy", "Ellie Pavlick", "Tal Linzen" ], "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Krikamol Muandet", "David Balduzzi", "Bernhard Schölkopf" ], "title": "Domain generalization via invariant feature representation", "venue": "In Proceedings of the 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Generalization in deep networks: The role of distance from initialization", "venue": null, "year": 2017 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Kamil Nar", "Orhan Ocal", "S. 
Shankar Sastry", "Kannan Ramchandran" ], "title": "Cross-entropy loss and low-rank features have responsibility for adversarial examples", "venue": "URL http://arxiv.org/abs/1901.08360", "year": 1901 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "In search of the real inductive bias: On the role of implicit regularization in deep learning", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nathan Srebro" ], "title": "Exploring generalization in deep learning", "venue": null, "year": 2017 }, { "authors": [ "F.M. Palechor", "A. de la Hoz Manotas" ], "title": "Dataset for estimation of obesity levels based on eating habits and physical condition in individuals from colombia, peru and mexico", "venue": null, "year": 2019 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society Series B,", "year": 2016 }, { "authors": [ "Nasim Rahaman", "Devansh Arpit", "Aristide Baratin", "Felix Draxler", "Min Lin", "Fred A. Hamprecht", "Yoshua Bengio", "Aaron C. Courville" ], "title": "On the spectral bias of deep neural networks. 2018", "venue": "URL http://arxiv.org/abs/1806.08734", "year": 2018 }, { "authors": [ "Marco Túlio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should I trust you?”: Explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "year": 2016 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B. Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": null, "year": 2020 }, { "authors": [ "Shiori Sagawa", "Aditi Raghunathan", "Pang Wei Koh", "Percy Liang" ], "title": "An investigation of why overparameterization exacerbates spurious correlations", "venue": null, "year": 2020 }, { "authors": [ "Harshay Shah", "Kaustav Tamuly", "Aditi Raghunathan", "Prateek Jain", "Praneeth Netrapalli" ], "title": "The pitfalls of simplicity bias in neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "J. Mach. Learn. Res.,", "year": 2018 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Vladimir Vapnik" ], "title": "Statistical learning theory", "venue": "ISBN", "year": 1998 }, { "authors": [ "Zhi-Qin John Xu", "Yaoyu Zhang", "Yanyang Xiao" ], "title": "Training behavior of deep neural network in frequency domain", "venue": "Neural Information Processing - 26th International Conference,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cissé", "Yann N. 
Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jieyu Zhao", "Tianlu Wang", "Mark Yatskar", "Vicente Ordonez", "Kai-Wei Chang" ], "title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Tsipras" ], "title": "2019), albeit without noise, consider a task where the spurious feature", "venue": null, "year": 2019 }, { "authors": [], "title": "MORE ON EXPERIMENTS Common details: In all our MNIST-based experiments, we consider the Binary-MNIST classification task (Arjovsky et al., 2019) where the first five digits (0 to 4) need to be separated from the rest", "venue": null, "year": 2019 }, { "authors": [ "E DEMONSTRATING" ], "title": "SKEWS ON A NON-IMAGE-CLASSIFICATION DATASET So far, we have demonstrated our insights in the context of image classification tasks. However, our theoretical framework is abstract enough for us to apply these insights even in non-image classification tasks. To demonstrate this, we consider an obesity estimation task based on the dataset", "venue": "Palechor & de la Hoz Manotas", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "A machine learning model in the wild (e.g., a self-driving car) must be prepared to make sense of its surroundings in rare conditions that may not have been well-represented in its training set. This could range from conditions such as mild glitches in the camera to strange weather conditions. This out-of-distribution (OoD) generalization problem has been extensively studied within the framework of the domain generalization setting (Blanchard et al., 2011; Muandet et al., 2013). Here, the classifier has access to training data sourced from multiple “domains” or distributions, but no data from test domains. By observing the various kinds of shifts exhibited by the training domains, we want the classifier can learn to be robust to such shifts.\nThe simplest approach to domain generalization is based on the Empirical Risk Minimization (ERM) principle (Vapnik, 1998): pool the data from all the training domains (ignoring the “domain label” on each point) and train a classifier by gradient descent to minimize the average loss on this pooled dataset. Alternatively, many recent studies (Ganin et al., 2016; Arjovsky et al., 2019; Sagawa et al., 2020a) have focused on designing more sophisticated algorithms that do utilize the domain label on the datapoints e.g., by enforcing certain representational invariances across domains.\nA basic premise behind pursuing such sophisticated techniques, as emphasized by Arjovsky et al. (2019), is the empirical observation that ERM-based gradient-descent-training (or for convenience, just ERM) fails in a characteristic way. As a standard illustration, consider a cow-camel classification task (Beery et al., 2018) where the background happens to be spuriously correlated with the label in a particular manner only during training — say, most cows are found against a grassy background and most camels against a sandy one. Then, during test-time, if the correlation is completely flipped (i.e., all cows in deserts, and all camels in meadows), one would observe that the\n∗Work performed in part while Vaishnavh Nagarajan was interning at Blueshift, Alphabet. 1Code is available at https://github.com/google-research/OOD-failures\naccuracy of ERM drops drastically. Evidently, ERM, in its unrestrained attempt at fitting the data, indiscriminately relies on all kinds of informative features, including unreliable spurious features like the background. However, an algorithm that carefully uses domain label information can hope to identify and rely purely on invariant features (or “core” features (Sagawa et al., 2020b)).\nWhile the above narrative is an oft-stated motivation behind developing sophisticated OoD generalization algorithms, there is little formal explanation as to why ERM fails in this characteristic way. Existing works (Sagawa et al., 2020b; Tsipras et al., 2019; Arjovsky et al., 2019; Shah et al., 2020) provide valuable answers to this question through concrete theoretical examples; however, their examples critically rely on certain factors to make the task difficult enough for ERM to rely on the spurious features. For instance, many of these examples have invariant features that are only partially predictive of the label (see Fig 1a). Surprisingly though, ERM relies on spurious features even in much easier-to-learn tasks where these complicating factors are absent — such as in tasks with fully predictive invariant features e.g., Fig 1c or the Waterbirds/CelebA examples in Sagawa et al. 
(2020a), or for that matter, in any real-world situation where the object shape perfectly determines the label. This failure in easy-to-learn tasks, as we argue later, is not straightforward to explain (see Fig 1b for a brief idea). This evidently implies that there must exist factors more general and fundamental than those known so far that cause ERM to fail.\nOur goal in this work is to uncover these fundamental factors behind the failure of ERM. The hope is that this will provide a vital foundation for future work to reason about OoD generalization. Indeed, recent empirical work (Gulrajani & Lopez-Paz, 2020) has questioned whether existing alternatives necessarily outperform ERM on OoD tasks; however, due to a lack of theory, it is not clear how to hypothesize about when/why one algorithm would outperform another here. Through our theoretical study, future work can hope to be better positioned to precisely identify the key missing components in these algorithms, and bridge these gaps to better solve the OoD generalization problem.\nOur contributions. To identify the most fundamental factors causing OoD failure, our strategy is to (a) study tasks that are “easy” to succeed at, and (b) demonstrate that ERM relies on spurious features despite how easy the tasks are. More concretely:\n1. We formulate a set of constraints on how our tasks must be designed so that they are easy to succeed at (e.g., the invariant feature must be fully predictive of the label). Notably, this class of easy-to-learn tasks provides both a theoretical test-bed for reasoning about OoD generalization and a simplified empirical test-bed. In particular, this class encompasses simplified MNIST and CIFAR10-based classification tasks where we establish empirical failure of ERM.\n2. We identify two complementary mechanisms of failure of ERM that arise from how spurious correlations induce two kinds of skews in the data: one that is geometric and the other statistical. In particular, we theoretically isolate these failure modes by studying linear classifiers trained by gradient descent (on logistic/exponential loss) and their infinite-time-trained equivalent, the max-margin classifier (Soudry et al., 2018; Ji & Telgarsky, 2018), on the easy-to-learn tasks.\n3. We also show that in any easy-to-learn task that does not have these geometric or statistical skews, these models do not rely on the spurious features. This suggests that these skews are not only a sufficient but also a necessary factor for the failure of these models in easy-to-learn tasks.\n4. To empirically demonstrate the generality of our theoretical insights, we (a) experimentally validate these skews in a range of MNIST and CIFAR10-based tasks and (b) demonstrate their effects on fully-connected networks (FNNs) and ResNets. We also identify and explain failure in scenarios where standard notions of spurious correlations do not apply (see Fig 1d). We perform similar experiments on a non-image classification task in App E." }, { "heading": "2 RELATED WORK", "text": "Spurious correlations. Empirical work has shown that deep networks find superficial ways to predict the label, such as by relying on the background (Beery et al., 2018; Ribeiro et al., 2016) or other kinds of shortcuts (McCoy et al., 2019; Geirhos et al., 2020). Such behavior is of practical concern because accuracy can deteriorate under shifts in those features (Rosenfeld et al., 2018; Hendrycks & Dietterich, 2019).
It can also lead to unfair biases and poor performance on minority groups (Dixon et al., 2018; Zhao et al., 2017; Sagawa et al., 2020b).\nUnderstanding failure of ERM. While the fact that ERM relies on spurious correlations has become empirical folk wisdom, only a few studies have made efforts to carefully model this. Broadly, there are two kinds of existing models that explain this phenomenon. One existing model imagines that both the invariant and the spurious features are only partially predictive of the label (Tsipras et al., 2019; Sagawa et al., 2020b; Arjovsky et al., 2019; Ilyas et al., 2019; Khani & Liang, 2020), as a result of which the classifier that maximizes accuracy cannot ignore the spurious feature (see Fig 1a). The other existing model is based on the “simplicity bias” of gradient-descent-based deep network training (Rahaman et al., 2018; Neyshabur et al., 2015; Kalimeris et al., 2019; Arpit et al., 2017; Xu et al., 2019; des Combes et al., 2018). In particular, this model typically assumes that both the invariant and spurious features are fully predictive of the label, but crucially posits that the spurious features are simpler to learn (e.g., more linear) than the invariant features, and therefore gradient descent prefers to use them (Shah et al., 2020; Nar et al., 2019; Hermann & Lampinen, 2020).\nWhile both these models offer simple-to-understand and useful explanations for why classifiers may use spurious correlations, we provide a more fundamental explanation. In particular, we empirically and theoretically demonstrate how ERM can rely on the spurious feature even in much easier tasks where these explanations would fall apart: these are tasks where, unlike in the first model, (a) the invariant feature is fully predictive, and, unlike in the second model, (b) the invariant feature corresponds to a simple linear boundary and (c) the spurious feature is not fully predictive of the label. Further, we go beyond the max-margin settings analyzed in these works to analyze the dynamics of finite-time gradient-descent-trained classifiers on the logistic loss. We would also like to point the reader to the concurrent work of Khani & Liang (2021), which proposes a different model addressing the above points. While their model sheds light on the role of overparameterization in the context of spurious features (and our results are agnostic to that), their model also requires the spurious feature to be “dependent” on the invariant feature, an assumption we don’t require (see Sec 3).\nAlgorithms for OoD generalization. Due to the empirical shortcomings of ERM, a wide range of sophisticated algorithms have been developed for domain generalization. The most popular strategy is to learn useful features while constraining them to have similar distributions across domains (Ganin et al., 2016; Li et al., 2018b; Albuquerque et al., 2020). Other works constrain these features in a way that one can learn a classifier that is simultaneously optimal across all domains (Peters et al., 2016; Arjovsky et al., 2019; Krueger et al., 2020). As discussed in Gulrajani & Lopez-Paz (2020), there are also many other existing non-ERM-based methods, including meta-learning (Li et al., 2018a), parameter-sharing (Sagawa et al., 2020a) and data augmentation (Zhang et al., 2018). Through their extensive empirical survey of many of the above algorithms, Gulrajani & Lopez-Paz (2020) suggest that ERM may be just as competitive as the state-of-the-art.
But we must emphasize that this doesn’t vindicate ERM of its failures, but rather indicates that we have yet to develop a substantial improvement over ERM." }, { "heading": "3 EASY-TO-LEARN DOMAIN GENERALIZATION TASKS", "text": "Below, we first set up the basic domain generalization setting and the idea of ERM. Then, in Section 3.1, we formulate a class of domain-generalization tasks that are in many aspects “easy” for the learner (such as having fully predictive invariant features); what exactly makes a task “easy” will be discussed in Section 3.1. This discussion sets the ground for the later sections to show how ERM can fail even in these easy tasks, which will help uncover the fundamental factors behind its failure.\nNotations. Consider an input (vector) space X and label space Y = {−1, 1}. For any distribution D over X × Y, let pD(·) denote its probability density function (PDF). Let H denote a class of classifiers h : X → R. Let the error of h on D be denoted as LD(h) := E(x,y)∼D[1[h(x) · y < 0]].\nThe domain generalization setting and ERM. In the domain generalization setting, one considers an underlying class D of data distributions over X × Y corresponding to different possible domains. The learner is given training data collected from multiple distributions from D. For an ERM-based learner in particular, the training data will be pooled together, so we can model the data as coming from a single (pooled) distribution Dtrain, which, for simplicity, can be assumed to belong to D. Given this data, the learner outputs a hypothesis ĥ ∈ H that is tested on a new distribution Dtest picked from D. This can potentially be modeled by assuming that all test and training distributions are drawn from a common hyper-distribution over D. However, this assumption becomes pointless in most practical settings where the training domains are not more than three to four in number (e.g., PACS (Asadi et al., 2019), VLCS (Fang et al., 2013)), and therefore hardly representative of any hyper-distribution. Here, the problem becomes as hard as ensuring good performance on a worst-case test distribution without any hyper-distribution assumption; this boils down to minimizing maxD∈D LD(ĥ). Indeed, most works have studied the worst-case setting, both theoretically (Sagawa et al., 2020b) and empirically (Arjovsky et al., 2019; Sagawa et al., 2020a).\nSimilarly, for this work, we focus on the worst-case setting and define the optimal target function h? to be h? = arg minh∈H maxD∈D LD(h). Then, we define the features that this “robust” classifier uses as invariant features Xinv (e.g., the shape of the object), and the rest as spurious features Xsp (e.g., the background). To formalize this, we assume that there exists a mapping Φ : Xinv × Xsp → X such that each D ∈ D is induced by a distribution over Xinv × Xsp (so we can denote any x as Φ(xinv, xsp)). With an abuse of notation we will use pD(·) to also denote the PDF of the distribution over Xinv × Xsp. Then, the fact that Xsp are features that h? does not rely on is mathematically stated as: ∀xinv and ∀xsp ≠ x′sp, h?(Φ(xinv, xsp)) = h?(Φ(xinv, x′sp)). Finally, we note that, to make this learning problem tractable, one has to impose further restrictions; we’ll provide more details on those when we discuss the class of easy-to-learn domain generalization tasks in Sec 3.1.\nEmpirical failure of ERM. To guide us in constructing the easy-to-learn tasks, let us ground our study in a concrete empirical setup where an ERM-based linear classifier shows OoD failure.
Specifically, consider the following Binary-MNIST-based task, where the first five digits and the remaining five digits form the two classes. First, we let Φ be the identity mapping, and so x = (xinv, xsp). Then, we let xinv be a random ReLU feature representation of the MNIST digit, i.e., if xraw represents the MNIST image, then xinv = ReLU(Wxraw), where W is a matrix with Gaussian entries. We make this representation sufficiently high-dimensional so that the data becomes linearly separable. Next, we let the spurious feature take values in {+B, −B} for some B > 0, imitating the two possible background colors in the camel-cow dataset. Finally, on Dtrain, for any y, we pick the image xinv from the corresponding class and independently set the “background color” xsp so that there is some spurious correlation, i.e., PrDtrain[xsp · y > 0] > 0.5. During test time, however, we flip this correlation around so that PrDtest[xsp · y > 0] = 0.0. In this task, we observe in Fig 2a (shown later under Sec 4) that as we vary the train-time spurious correlation from none (PrDtrain[xsp · y > 0] = 0.5) to its maximum (PrDtrain[xsp · y > 0] = 1.0), the OoD accuracy of a max-margin classifier progressively deteriorates. (We present similar results for a CIFAR10 setting, and all experiment details, in App C.1.) Our goal is now to theoretically demonstrate why ERM fails this way (or equivalently, why it relies on the spurious feature) even in tasks as “easy-to-learn” as these." }, { "heading": "3.1 CONSTRAINTS DEFINING EASY-TO-LEARN DOMAIN-GENERALIZATION TASKS.", "text": "To formulate a class of easy-to-learn tasks, we enumerate a set of constraints that the tasks must satisfy; notably, this class of tasks will encompass the empirical example described above. The motivation behind this exercise is that restricting ourselves to the constrained set of tasks yields stronger insights — it prevents us from designing complex examples where ERM is forced to rely on spurious features due to a not-so-fundamental factor. Indeed, each constraint here forbids a unique, less fundamental failure mode of ERM from occurring in the easy-to-learn tasks.\nConstraint 1. (Fully predictive invariant features.) For all D ∈ D, LD(h?) = 0.\nArguably, our most important constraint is that the invariant features (which is what h? purely relies on) are perfectly informative of the label. The motivation is that, when this is not the case (e.g., with noisy invariant features, as in Fig 1a or in Sagawa et al. (2020b); Tsipras et al. (2019)), failure can arise from the fact that the spurious features provide vital extra information that the invariant features cannot provide (see App A for a more formal argument). However, this explanation quickly falls apart when the invariant feature in itself is fully predictive of the label.\nConstraint 2. (Identical invariant distribution.) Across all D ∈ D, pD(xinv) is identical.\nThis constraint demands that the (marginal) invariant feature distribution must remain stable across domains (like in our Binary-MNIST example). While this may appear to be unrealistic (the exact distribution of the different types of cows and camels could vary across domains), we must emphasize that it is easier to make ERM fail when the invariant features are not stable (see example in App A Fig 4a).\nConstraint 3. (Conditional independence.) For all D ∈ D, xsp ⊥ xinv | y.
This constraint reflects the fact that in the MNIST example, we chose the class label and then picked the color feature independently of the actual hand-written digit picked from that class. This prevents us from designing complex relationships between the background and the object shape to show failure (see the example in App A Fig 4b or the failure mode in Khani & Liang (2021)).\nConstraint 4. (Two-valued spurious features.) We set Xsp = R and the support of xsp in Dtrain is {−B, +B}.\nThis constraint2 captures the simplicity of the cow-camel example where the background color is limited to yellow/green. Notably, this excludes failure borne out of high-dimensional (Tsipras et al., 2019) or carefully-constructed continuous-valued (Sagawa et al., 2020b; Shah et al., 2020) spurious features.\nConstraint 5. (Identity mapping.) Φ is the identity mapping, i.e., x = (xinv, xsp).\nThis final constraint3, also implicitly made in Sagawa et al. (2020b); Tsipras et al. (2019), prevents ERM from failing because of a hard-to-disentangle representation (see the example in App A Fig 4c).\nBefore we connect these constraints to our main goal, it is worth mentioning their value beyond that goal. First, as briefly discussed above and elaborated in App A, each of these constraints in itself corresponds to a unique failure mode of ERM, one that is worth exploring in future work. Second, the resulting class of easy-to-learn tasks provides a theoretical (and a simplified empirical) test-bed that would help in broadly reasoning about OoD generalization. For example, any algorithm for solving OoD generalization should at the very least hope to solve these easy-to-learn tasks well.\nWhy is it hard to show that ERM relies on the spurious feature in easy-to-learn tasks? Consider the simplest easy-to-learn 2D task. Specifically, during training we set xinv = y (so Constraint 1 is satisfied) and xsp to be yB with probability p ∈ [0.5, 1) and −yB with probability 1 − p (hence satisfying both Constraints 3 and 4). During test-time, the only shifts allowed are on the distribution of xsp (to respect Constraint 2). Observe from Fig 1b that this distribution is supported on the four points in {−1, +1} × {−B, +B} and is hence an abstract form of the cow-camel dataset, which also has four groups of points: (a majority of) cows/camels against grass/sand and (a minority of) cows/camels against sand/grass. Fitting a max-margin classifier on Dtrain leads us to a simple yet key observation: owing to the geometry of the four groups of points, the max-margin classifier has no reliance on the spurious feature despite the spurious correlation. In other words, even though this dataset distills what seem to be the core aspects of the cow/camel dataset, we are unable to reproduce the corresponding behavior of ERM. In the next two sections, we will try to resolve this apparent paradox.\n2Note that the discrete-value restriction holds only during training. This is so that, in our experiments, we can study two kinds of test-time shifts, one within the support {−B, +B} and one outside of it (the vulnerability to both of which boils down to the level of reliance on xsp).\n3The rest of our discussion would hold even if Φ corresponds to any orthogonal transformation of (xinv, xsp), since the algorithms we study are rotation-invariant. But for simplicity, we’ll work with the identity mapping."
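As a quick numerical check of this paradox, the following sketch fits an (approximately) hard-margin SVM, i.e., a linear SVM with a very large C, as a stand-in for the max-margin classifier, to the four-point dataset above, with the majority group duplicated to encode the spurious correlation; the specific numbers are our own illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

B = 1.0
maj = np.array([[+1.0, +B], [-1.0, -B]])   # x_sp agrees with the label
mino = np.array([[+1.0, -B], [-1.0, +B]])  # x_sp disagrees with the label
X = np.vstack([np.repeat(maj, 9, axis=0), mino])  # 9:1 majority:minority
y = np.sign(X[:, 0])                       # x_inv fully determines the label

svm = SVC(kernel="linear", C=1e6).fit(X, y)
print(svm.coef_)  # ~[[1., 0.]]: no weight on x_sp, despite the correlation
```

Duplicating the majority points changes the statistics but not the geometry of the support, which is why the max-margin separator stays purely invariant here.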
}, { "heading": "4 FAILURE DUE TO GEOMETRIC SKEWS", "text": "A pivotal piece in solving this puzzle is a particular geometry underlying how the invariant features in real-world distributions are separated. To describe this, consider the random features representation of MNIST (i.e., xinv) and fit a max-margin classifier on it, i.e., the least-norm winv that achieves a margin of y · (winv · xinv + b) ≥ 1 on all data (for some b). Then, we’d observe that as the number of training points increase, the `2 norm of this max-margin classifier grows (see Figure 2b); similar observations also hold for CIFAR10 (see App C.2). This observation (which stems from the geometry of the dataset) builds on ones that were originally made in Neyshabur et al. (2017); Nagarajan & Kolter (2017; 2019) for norms of overparameterized neural networks. Our contribution here is to empirically establish this for a linear overparameterized model and then to theoretically relate this to OoD failure. In the following discussion, we will take this “increasing norms” observation as a given4), and use it to explain why the max-margin classifier trained on all the features (including xsp) relies on the spurious feature.\nImagine every input xinv to be concatenated with a feature xsp ∈ {−B,B} that is spuriously correlated with the label i.e., Pr[xsp ·y > 0] > 0.5. The underlying spurious correlation implicitly induces two disjoint groups in the dataset S: a majority group Smaj where xsp · y > 0 e.g., cows/camels with green/yellow backgrounds, and a minority group Smin where xsp · y < 0 e.g., cows/camels with yellow/green backgrounds. Next, let wall ∈ Xinv denote the least-norm vector that (a) lies in the invariant space and (b) classifies all of S by a margin of at least 1. Similarly, let wmin ∈ Xinv denote a least-norm, purely-invariant vector that classifies all of Smin by a margin of at least 1. Crucially, since Smin has much fewer points than S, by the “increasing-norm” property, we can say that ‖wmin‖ ‖wall‖. We informally refer to the gap in these `2 norms as a geometric skew. Given this skew, we explain why the max-margin classifier must use the spurious feature. One way to classify the data is to use only the invariant feature, which would cost an `2 norm of ‖wall‖. But there is another alternative: use the spurious feature as a short-cut to classify the majority of the dataset Smaj (by setting wsp > 0) and combine it with wmin to classify the remaining minority Smin. Since ‖wmin‖ ‖wall‖, the latter strategy requires lesser `2 norm, and is therefore the strategy opted by the max-margin classifier. We illustrate this failure mechanism in a 2D dataset in Fig 2c. Here, we have explicitly designed the data to capture the “increasing norms” property: the distance between purely-invariant classifier boundary (i.e., a vertical separator through the origin) and the closest point in Smaj is smaller than that of the closest point in Smin. In other words, a purely-invariant classifier\n4To clarify, we take the “increasing norms” observation as a given in that we don’t provide an explanation for why it holds. Intuitively, we suspect that norms increase because as we see more datapoints, we also sample rarer/harder training datapoints. We hope future work can understand this explain better.\nwould require much greater norm to classify the majority group by a margin of 1 than to classify the minority group. 
We can then visually see that the max-margin classifier would take the orientation of a diagonal separator that uses the spurious feature rather than a vertical, purely-invariant one.\nThe following result formalizes the above failure. In particular, for any arbitrary easy-to-learn task, we provide lower and upper bounds on wsp that are larger for smaller values of ‖wmin‖/‖wall‖. For readability, we state only an informal version of our theorem below. In App B.1, we present the full, precise result along with the proof.\nTheorem 1. (informal) Let H be the set of linear classifiers, h(x) = winv · xinv + wspxsp + b. Then for any task satisfying all the constraints in Sec 3.1 with B = 1, the max-margin classifier satisfies:\n1 − 2√(‖wmin‖/‖wall‖) ≤ wsp ≤ 1/(‖wmin‖/‖wall‖) − 1.\nA salient aspect of this result is that it explains the varying dynamics between the underlying spurious correlation and the spurious-feature reliance of the classifier. First, as the correlation increases (PrDtrain[xsp · y > 0] → 1.0), the size of the minority group decreases to zero. Then, we empirically know that ‖wmin‖/‖wall‖ progressively shrinks all the way down to 0, and we can invoke the lower bound, which implies that wsp grows to ≈ 1. This implies serious vulnerability to test-time shifts: any flip in the sign of the spurious feature can reduce the original margin of ≈ 1 by a value of 2|wspxsp| ≈ 2 (since B = 1 here), making the margin negative (implying misclassification). On the other hand, when spurious correlations diminish (PrDtrain[xsp · y > 0] → 0.5), the value of ‖wmin‖ grows comparable to ‖wall‖, and our upper bound suggests that the spurious component must shrink towards ≈ 0, thereby implying robustness to these shifts.\nBroader empirical implications. While our theorem explains failure in linear, easy-to-learn settings, the underlying geometric argument can be used to intuitively understand failure in more general settings, i.e., settings where the classifier is non-linear and/or the task is not easy-to-learn. For illustration, we identify a few such unique non-linear tasks involving the failure of a neural network. The first two tasks below can be informally thought of as easy-to-learn tasks5:\n• In Fig 1c, we consider a CIFAR10 task where we add a line to the image, with its color spuriously correlated with the class only during training. The ≳ 10% OoD accuracy drop of a ResNet here, we argue (in App C.3.1), arises from the fact that it takes greater norms for the ResNet to fit larger proportions of CIFAR10.\n• In Fig 1d, we consider a colored Cats vs. Dogs task (Elson et al., 2007), where a majority of the datapoints are blue-ish and a minority are green-ish. During testing, we color all datapoints to be green-ish. Crucially, even though there is no correlation between the label and the color of the images, the OoD accuracy of a ResNet drops by ≳ 20%. To explain this, in App. C.3.3, we identify an “implicit”, non-visual kind of spurious correlation in this dataset, one between the label and a particular component of the difference between the two channels.\nNext, we enumerate two not-easy-to-learn tasks. Here, one of the easy-to-learn constraints is disobeyed significantly enough to make the task hard, and this is essential in causing failure. In other words, the failure modes here correspond to ones that were outlined in Sec 3.1. Nevertheless, we argue in App C.3 that even these modes can be reasoned about geometrically as a special case of Theorem 1.
The two not-easy-to-learn tasks are as follows:\n• In Fig 2d, we add a line to the last channel of CIFAR10 images regardless of the label, and make the line brighter during testing, resembling a camera glitch, which results in a ≳ 27% drop in a ResNet’s accuracy. We geometrically argue how this failure arises from breaking Constraint 5.\n• In App C.3.5, we consider an MNIST setting inspired by Tsipras et al. (2019), where failure arises (geometrically) due to high-dimensional spurious features (breaking Constraint 4).\nWe hope that these examples, described in greater detail in App C.3, provide (a) a broader way to think about how spurious correlations manifest, and (b) an illustration of how a variety of resulting failure modes can be reasoned about geometrically.\n5Note that although, technically speaking, these tasks do break Constraint 4 (as the spurious feature does not take two discrete values), this is not essential to the failure." }, { "heading": "5 FAILURE DUE TO STATISTICAL SKEWS", "text": "Having theoretically studied max-margin classifiers, let us now turn our attention to studying linear classifiers trained by gradient descent on the logistic/exponential loss. Under some conditions, on linearly separable datasets, these classifiers would converge to the max-margin classifier given infinite time (Soudry et al., 2018; Ji & Telgarsky, 2018). So it is reasonable to say that even these classifiers would suffer from geometric skews, even if stopped in some finite time. However, are there any other failure modes that would arise here?\nTo answer this, let us dial back to the easiest-to-learn task: the setting with four points {−1, +1} × {−B, +B}, where in the training distribution (say D2-dim), we have xinv = y and set xsp to yB with probability p ∈ [0.5, 1) and −yB with probability 1 − p. Here, even though the max-margin classifier does not rely on xsp for any level of spurious correlation p ∈ [0.5, 1) — there are no geometric skews here after all — the story is more complicated when we empirically evaluate via gradient descent stopped at finite time t. Specifically, for various values of p we plot wsp/√(winv² + wsp²) vs. t (here, looking at wsp alone does not make sense since the weight norm grows unbounded). We observe in Fig 3a that the spurious component appears to stagnate around a value proportional to p, even after sufficiently long training, and even though it is supposed to converge to 0. Thus, even though the max-margin classifier doesn’t fail on this dataset, finite-time-stopped gradient descent fails. Why does this happen?\nTo explain this behavior, a partial clue already exists in Soudry et al. (2018); Ji & Telgarsky (2018): gradient descent can have a frustratingly slow logarithmic rate of convergence to the max-margin classifier, i.e., the ratio |wsp/winv| could decay to zero as slowly as 1/ln t. However, this bound is a distribution-independent one that does not explain why the convergence varies with the spurious correlation. To this end, we build on this result to derive a distribution-specific convergence bound in terms of p that applies to any easy-to-learn task (where xinv may be higher-dimensional, unlike in D2-dim). For convenience, we focus on continuous-time gradient descent under the exponential loss exp(−yh(x)) (the dynamics of which is similar to that of the logistic loss, as noted in Soudry et al. (2018)). Then, we consider any easy-to-learn task and, informally speaking, any corresponding dataset without geometric skews, so that the max-margin classifier wouldn’t rely on the spurious feature.
To explain this behavior, a partial clue already exists in Soudry et al. (2018); Ji & Telgarsky (2018): gradient descent can have a frustratingly slow, logarithmic rate of convergence to the max-margin classifier, i.e., the ratio |wsp/winv| could decay to zero as slowly as 1/ln t. However, this bound is a distribution-independent one that does not explain why the convergence varies with the spurious correlation. To this end, we build on this result to derive a distribution-specific convergence bound in terms of p that applies to any easy-to-learn task (where xinv may be higher dimensional, unlike in D2-dim). For convenience, we focus on continuous-time gradient descent under the exponential loss exp(−yh(x)) (the dynamics of which is similar to that of the logistic loss, as noted in Soudry et al. (2018)). We then consider any easy-to-learn task and, informally speaking, any corresponding dataset without geometric skews, so that the max-margin classifier would not rely on the spurious feature.

We then study the convergence rate of wsp(t)B/(winv(t) · xinv) to 0, i.e., the rate at which the ratio of the output of the spurious component to that of the invariant component converges to its corresponding max-margin value. We show that the convergence rate is Θ(1/ln t), crucially scaled by an extra factor that monotonically increases in [0,∞) as a function of the spurious correlation p ∈ [0.5, 1), thus capturing slower convergence for larger spurious correlation. Another notable aspect of our result is that when there is no spurious correlation (p = 0.5), both the upper and the lower bound reduce to 0, indicating quick convergence. We provide the full statement and proof of this bound in App B.2. For completeness, we also provide a more precise analysis of the dynamics for a 2D setting under both exponential and logistic loss in Theorem 5 and Theorem 6 in App B.2.

Theorem 2. (informal) Let H be the set of linear classifiers h(x) = winv · xinv + wsp · xsp. Then, for any easy-to-learn task, and for any dataset without geometric skews, continuous-time gradient descent training of winv(t) · xinv + wsp(t) · xsp to minimize the exponential loss satisfies:

Ω( ln[(1 + p)/(1 + √(p(1 − p)))] / ln t ) ≤ wsp(t)B/|winv(t) · xinv| ≤ O( ln[p/(1 − p)] / ln t ),

where p := PrDtrain[xsp · y > 0] ∈ [0.5, 1).

The intuition behind this failure mode is that in the initial epochs, when the loss exp(−y·w·x) is more or less the same on all points, the updates (1/|S|) · Σ_{(x,y)∈S} y·x·exp(−y·w·x) roughly push along the direction (1/|S|) · Σ_{(x,y)∈S} y·x. This is the precise (mis)step where gradient descent "absorbs" the spurious correlation, as this step pushes wsp along pB − (1 − p)B = (2p − 1)B. While this update would be near-zero when there is only a little spurious correlation (p ≈ 0.5), it takes larger values for larger levels of spurious correlation (p ≈ 1). Unfortunately, under exponential-type losses, the gradients decay with time, and so the future gradients, even if they eventually get rid of this absorbed spurious component, take an exponentially long time to do so.

Broader empirical implications. We now demonstrate the effect of statistical skews in more general empirical settings consisting of a non-linear easy-to-learn task learned using a neural network.

6The overall accuracy on CIFAR10 is low because, even though |Scon| = 50k and |Sexp| = 455k, the number of unique samples here is just 5k. See App C.4.2 for more explanation.

Isolating the statistical skew effect is, however, challenging in practice: any gradient-descent-trained model is likely to be hurt by both statistical skews and geometric skews, and we would have to somehow disentangle the two effects. We handle this by designing the following experiment. We first create a control dataset Scon where there are no geometric or statistical skews. For this, we take a set of images Sinv (with no spurious features), and create two copies of it, Smaj and Smin, where we add spurious features positively and negatively aligned with the label, respectively, and define Scon = Smaj ∪ Smin. Next, we create an experimental dataset Sexp with a statistical skew in it. We do this by taking Scon and duplicating Smaj in it so that the ratio |Smaj| : |Smin| becomes 10 : 1. Importantly, this dataset has no geometric skews, since merely replicating points does not affect the geometry (a minimal sketch of this construction follows).
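The sketch below uses hypothetical numpy stand-ins for the image features (it is not the paper's data pipeline); the key point it makes explicit is that Sexp differs from Scon only by replicating existing majority points, so it adds a statistical skew without any geometric one.

```python
# Illustrative sketch of the control/experimental dataset construction.
import numpy as np

rng = np.random.default_rng(0)
x_inv = rng.normal(size=(1000, 20))          # stand-in invariant features
y = rng.choice([-1.0, 1.0], size=1000)

def with_spurious(x_inv, y, sign, B=1.0):
    """Append a spurious coordinate equal to sign * y * B."""
    return np.hstack([x_inv, (sign * y * B)[:, None]])

S_maj = with_spurious(x_inv, y, sign=+1.0)   # x_sp * y > 0
S_min = with_spurious(x_inv, y, sign=-1.0)   # x_sp * y < 0
S_con = np.vstack([S_maj, S_min])            # 1:1 split, no skews
y_con = np.concatenate([y, y])

S_dup = np.repeat(S_maj, 9, axis=0)          # duplicate majority points 9x
S_exp = np.vstack([S_con, S_dup])            # |S_maj| : |S_min| = 10 : 1
y_exp = np.concatenate([y_con, np.repeat(y, 9)])
```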
Then, if we were to observe that (stochastic) gradient descent on Sexp results in greater spurious-feature-reliance than on Scon, we would have isolated the effect of statistical skews.

Indeed, we demonstrate this in two easy-to-learn tasks. First, we consider a Binary-MNIST task learned by a fully-connected network. Here, we concatenate a spurious channel where all the pixels are either "on" or "off". Second, we consider a (multiclass) CIFAR10 task (where we add a spuriously colored line) learned using a ResNet (He et al., 2016). In Fig 3, we demonstrate that training on Sexp leads to less robust models than training on Scon in both these tasks. In other words, gradient descent on Sexp leads to greater spurious-feature-reliance, thus validating the effect of statistical skews in practice. More details are provided in App C.4.

6 CONCLUSIONS AND FUTURE WORK

We identify that spurious correlations during training can induce two distinct skews in the training set, one geometric and another statistical. These skews result in two complementary ways by which empirical risk minimization (ERM) via gradient descent is guaranteed to rely on those spurious correlations. At the same time, our theoretical results (in particular, the upper bounds on the spurious component of the classifier) show that when these skews do disappear, there is no failure within the considered tasks. This suggests that, within the class of easy-to-learn tasks and for gradient-descent-trained linear models, the above discussion likely captures all possible failure modes.

However, when we do venture into the real world to face more complicated tasks and use non-linear, deep models, many other kinds of failure modes would crop up (such as the ones we enumerate in Sec 3.1, in addition to the fundamental ones mentioned above). Indeed, the central message of our work is that there is no one unique mechanism by which classifiers fail under spurious correlations, even in the simplest of tasks. This in turn has a key practical implication: in order to improve our solutions to OoD generalization, it would be valuable to figure out whether or not a unified solution approach is sufficient to tackle all these failure mechanisms. While we outline some solutions in App D, we hope that the foundation we have laid in this study helps future work in better tackling out-of-distribution challenges.

Acknowledgements. We thank Hanie Sedghi for providing useful feedback on the draft.

A CONSTRAINTS AND OTHER FAILURE MODES

Here, we elaborate on how each of the constraints we impose on the easy-to-learn tasks corresponds to eliminating a particular complicated kind of failure of ERM. In particular, for each of these constraints, we construct a task that betrays that constraint (but obeys all the others) and that causes a unique kind of failure of ERM. We hope that laying these out concretely can provide a useful starting point for future work to investigate these various failure modes. Finally, it is worth noting that all the failure modes here can be explained via a geometric argument, and furthermore the failure modes for Constraint 5 and Constraint 4 can be explained as a special case of the argument in Theorem 1.

Failure due to weakly predictive invariant feature (breaking Constraint 1). We enforced in Constraint 1 that the invariant feature be fully informative of the label.
For a setting that breaks this constraint, consider a 2D task with noisy invariant features, where across all domains we have a Gaussian invariant feature of the form xinv ∼ N(y, σinv²); this sort of noisy invariant feature was critically used in Tsipras et al. (2019); Sagawa et al. (2020b); Arjovsky et al. (2019) to explain failure of ERM. Now, assume that during training we have a spurious feature xsp ∼ N(y, σsp²) (say with relatively larger variance, while positively correlated with y). Then, observe that the Bayes optimal classifier on Dtrain is sgn(xinv/σinv² + xsp/σsp²), i.e., it must rely on the spurious feature. However, also observe that if one were to eliminate noise in the invariant feature by setting σinv → 0 (thus making the invariant feature perfectly informative of the label, like in our MNIST example), the Bayes optimal classifier approaches sgn(xinv), thus succeeding after all.

Failure due to "unstable" invariant feature (breaking Constraint 2). If during test time we push the invariant features closer to the decision boundary (e.g., partially occlude the shape of every camel and cow), test accuracy will naturally deteriorate, and this embodies a unique failure mode.

Concretely, consider a domain generalization task where across all domains xinv ≥ −0.5 determines the true boundary (see Fig 4a). Now, assume that in the training domains we see xinv = 2y or xinv = 3y. This would result in learning a max-margin classifier of the form xinv ≥ 0. Now, during test time, if one were to provide "harder" examples that are closer to the true boundary, in that xinv = −0.5 + 0.1y, then all the positive examples would end up being misclassified.

Failure due to complex conditional dependencies (breaking Constraint 3). This constraint imposed xinv ⊥ xsp | y. Stated more intuitively, given that a particular image is that of a camel, this constraint captures the fact that knowing the background color does not tell us too much about the precise shape of the camel. An example where this constraint is broken is one where xinv and xsp share a neat geometric relationship during training, but not during testing, which then results in failure as illustrated below.

Consider the example in Fig 4b, where for the positive class we set xinv + xsp = 1 and for the negative class we set xinv + xsp = −1, thereby breaking this constraint. Now, even though the invariant feature in itself is fully informative of the label while the spurious feature is not, the max-margin decision boundary here is parallel to the line xinv + xsp = c, and the classifier is therefore reliant on the spurious feature.

Failure mode due to high-dimensional spurious features (lack of Constraint 4). Akin to the setting in Tsipras et al. (2019), albeit without noise, consider a task where the spurious feature has D different co-ordinates, Xsp = {−1, 1}^D, and the invariant feature just one, Xinv = {−1, +1}. Then, assume that the ith spurious feature xsp,i independently takes the value y with probability pi and −y with probability 1 − pi, where without loss of generality pi > 1/2. Here, with high probability, all datapoints in S can be separated simply by summing up the spurious features (given D is large enough). Then, we argue that the max-margin classifier would provide some non-zero weight to this direction because it helps maximize its margin (see visualization in Fig 4d).

One way to see why this is true is by invoking a special case of Theorem 1.
In particular, if we define ∑ xsp,i to be a single dimensional spurious feature xsp, this feature satisfies xsp · y > 0 on all training points. In other words, this is an extreme scenario with no minority group. Then, Theorem 1 would yield a positive lower bound on the weight given to xsp, explaining why the classifier relies on the spurious pixels. For the sake of completeness, we formalize all this discussion below:\nProposition 1. Let c be a constant such that for all i, pj > 12 + c 2 . Let D be sufficiently large so that D ≥ 12c √\n2 ln mδ where m is the number of training datapoints in S. Then, w.h.p. of 1− δ over the draws of S, the max-margin classifier corresponds is of the form winvxinv + wspxsp where:\n‖wsp‖ winv\n≥ c √ D\n2 .\nProof. First, we’ll show that on S there exists a classifier that relies purely on the spurious features to separate the data. In particular, consider w′sp where the ith dimension is 1/ √ D if pi > 1/2, and −1/ √ D otherwise. By the Hoeffding’s inequality, we have that with high probability 1 − δ, on all\nthe m training datapoints, yw′sp · xsp ≥ 1√D ∑ (2pi − 1)− √ 2 D ln m δ ≥ c 2 √ D.\nNow, for the max-margin classifier, assume that w2inv = α. Further, assume that the margin contributed by wspxsp + b equals √ 1− α2m for some m. Observe that m must satisfy m ≥ c2 √ D (as\notherwise, we can replace wsp with √\n1− α2w′sp to achieve a better margin). Now, for the resulting margin to be maximized, α must satisfy α√\n1−α2 = 1 m .\nFailure mode due to hard-to-separate features (lack of Constraint 5) Here we assumed that the feature space can be orthogonally decomposed into invariant and spurious features. Now let us imagine a 2D task, visualized in Fig 4c where this is not respected in that each datapoint is written as (xinv, xinv + xsp), assuming that xinv = y and xsp ∈ {−0.5, 0.5}. A practical example of this sort of structure is the example in Fig 2d where we add a line to the last channel of CIFAR10, and then vary the brightness of the line during test-time.\nTo understand why failure occurs in this 2D example, observe that, regardless of the correlation between xsp and y, we’d have that (xinv + xsp) · y > 0. In other words, the second co-ordinate is fully informative of the label. The max-margin classifier, due to its bias, relies on both the first and the second co-ordinate to maximize its margin i.e., the classifier would be of the form w1xinv + w2(xinv + xsp) where w2 > 0. (Again, like in our discussion of the failure mode of Constraint 4, we can argue this via Thm 1 by considering xinv +xsp itself as a spurious feature, and by observing that there’s no minority group here.) Hence, by assigning a positive weight to the second co-ordinate it inadvertently becomes susceptible to the spurious feature that may shift during testing." }, { "heading": "B PROOFS", "text": "" }, { "heading": "B.1 PROOF OF THEOREM 1 ON FAILURE DUE GEOMETRIC SKEWS", "text": "Below we provide a proof for our result analyzing the failure mode arising from geometric skews in the data.\nRecall that given a dataset S, where the spurious feature can take only values in {−B,+B}, we partitioned S into two subsets Smaj and Smin where in Smaj the points satisfy xsp · y > 0 and in Smin the points satisfy xsp · y < 0. Next, we define two key notations. First, for any dataset T ⊆ S, let v(T ) ∈ Xinv denote a least-norm vector (purely in the invariant space) that achieves a margin of at least 1 on all datapoints in T (we’ll define this formally a few paragraphs below). 
Similarly, let ṽ(T) ∈ Xinv denote a least-norm vector that achieves a margin of at least 1 on T, and a margin of at least 0 on S \ T (again, the full definition will follow shortly). While by definition ‖v(T)‖ ≤ ‖ṽ(T)‖, we can informally treat these quantities as the same since empirically ‖v(T)‖ ≈ ‖ṽ(T)‖. We show these plots in Sec C.1 for both MNIST and CIFAR10. But importantly, both these quantities grow with the size of T. Then, by virtue of the small size of the minority group Smin, we can say that ‖v(Smin)‖ is smaller than both ‖v(Smaj)‖ and ‖v(S)‖. We refer to this gap as a geometric skew. When this skew is prominent enough (e.g., ‖v(Smin)‖/‖v(S)‖ ≈ 0), our result below argues that the spurious component in the overall max-margin classifier must be sufficiently large (and positive). On the flip side, we also show that when the skew is negligible enough (e.g., ‖v(Smin)‖/‖v(S)‖ ≈ 1), the spurious component has to be sufficiently small.

To better visualize these bounds, we write them as bounds on |Bwsp| (i.e., on |wsp · xsp| rather than |wsp|). Then we can think of a lower bound of the form |Bwsp| ≳ 1 as demonstrating serious failure, as a shift in the correlation can adversely reduce the original margin of ≈ 1.

Before we state the result, for clarity, we state the full mathematical definitions of v and ṽ as follows. For any T ⊆ S:

v(T), b(T) = argmin over winv ∈ Xinv, b of ‖winv‖²
s.t. y(winv · xinv + b) ≥ 1 for all ((xinv, xsp), y) ∈ T

ṽ(T), b̃(T) = argmin over winv ∈ Xinv, b of ‖winv‖²
s.t. y(winv · xinv + b) ≥ 1 for all ((xinv, xsp), y) ∈ T
     y(winv · xinv + b) ≥ 0 for all ((xinv, xsp), y) ∈ S \ T

Using these notations, we state our full theorem and provide its proof below.

Theorem 3. Let H be the set of linear classifiers, h(x) = winv · xinv + wsp · xsp + b. Let the geometric skews in a dataset S be quantified through the terms κ1 := ‖v(Smin)‖/‖v(S)‖, κ2 := ‖v(Smin)‖/‖v(Smaj)‖, κ̃1 := ‖ṽ(Smin)‖/‖ṽ(S)‖ and κ̃2 := ‖ṽ(Smin)‖/‖ṽ(Smaj)‖. Then for any task satisfying all the constraints in Sec 3.1, the max-margin classifier satisfies the following inequalities (where for readability we use c1 := 1/(2‖ṽ(S)‖B) and c2 := 1/(2‖ṽ(Smaj)‖B)):

Bwsp ≥ max( 1 − 2√(κ̃1 + c1²), 0 )  if κ̃2 ≤ √(1/4 − c2²), and

|Bwsp| ≤ min( 1/κ1 − 1, B‖v(S)‖ )  if κ2 ≤ 1.

For readability, it helps to think of c1 and c2 as small constants here (also see the remark below). Furthermore, for readability, one can also imagine that all the κ terms are numerically similar to each other.

We make a few remarks below before providing the proof.

Remark 1. For the lower bound on wsp to be positive, we need c1 and c2 to be small. This would be true when either B or ‖v(S)‖ (or ‖ṽ(Smaj)‖) is sufficiently large. This is intuitive: after all, if B is too small (say 0), there is effectively no spurious feature and therefore the max-margin classifier has no incentive to use it; similarly, if ‖v(S)‖ is too small (say 0), then the max-margin classifier has no incentive to use any feature besides the invariant feature, which is already quite cheap.

Remark 2. The above result is not intended to be a numerically tight upper/lower bound. In fact, the proof can be tightened in numerous places, which we however avoid in order to keep the result and the proof simple. The bound is rather meant to be instructive of the effect of the geometric skew (i.e., the gap between the max-margin norms on the minority and whole/majority dataset) on the spurious component.

Proof. We present the proof of the lower bound first, followed by the upper bound.

Proof of lower bound.
First, we will show that there exists a classifier of norm 1 that relies on the spurious feature to create a sufficiently large margin. We’ll let this classifier be of the form α ṽ(Smin)‖ṽ(Smin)‖ · xinv + √ 1− α2xsp + αbmin. By the definition of ṽ(Smin), the margin of this classifier\non any datapoint in Smin is at least α‖ṽ(Smin)‖ − √\n1− α2B. Again, by the definition of ṽ(Smin), the margin on Smaj is at least √ 1− α2B. Let us pick an α such that these two quantities are equal. Such\nan α would satisfy α√ 1−α2 = 2‖ṽ(Smin)‖B. By plugging this back, we get that the resulting margin of this classifier on the whole dataset S is at least B√ 1+4‖ṽ(Smin)‖2B2 . In other words, this also means the least norm classifier w with a margin of at least 1 on S has its norm upper bounded as:\n‖w‖ ≤ √\n1 + 4‖ṽ(Smin)‖2B2 B . (1)\nNow, assume that w (i.e., the least norm classifier with a margin of 1 on S) is of the form winvxinv + wspxsp + b. First, we derive a lower bound on |wsp|, and after that we’ll show that wsp > 0. For this we’ll consider two cases, one where |wsp| ≥ 1B (and so we already have a lower bound) and another case where |wsp| < 1B . In the latter case, we will need the invariant part of the max-margin classifier, namely winvxinv + b, to have to have a margin of at least 1 − |wsp|B on S; if it were any lesser, the contribution from the spurious component which is at most |wspxsp| will be unable to bump this margin up to 1. Now, for the invariant part of the max-margin classifier to have a margin of at least 1− |wsp|B (a non-negative quantity since |wsp| ≤ 1/B) on S, the norm of the (invariant part of the) classifier must be at least (1 − |wsp|B)‖v(S)‖ (which follows from the definition of v(S)). This implies,\n‖w‖ ≥ (1− |wsp|B)‖v(S)‖. (2)\nCombining this with Eq 1, we get (1− |wsp|B)‖v(S)‖ ≤ √ 1+4‖ṽ(Smin)‖2B2 B . Rearranging this gives\nus the bound that B|wsp| ≥ 1 − 2 √ ‖ṽ(Smin)‖2 ‖v(S)‖2 + 1 4B2‖v(S)‖2 . Finally note that v(S) is the same as ṽ(S). Hence we can interchange these terms in the final result (which we’ve done for readability) to arrive at |Bwsp| ≥ 1− 2 √ κ̃1 + c21).\nWhat remains now is to show that wsp > 0. For this we do the same argument as above but with a slight modification. First, if wsp > 1/B, we are done. So, assume that wsp ≤ 1/B. Then, we can say that the invariant part of the max-margin classifier i.e., winvxinv+b, must achieve a margin of 1−wspB (a non-negative quantity since wsp ≤ 1/B) specifically on Smaj. Then, by the definition of v(Smaj), it follows that the norm of our overall max-margin classifier must be at least (1− wspB)‖v(Smaj)‖. Again, for the overall classifier to be the max-margin classifier, we need (1 − |wsp|B)‖v(Smaj)‖ ≤√\n1+4‖ṽ(Smin)‖2B2 B , which when rearranged gives us Bwsp ≥ 1 − √ 4B2‖ṽ(Smin)‖2+1 B‖ṽ(Smaj)‖ . The R.H.S is\nat least 0 when 14 ( 1− 1‖ṽ(Smaj)‖2B2 ) ≥ ‖ṽ(Smin)‖ 2 ‖ṽ(Smaj)‖2 (i.e., √ 1 4 − c 2 2 ≥ κ̃2). In other words when\nκ̃2 ≤ √ 1 4 − c 2 2, we have wsp ≥ 0.\nProof of upper bound. The spurious component of the classifier winvxinv + wspxsp + b positively contributes to the margin of one of the groups (i.e., one of Smin and Smaj), and negatively contributes to the other group, depending on the sign of wsp. On whichever group the spurious component negatively contributes to the margin, the invariant part of the classifier, winvxinv + b, must counter this and achieve a margin of 1 + |wsp|B. To manage this, we’d require ‖winv‖ ≥ (1 + |wsp|B) min(‖v(Smin)‖, ‖v(Smaj)‖). 
In other words, for the overall max-margin classifier w, we have:\n‖w‖ ≥ (1 + |wsp|B) min(‖v(Smin)‖, ‖v(Smaj)‖). (3)\nAt the same time, we also know from the definition of v(S) that\n‖w‖ ≤ ‖v(S)‖. (4)\nCombining the above two equations, we can say (1+|wsp|B) min(‖v(Smin)‖, ‖v(Smaj)‖) ≤ ‖v(S)‖. Since we are given κ2 ≤ 1, it means that min(‖v(Smin)‖, ‖v(Smaj)‖) = ‖v(Smin)‖, this simplifies to (1 + |wsp|B)‖v(Smin)‖ ≤ ‖v(S)‖, which when rearranged reaches the result Bwsp ≤ 1κ1 − 1.\nTo get the other upper bound here, observe that for winvxinv +wspxsp + b to be the overall min-norm classifier, its `2 norm, which is lower bounded by |wsp| must not be larger than the `2 norm of v(S). Hence |wsp| ≤ ‖v(S)‖." }, { "heading": "B.2 PROOF OF THEOREM 2 ON FAILURE DUE TO STATISTICAL SKEWS", "text": "Below we state the full form of Theorem 2 and its proof demonstrating the effect of statistical skews in easy-to-learn tasks. After that, we’ll present a more precise analysis of the same in a 2D setting in Theorem 5 (for exponential loss) and in Theorem 6 (for logistic loss).\nOur result below focuses on any easy-to-learn task and on a corresponding dataset where there are no geometric skews. Specifically, we consider a dataset where the invariant features have the same empirical distribution in both the majority subset (where xsp · y > 0) and the minority subset (where xsp · y < 0). As a result, in this setting the max-margin classifier would not rely on the spurious feature. This allows us to focus on a setting where we can isolate and study the effect of statistical skews.\nFor the sake of convenience, we focus on the exponential loss and under infinitesimal learning rate, and a classifier initialized to the origin.\nTheorem 4. (full form of Theorem 2 ) Let H be the set of linear classifiers, h(x) = winvxinv+wspxsp. Consider any task that satisfies all the constraints in Section 3.1. Consider a dataset S drawn fromD such that the empirical distribution of xinv given xsp · y > 0 is identical to the empirical distribution of xinv given xsp · y < 0. Let winv(t)xinv + wsp(t)xsp be initialized to the origin, and trained with an infinitesimal rate to minimize the exponential loss on a dataset S. Then, for any (x, y) ∈ S, we have:\nΩ ln c+pc+√p(1−p) M ln(t+ 1) ≤ wsp(t)B |winv(t) · xinv| ≤ O ( ln p1−p ln(t+ 1) ) where:\n• p denotes the empirical level of spurious correlation, p = 1|S| ∑\n(x,y)∈S 1[xsp ·y > 0] which without generality is assumed to satisfy p ∈ [0.5, 1).\n• M denotes the maximum value of the margin of the max-margin classifier on S i.e.,M = maxx∈S ŵ · x where ŵ is the max-margin classifier on S.\n• c := 2(2M−1)B2\nProof. Throughout the discussion, we’ll denote winv(t) and wsp(t) as just winv and wsp for readability.\nLet Smin and Smaj denote the subset of datapoints in S where xsp · y < 0 and xsp · y > 0 respectively. Let D̂inv denote the uniform distribution over xinv induced by drawing x uniformly from Smin. By the assumption of the theorem, this distribution would be the same if x was drawn uniformly from Smaj. Then, the loss function that is being minimized in this setting corresponds to:\nL(winv, wsp) = pExinv∼D̂inv [ e−(winv·xinv+wspB) ] + (1− p)Exinv∼D̂inv [ e−(xinv·winv−wspB) ] ,\nwhere p ∈ [0.5, 1). Here, the first term is the loss on the majority dataset (where xsp = yB) and the second term is the loss on the minority dataset (where xsp = −yB). 
The update on wsp can be written as:\nẇsp = Exinv∼D̂inv [ e−winvxinv ] · B · ( pe−wspB − (1− p)ewspB ) To study the dynamics of this quantity, we first bound the value of winv(t)xinv.\nBounds on winv(t)xinv The result from Soudry et al. (2018) states that we can write w(t) = ŵ ln(1 + t) + ρ(t) where ŵ is the max-margin classifier and ρ is a residual vector that is bounded as ‖ρ(t)‖2 = O(ln ln t). Since the max-margin classifier here is of the form ŵ = (ŵinv, 0) (i.e., it only relies on the invariant feature), we can infer from this that winv(t) = ŵinv ln(1 + t) + ρ †(t) where again ‖ρ†(t)‖2 = O(ln ln t). For a sufficiently large t, we can\nsay that ln ln t ln(1 + t). This would then imply that for all x ∈ S, |winv(t) · xinv| ∈ [0.5ŵinvxinv(t) ln(1 + t), 2ŵinvxinv(t) ln(1 + t)]. Since the max-margin classifier has a margin between 1 andM on the training data, this implies that, for a sufficiently large t and for all x ∈ S:\n|winv(t) · xinv| ∈ [0.5 ln(1 + t), 2M ln(1 + t)].\nNext, we bound the dynamics of wsp.\nUpper bound on wsp. To upper bound wsp, we first note that ẇsp = 0 only when wsp = 12B ln p 1−p . Furthermore, ẇsp is a decreasing function in wsp. Hence, for any value of wsp that is less than 1 2B ln p 1−p , ẇsp ≥ 0 and for any that is greater than this value, ẇsp ≤ 0. So, we can conclude that when the system is initialized at 0, it can never cross the point wsp = 12B ln p\n1−p . In other words, for all t, wsp(t) ≤ 12B ln p 1−p . Combining this with the lower bound on winv(t), we get the desired result.\nLower bound on wsp. We lower bound ẇsp via the upper bound on wsp as: ẇsp ≥ Exinv∼D̂inv [ e−winvxinv ] · B · ( pe−wspB − (1− p) p\n1− p ) = Exinv∼D̂inv [ e−winvxinv ] · B · ( pe−wspB − √ p(1− p) ) .\nNext, since we have that for all x ∈ S, |winv · xinv| ≤ 2M ln(t+ 1):\nẇsp ≥ 1 (t+ 1)2M B · ( pe−wspB − √ p(1− p) ) .\nRearranging this and integrating, we get:∫ wsp 0 1 pe−wspB − √ p(1− p) dwsp ≥ ∫ t 0 B 1 (1 + t)2M dt,\n(Since 2M≥ 2, we can integrate the right hand side as below)\n− ln(p− ewspB\n√ p(1− p))\nB √ p(1− p)\n+ ln(p−\n√ p(1− p))\nB √ p(1− p)\n≥ B 2M− 1\n( 1− 1\n(1 + t)2M−1\n) ,\nsince for a sufficiently large t, the final paranthesis involving t will at least be half,\nln\n √\np 1−p − 1√ p\n1−p − ewspB\n ≥ 1 2 √ p(1− p)B2 2M− 1 ,\nwe can further lower bound the right hand side by applying the inequality x ≥ ln(x+1) for positive x,\nln\n √\np 1−p − 1√ p\n1−p − ewspB\n ≥ ln(1 + 1 2 √ p(1− p)B2 2M− 1 ) .\nTaking exponents on both sides and rearranging,\newspB ≥ √ p\n1− p −\n√ p\n1−p − 1\n1 + 12 √ p(1−p)B2 (2M−1)\nwsp ≥ 1\nB ln\n2(2M−1) B2 + p\n2(2M−1) B2 +\n√ p(1− p) .\nCombining this with the upper bound on winv(t), we get the lower bound on wsp(t)/winv(t)." }, { "heading": "B.3 PRECISE ANALYSIS OF STATISTICAL SKEWS FOR A 2D SETTING UNDER EXPONENTIAL LOSS", "text": "We now consider the 2D dataset D2-dim considered in the main paper, with the spurious feature scale set as B = 1, and provide a more precise analysis of the dynamics under exponential loss. This analysis is provided for the sake of completeness as the proof is self-contained and does not rely on the result of Soudry et al. (2018); Ji & Telgarsky (2018). In the next section, we perform a similar analysis for logistic loss. Theorem 5. 
Under the exponential loss with infinitesimal learning rate, a linear classifier winv(t)xinv + wsp(t)xsp initialized to the origin and trained on D2-dim with B = 1 satisfies:\nln ((1+2p)/(3−2p)) ln(1 + 3 max(t, 1)) ≤ wsp(t) winv(t) ≤ ln ( p/(1−p)) ln(1 + 2t) , where p := PrD2-dim [xsp · y > 0] ∈ [0.5, 1].\nProof. Throughout the proof, we’ll drop the argument t from winv(t) and wsp(t) for convenience.\nThe loss function that is being minimized in this setting corresponds to:\nL(winv, wsp) = pe −(winv+wsp) + (1− p)e−(winv−wsp),\nwhere p ≥ 0.5. Here, the first term is the loss on the majority dataset (where xsp = yB) and the second term is the loss on the minority dataset (where xsp = −yB). Now the updates on winv and wsp are given by:\nẇinv = pe −(winv+wsp) + (1− p)e−(winv−wsp)\nẇsp = pe −(winv+wsp) − (1− p)e−(winv−wsp),\nwhich means:\nd(winv + wsp)\ndt = 2pe−(winv+wsp)\nd(winv − wsp) dt = 2(1− p)e−(winv+wsp)\nThus, by rearranging and integrating we get:\nwinv + wsp = ln(1 + 2pt)\nwinv − wsp = ln(1 + 2(1− p)t) winv = 0.5(ln(1 + 2pt) + ln(1 + 2(1− p)t)) wsp = 0.5(ln(1 + 2pt)− ln(1 + 2(1− p)t)).\nNow let us define β(t) = wsp/winv:\nβ(t) := wsp(t)\nwinv(t) = ln(1 + 2pt)− ln(1 + 2(1− p)t) ln(1 + 2pt) + ln(1 + 2(1− p)t) . (5)\nTo bound this quantity, we’ll consider two cases, t ≥ 1 and t < 1. First let us consider t ≥ 1. We begin by noting that the numerator wsp(t) is increasing with time t. This is because,\nwsp(t) = ln 1 + 2pt\n1 + 2(1− p)t\n= ln ( 1 +\n2(2p− 1)t 1 + 2(1− p)t ) = ln ( 1 +\n2(2p− 1) 1 t + 2(1− p)\n) .\nHere, the term 2(2p−1)1 t+2(1−p)\nis increasing with t due to the fact that the numerator is non-negative (p ≥ 0.5) and the denominator is decreasing with t. So, given that wsp(t) is increasing, we can say that for all t ≥ 1:\nβ(t) ≥ wsp(1)\nwinv(t) = ln 1+2p3−2p ln(1 + 2(1− p)t) + ln(1 + 2pt) ≥ ln 1+2p3−2p ln(1 + 3t) .\nHere we have used the fact that the denominator can be upper bounded as ln(1+2(1−p)t)+ln(1+ 2pt) ≤ ln(1 + 2t+ 4(1− p)pt) ≤ ln(1 + 3t).\nNow, for any t ≤ 1, we can show that β(t) ≥ β(1) = ln 1+2p 3−2p ln(3+4(p−p2)) ≥ ln 1+2p3−2p\nln 4 . This follows if we can show that β(t) is decreasing for t ≥ 0. Taking its derivative with respect to time, we get:\nβ̇ = (ln(1 + 2pt) + ln(1 + 2(1− p)t))\n( 2p\n1+2pt − 2(1−p) (1+2(1−p)t) ) (ln(1 + 2pt) + ln(1 + 2(1− p)t))2\n− (ln(1 + 2pt)− ln(1 + 2(1− p)t))\n( 2p\n(1+2pt) + 2(1−p) 1+2(1−p)t ) (ln(1 + 2pt) + ln(1 + 2(1− p)t))2\n=2 · ln(1 + 2(1− p)t) 2p1+2pt − ln(1 + 2pt) 2(1−p) 1+2(1−p)t\n(ln(1 + 2pt) + ln(1 + 2(1− p)t))2\n=2 · ln(1 + 2(1− p)t) 11 1 2p +t − ln(1 + 2pt) 11 1 2(1−p) +t\n(ln(1 + 2pt) + ln(1 + 2(1− p)t))2\nThe sign of the above quantity is equal to the sign of:\nln(1 + 2(1− p)t) (\n1\n2(1− p) + t\n) − ln(1 + 2pt) ( 1\n2p + t ) = ln ( 1\n2(1− p) + t\n)( 1\n2(1− p) + t\n) + ln(\n1\n2(1− p) )\n( 1\n2(1− p) + t ) ︸ ︷︷ ︸\n:=f( 12(1−p) ) − ln ( 1\n2p + t\n)( 1\n2p + t\n) − ln 1\n2p\n( 1\n2p + t ) ︸ ︷︷ ︸\n:=f( 12p ) Now, we show that f(x) = (x+ t) ln(x+ t)− (x+ t) lnx = (x+ t) ln ( 1 + tx ) is a non-increasing function:\nf ′(x) = ln ( 1 + t\nx\n) + x+ t\n1 + tx · −t x2\n= ln ( 1 + t\nx ) − t x\n≤ t x − t x ≤ 0.\nNow since p ≥ 0.5, and f is non-increasing, f (\n1 2(1−p)\n) − f ( 1 2p ) ≤ 0. Subsequently, β̇ ≤ 0.\nTherefore, β(t) ≥ β(1) for any t ∈ [0, 1]. Upper bound. For an upper bound on β(t), we note that since wsp(t) is always increasing wsp(t) ≤ limt→∞ wsp(t) = ln ( p\n1−p\n) . On the other hand winv(t) = ln(1 + 2t + 4p(1 − p)t2) ≥ ln(1 + 2t).\nCombining these inequalities, we get:\nβ(t) ≤ ln ( 1 p − 1 ) ln(1 + 2t) ." 
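As a sanity check on the closed-form dynamics derived in this proof, the following sketch integrates the gradient-flow ODE with forward Euler (the step size and horizon are arbitrary illustrative choices) and compares the result against winv + wsp = ln(1 + 2pt) and winv − wsp = ln(1 + 2(1 − p)t):

```python
# Numerical check of the closed-form dynamics under the exponential loss.
import numpy as np

p, dt, T = 0.8, 1e-3, 50.0
w_inv = w_sp = 0.0
t = 0.0
while t < T:
    g_maj = np.exp(-(w_inv + w_sp))   # loss gradient scale, majority term
    g_min = np.exp(-(w_inv - w_sp))   # loss gradient scale, minority term
    w_inv += dt * (p * g_maj + (1 - p) * g_min)
    w_sp += dt * (p * g_maj - (1 - p) * g_min)
    t += dt

pred_inv = 0.5 * (np.log(1 + 2 * p * T) + np.log(1 + 2 * (1 - p) * T))
pred_sp = 0.5 * (np.log(1 + 2 * p * T) - np.log(1 + 2 * (1 - p) * T))
print(f"simulated:   w_inv={w_inv:.4f}, w_sp={w_sp:.4f}")
print(f"closed form: w_inv={pred_inv:.4f}, w_sp={pred_sp:.4f}")
```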
}, { "heading": "B.4 ANALYSIS OF STATISTICAL SKEWS FOR A 2D SETTING UNDER LOGISTIC LOSS", "text": "While the Theorem 2 and Theorem 5 were concerned with the exponential losses, as noted in Soudry et al. (2018), the dynamics under logistic loss are similar (although harder to analyze). For the sake of completeness, we show similar results for logistic loss in the same 2D setting as Theorem 5. Theorem 6. Under the logistic loss with infinitesimal learning rate, a linear classifier winv(t)xinv + wsp(t)xsp initialized to the origin and trained on D2-dim with B = 1 satisfies for a sufficiently large t (where p := PrD2-dim [xsp · y > 0] ∈ [0.5, 1]):\nmin 1, 12 ln ( 2 3−2p ) ln(t+ 1) ≤ wsp(t) winv(t) ≤ 1 2 ln 1−p p ln(0.5t+ 1) .\nProof. Here, the loss function is of the form:\nL(winv, wsp) = p log(1 + e −(winv+wsp)) + (1− p) log(1 + e−(winv−wsp))\nwhere p ≥ 0.5. Now the updates on winv and wsp are:\nẇinv = p e−(winv+wsp)\n1 + e−(winv+wsp) + (1− p) e\n−(winv−wsp)\n1 + e−(winv−wsp)\nẇsp = p e−(winv+wsp)\n1 + e−(winv+wsp) − (1− p) e\n−(winv−wsp)\n1 + e−(winv−wsp) ,\nwhich means:\nd(winv + wsp)\ndt = 2p\ne−(winv+wsp)\n1 + e−(winv+wsp) = 2p\n1\n1 + e(winv+wsp)\nd(winv − wsp) dt = 2(1− p) e −(winv−wsp) 1 + e−(winv−wsp) = 2(1− p) 1 1 + e(winv−wsp) .\nSolving for this, we get:\nwinv + wsp + e winv+wsp = 2pt+ 1 (6)\nwinv − wsp + ewinv−wsp = 2(1− p)t+ 1. (7)\nWe first derive some useful inequalities.\nFirst, we argue that forall t,\nwsp(t) ≥ 0. (8)\nThis is because at the point where wsp(t) = 0, ẇsp(t) ≥ 2p−11+ewinv ≥ 0 (since p ≥ 0.5). Hence, the system can never reach values of wsp < 0.\nNext, we have for all t, winv(t) ∈ [0, ln(t+ 1)]. (9)\nWe can show this by summing up Eq 6 and 7\n2winv + e winv+wsp + ewinv−wsp = 2t+ 2\n=⇒ 2winv + 2 √ ewinv+wsp · ewinv−wsp ≤ 2t+ 2 =⇒ 2winv + 2ewinv ≤ 2t+ 2\nand since ẇinv ≥ 0, and winv(t) = 0, winv(t) ≥ 0,\n2ewinv ≤ 2t+ 2\nNext, we show:\nwsp(t) ≤ 1\n2 ln 2(1− p)t+ 1 2pt+ 1 ≤ 1 2 ln (1− p) p . (10)\nTo show this, we divide Eq 6 by Eq 7, to get:\nwinv + wsp + e winv+wsp winv − wsp + ewinv−wsp = 2(1− p)t+ 1 2pt+ 1\n=⇒ (2(2p− 1)t)winv + (2(2p− 1)t)wsp + (2pt+ 1)ewinv+wsp = ewinv−(2(1−p)t+1)wsp\nsince by p ≥ 0.5, Eq 8 and Eq 9 the first two terms are positive,\n=⇒ (2pt+ 1)ewinv+wsp ≤ (2(1− p)t+ 1)ewinv−wsp\n=⇒ e2wsp ≤ 2(1− p)t+ 1 2pt+ 1 .\nThis proves the first inequality. The second inequality follows from the fact that 2(1−p)t+12pt+1 is increasing with t so applying lim t→∞ gives us an upper bound. Finally, we rewrite Equation 6 and Equation 7 to get:\nwinv + wsp = ln(2pt+ 1− (winv + wsp)) (11) winv − wsp = ln(2(1− p)t+ 1− (winv − wsp). (12)\nAdding and subtracting these, we get a different form for the dynamics of these quantities:\nwinv = 0.5(ln(2pt+ 1− (winv + wsp)) + ln(2(1− p)t+ 1− (winv − wsp)) (13) wsp = 0.5(ln(2pt+ 1− (winv + wsp))− ln(2(1− p)t+ 1− (winv − wsp)). (14)\nLower bound. To prove a lower bound on wsp(t)/winv(t), we’ll first lower bound wsp. Observe that:\nwsp(t) = 1\n2 ln 2pt+ 1− (winv + wsp) 2(1− p)t+ 1− (winv − wsp)\n= 1\n2 ln\n( 1 +\n2(2p− 1)t− 2wsp 2(1− p)t+ 1− (winv − wsp)\n)\nNow, since wsp is upper bounded by a constant (Eq 10), for sufficiently large t, the numerator of the second term inside the ln will be positive, and can be lower bounded by (2p − 1)t. Then, let us consider two scenarios. Either that wsp > winv, in which case we already have a lower bound on β(t), or that wsp ≤ winv. 
In the latter case, we can lower bound the above as:\nwsp(t) ≥ 1\n2 ln\n( 1 +\n(2p− 1)t 2(1− p)t+ 1\n)\nSince the right hand side is an increasing in t, we can say that for sufficiently large t ≥ 1,\nwsp(t) ≥ 1\n2 ln\n( 1 +\n(2p− 1) 2(1− p) + 1 ) ≥ 1\n2 ln\n( 2\n3− 2p\n) .\nCombining this with Eq 9 we get for sufficiently large t, either\nwsp(t) winv(t) ≥\n1 2 ln\n( 2\n3−2p ) ln(t+ 1) .\nor wsp(t)winv(t) ≥ 1.\nUpper bound. To upper bound wsp(t)/winv(t), we’ll lower bound winv(t):\nwinv = 0.5(ln(2pt+ 1− (winv + wsp)) + ln(2(1− p)t+ 1− (winv − wsp))\nSince by Eq 9 winv(t) ∈ [0, ln(t+ 1)] and by Eq 10, wsp(t) ∈ [0, 12 ln 1−p p ], for a sufficiently large t, the linear terms within the ln terms dominate and so, for large t\nwinv ≥ (ln(0.5 · 2pt+ 1) = ln(0.5t+ 1)\nCombining this with Eq 10, we get, for large t:\nwsp(t) winv(t) ≤\n1 2 ln 1−p p\nln(0.5t+ 1) ." }, { "heading": "C MORE ON EXPERIMENTS", "text": "Common details: In all our MNIST-based experiments, we consider the Binary-MNIST classification task (Arjovsky et al., 2019) where the first five digits (0 to 4) need to be separated from the rest\n(5 to 9). Unless specified otherwise, for this we train a fully-connected three-layered ReLU network with a width of 400 and using SGD with learning rate 0.1 for 50 epochs. In all our CIFAR10-based experiments, unless stated otherwise, we consider the 10-class classification problem, and train a ResNetV1 with a depth of 20 for 200 epochs. 7 All values are averaged at least over fives runs. Finally, when we describe our datasets, we’ll adopt the convention that all pixels lie between 0 and 1," }, { "heading": "C.1 RANDOM FEATURE EXPERIMENTS FROM SECTION 3.1", "text": "For the random features based experiment on Binary MNIST in Section 3.1, we consider 50k random ReLU features i.e., xinv = ReLU(Wxraw) where W is a 50k × 784 matrix (and so this is well overparameterized for dataset sizes upto 6400). Each entry here is drawn from the normal distribution. We set the spurious feature support to be {−100, 100}, which is about 1/10th the magnitude of ‖xinv‖. We also conduct similar experiments on a two-class CIFAR10 and report similar OoD accuracy drops in Fig 5a. Here, we use the first two classes of CIFAR10, as against grouping five classes together into each of the classes. This is because on the latter dataset, the random features representation has poor in-distribution accuracy to begin with.\nC.2 INCREASING NORM EXPERIMENTS FROM SECTION 4\nThe main premise behind our geometric skews argument is that as we increase the number of datapoints, it requires greater norm for the model to fit those points. We verify this for random features models on Binary-MNIST and two-class CIFAR10 in Figs 5b, 5c.\nWhile the theory is solely focused on linear classifiers, we can verify this premise intuitively for neural network classifiers. However, for neural networks, the notion of margin is not well-defined. Nevertheless, as a proxy measure, we look at how much distance the weights travel from their initialization in order to classify the dataset completely. Such plots have already been considered in Neyshabur et al. (2017); Nagarajan & Kolter (2017; 2019) (although in the completely different context of understanding why deep networks succeed at in-distribution generalization). We present similar plots for completeness.\nFig 5d shows this for Binary-MNIST on an FNN. For CIFAR10, we conduct two experiments. Fig 5f uses a ResNet with Adam and decaying learning rate. 
Here, we observe that the norms saturate after a point, which is because of the learning rate decay. Since this does not make a fair comparison between the geometries of larger datasets and smaller datasets, we also plot this for SGD with fixed learning rate in Fig 5e to recover the increasing norms observation. Here, sometimes the model sometimes saturates at an accuracy of 99% (rather than 100%); in those cases, we report the value of the weight norm at the final epoch (namely, at the 200th epoch)." }, { "heading": "C.3 BROADER EXAMPLES OF GEOMETRIC FAILURE", "text": "We elaborate on the multiple datasets we showcased in the paper as examples where ERM fails due to a geometric skew. We first discuss the two CIFAR10 datasets, and then discuss the cats vs. dogs example, and then discuss two Binary-MNIST datasets, one that is similar to the cats vs. dogs example, and another corresponding to high-dimensional spurious features." }, { "heading": "C.3.1 CIFAR10 EXAMPLE WITH SPURIOUSLY COLORED LINE", "text": "Here we provide more details about the CIFAR-10 dataset we presented in Sec 4 and in Fig 1c. This dataset can be thought of as an equivalent of the cow-camel classification tasks but for 10 classes. For this, we use ten different values of the spurious feature, one for each class. We argue that the failure here arises from the fact that the ResNet requires greater norms to fit larger datapoints (see Fig 5e), and so a similar argument as Theorem 1 should explain failure here.\nDataset details. To create the ten-valued spurious feature, we consider a vertical line passing through the middle of each channel, and also additionally the horizontal line through the first channel. Next, we let each of these four lines take a constant value of either (0.5 ± 0.5B) where\n7Borrowing the implementation in https://github.com/keras-team/keras/blob/master/ examples/cifar10_resnet.py, without data augmentation.\nB ∈ [−1, 1] denotes a “spurious feature scale”. Since each of these lines can take two configurations, it allows us to instantiate 16 different configurations. We’ll however use only 10 of these configurations, and arbitrarily fix a mapping from those configurations to the 10 classes. For convenience let us call these ten configurations xsp,1, . . . ,xsp,10. Then, for any datapoint in class i, we denote the probability of the spurious feature taking the value xsp,j , conditioned on y, as pi,j .\nTo induce a spurious correlation, we set pi,i > 0.1, and set all other pi,j := (1−pi,i)/10. Thus, every value of the spurious feature xsp,j is most likely to occur with its corresponding class j. Finally, note that to incorporate the spurious pixel, we zero out the original pixels in the image, and replace them with the spurious pixels.\nFor the observation reported in Fig 1c, we use the value of B = 0.5 during training and testing. We set pi,i = 0.5 for all classes. This means that on 50% of the data the spurious feature is aligned with the class (we call this the ‘Majority’ group). On the remaining 50% data, the spurious feature takes one of the other 9 values at random (we call this the ‘Minority’ group)." }, { "heading": "C.3.2 CIFAR10 EXAMPLE WITH A LINE IN THE THIRD CHANNEL", "text": "Here, we elaborate on the discussion regarding the dataset in Fig 2d. In this dataset, we add a line to the last channel of CIFAR10 (regardless of the label), and vary its brightness during testing. We argue that one way to understand the failure is via the fact that the “linear mapping” Constraint 5 is broken. 
In particular, if we imagine that each channel contains the same invariant feature xinv (which is almost the case as can be seen in Fig 7), then for simplicity, we can imagine this dataset to be of the form (xinv,xinv + xsp) i.e., xinv and xsp are not orthogonal to each other. In this scenario, the second co-ordinate can still be fully predictive of the label, and therefore the max-margin classifier would rely on both the first and the second co-ordinate to maximize its margins e.g., w1 · xinv + w2(xinv +xsp)) + b where w1,w2 6= 0. Crucially, since the classifier has not quite disentangled xsp from the invariant part of the second channel, this makes the classifier vulnerable to test-time shifts on xsp. In Sec A we detail this under the failure mode of Constraint 5, and visualize this failure in Fig 4c, and also connect it to Theorem 1.\nDataset details. In Fig 6b, we visualize this dataset for multiple values of a spurious feature scale parameter, B ∈ [−4, 4]. In particular, we take the last channel of CIFAR10 and add B to the middle line of the channel. Since this can result in negative pixels, we add a value of 4 to all pixels in the third channel, and then divide all those pixels by a value of 1 + 8 so that they lie in between 0 and 1. (This normalization is the reason the color of the images differ from the original CIFAR10 dataset; as such this normalization is not crucial to our discussion.)\nMore experimental results. We run two kinds of experiments: one where we add only a vertical line to all images, and another where we add a horizontal line to 50% of the images (essentially simulating data from two different domains). We also run experiments for two different values of B during training, 0.2 and 1.0. As a “control” experiment, we completely fade out the original CIFAR image in the third channel. Then, according to our explanation, the model should not fail in this setting as data is of the form (xinv,xsp) i.e., the two features are orthogonally decomposed. We summarize the key observations from these experiments (plotted in Fig 8) here:\n1. As predicted, we observe that when trained on the “orthogonal” dataset (where the original CIFAR10 image is zerod out in the third channel), the OoD performance of the neural network is unaffected. This provides evidence for our explanation of failure in the “non-orthogonal” dataset.\n2. We observe that training on ERM on the “multidomain dataset” (Figs 8b, 8d) that has both horizontal and vertical lines, makes it more robust to test-time shifts compared to the dataset with purely vertical lines (Figs 8a, 8c).\n3. Remarkably, even though the third channel has only a very faint copy of the image (Fig 7), the classifier still learns a significant enough weight on the channel that makes it susceptible to shifts in the middle line.\n4. We note that introducing a different kind of test-time shift such as a Gaussian shift, is not powerful enough to cause failure since such a shift does not align with the weights learned, w2. However, shifts such as the line in this case, are more likely to be aligned with the classifier’s weights, and hence cause a drop in the accuracy." }, { "heading": "C.3.3 CATS VS. DOGS EXAMPLE WITH COLORS INDEPENDENT OF LABEL", "text": "Recall that the dataset from Fig 1d consists of a scenario where the images of cats vs. dogs (Elson et al., 2007) are colored independently of the label. To generate this dataset, we set the first channel to be zero. 
Then, for a particular color, we pick a value B ∈ [−1, 1], and then set the second channel of every image to be 0.5 · (1− B) times the original first channel image, and the second channel to be 0.5 · (1 + B) times the original first channel image. For the blueish images, we set B = 0.90 and for the greenish images, we set B = −0.90 (hence both these kinds of images have non-zero pixels in both the green and blue channels). We visualize these images in Fig 9.\nThen, on the training distribution, we randomly select p fraction of the data to be bluish and 1 − p fraction to be greenish as described above. On the testing distribution, we force all datapoints to be greenish. Finally, note that we randomly split the original cats vs. dogs dataset into 18000 points for training and use the remaining 5262 datapoints for testing/validation. In Fig 1d, we set 1− p = 0.001, so about ≈ 20 greenish points must be seen during training. We show more detailed results for 1− p set to 0.001, 0.01 and 0.1 in Fig 10. We observe that the OoD failure does diminish when 1− p = 0.1. Explaining this failure via an implicit spurious correlation. Peculiarly, even though there is no explicit visual spurious correlation between the color and the label here, we can still identify a different kind of non-visual spurious correlation. To reason about this, first observe that, if the two active channels (the blue and the green one) correspond to (x1,x2), then x1+x2 is a constant across all domains (and is fully informative of the label). Hence x1 + x2 can hence be thought of as an invariant feature. On the other hand, consider the feature xdiff = x1 − x2. During training time, this feature would correspond to a version of the original image scaled by a positive factor of 2B for most datapoints (and scaled by a negative factor of −2B for a minority of datapoints). A classifier that relies only on x1 + x2 to predict the label will work fine on our test distribution; but if it relies on xdiff, it is susceptible to fail.\nNow, we informally argue why an ERM-based classifier would rely on xdiff. If we were to think of this in terms of a linear classifier, we can say that there must exist a weight vector wdiff such that\ny · (wdiff · xdiff) > 0 for the majority of the training datapoints. Then, we can imagine a singledimensional spurious feature that corresponds to the component of the data along this direction i.e., xsp := wdiff · xdiff. Notably, for a majority of the datapoints in the training set we have xsp · y > 0 and for a minority of the datapoints this feature does not necessarily correlate align with the label — let’s say xsp · y < 0 for convenience. Then, this setting would effectively boil down to the geometric skew setting considered by Theorem 1. In particular, we can say that when the minority group is sufficiently small, the classifier would rely on the spurious feature xsp to make its classification. We visualize this in Fig 10d.\nThus, even though there is no color-label correlation in this dataset, there is still a spurious correlation, although manifest in a way that does not visually stand out. While this is one such novel kind of spurious correlation, the key message here is that we must not limit ourselves to thinking of spurious correlations as straightforward co-occurences between the label and a spurious object in the image.\nRemark 3. It is worth distinguishing this spurious correlation with the one in the Colored MNIST dataset (Arjovsky et al., 2019). 
In Colored MNIST, the color is correlated with the label, and one can again think of xdiff as giving rise to the spurious feature. However, the manner in which this translates to a spurious feature is different. Here, the sign of each co-ordinate in xdiff is indicative of the true label (if it is positive, it means the image resides in the first channel, and is red, which is correlated with the label). Mathematically, if we define w̃diff to be the vector of all ones, then we can think of w̃diff · xdiff as a single-dimensional spurious feature here. In our case however, the vector\nwdiff that yields the single-dimensional spurious feature is different, and is given by the direction of separation between the two classes.\nRemark 4. We observe that in this non-standard spurious correlation setting, we require the minority group to be much smaller than in standard spurious correlation setting (like CMNIST) to create similar levels of OoD failure. We argue that this is because the magnitude of the spurious feature |xsp| is much smaller in this non-standard setting. Indeed, this effect of the spurious feature magnitude is captured in Theorem 3: the lower bound on the spurious component holds only when the minority group is sufficiently small to achieve κ̃2 . √ 1/4− 1/2|xsp| i.e., when the spurious feature magnitude |xsp| is smaller, we need the minority group to be smaller. To see why |xsp| differs in magnitude between the standard and non-standard spurious correlation settings, let us To see why |xsp| differs in magnitude between the standard and non-standard spurious correlation settings, recall from the previous remark that in our setting, |xsp| = |wdiff ·xdiff|, while in standard spurious correlation settings |xsp| = wdiff · xdiff. Intuitively, w̃diff · xdiff corresponds to separating all-positive-pixel images from all-negative-pixel images, which are well-separated classes. On the other hand, wdiff ·xdiff corresponds to separating images of one real-world class from another, which are harder to separate. Therefore, assuming that both weights vectors are scaled to unit norm, we can see that |wdiff · xdiff| |w̃diff · xdiff|" }, { "heading": "C.3.4 BINARY-MNIST EXAMPLE WITH COLORS INDEPENDENT OF LABEL", "text": "Similar to the cats vs. dogs dataset, we also consider a Binary-MNIST dataset. To construct this, for each domain we pick a value B ∈ [−1, 1], and then set the first channel of every image to be 0.5 · (1 + B) times the original MNIST image, and the second channel to be 0.5 · (1 − B) times the original MNIST image. To show greater drops in OoD accuracy, we consider a stronger kind of test-time shift: during training time we set B to have different positive values so that the image is concentrated more towards the first channel; during test-time, we flip the mass completely over to the other channel by setting B = −1. Here again, we can visualize the dataset in terms of an invariant feature that corresponds to the sum of the two channels, and a spurious feature that corresponds to wdiff ·xdiff. The exact visualization of failure here is slightly different here since we’re considering a stronger kind of shift in this setting. In particular, during training we’d have y ·(wdiff ·xdiff) > 0 for all the training datapoints in this setting. Thus, we can think of this as a setting with no minority datapoints in the training set (see Fig 12a). Then, as a special case of Theorem 1, we can derive a positive lower bound on the component of the classifier along xsp. 
However, at test time, since xdiff is no longer a positively scaled version of the MNIST digits, the value of wdiff · xdiff is no longer informative of the label. This leads to an overall drop in the accuracy.

Experimental results. We conduct two sets of experiments: one where we use only a single domain to train, and another with two domains (with two unique positive values of B). In both variations, we also consider a control setting where the data is not skewed: in the single-domain experiment, this means that we set B = 0 (both channels have the same mass); in the two-domain experiment, this means that we set B to be positive in one domain and negative in another. According to our explanation above, in the single-domain control setting (Fig 12), since xdiff = 0, the classifier is likely to not rely on this direction and should be robust to test-time shifts in xdiff. In the two-domain control setting, since the classifier also sees negatively scaled images in xdiff, it should be relatively robust to such negative scaling during test time. Indeed, we observe in Fig 12 that the performance on the non-skewed, control datasets is robust. On the other hand, on the skewed datasets, we observe greater drops in OoD accuracy when B is flipped, thus validating our hypothesis.

C.3.5 BINARY-MNIST EXAMPLE WITH HIGH-DIMENSIONAL SPURIOUS FEATURES

In this dataset, we consider a two-channel MNIST setting where the second channel is a set of spurious pixels, each independently picked to be either 0 or 0.1 with probability 1 − p and p for positively labeled points, and with probability p and 1 − p for negatively labeled points. During training we set p = 0.55 and p = 0.60, corresponding to two training domains, and during testing, we flip this to p = 0.0. We visualize this dataset in Fig 11b. In Fig 13, we observe that the classifier drops to 0 accuracy during testing. To explain this failure, we can think of the sum of the pixels in the second channel as a spurious feature: since these features are independently picked, with high probability, their sum becomes fully informative of the label. As discussed in Section C.3.4, this boils down to the setting of Theorem 1 when there are no minority datapoints. In Appendix A, we make this argument more formal (see under the discussion of Constraint 4 in that section).

C.4 EXPERIMENTS FROM SEC 5 ON STATISTICAL SKEWS

Experiment on D2-dim. For the plot in Fig 3a, we train a linear model with no bias on the logistic loss with a learning rate of 0.001, a batch size of 32 and a training set size of 2048.

C.4.1 BINARY-MNIST EXPERIMENTS VALIDATING THE EFFECT OF STATISTICAL SKEWS

For this experiment, we consider a Binary-MNIST dataset where the second channel is set to be either all-0 or all-constant pixels. We visualize this dataset in Fig 11c.

To generate our control and experimental datasets, we do the following. First, we sample a set of Binary-MNIST images Sinv and their labels from the original MNIST distribution. Next, we create two datasets Smaj and Smin by taking each of these invariant images and appending a spurious feature to it. More precisely, we let Smin = {((xinv, xsp), y) : xsp = B(y + 1), xinv ∈ Sinv} and Smaj = {((xinv, xsp), y) : xsp = B(−y + 1), xinv ∈ Sinv}. We then define a "control" dataset Scon := Smaj ∪ Smin, which has a 1:1 split between the two groups of points.
Next, we create an experimental “duplicated” dataset Sexp := Smaj ∪ Smin ∪ Sdup where Sdup is a large dataset consisting of datapoints randomly chosen from Smaj, thus creating a spurious correlation between the label and the spurious feature. The motivation in creating datasets this way is that neither Scon and Sexp have geometric skews; however Sexp does have statistical skews, and so any difference in training on these datasets can be attributed to those skews.\nObservations. In our experiments, we let Sinv be a set of 30k datapoints, and so Scon has 60k datapoints. We duplicate Smin nine times so that Sexp has 330k datapoints and has a 10 : 1 ratio between the two groups. We consider two different settings, one where B = 0.1 during training and B = 1.0 during training. During testing, we report the accuracy under two kinds of shifts: shifting the value of B, and also shifting the correlation completely to one direction (i.e., by concentrating all mass on the minority/majority group). As reported in Fig 14, the statistically skewed dataset does suffer more during test-time when compared to the unskewed dataset." }, { "heading": "C.4.2 CIFAR10 EXPERIMENTS VALIDATING THE EFFECT OF STATISTICAL SKEWS.", "text": "For this experiment, we consider the same CIFAR-10 dataset that we design in Section C.3.1 i.e., we introduce lines in the dataset, that can take 10 different color configurations each corresponding to one of the 10 different classes. Here, the scale B of the spurious feature varies from [−1, 1] (see Section C.3.1 for more details on this).\nThe way we construct the control and experimental dataset requires a bit more care here since we have 10 classes. Specifically, we replicate Sinv ten times, and to each copy, we attach a spurious feature of a particular configuration. This creates Scon which has no geometric or statistical skews. Also, the has a size of |Scon| = 10|Sinv|. Then, we consider the 1/10th fraction of points in Scon where the spurious feature has the correct configuration corresponding to the label. We create a large duplicate copy of this subset that is 81 times larger than it (we do this by randomly sampling from that subset). We add this large duplicate set to Scon to get Sexp. This gives rise to a dataset where for\nany label, there is a 10 : 1 ratio8 between whether the spurious feature matches the label or where it takes one of the other nine configurations.\nObservations. We run experiments by setting |Sinv| = 5k and so |Scontrol| = 50k and Sexp = 455k. During training we try two different values of B, 0.1 and 1 respectively. During testing, as in the previous section, we vary both the scale of the spurious feature and also its correlation by evaluating individually on the minority dataset (where the spurious features do not match the label) and majority datasets (where the spurious features match the label). Here, again we observe that the model trained on the non-skewed dataset is less robust, evidently due to the statistical skews.\nIt is worth noting that even though there is no statistical or geometric skew in the control dataset, we observe in Fig 15 (b) that the classifier is not completely robust to shifts in the spurious feature. We suspect that this may point to other causes of failure specific to how neural network models train and learn representations." }, { "heading": "D SOLUTIONS TO THE OOD PROBLEM", "text": "Our discussion regarding geometric and statistical skews tells us about why ERM fails. 
A natural follow-up is to ask: how do we fix the effect of these skews to learn a good classifier? Below, we outline some natural solutions inspired by our insights.\nSolution for geometric skews. Our goal is to learn a margin-based classifier that is not biased by geometric skews, and therefore avoids using the spurious feature. Recall that we have a majority subset Smaj of the training set which corresponds to datapoints where xsp · y > 0 and a minority subset Smin which corresponds to datapoints where xsp · y < 0. Our insight from the geometric skews setting is that the max-margin classifier fails because it is much “harder” (in terms of ℓ2 norm) to classify the majority dataset Smaj using xinv when compared to classifying Smin using xinv. A natural way to counter this effect would be to bring the difficulty levels of these datasets closer together. In particular, we propose a “balanced” max-margin classifier wbal defined as follows (a code sketch is given at the end of this section):\n$$\mathbf{w}_{\mathrm{bal}} = \arg\max_{\|\mathbf{w}\| = 1} \; \min\Big( \underbrace{\{\mathbf{w} \cdot \mathbf{x} \mid \mathbf{x} \in S_{\mathrm{maj}}\}}_{\text{margin on majority group}} \;\cup\; \underbrace{\{c\,\mathbf{w} \cdot \mathbf{x} \mid \mathbf{x} \in S_{\mathrm{min}}\}}_{\text{downscaled margin on minority group}} \Big)$$\nwhere c is a sufficiently small constant. In words, we consider a classifier that maximizes a “balanced” margin on the dataset. The balanced margin is computed by scaling down the margins on the minority datapoints, thereby making that subset artificially harder, and thus diminishing the geometric skew. For an appropriately small choice of c, we can expect the balanced max-margin classifier to minimize its reliance on the spurious feature.\n8Here’s the calculation: in Scon, we have a subset of size |Sinv| where the spurious feature is aligned with the label, while in the remaining 9|Sinv| datapoints the spurious feature is not aligned. So, if we add 81|Sinv| datapoints with matching spurious features, we’ll have a dataset where 90|Sinv| datapoints have matching spurious features while 9|Sinv| don’t, thus creating the desired 10 : 1 ratio.\nNote that this sort of algorithm is applicable only in settings, such as those in the fairness literature, where we know which subset of the data corresponds to the minority group and which subset corresponds to the majority group.\nSolution for statistical skews. One natural way to minimize the effect of statistical skews is to use ℓ2 weight decay while learning the classifier: weight decay is known to exponentially speed up the convergence rate of gradient descent, leading to the max-margin solution in polynomial time. Note that this doesn’t require any information about which datapoint belongs to the minority group and which to the majority.\nIn the setting where we do have such information, we can consider another solution: simply oversample the minority dataset while running gradient descent. This would result in a dataset with no statistical skews, and should hence completely nullify their effect. Thus, our argument provides a theoretical justification for importance sampling for OoD generalization." },
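To make the balanced objective concrete, here is a minimal sketch of how wbal could be computed for a linearly separable dataset. It relies on the standard max-margin reformulation: maximizing the balanced margin over unit-norm w is equivalent to minimizing ‖w‖² subject to per-group margin constraints, with the minority group required to reach margin 1/c. The function name, the use of cvxpy, and the default value of c are our own illustrative choices (not from the paper), and labels y ∈ {−1, +1} are assumed to be multiplied into the margins.

```python
import cvxpy as cp
import numpy as np

def balanced_max_margin(X_maj, y_maj, X_min, y_min, c=0.1):
    """Sketch of the balanced max-margin objective above. Maximizing
    min(margins on S_maj, c * margins on S_min) over unit-norm w is
    equivalent to minimizing ||w||^2 subject to margin >= 1 on the
    majority group and margin >= 1/c on the minority group."""
    w = cp.Variable(X_maj.shape[1])
    constraints = [
        cp.multiply(y_maj, X_maj @ w) >= 1.0,      # majority margins
        cp.multiply(y_min, X_min @ w) >= 1.0 / c,  # upweighted minority margins
    ]
    cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints).solve()
    return w.value / np.linalg.norm(w.value)       # unit-norm direction
```

For c = 1 this reduces to the usual hard-margin SVM; as c shrinks, the minority constraints dominate and the solution is pushed away from directions (such as the spurious coordinate) that only help separate the majority group.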
{ "heading": "E DEMONSTRATING SKEWS ON A NON-IMAGE-CLASSIFICATION DATASET", "text": "So far, we have demonstrated our insights in the context of image classification tasks. However, our theoretical framework is abstract enough for us to apply these insights even in non-image classification tasks. To demonstrate this, we consider an obesity estimation task based on the dataset from Palechor & de la Hoz Manotas (2019).\nDataset details. The dataset consists of 16 features, including the height, weight, gender and habits of a person. The label takes one of seven different values corresponding to varying levels of obesity. We convert this to a binary classification task by considering the first three levels as one class and the last three levels as another; we ignore any datapoint from the middle level, resulting in a dataset of 1892 points. We randomly split the data to extract 729 test datapoints. The dataset also has a categorical variable corresponding to the preferred mode of transport of the individual, which we convert to five binary features (corresponding to automobile, motorbike, bike, public transport and walking). We then scale all the resulting 20 features to lie between −1 and 1. For all our experiments, we consider fitting a linear classifier directly on these features.\nNext, we construct a “biased” training set by sampling from the above training set in a way that makes the public-transport feature spuriously correlated with the label. For convenience we denote this feature as xsp and the remaining features as xinv. Note that in the original dataset there is not much correlation between xsp and y. In particular, PrDtrain[xsp · y > 0] ≈ 0.47 (a value close to 0.5 implies no spurious correlation). Indeed, a max-margin classifier w trained on the original dataset does not rely on this feature: wsp/‖w‖ is as small as 0.008. Furthermore, this classifier achieves perfect accuracy on all of the test dataset. However, as we discuss below, the classifier learned on the biased set does rely on the spurious feature.\nGeometric skews. We sample a biased training dataset that has 581 datapoints for which xsp = y and 10 datapoints for which xsp = −y. We then train a max-margin classifier on this dataset, and observe that its spurious component wsp/‖w‖ is as large as 0.12 (about 15 times larger than the corresponding component of the max-margin classifier on the original dataset). More importantly, the accuracy of this model on the test dataset where xsp = y is 99.5%, while its accuracy on a test dataset where xsp = −y is only 57.20%. In other words, the classifier learned here relies on the spurious feature, and suffers from poor accuracy on datapoints where the spurious feature does not align with the label. Why does this happen? Our theory says this must be because, as we increase the number of training datapoints, it requires a greater and greater ℓ2 max-margin norm to fit the data using only the invariant feature space (while ignoring the spurious feature). Indeed, we verify in Fig 16a that this increase does happen.\nStatistical skews. To demonstrate the effect of statistical skews, we take a similar approach as in earlier sections. We consider a dataset where we have the same number of unique datapoints in the majority group (where xsp = y) and in the minority group (where xsp = −y). However, the majority group contains many duplicates of its unique points, outnumbering the minority group. In particular, we consider two different datasets of 500 datapoints each: in one dataset, the majority group forms a 0.75 fraction of the data, and in the other it forms a 0.85 fraction. Note that in both these datasets, the max-margin classifier does not rely on the spurious feature since there is no geometric skew.\nTo demonstrate the effect of the statistical skew, we train a linear classifier to minimize the logistic loss using SGD with a learning rate of 0.01 and a batch size of 32 for as many as 10k epochs (which is well beyond the number of epochs required to fit the dataset to zero error). 
We then verify in Fig 16b that gradient descent takes a long time for the spurious component of the classifier to approach its final value, which is close to zero. This convergence, as we saw in our earlier experiments, is slower when the statistical skew is more prominent.\nImportant note. We must caution the reader that this demonstration is merely intended to showcase that our theoretical insights can help understand the workings of a classifier on a practically important, non-image classification dataset. However, we make no recommendations about how to deal with such high-risk tasks in practice. Such tasks require the practitioner to make careful ethical and social considerations which are beyond the scope of this paper. Indeed, empirical benchmarks for evaluating OoD generalization (Gulrajani & Lopez-Paz, 2020) are largely based on low-risk image classification tasks, as these provide a safe yet reasonable test-bed for developing algorithms.\nRemark on synthetic vs. natural spurious feature shifts. It would be a valuable exercise to validate our insights on datasets with naturally-embedded spurious features. However, in order to test our insights, it is necessary to have datasets where one can explicitly quantify and manipulate the spurious features; e.g., we need this power to be able to discard the spurious feature and examine the ℓ2 norms of the max-margin classifier in the invariant feature space. Currently, though, OoD research lacks such datasets: we either have MNIST-like datasets with synthetic but quantifiable spurious features, or realistic datasets like PACS (Asadi et al., 2019) and VLCS (Fang et al., 2013) where it is not even clear what the spurious feature is. The lack of datasets with natural yet quantifiable spurious features is a gap that is beyond the scope of this paper, and one worth bridging by the OoD community in the future." } ]
2021
UNDERSTANDING THE FAILURE MODES OF OUT-OF-DISTRIBUTION GENERALIZATION
SP:698104525f6955ba58aee1331a9487f77a542f13
[ "This paper proposes a dataset of tasks to help evaluate learned optimizers. The learned optimizers are evaluated by the loss that they achieve on held-out tasks after 10k steps. Using this dataset, the main strategy considered is to use search spaces that parametrize optimizers and learn a list of hyperparameter configurations for the optimizer that are tried sequentially. The authors show that the learned hyperparameter configuration list learned achieves better performance than (constrained) random search on multiple optimizer search spaces. Finally, they show that the learned hyperparameter list transfer well to realistic problems such as training a ResNet-50 model on ImageNet and training a transformer architecture on LM1B, outperforming reasonable baselines." ]
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings including ImageNet classification with Resnet50 and LM1B language modeling with transformers. As part of this work we have open-sourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.1
[]
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In OSDI,", "year": 2016 }, { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Nando de Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jimmy Ba", "Geoffrey E Hinton", "Volodymyr Mnih", "Joel Z Leibo", "Catalin Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "David Balduzzi", "Karl Tuyls", "Julien Perolat", "Thore Graepel" ], "title": "Re-evaluating evaluation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "David Balduzzi", "Marta Garnelo", "Yoram Bachrach", "Wojciech M Czarnecki", "Julien Perolat", "Max Jaderberg", "Thore Graepel" ], "title": "Open-ended learning in symmetric zero-sum games", "venue": null, "year": 1901 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Irwan Bello", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Neural optimizer search with reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Yoshua Bengio", "Samy Bengio", "Jocelyn Cloutier" ], "title": "Learning a synaptic learning", "venue": "rule. Université de Montréal, Département d’informatique et de recherche opérationnelle,", "year": 1990 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "James S. Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "Advances in Neural Information Processing Systems", "year": 2011 }, { "authors": [ "Yaniv Blumenfeld", "Dar Gilboa", "Daniel Soudry" ], "title": "A mean field theory of quantized deep networks: The quantization-depth trade-off", "venue": null, "year": 1906 }, { "authors": [ "Lukas Bossard", "Matthieu Guillaumin", "Luc Van Gool" ], "title": "Food-101 – mining discriminative components with random forests", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Olivier Bousquet", "Sylvain Gelly", "Karol Kurach", "Olivier Teytaud", "Damien Vincent" ], "title": "Critical hyper-parameters: No random, no cry", "venue": "arXiv preprint arXiv:1706.03200,", "year": 2017 }, { "authors": [ "James Bradbury", "Roy Frostig", "Peter Hawkins", "Matthew James Johnson", "Chris Leary", "Dougal Maclaurin", "Skye Wanderman-Milne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http://github.com/google/jax", "year": 2018 }, { "authors": [ "G. Bradski" ], "title": "The OpenCV Library", "venue": "Dr. 
Dobb’s Journal of Software Tools,", "year": 2000 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Ciprian Chelba", "Tomas Mikolov", "Mike Schuster", "Qi Ge", "Thorsten Brants", "Phillipp Koehn" ], "title": "One billion word benchmark for measuring progress in statistical language modeling", "venue": "CoRR, abs/1312.3005,", "year": 2013 }, { "authors": [ "Yutian Chen", "Matthew W Hoffman", "Sergio Gómez Colmenarejo", "Misha Denil", "Timothy P Lillicrap", "Matt Botvinick", "Nando de Freitas" ], "title": "Learning to learn without gradient descent by gradient descent", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yutian Chen", "Aja Huang", "Ziyu Wang", "Ioannis Antonoglou", "Julian Schrittwieser", "David Silver", "Nando de Freitas" ], "title": "Bayesian optimization in alphago", "venue": "arXiv preprint arXiv:1812.06855,", "year": 2018 }, { "authors": [ "Dami Choi", "Christopher J Shallue", "Zachary Nado", "Jaehoon Lee", "Chris J Maddison", "George E Dahl" ], "title": "On empirical comparisons of optimizers for deep learning", "venue": null, "year": 1910 }, { "authors": [ "Patryk Chrabaszcz", "Ilya Loshchilov", "Frank Hutter" ], "title": "A downsampled variant of imagenet as an alternative to the cifar datasets", "venue": "arXiv preprint arXiv:1707.08819,", "year": 2017 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "arXiv preprint arXiv:1412.3555,", "year": 2014 }, { "authors": [ "Karl Cobbe", "Oleg Klimov", "Chris Hesse", "Taehoon Kim", "John Schulman" ], "title": "Quantifying generalization in reinforcement learning", "venue": "arXiv preprint arXiv:1812.02341,", "year": 2018 }, { "authors": [ "Karl Cobbe", "Christopher Hesse", "Jacob Hilton", "John Schulman" ], "title": "Leveraging procedural generation to benchmark reinforcement learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Ian Dewancker", "Michael McCourt", "Scott Clark", "Patrick Hayes", "Alexandra Johnson", "George Ke" ], "title": "A stratified analysis of bayesian optimization methods", "venue": "arXiv preprint arXiv:1603.09441,", "year": 2016 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Elizabeth D Dolan", "Jorge J Moré" ], "title": "Benchmarking optimization software with performance profiles", "venue": "Mathematical programming,", "year": 2002 }, { "authors": [ "Tobias Domhan", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves", "venue": "In Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Timothy Dozat" ], "title": "Incorporating nesterov momentum into adam", "venue": null, "year": 2016 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Katharina Eggensperger", "Frank Hutter", "Holger Hoos", "Kevin 
Leyton-Brown" ], "title": "Efficient benchmarking of hyperparameter optimizers via surrogates", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Jeffrey L Elman" ], "title": "Finding structure in time", "venue": "Cognitive science,", "year": 1990 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "Bohb: Robust and efficient hyperparameter optimization at scale", "venue": "arXiv preprint arXiv:1807.01774,", "year": 2018 }, { "authors": [ "Matthias Feurer", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Using meta-learning to initialize bayesian optimization of hyperparameters", "venue": "In Proceedings of the 2014 International Conference on Meta-learning and Algorithm Selection-Volume", "year": 2014 }, { "authors": [ "Flax Developers" ], "title": "Flax: A neural network library for jax designed for flexibility, 2020", "venue": "URL https://github.com/google-research/flax/tree/prerelease", "year": 2020 }, { "authors": [ "Boris Ginsburg", "Patrice Castonguay", "Oleksii Hrinchuk", "Oleksii Kuchaiev", "Vitaly Lavrukhin", "Ryan Leary", "Jason Li", "Huyen Nguyen", "Jonathan M Cohen" ], "title": "Stochastic gradient methods with layer-wise adaptive moments for training of deep networks", "venue": "arXiv preprint arXiv:1905.11286,", "year": 2019 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Taciana AF Gomes", "Ricardo BC Prudêncio", "Carlos Soares", "André LD Rossi", "André Carvalho" ], "title": "Combining meta-learning and search techniques to select parameters for support vector machines", "venue": null, "year": 2012 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "Sergio Guadarrama", "Anoop Korattikara", "Oscar Ramirez", "Pablo Castro", "Ethan Holly", "Sam Fishman", "Ke Wang", "Ekaterina Gonina", "Neal Wu", "Efi Kokiopoulou", "Luciano Sbaiz", "Jamie Smith", "Gábor Bartók", "Jesse Berent", "Chris Harris", "Vincent Vanhoucke", "Eugene Brevdo" ], "title": "TF-Agents: A library for reinforcement learning in tensorflow", "venue": "https://github.com/tensorflow/agents,", "year": 2018 }, { "authors": [ "Nikolaus Hansen", "Anne Auger", "Olaf Mersmann", "Tea Tusar", "Dimo Brockhoff" ], "title": "Coco: A platform for comparing continuous optimizers in a black-box setting", "venue": "arXiv preprint arXiv:1603.08785,", "year": 2016 }, { "authors": [ "Soufiane Hayou", "Arnaud Doucet", "Judith Rousseau" ], "title": "On the selection of initialization and activation function for deep neural networks", "venue": "arXiv preprint arXiv:1805.08266,", "year": 2018 }, { "authors": [ "Soufiane Hayou", "Arnaud Doucet", "Judith Rousseau" ], "title": "Mean-field behaviour of neural tangent kernel for deep neural networks, 2019", "venue": null, "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Nicolas Heess", "Srinivasan Sriram", "Jay Lemmon", "Josh Merel", "Greg Wayne", "Yuval Tassa", "Tom Erez", "Ziyu Wang", "SM 
Eslami", "Martin Riedmiller" ], "title": "Emergence of locomotion behaviours in rich environments", "venue": "arXiv preprint arXiv:1707.02286,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Sepp Hochreiter", "A Steven Younger", "Peter R Conwell" ], "title": "Learning to learn using gradient descent", "venue": "In International Conference on Artificial Neural Networks,", "year": 2001 }, { "authors": [ "Frank Hutter", "Lars Kotthoff", "Joaquin Vanschoren (eds" ], "title": "Automatic Machine Learning: Methods, Systems, Challenges. Springer, 2018", "venue": null, "year": 2018 }, { "authors": [ "Arthur Juliani", "Ahmed Khalifa", "Vincent-Pierre Berges", "Jonathan Harper", "Hunter Henry", "Adam Crespi", "Julian Togelius", "Danny Lange" ], "title": "Obstacle tower: A generalization challenge in vision, control, and planning", "venue": null, "year": 1902 }, { "authors": [ "Ryo Karakida", "Shotaro Akaho", "Shun-ichi Amari" ], "title": "Universal statistics of fisher information in deep neural networks: mean field approach", "venue": null, "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Aaron Klein", "Frank Hutter" ], "title": "Tabular benchmarks for joint architecture and hyperparameter optimization", "venue": "arXiv preprint arXiv:1905.04970,", "year": 2019 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Learning curve prediction with bayesian neural networks. 2016", "venue": null, "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 and cifar-100 datasets", "venue": "URl: https://www. cs. toronto. edu/kriz/cifar. html,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Manoj Kumar", "George E Dahl", "Vijay Vasudevan", "Mohammad Norouzi" ], "title": "Parallel architecture and hyperparameter search via successive halving and classification", "venue": "arXiv preprint arXiv:1805.10255,", "year": 2018 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. 
com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Learning to optimize neural nets", "venue": "arXiv preprint arXiv:1703.00441,", "year": 2017 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "arXiv preprint arXiv:1603.06560,", "year": 2016 }, { "authors": [ "Ping Li", "Phan-Minh Nguyen" ], "title": "On random deep weight-tied autoencoders: Exact asymptotic analysis, phase transitions, and implications to training", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Liyuan Liu", "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Jiawei Han" ], "title": "On the variance of the adaptive learning rate and beyond", "venue": null, "year": 1908 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Fixing weight decay regularization in adam", "venue": "arXiv preprint arXiv:1711.05101,", "year": 2017 }, { "authors": [ "Kaifeng Lv", "Shunhua Jiang", "Jian Li" ], "title": "Learning gradient descent: Better generalization and longer horizons", "venue": "arXiv preprint arXiv:1703.03633,", "year": 2017 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies,", "year": 2011 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Sam McCandlish", "Jared Kaplan", "Dario Amodei", "OpenAI Dota Team" ], "title": "An empirical model of large-batch training", "venue": "arXiv preprint arXiv:1812.06162,", "year": 2018 }, { "authors": [ "Bryan McCann", "Nitish Shirish Keskar", "Caiming Xiong", "Richard Socher" ], "title": "The natural language decathlon: Multitask learning as question answering", "venue": "arXiv preprint arXiv:1806.08730,", "year": 2018 }, { "authors": [ "Gábor Melis", "Chris Dyer", "Phil Blunsom" ], "title": "On the state of the art of evaluation in neural language models", "venue": "arXiv preprint arXiv:1707.05589,", "year": 2017 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Jeremy Nixon", "Daniel Freeman", "Jascha Sohl-Dickstein" ], "title": "Understanding and correcting pathologies in the training of learned optimizers", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Jonathon Shlens", "Jascha Sohl-Dickstein", "Ekin D Cubuk" ], "title": "Using learned optimizers to make models robust to input noise", "venue": "arXiv preprint arXiv:1906.03367,", "year": 2019 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "C Daniel Freeman", "Ben Poole", "Jascha Sohl-Dickstein" ], "title": "Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves", "venue": null, "year": 2009 }, { "authors": [ "Sameer A Nene", "Shree K Nayar", "Hiroshi Murase" ], "title": "Columbia object image library (coil-20)", "venue": null, "year": 1996 }, { "authors": [ "Yurii Nesterov" ], "title": "A method for unconstrained convex minimization problem with the rate of convergence o (1/kˆ 2)", "venue": "In Doklady 
AN USSR,", "year": 1983 }, { "authors": [ "Alex Nichol", "Vicki Pfau", "Christopher Hesse", "Oleg Klimov", "John Schulman" ], "title": "Gotta learn fast: A new benchmark for generalization in rl", "venue": "arXiv preprint arXiv:1804.03720,", "year": 2018 }, { "authors": [ "Alex Olsen", "Dmitry A. Konovalov", "Bronson Philippa", "Peter Ridd", "Jake C. Wood", "Jamie Johns", "Wesley Banks", "Benjamin Girgenti", "Owen Kenny", "James Whinney", "Brendan Calvert", "Mostafa Rahimi Azghadi", "Ronald D. White" ], "title": "DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning", "venue": "Scientific Reports,", "year": 2019 }, { "authors": [ "OpenAI", "Ilge Akkaya", "Marcin Andrychowicz", "Maciek Chociej", "Mateusz Litwin", "Bob McGrew", "Arthur Petron", "Alex Paino", "Matthias Plappert", "Glenn Powell", "Raphael Ribas", "Jonas Schneider", "Nikolas Tezak", "Jerry Tworek", "Peter Welinder", "Lilian Weng", "Qiming Yuan", "Wojciech Zaremba", "Lei Zhang" ], "title": "Solving rubik’s cube with a robot hand", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Valerio Perrone", "Huibin Shen" ], "title": "Learning search spaces for bayesian optimization: Another view of hyperparameter transfer learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Valerio Perrone", "Rodolphe Jenatton", "Matthias W Seeger", "Cédric Archambeau" ], "title": "Scalable hyperparameter transfer learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Johann Petrak" ], "title": "Fast subsampling performance estimates for classification algorithm selection. In Proceedings of the ECML-00 Workshop on Meta-Learning: Building Automatic Advice Strategies for Model Selection and Method Combination", "venue": null, "year": 2000 }, { "authors": [ "Florian Pfisterer", "Jan N van Rijn", "Philipp Probst", "Andreas Müller", "Bernd Bischl" ], "title": "Learning multiple defaults for machine learning algorithms", "venue": "arXiv preprint arXiv:1811.09409,", "year": 2018 }, { "authors": [ "Arnu Pretorius", "Elan van Biljon", "Steve Kroon", "Herman Kamper" ], "title": "Critical initialisation for deep signal propagation in noisy rectifier neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc Le" ], "title": "Searching for activation", "venue": null, "year": 2017 }, { "authors": [ "J. Rapin", "O. Teytaud" ], "title": "Nevergrad - A gradient-free optimization platform. https://GitHub", "venue": "com/FacebookResearch/Nevergrad,", "year": 2018 }, { "authors": [ "Matthias Reif", "Faisal Shafait", "Andreas Dengel" ], "title": "Meta-learning for evolutionary parameter optimization of classifiers", "venue": "Machine learning,", "year": 2012 }, { "authors": [ "Tom Schaul", "Ioannis Antonoglou", "David Silver" ], "title": "Unit tests for stochastic optimization", "venue": "arXiv preprint arXiv:1312.6055,", "year": 2013 }, { "authors": [ "Juergen Schmidhuber" ], "title": "On learning how to learn learning strategies", "venue": null, "year": 1995 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-.", "venue": "hook. 
PhD thesis, Technische Universität München,", "year": 1987 }, { "authors": [ "Frank Schneider", "Lukas Balles", "Philipp Hennig" ], "title": "Deepobs: A deep learning optimizer benchmark suite", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Samuel S Schoenholz", "Justin Gilmer", "Surya Ganguli", "Jascha Sohl-Dickstein" ], "title": "Deep information propagation", "venue": "arXiv preprint arXiv:1611.01232,", "year": 2016 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "arXiv preprint arXiv:1508.07909,", "year": 2015 }, { "authors": [ "Christopher J Shallue", "Jaehoon Lee", "Joe Antognini", "Jascha Sohl-Dickstein", "Roy Frostig", "George E Dahl" ], "title": "Measuring the effects of data parallelism on neural network training", "venue": "arXiv preprint arXiv:1811.03600,", "year": 2018 }, { "authors": [ "Prabhu Teja Sivaprasad", "Florian Mai", "Thijs Vogels", "Martin Jaggi", "François Fleuret" ], "title": "On the tunability of optimizers in deep learning", "venue": null, "year": 1910 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Jasper Snoek", "Oren Rippel", "Kevin Swersky", "Ryan Kiros", "Nadathur Satish", "Narayanan Sundaram", "Mostofa Patwary", "Mr Prabhat", "Ryan Adams" ], "title": "Scalable bayesian optimization using deep neural networks", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Emma Strubell", "Ananya Ganesh", "Andrew McCallum" ], "title": "Energy and policy considerations for deep learning in nlp", "venue": "arXiv preprint arXiv:1906.02243,", "year": 2019 }, { "authors": [ "Kevin Swersky", "Jasper Snoek", "Ryan P Adams" ], "title": "Multi-task bayesian optimization. 
In Advances in neural information processing", "venue": null, "year": 2004 }, { "authors": [ "Kevin Swersky", "Jasper Snoek", "Ryan Prescott Adams" ], "title": "Freeze-thaw bayesian optimization", "venue": "arXiv preprint arXiv:1406.3896,", "year": 2014 }, { "authors": [ "Yuval Tassa", "Yotam Doron", "Alistair Muldal", "Tom Erez", "Yazhe Li", "Diego de Las Casas", "David Budden", "Abbas Abdolmaleki", "Josh Merel", "Andrew Lefrancq", "Timothy Lillicrap", "Martin Riedmiller" ], "title": "DeepMind control suite", "venue": "Technical report, DeepMind,", "year": 2018 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural networks for machine learning,", "year": 2012 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Eleni Triantafillou", "Tyler Zhu", "Vincent Dumoulin", "Pascal Lamblin", "Kelvin Xu", "Ross Goroshin", "Carles Gelada", "Kevin Swersky", "Pierre-Antoine Manzagol", "Hugo Larochelle" ], "title": "Meta-dataset: A dataset of datasets for learning to learn from few examples", "venue": "arXiv preprint arXiv:1903.03096,", "year": 2019 }, { "authors": [ "Laurens Van Der Maaten" ], "title": "Accelerating t-sne using tree-based algorithms", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI conference on artificial intelligence,", "year": 2016 }, { "authors": [ "Joaquin Vanschoren", "Jan N. 
van Rijn", "Bernd Bischl", "Luis Torgo" ], "title": "Openml: Networked science in machine learning", "venue": "SIGKDD Explorations,", "year": 2013 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Alex Wang", "Yada Pruksachatkun", "Nikita Nangia", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "venue": null, "year": 1905 }, { "authors": [ "Paul J Werbos" ], "title": "Backpropagation through time: what it does and how to do it", "venue": "Proceedings of the IEEE,", "year": 1990 }, { "authors": [ "Olga Wichrowska", "Niru Maheswaranathan", "Matthew W Hoffman", "Sergio Gomez Colmenarejo", "Misha Denil", "Nando de Freitas", "Jascha Sohl-Dickstein" ], "title": "Learned optimizers that scale and generalize", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Martin Wistuba", "Nicolas Schilling", "Lars Schmidt-Thieme" ], "title": "Learning hyperparameter optimization initializations", "venue": "IEEE international conference on data science and advanced analytics (DSAA),", "year": 2015 }, { "authors": [ "Martin Wistuba", "Nicolas Schilling", "Lars Schmidt-Thieme" ], "title": "Sequential model-free hyperparameter tuning", "venue": "In 2015 IEEE international conference on data mining,", "year": 2015 }, { "authors": [ "David H Wolpert", "William G Macready" ], "title": "No free lunch theorems for optimization", "venue": "IEEE transactions on evolutionary computation,", "year": 1997 }, { "authors": [ "Yuhuai Wu", "Mengye Ren", "Renjie Liao", "Roger B Grosse" ], "title": "Understanding short-horizon bias in stochastic meta-optimization", "venue": null, "year": 2016 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "J. Xiao", "J. Hays", "K.A. Ehinger", "A. Oliva", "A. 
Torralba" ], "title": "Sun database: Large-scale scene recognition from abbey to zoo", "venue": "In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2010 }, { "authors": [ "Lechao Xiao", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Samuel Schoenholz", "Jeffrey Pennington" ], "title": "Dynamical isometry and a mean field theory of CNNs: How to train 10,000-layer vanilla convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ge Yang", "Samuel Schoenholz" ], "title": "Mean field residual networks: On the edge of chaos", "venue": "In Advances In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chris Ying", "Aaron Klein", "Esteban Real", "Eric Christiansen", "Kevin Murphy", "Frank Hutter" ], "title": "NAS-Bench-101: Towards Reproducible Neural Architecture Search", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Xiaohua Zhai", "Joan Puigcerver", "Alexander Kolesnikov", "Pierre Ruyssen", "Carlos Riquelme", "Mario Lucic", "Josip Djolonga", "Andre Susano Pinto", "Maxim Neumann", "Alexey Dosovitskiy" ], "title": "The visual task adaptation benchmark", "venue": null, "year": 1910 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Metz et al", "2019a", "Wu" ], "title": "2016), or proxy objectives such as only training for a handful of epoch (Zoph & Le, 2017). While this short horizon proxy is certainly not optimal(Wu et al., 2016), the performance gains are immense and in practice is what makes meta-training optimizers feasible. In our task suite, we test this short horizon learning by training hyperparameter lists only using some finite amount of training iterations per task and testing in the full training regieme (10k steps)", "venue": null, "year": 2016 }, { "authors": [ "Wang" ], "title": "2019), and the NLPDecathalon (McCann et al., 2018). In computer vision there is (Zhai", "venue": null, "year": 2018 }, { "authors": [ "Lv" ], "title": "Wichrowska et al. (2017) uses a set of synthetic problems", "venue": "Metz et al.,", "year": 2017 }, { "authors": [ "Chen" ], "title": "2017) in which an LSTM is meta-trained to produce function locations to query. The cost of hyperparameter search is often large as each evaluation requires training a model to completion. Often multi-fidelity based approaches are used which leverage “simpler” tasks and transfer the resulting hyperparameters (Hutter", "venue": null, "year": 2018 }, { "authors": [ "Swersky" ], "title": "2018), or leveraging simplified data and models (Petrak", "venue": "Zoph & Le,", "year": 2000 } ]
[ { "heading": null, "text": "We present TaskSet, a dataset of tasks for use in training and evaluating optimizers. TaskSet is unique in its size and diversity, containing over a thousand tasks ranging from image classification with fully connected or convolutional neural networks, to variational autoencoders, to non-volume preserving flows on a variety of datasets. As an example application of such a dataset we explore meta-learning an ordered list of hyperparameters to try sequentially. By learning this hyperparameter list from data generated using TaskSet we achieve large speedups in sample efficiency over random search. Next we use the diversity of the TaskSet and our method for learning hyperparameter lists to empirically explore the generalization of these lists to new optimization tasks in a variety of settings including ImageNet classification with Resnet50 and LM1B language modeling with transformers. As part of this work we have opensourced code for all tasks, as well as 29 million training curves for these problems and the corresponding hyperparameters.1" }, { "heading": "1 INTRODUCTION", "text": "As machine learning moves to new domains, collecting diverse, rich, and application-relevant datasets is critical for its continued success. Historically, research on learning optimization algorithms have only leveraged single tasks (Andrychowicz et al., 2016; Metz et al., 2019a), or parametric synthetic tasks (Wichrowska et al., 2017), due to the difficulty of obtaining large sets of tasks." }, { "heading": "1.1 TASKSET: A SET OF TASKS", "text": "We present a set of tasks significantly larger than any optimizer dataset previously studied. We aim to better enable standardized research on optimizers, be that analysis of existing optimizers, or development of new learned learning algorithms. We call this suite of tasks TaskSet.\nMuch in the same way that learned features in computer vision outpaced hand designed features (Krizhevsky et al., 2012; LeCun et al., 2015), we believe that data driven approaches to discover optimization algorithms will replace their hand designed counterparts resulting in increased performance and usability. To this end, standardizing a large suite of optimization tasks is an important first step towards more rigorous learned optimizer research.\nIn this setting, a single “example” is an entire training procedure for a task defined by data, loss function, and architecture. Thus, TaskSet consists of over a thousand optimization tasks, largely focused on deep learning (neural networks). They include image classification using fully connected and convolutional models, generative models with variational autoencoders (Kingma & Welling, 2013) or flows (Dinh et al., 2016; Papamakarios et al., 2017), natural language processing tasks including both language modeling and classification, as well as synthetic tasks such as quadratics, and optimization test functions. The problems themselves are diverse in size, spanning 7 orders of magnitude in parameter count, but remain reasonably fast to compute as almost all tasks can be trained 10k iterations on a CPU in under one hour. To demonstrate the breadth of this dataset we show an embedding of all the tasks in Appendix A.1 in Figure S1.\n1redacted url" }, { "heading": "1.2 AMORTIZING HYPERPARAMETER SEARCH", "text": "Machine learning methods are growing ever more complex, and their computational demands are increasing at a frightening pace (Amodei & Hernandez, 2018). 
Unfortunately, most modern machine learning models also require extensive hyperparameter tuning. Often, hyperparameter search is many times more costly than the final algorithm, which ultimately has large economic and environmental costs (Strubell et al., 2019).\nThe most common approach to hyperparameter tuning involves some form of quasi-random search over a pre-specified grid of hyperparameters. Building on past work (Wistuba et al., 2015b; Pfisterer et al., 2018), and serving as a typical example problem illustrative of the sort of research enabled by TaskSet, we explore a hyperparameter search strategy consisting of a simple ordered list of hyperparameters to try. The idea is that the first few elements in this list will cover most of the variation in good hyperparameters found in typical machine learning workloads.\nWe choose the elements in this list by leveraging the diversity of tasks in TaskSet, by meta-learning a hyperparameter list that performs the best on the set of tasks in TaskSet. We then test this list of hyperparameters on new, larger machine learning tasks.\nAlthough learning the list of hyperparameters is costly (in total we train ∼29 million models consisting of over 4,000 distinct hyperparameter configurations), our final published list is now available as a good starting guess for new tasks.\nFurthermore, we believe the raw training curves generated by this search will be useful for future hyperparameter analysis and meta-learning research, and we release them as part of this work. We additionally release code in Tensorflow (Abadi et al., 2016), Jax (Bradbury et al., 2018), and PyTorch (Paszke et al., 2019) for a reference optimizer which uses our learned hyperparameter list, and can be easily applied to any model." }, { "heading": "2 TASKSET: A SET OF TASKS", "text": "How should one choose what problems to include in a set of optimization tasks? In our case, we strive to include optimization tasks that have been influential in deep learning research over the last several decades, and that will be representative of many common machine learning problems. Designing this dataset requires striking a balance between including realistic large-scale workloads and ensuring that tasks are fast to train so that using it for meta-learning is tractable. We construct our dataset largely out of neural network based tasks. Our chosen tasks have between ten thousand and one million parameters (much smaller than the billions commonly used today); as a result most problems can train in under an hour on a cloud CPU with 5 cores. We additionally focus on increased “task diversity” by including many different kinds of training algorithms, architectures, and datasets – inspired by past work in reinforcement learning which has demonstrated that large numbers of problems, and increased diversity around some domain of interest, are useful for both training and generalization (Heess et al., 2017; Tobin et al., 2017; Cobbe et al., 2018; OpenAI et al., 2019). Again though, a balance must be struck, as in the limit of too much diversity no learning can occur due to the no free lunch theorem (Wolpert & Macready, 1997). Our dataset, TaskSet, is made up of 1162 tasks in total. We define a task as the combination of a loss function, a dataset, and initialization.\nSpecifically we define a task as a set of 4 functions (a minimal interface sketch is given below):\n• Initialization: () → parameter initial values\n• Data generator: data split (e.g. train / valid / test) → batch of data\n• Forward pass: (batch of data, params) → loss\n• Gradient function: (input data, params) → gradients (dloss/dparams)\nA task has no tunable hyperparameters and, coupled with an optimizer, provides all the necessary information to train using first-order optimization. This makes experimentation easier, as each task definition specifies hyperparameters such as batch size (Shallue et al., 2018; McCandlish et al., 2018) or initialization (Schoenholz et al., 2016; Yang & Schoenholz, 2017; Xiao et al., 2018; Li & Nguyen, 2019; Pretorius et al., 2018; Hayou et al., 2018; Karakida et al., 2018; Blumenfeld et al., 2019; Hayou et al., 2019) that no longer need to be tuned.
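To illustrate this four-function interface, here is a minimal sketch in Python. The class and function names are our own and purely illustrative; they are not the released TaskSet API.

```python
from typing import Any, Callable, NamedTuple

class Task(NamedTuple):
    """A hypothetical rendering of the four-function task interface above."""
    initialize: Callable[[], Any]           # () -> initial parameter values
    data_generator: Callable[[str], Any]    # split name -> batch of data
    forward: Callable[[Any, Any], float]    # (batch, params) -> loss
    gradient: Callable[[Any, Any], Any]     # (batch, params) -> dloss/dparams

def train(task: Task, optimizer_step: Callable[[Any, Any], Any],
          num_steps: int = 10_000) -> Any:
    """First-order training loop: a task plus an optimizer is all that is
    needed, since batch size, initialization, etc. are fixed by the task."""
    params = task.initialize()
    for _ in range(num_steps):
        batch = task.data_generator("train")
        grads = task.gradient(batch, params)
        params = optimizer_step(params, grads)  # e.g. SGD or Adam update
    return params
```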
We augment a set of “fixed” tasks which have been designed by hand with “sampled” tasks that are randomly generated task instances." }, { "heading": "2.1 SAMPLED FAMILIES OF TASKS", "text": "Sampled tasks are created by sampling neural network architectures (e.g., MLPs, convnets), activation functions, datasets (e.g., images, text, quadratic functions, and synthetic tasks), and other properties. We organize these sampled tasks into similar families of tasks. See Appendix H for a complete description of these sampled tasks. Broadly, these are separated into tasks sampling image models (mlp, mlp_ae (Hinton & Salakhutdinov, 2006), mlp_vae (Kingma & Welling, 2013), conv_pooling, conv_fc, nvp (Dinh et al., 2016), maf (Papamakarios et al., 2017)), tasks sampling language models (char_rnn_language_model (Graves, 2013), word_rnn_language_model, rnn_text_classification), quadratics (quadratic), and other synthetic tasks (losg_tasks (Wichrowska et al., 2017)). Defining a sampling distribution that generates tasks that are always valid, and that run within a time constraint, is difficult. Instead, we define a broad distribution and make use of rejection sampling to remove tasks that are either too slow or that we are unable to optimize at all. By starting with a distribution that is too broad, and pruning it, we hope to achieve better coverage of tasks." }, { "heading": "2.2 HAND DESIGNED TASKS", "text": "In addition to the sampled tasks, we also include 107 hand designed tasks. These consist of more common tasks that both improve the coverage beyond the sampled tasks and provide for better interpretability through a closer match to existing tasks in the literature. These tasks span image classification, text classification, language modeling, and generative modeling, as well as some synthetic tasks such as associative retrieval (Ba et al., 2016). We leave the description of each one of these tasks to Appendix H.3." }, { "heading": "2.3 AGGREGATE STATISTICS OF TASKSET", "text": "In Figure 1a we show histograms of compute times for all problems and find almost all problems train under an hour (see Appendix C for per-task-family histograms). In Figure 1c we plot a histogram of the number of parameters per task. Finally, in Figure 1b we show a distribution of task difficulty by plotting the fraction of optimizer configurations that achieve a certain loss value. We find that for some tasks as many as 50% of optimizers perform well, while for others < 1% achieve a loss close to the smallest observed loss. For a qualitative visualization of TaskSet, see Appendix A." }, { "heading": "3 AMORTIZED HYPERPARAMETER SEARCH", "text": "As a simple demonstration of using TaskSet for meta-learning research, we consider learning hyperparameter lists. 
This idea of learning lists of hyperparameters has been explored in (Wistuba et al., 2015b; Pfisterer et al., 2018). We define an optimizer as the pairing of an optimization algorithm and all its corresponding hyperparameters (e.g. learning rate). While practitioners sometimes use a single optimizer – e.g. Adam (Kingma & Ba, 2014) with default hyperparameters – they more often run multiple optimizers and use a validation set to select the best performer." }, { "heading": "3.1 OPTIMIZER FAMILIES", "text": "We define different parameterizations of hand designed optimizers as an optimizer family. The optimizer families we consider consist of:\n• Adam1p: One hyperparameter, the fixed learning rate α\n• Adam4p: Four Adam hyperparameters, α, β1, β2, and ε\n• Adam6p: Adam4p hyperparameters, and two additional hyperparameters controlling linear and exponential learning rate decays\n• Adam8p: The hyperparameters in Adam6p plus two additional hyperparameters for ℓ1 and ℓ2 regularization terms\n• NAdamW: A 10-hyperparameter search space based on NAdam (Dozat, 2016) with cosine learning rate decay and weight decay.\nFor the full update equations see Appendix D.1 for Adam and D.2 for NAdamW. We chose Adam based on its use in existing work, and NAdam based on the performance shown in (Choi et al., 2019)." }, { "heading": "3.2 LEARNED HYPERPARAMETER LISTS", "text": "Traditionally researchers tune hyperparameters on a per-model basis. While this often results in performance gains, it comes at the cost of immense compute, and researchers are almost never able to expend enough compute to saturate model performance (Shallue et al., 2018). As an alternative to per-problem tuning, we propose instead tuning the search strategy itself on a dataset of tasks and transferring the knowledge gained to new tasks of interest. This idea is already implicitly used by humans – e.g. we don’t start a hyperparameter search with a learning rate of 10^6 – we use values that the community has found useful.\nThis dataset-based tuning has a number of desirable properties. First, the resulting search strategies are much more efficient, resulting in large speedups in sample efficiency on unseen tasks over a random search baseline. Second, we are less restricted by the number of optimizer parameters we search over or by needing to define reasonable search spaces. For example, if there are redundant regions of search space, our learned search strategy will be less likely to sample them repeatedly, unlike random search. If there is a region of hyperparameter space that performs poorly on all problems, the learned search strategy will avoid it.\nIn this work we parameterize the learned search strategy as an ordered list of optimizers to try (i.e. a list of hyperparameter configurations). Given a fixed number of task evaluations we would like to achieve the best possible performance on all tasks in the training set of tasks. For a length-k list of optimizers we define our loss as:\n$$J(\theta_{1,\ldots,k}) = \sum_{\tau \in \mathrm{tasks}} \Big[ \min_{i \in 1..k} f(\tau, \theta_i) \Big], \qquad (1)$$\nwhere θi are the optimizer hyperparameters for element i in the list, and f is an appropriately normalized loss computed after training task τ.\nWe seek to find an optimal list of optimizers as (similar to (Wistuba et al., 2015b)):\n$$\theta^*_{1,\ldots,k} = \arg\min_{\theta_{1,\ldots,k}} J(\theta_{1,\ldots,k}). \qquad (2)$$\nThis is meant to serve as an example task, illustrative of the sort of research enabled by TaskSet. More advanced hyperparameter search strategies would no doubt yield even more performant results."
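Given a precomputed matrix of normalized losses f(τ, θ) for every task–configuration pair (as collected in Section 3.4), Eq. (1) is a simple min-then-sum reduction. Below is a minimal numpy sketch; the function name and matrix layout are our own illustrative choices, not the released code.

```python
import numpy as np

def list_loss(norm_losses: np.ndarray, config_ids: list) -> float:
    """Eq. (1): norm_losses has shape (num_tasks, num_configs), where entry
    (t, j) is the normalized loss f(tau_t, theta_j) after training task t
    with configuration j; config_ids is the ordered hyperparameter list.
    For each task we keep the best (minimum) loss over the list, then sum."""
    return float(norm_losses[:, config_ids].min(axis=1).sum())
```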
}, { "heading": "3.3 SCORING AN OPTIMIZER BY AVERAGING OVER TASKS", "text": "To score a task, we initialize the parameters of the task and run 10,000 iterations of an optimizer. We monitor loss on each data split (train, validation, test) every 200 steps using an average over 50 mini-batches per evaluation. For all data presented in this paper we also compute averages over 5 random task parameter initializations.\nA side effect of the diverse task dataset is that losses span multiple orders of magnitude, making direct aggregation of performance problematic. To remedy this we normalize the loss values for all tasks linearly between 0 and 1 where 1 is validation loss at initialization and zero is the lowest validation loss achieved by any tested optimizer. Loss values greater than the loss at initialization are clipped to 1. To collapse an entire normalized training curve into a scalar cost, we compute the mean normalized loss over the 10,000 iterations. We find empirically that this choice is similar to taking the minimum (Appendix B.5). We leave exploring alternative methods such as performance profiles (Dolan & Moré, 2002) and Nash averaging (Balduzzi et al., 2018) for future work." }, { "heading": "3.4 GREEDY LEARNING FROM RANDOM SEARCH", "text": "Optimizing Eq. 2 is combinatorially expensive. To tractably solve this optimization problem, we introduce two approximations (Wistuba et al., 2015b). First, we shift the unconstrained search over the full space of optimizers to search over a finite set of optimizers, Θ. This finite set can be computed ahead of time and decouples the expensive procedure of training each task with an optimizer from training the learned search space. Separating data and training in this way has been done for both hyperparameter search (Eggensperger et al., 2015), and neural architecture search (Klein & Hutter, 2019; Ying et al., 2019). In total we trained 1,000 optimizer configurations for each of Adam1p, Adam4p, Adam6p, Adam8p, and NAdamW on all 1,162 tasks with 5 random seeds per pair. Second, we use a greedy heuristic to approximate the combinatorial search over sets of k optimizers. For a single optimizer trial, k = 1, we select the best performing optimizer on average across all training tasks. We then continue to select optimizer parameters such that the minimum of all optimizer-parameters per task, aggregated over all tasks is minimized. This shifts the complexity from exponential in k to linear. Finding a length k set of optimizers can thus be efficiently computed as follows:\nθ∗1 = arg min θ∈Θ [ ∑ τ∈tasks f(τ, θ) ] (3)\nθ∗k = arg min θ∈Θ [ ∑ τ∈tasks [min (b, f(τ, θ))] ] where b = min i∈1..(k−1) f(τ, θ∗i ). (4)\nWe note that the first argument of the outer min, b, can be computed once per set of hyperparameters as it does not depend on θ. Finally, as our tasks are stochastic, we order optimizers based on validation loss and report test loss (Van Hasselt et al., 2016).2\nThis training strategy requires an original search space from which to collect data and build Θ. The search space we use is described in Appendix E.2. While large, we find that the optimal parameters for each task end up covering almost the entire space. At some point, no improvement can be obtained on any of the tasks in the dataset. At this point, we simply randomly order the remaining optimizers though expect more sophisticated methods could be employed." 
}, { "heading": "4 EXPERIMENTS: TRAINING AND GENERALIZATION OF LEARNED HYPERPARAMETER LISTS", "text": "With our dataset of tasks and data collected, we turn our attention to exploring training of the hyperparameter lists, and generalization beyond the suite of tasks in TaskSet. In this exploration,\n2This technically means that increasing the number of optimizes could potentially decrease performance, but we find this rarely happens in practice.\nwe hope to give a flavor of the types of research possible with TaskSet. Our main tool to show performance are figures that sweep the number of optimizers configurations on the x-axis, and show the best performance achieved for each number of optimizers tried, averaged over some set of tasks (Eq. 1)." }, { "heading": "4.1 LEARNED HYPERPARAMETER LISTS ARE MORE EFFICIENT THAN RANDOM SEARCH", "text": "To demonstrate the impact of learning a search space, we take the 1,162 tasks split them into even train and test tasks. We then learn a search strategy using optimizers from the Adam8p family following Eq. 4 on the train tasks. Results in Figure 3. As baselines, we use random search with different search spaces, including just learning rate (Rand: Adam1p), the default Adam hyper parameters (Rand: Adam4p), as well as the Adam 8 dimensional search space (Rand: Adam8p). To better get a sense of performance, we show two additional “Refined” baselines which involve random sampling from better search space. For min/max, we sample from the minimum bounding box containing the best hyperparameters for each task. To improve the search space quality, we shrink this bounding box so 90% of the best hyperparameters are enclosed. Further considerations regarding search space volume are treated in E.1, and the precise search spaces are specified in Appendix E.2. Finally, one difficulty of working with offline data is the difficulty of running online hyperparameter optimization methods such as Bayesian Optimization without running additional compute. Future work will explore offline Bayesian methods." }, { "heading": "4.2 MORE TASKS LEAD TO BETTER GENERALIZATION", "text": "We next look at the effects of the number of training tasks on generalization. We take subsets of tasks of different size, and train hyperparameter lists using Eq.4. We compute test performance on the remainder of the tasks and plot loss averaged over different splits in Fig. 3. We find that a large number of tasks (more than 100) are required to achieve near-optimal test performance. This is surprising to us given how simple our learned search strategy is (simply a list of hyperparameters), but not wholly so given past work studying generalization in RL (Cobbe et al., 2018)." }, { "heading": "4.3 GENERALIZATION TO DIFFERENT TYPES OF PROBLEM", "text": "For learned algorithms to be generally useful, some amount of generalization to unseen task families is required. To test this, we split our data into disjoint task types. We perform two splits: testing on RNN tasks and training on all others, and testing on autoencoder tasks and training on all others. As a best case baseline we additionally train search spaces on the test task families directly. We find an order of magnitude better sample efficiency than random search for both cases and find our learned search space is close in performance to search spaces trained on just the testing tasks (Fig. 3)." 
}, { "heading": "5 EXPERIMENTS: REALISTIC PROBLEMS", "text": "In §4.3 and §B.1 we explored generalization of learned hyperparameter lists to held out tasks within the TaskSet dataset. While useful for analysis, these tasks are still far from the workloads commonly employed to solve real problems. In this section, we explore the performance of our learned search space on a number of state of the art models. These models drastically differ from the training set of tasks in parameter count and compute cost. We see these experiments as evidence that the tasks presented in TaskSet capture enough of the structure of “realistic” problems that TaskSet can be used to improve larger scale workloads. For all experiments in this section we take the optimizer ordering using the NAdamW optimizer family on all TaskSet tasks then apply the resulting search space to the target problem. The final ordered list of hyperparameters used is in Appendix G. We show results for ResNet50 on ImageNet, and Transformers on LM1B. Additional results with reinforcement learning using PPO are in Appendix B.2.\nFirst we explore ImageNet classification using a ResNet50. on We take the TPU implementation with default settings from the official Tensorflow models repository (Tensorflow, 2019), and swap out different optimizers. We show accuracy computed over the course of training as well as best performance for a given hyperparameter budget in Figure 4. We find that the learned search space vastly outperforms learning rate tuned Adam.\nNext we explore language modeling on LM1B with a Transformer. We take the transformer (Vaswani et al., 2017) example implemented in Jax (Bradbury et al., 2018) with Flax (Flax Developers, 2020). We train using a 2x2 TPU V2 configuration for 100k iterations. Once again we take all other hyperparameters as is and simply swap optimizer implementation. We find the learned hyperparameter list dramatically outperforms the default optimizer setting and the fixed learning rate baseline. Nevertheless, we emphasize that our method does not require any knowledge of the underlying problem to achieve faster results. See Appendix B.3 for this same transformer with a budget of 20k iterations." }, { "heading": "6 RELATED WORK", "text": "The idea of sets of tasks has been explored throughout machine learning. The majority of these suites are for use in evaluation where as our suite is targeted for meta-learning. The closest family of optimization tasks for evaluation to those presented here is DeepObs (Schneider et al., 2019) which\nincludes 20 neural network tasks. Our task suite focuses on smaller problems and contains 50x more tasks. Outside of evaluation, task suites in reinforcement learning such as Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018) focus on training algorithms that work across a variety of settings.\nThe creation of TaskSet was motivated by the goal of learning learning algorithms, or metalearning (Schmidhuber, 1987; 1995; Hochreiter et al., 2001), and in particular learned optimizers (Bengio et al., 1990; Andrychowicz et al., 2016; Bello et al., 2017; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). This use case is explored with this dataset in (Metz et al., 2020). In this work we do not use this task suite to train learned optimizers, but instead focus on learning a hyperparameter search strategy. 
Tuning hyperparameters by leveraging multiple tasks has been explored within the contexts of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018) as well as meta-learning (Reif et al., 2012; Gomes et al., 2012; Feurer et al., 2014; Wistuba et al., 2015b;a; Chen et al., 2017; Pfisterer et al., 2018). See Appendix F.1 for a full discussion of sets of tasks in machine learning, Appendix F.2 for more on optimization in machine learning, and Appendix F.3 for a discussion of existing hyperparameter search methods.

7 DISCUSSION

Learning optimization algorithms represents a promising direction for accelerating machine learning research. For the resulting algorithms to become useful tools, however, we must further understand the relationships between training tasks, meta-optimization, and both iid and out of distribution generalization.

This work takes steps towards this goal by introducing a significantly larger set of optimization tasks than ever previously considered. As an example use-case, we provide a thorough analysis of how TaskSet enables meta-optimization of simple, but performant, hyperparameter lists. Despite this approach's simplicity, the training of learned learning algorithms is computationally expensive. We hope to explore alternative parameterizations which will increase efficiency by, e.g., leveraging previous evaluations or partial model training (Swersky et al., 2014; Li et al., 2016).

We are releasing the optimal hyperparameter list we have found as a drop-in replacement optimizer in a variety of deep learning frameworks (Tensorflow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), and JAX (Bradbury et al., 2018)) in the hopes that the research community finds it useful. We believe this represents a new set of reasonable optimizer defaults for new problems. Finally, we hope TaskSet encourages more standardized research on general purpose optimizers.

A TASKSET VISUALIZATION

For a qualitative view, we constructed a feature space consisting of performance measurements for each task+optimizer pair (see §3.3). This forms a dense matrix of size number of tasks by number of optimizers. We then perform T-SNE (Maaten & Hinton, 2008; Van Der Maaten, 2014) to reduce the dimensionality to two and plot the results, coloring by task family (Figure S1). Clusters in this space correspond to tasks that work well with similar optimizers. We find a diversity of tasks, with clusters occurring around similar families of tasks.

A.1 TSNE OF TASKSET

B ADDITIONAL EXPERIMENTS

B.1 GENERALIZATION TO DIFFERENT SIZED PROBLEMS

Training learned algorithms on large models is often infeasible for computational reasons. As such, one form of generalization needed when building learned algorithms is the ability to transfer to different sized models. As shown in Figure 1 the tasks in this suite contain a wide range of parameter counts, and can thus be used to test this kind of generalization. We split the tasks into 8 groups – one group per order of magnitude in parameter count – and train hyperparameter lists on one range and test on the rest. In Figure S2 we plot the fraction of the training loss achieved by the test loss on the target parameter range. We find peak performance around the model sizes used for training, and a smooth falloff as the testing tasks become more dissimilar as measured by parameter count.
We note that our problems are not evenly distributed across these groups, thus each group will contain a different percentage of the underlying tasks. While this potentially confounds these results, we believe a similar bias occurs in realistic workloads as well.

[Figure S2 plot: x-axis number of parameters (log10); y-axis train J / test J; series: 0-1, 3-4, 6-7, all.]

Figure S2: We show learned search space generalization, measured as a ratio of the loss achieved in training and testing, versus the number of task parameters used during search space training. Generalization falls off as one moves further away from the training regime. In black we show that a uniform mixture of the 7 parameter buckets does not fall off.

B.2 REINFORCEMENT LEARNING WITH PPO

Figure S3: We find our learned hyperparameter lists perform about as well as random search on the NAdam search space, and worse than random search on the learning rate tuned Adam search space.

We test the learned hyperparameter lists on two continuous control reinforcement learning environments, half cheetah and humanoid, from Gym's Mujoco environments (Todorov et al., 2012; Brockman et al., 2016). We use TF-Agents (Guadarrama et al., 2018) with all non-optimizer hyperparameters set via searching a mixture of environments. In Figure S3 we find our learned hyperparameter lists achieve comparable or slightly worse performance, and do not outperform learning rate tuning of Adam in either efficiency or final performance. To diagnose this behavior we ran all 1k optimizers for both problems and found the learned hyperparameter list performs comparably to random search in the underlying space. To probe further, we computed the Spearman correlation of the performance of each optimizer as compared to the rest of the tasks in the task suite. We found considerably worse correlations than were present for tasks in the TaskSet. This is not surprising as TaskSet contains no reinforcement learning problems.

B.3 LM1B TARGETING 20K ITERATIONS

We show a transformer on LM1B similar to that shown in §5, except run for only 20k iterations, a fifth of the steps. Results are shown in Figure S4. We find the learned hyperparameter lists are much more efficient than either of the baselines.

Figure S4: We find our learned hyperparameter lists outperform learning rate tuned Adam with both a constant and a fixed learning rate schedule on a 53M parameter Transformer trained on LM1B. Left: Learning curves for the best of the optimizers. Right: Number of optimizers tried vs best test loss.

B.4 PROBING SHORT HORIZON

Figure S5: Hyperparameter lists trained on short horizon data generalize remarkably well. On the y-axis we show performance evaluated on the full 10k training iterations for a given number of optimizers tried (x-axis).
In color we show the number of steps used when evaluating task-optimizer performance while training the hyperparameter list.

Often the goal when training a learned optimizer is to minimize performance after training some number of iterations. This is extremely computationally expensive and in practice approximations must be used. One common family of approximations is short horizon based methods. These methods rely upon somehow truncating training so that updates can be made to the learned optimizer more frequently. This is commonly done via truncated backprop (Werbos, 1990; Wichrowska et al., 2017; Metz et al., 2019a; Wu et al., 2016), or proxy objectives such as only training for a handful of epochs (Zoph & Le, 2017). While this short horizon proxy is certainly not optimal (Wu et al., 2016), the performance gains are immense and in practice this is what makes meta-training optimizers feasible. In our task suite, we test this short horizon learning by training hyperparameter lists using only some finite number of training iterations per task and testing in the full training regime (10k steps). Results are shown in Figure S5. We find that even when learning the hyperparameter list on a mere 200 steps, our hyperparameter list continues to generalize and outperforms random search on Adam8p. This is promising as it suggests that training the learned hyperparameter list can be done with 1/50th of the total compute. This result is surprising to us as prior work indicates the effect of this bias can be severe (Wu et al., 2016; Metz et al., 2019a). We suspect it is due to the simplicity of the learned parameter space but leave a thorough analysis of this for future work.

[Figure S6 plots: performance vs. number of optimizers for default norm, last quantile norm, min norm, and random; scatter panels comparing default norm to last quantile norm and to min norm.]

Figure S6: Left: Aggregate performance (y-axis) vs number of optimizers tried (x-axis) for different normalization and aggregation techniques. In each curve we train the hyperparameter list with a different normalization and aggregation strategy and test with the default normalization and aggregation technique described in 3.3. We find some strategies are near identical in performance (e.g. min norm), while others perform significantly worse – e.g. last quantile norm. In both cases, however, we still perform better than the underlying random search. Center: Correlation between the default normalization and the quantile based normalization strategy. Correlation is quite low – 0.193 Pearson's correlation. Right: Correlation between the default normalization using a mean to aggregate validation loss over the course of training vs using a min. We find a much higher correlation of 0.911.

B.5 CHOICE OF NORMALIZATION FUNCTION

There is no easy way to define a single metric for optimizer performance over a mixture of tasks. This paper picks a single normalization strategy based on the minimum validation loss and the validation loss at initialization, presented in §3.3. In this section we show the impact of choosing a different normalization and/or aggregation technique. First, instead of computing the mean over learning curves as described in §3.3, we compute a min. Second, instead of rescaling based on init and min,
We find the percentile loss has a much weaker correlation to the default normalizer. We suspect this difference is due to the fact that many optimizers diverage on tasks. By using the 95 percentile we upweight optimizers that do not diverge." }, { "heading": "B.6 TASK FAMILIES ARE DIVERSE", "text": "To show the effects of diversity we train and test hyperparameter lists on each pair of task family. We additionally normalize each column from 0-1 to account for different mean losses across tasks. Results in Figure S7. While we do find some similarity in tasks – e.g. between MAF and NVP models, but no two tasks behave the same performance characteristics (no duplicate columns) suggesting that each task family is providing a different contribution to the space of all tasks. We also find when training on certain “far away” tasks, e.g. the quadratic family, we find poor performance on most other task families.\nm af nv p\nm lp_\nva e\nm lp_ ae co nv _p oo lin g co nv _f c m lp wo rd _r nn _lm\nrn n_\nte xt_\ncla ss\nch ar\n_r nn _lm los g_ ta sk s qu ad ra tic\nTrain task family\nmaf nvp mlp_vae mlp_ae conv_pooling conv_fc mlp word_rnn_lm rnn_text_class char_rnn_lm\nlosg_tasks quadratic\nTe st\nta sk\nfa m\nily\n0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 no rm ali ze d los s\nFigure S7: Learning hyperparameter lists using one task family and testing on the remainder of task families. We normalize each column from 0-1 to account for different mean losses across tasks. Lower loss means better performance. We find some groups of similar tasks, but in general no two task families behave identically." }, { "heading": "B.7 EFFECTS OF THE META-TRAINING SEARCH SPACE SIZE", "text": "Our offline learning technique described in §3.4 hinges on a finite set of optimizers collected via random search. This set is denote by Θ in Eq.4. In this section we probe the impact of this size. We take different sized subsets of the the thousand Adam8p optimizer configurations and train and test search spaces on different iid splits of tasks. We then plot performance as a function of this number of optimizers in Figure S9. Moving left in this figure corresponds to increasing the compute needed to train the learned hyperparameter list. We find performance continues to improve as the size of Θ grows. Given the high dimension of our meta-parameters, 8, this is not a surprise as the number of evaluations needed to explore the space will grow exponentially. We find that the full thousand trials are needed to out perform learning rate tuned Adam when only given a single optimizer evaluation. We find around 100 optimizers (size of Θ) are needed in the case of 10 optimizer trials (k = 10).\nOverall this sugjests that randomsearch might not be the most efficient learning method for creating hyperparameter lists. This is especially true as we work with optimizer families that have more hyperparameters. Other approximate learning methods should likely be explored such as truncated backprop through time as used by the learned optimizer community(Metz et al., 2019a), and/or population based methods (Balduzzi et al., 2019).\nlo sg\n_t a sk\ns\nq u a d ra\nti c\nm lp\nm lp\n_a e\nco n v _p\no o lin\ng\nco n v _f\nc\nm lp\n_v a e\nn v p\nrn n _t\ne x t_\ncl a ss\nif ic\na ti\no n\nm a f\nfi x e d\nch a r_\nrn n _l\na n g u a g e _m\no d e l\nw o rd\n_r n n _l\na n g u a g e _m\no d e l\n1 sec\n10 sec\n1 min\n5 min\n30 min 1 hr 2 hr\nti m\ne t\no t\nra in\n1 0\nk st\ne p s\nFigure S8: Timings computed for each task family. 
We find most task families have a narrow distribution of compute times.

[Figure S9 plot: number of optimizers vs. loss; series: 1, 10, and 100 hparams for Adam8p and Adam4p, plus 1 and 10 trial Adam learning rate baselines.]

Figure S9: Performance continues to improve as more and more optimizers are used when training the search spaces. On the x-axis we show the number of optimizers (the size of Θ, the number of hyperparameter evaluations used in training the learned hyperparameter list) and on the y-axis the test loss achieved when applying the learned search space for a given fixed length (i.e. different values of k, shown in color). We plot the median with the 25-75 percentile range shaded over different random optimizer samples and iid task splits. Stars (with horizontal guide lines) denote the best search for the corresponding number of hyperparameters for learning rate tuned Adam in half orders of magnitude.

C TASK TIMINGS

In Figure S8 we show box plots of training times for each problem. For each task we use the median step time recorded over a mixture of different physical devices, multiplied by 10k to estimate a full training time. Future versions of this dataset of tasks will contain more variation within each task family.

D OPTIMIZER FAMILY UPDATE EQUATIONS

D.1 ADAM8P UPDATE EQUATIONS

The 8 meta-parameters are: the learning rate α, the first and second moment momentum β1, β2, the numerical stability term ε, the ℓ2 and ℓ1 regularization strengths, and the learning rate schedule constants λ_exp_decay and λ_linear_decay. For Adam6p, we set ℓ1 and ℓ2 to zero.

\phi^{(0)} = \text{problem specified random initialization}    (S1)
m^{(0)} = 0    (S2)
v^{(0)} = 0    (S3)
g^{(t)} = \frac{d}{d\phi^{(t)}}\left(f(x;\phi^{(t)}) + \ell_2\|\phi^{(t)}\|_2^2 + \ell_1\|\phi^{(t)}\|_1\right)    (S4)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)}(1-\beta_1)    (S5)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2(1-\beta_2)    (S6)
\hat{m}^{(t)} = \frac{m^{(t)}}{1-\beta_1^{t+1}}    (S7)
\hat{v}^{(t)} = \frac{v^{(t)}}{1-\beta_2^{t+1}}    (S8)
u^{(t)} = \frac{\hat{m}^{(t)}}{\sqrt{\hat{v}^{(t)}} + \epsilon}    (S9)
s^{(t)}_{\text{linear}} = \max(1 - t\lambda_{\text{linear\_decay}}, 0)    (S10)
s^{(t)}_{\text{exp}} = \exp(-t\lambda_{\text{exp\_decay}})    (S11)
\phi^{(t+1)} = \phi^{(t)} - \alpha\, s^{(t)}_{\text{linear}} s^{(t)}_{\text{exp}} u^{(t)}    (S12)
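As a concrete reference for the equations above, here is a minimal NumPy sketch of a single Adam8p step (Eqs. S4–S12). Variable names are ours and the gradient function is assumed given; this is an illustration, not the released implementation:

import numpy as np

def adam8p_step(phi, m, v, t, grad_fn, alpha, beta1, beta2, eps,
                l1, l2, lam_linear, lam_exp):
    """One Adam8p update. grad_fn returns df/dphi of the task loss alone;
    the l1/l2 regularization (sub)gradients are added here (Eq. S4)."""
    g = grad_fn(phi) + 2.0 * l2 * phi + l1 * np.sign(phi)   # (S4)
    m = beta1 * m + (1.0 - beta1) * g                       # (S5)
    v = beta2 * v + (1.0 - beta2) * g**2                    # (S6)
    m_hat = m / (1.0 - beta1**(t + 1))                      # (S7)
    v_hat = v / (1.0 - beta2**(t + 1))                      # (S8)
    u = m_hat / (np.sqrt(v_hat) + eps)                      # (S9)
    s_lin = max(1.0 - t * lam_linear, 0.0)                  # (S10)
    s_exp = np.exp(-t * lam_exp)                            # (S11)
    return phi - alpha * s_lin * s_exp * u, m, v            # (S12)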
D.2 NADAMW UPDATE EQUATIONS

This optimizer family has 10 hyperparameters: the base learning rate α_base, the first and second moment momentum β1, β2, the numerical stability term ε, the ℓ2 regularization strength ℓ2WD, the AdamW style weight decay ℓ2AdamW, and a boolean to switch between NAdam and Adam, b_use nesterov. The learning rate schedule is based on a single cycle cosine decay with a warmup. It is controlled by 3 additional parameters – c_warmup, c_constant, and c_min learning rate mult.

The learning rate is defined by:

u = \mathbf{1}[c_{\text{warmup}} T > t]    (S13)
\alpha_{\text{decay\&constant}} = (\alpha_{\text{base}} - c_{\text{min learning rate mult}})\left(0.5\cos\!\big(t\pi/(T - c_{\text{constant}})\big) + 0.5\right) + c_{\text{min learning rate mult}}    (S14-S16)
\alpha_{\text{warmup}} = \frac{t}{T c_{\text{warmup}}}    (S17)
\alpha = (1-u)\,\alpha_{\text{decay\&constant}} + u\,\alpha_{\text{warmup}}    (S18)

The update equations of NAdamW are quite similar to those of Adam8p. For clarity we list the full update here.

\phi^{(0)} = \text{problem specified random initialization}    (S19)
m^{(0)} = 0    (S20)
v^{(0)} = 0    (S21)
g^{(t)} = \frac{d}{d\phi^{(t)}}\left(f(x;\phi^{(t)}) + \ell_{2wd}\|\phi^{(t)}\|_2^2\right)    (S22)
m^{(t)} = \beta_1 m^{(t-1)} + g^{(t)}(1-\beta_1)    (S23)
v^{(t)} = \beta_2 v^{(t-1)} + (g^{(t)})^2(1-\beta_2)    (S24)
\hat{m}^{(t)} = \frac{m^{(t)}}{1-\beta_1^{t+1}}    (S25)
\hat{v}^{(t)} = \frac{v^{(t)}}{1-\beta_2^{t+1}}    (S26)
u^{(t)}_{\text{heavy ball}} = \frac{\hat{m}^{(t)}}{\sqrt{\hat{v}^{(t)}} + \epsilon}    (S27)
u^{(t)}_{\text{nesterov}} = \frac{\beta_1\hat{m}^{(t)} + (1-\beta_1)g^{(t)}}{\sqrt{\hat{v}^{(t)}} + \epsilon}    (S28)
\phi^{(t+1)} = \phi^{(t)} - \alpha\left[(1 - b_{\text{use nesterov}})\,u^{(t)}_{\text{heavy ball}} + b_{\text{use nesterov}}\,u^{(t)}_{\text{nesterov}}\right] - \alpha\,\ell_{2AdamW}\,\phi^{(t)}    (S29-S30)

E OPTIMIZER FAMILY SEARCH SPACES

E.1 SEARCH SPACE CONSIDERATIONS

The performance of random search critically depends on the boundaries of the original search space. Without prior knowledge about the problems, however, picking a good search space is difficult. To explore this we additionally choose search spaces after collecting and looking at the data. We then use these search spaces to simulate random search within the constraints via rejection sampling. To find these search spaces we find the best hyperparameters for each task and construct new hyperparameter ranges with min and max values determined by the smallest and largest values of each hyperparameter which were the best hyperparameter for some task. This removes regions of the search space not used by any task. We also tested bounds based on the 5th and 95th percentile of best performing hyperparameters computed over all tasks. In the case of min and max, we find the optimal hyperparameters cover nearly all of the existing space, whereas the percentile based search spaces reduce the volume of the search hypercube by more than 90%, leaving us with only ∼100 hyperparameter configurations. In Figure 3, we find, in all cases, learning the hyperparameter list is much more efficient.

E.2 ADAM8P, ADAM6P, ADAM4P, ADAMLR SEARCH SPACES

For Adam1p, Adam4p, Adam6p, and Adam8p we sample the learning rate logarithmically between 1e-8 and 10. We parametrize β1 and β2 as 1 − x and sample x logarithmically between 1e-4 and 1, and between 1e-6 and 1, respectively. For learning rate schedules we sample the linear decay logarithmically between 1e-7 and 1e-4, and the exponential decay logarithmically between 1e-6 and 1e-3. We sample both ℓ1 and ℓ2 logarithmically between 1e-8 and 1e1.

E.3 NADAMW SEARCH SPACE

This search space was chosen heuristically in an effort to generalize to new problems. We would like to emphasize that it was not tuned. We used our insight from the Adam based optimizer families and chose this. No iterations were done. We expect more iterations would improve not only in-distribution performance, but also generalization performance.

The initial learning rate, α_base, is sampled logarithmically between 1e−5 and 1.0. 1 − β1 is sampled logarithmically between 1e−3 and 1.0. 1 − β2 is sampled between 1e−5 and 1.0. ε is sampled logarithmically between 1e−8 and 1e4. We sample using nesterov (b_use nesterov) 50% of the time. We sample ℓ2WD and ℓ2AdamW logarithmically between 1e−5 and 1e−1. With equal probability (one third each) we either use both terms, zero out ℓ2WD, or zero out ℓ2AdamW. With 50% probability we use a nonzero min learning rate multiplier sampled logarithmically between 1e−5 and 1.0. With 50% probability we sample the warmup fraction, c_warmup, between 1e-5 and 1e-1; otherwise it is set to zero. Finally, we uniformly sample the amount of time the learning rate is held constant (c_constant) between 0 and 1.
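To illustrate how a single NAdamW configuration could be drawn from the search space just described, here is a hedged Python sketch. All names are our own; where the text leaves a sampling distribution ambiguous (e.g. for 1 − β2), we assume log-uniform:

import numpy as np

rng = np.random.default_rng(0)

def log_uniform(lo, hi):
    return float(np.exp(rng.uniform(np.log(lo), np.log(hi))))

def sample_nadamw_config():
    """Draw one NAdamW configuration following the description in E.3."""
    cfg = {
        "lr": log_uniform(1e-5, 1.0),
        "beta1": 1.0 - log_uniform(1e-3, 1.0),
        "beta2": 1.0 - log_uniform(1e-5, 1.0),  # assumed log-uniform
        "epsilon": log_uniform(1e-8, 1e4),
        "use_nesterov": bool(rng.random() < 0.5),
    }
    # One third each: keep both regularizers, or zero out one of them.
    l2_wd, l2_adamw = log_uniform(1e-5, 1e-1), log_uniform(1e-5, 1e-1)
    case = int(rng.integers(3))
    cfg["l2_wd"] = 0.0 if case == 1 else l2_wd
    cfg["l2_adamw"] = 0.0 if case == 2 else l2_adamw
    # 50%: nonzero min learning rate multiplier; 50%: nonzero warmup.
    cfg["min_lr_mult"] = log_uniform(1e-5, 1.0) if rng.random() < 0.5 else 0.0
    cfg["warmup_frac"] = log_uniform(1e-5, 1e-1) if rng.random() < 0.5 else 0.0
    cfg["constant_frac"] = float(rng.uniform(0.0, 1.0))
    return cfg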
}, { "heading": "F EXTENDED RELATED WORK", "text": "" }, { "heading": "F.1 SETS OF TASKS", "text": "Benchmarks consisting of multiple tasks are becoming an increasingly common technique for measuring improvement in algorithm design. Reinforcement learning has Atari Bellemare et al. (2013), DMLab Beattie et al. (2016), gym Brockman et al. (2016), and dm_control Tassa et al. (2018). Natural language processing has evaluation sets such as GLUE (Wang et al., 2018), Super GLUE (Wang et al., 2019), and the NLPDecathalon (McCann et al., 2018). In computer vision there is (Zhai et al., 2019) which studies transfer learning of image features. In black box optimization there is Nevergrad (Rapin & Teytaud, 2018), COmparing Continuous Optimizers (COCO) (Hansen et al., 2016) and a number of tasks to test Bayesian hyperparameter optimization presented in (Dewancker et al., 2016). For first order gradient methods there are unit tests for stochastic optimization (Schaul et al., 2013) which studies toy optimization functions, and DeepObs (Schneider et al., 2019) which includes 20 neural network tasks. Hyperparameter tuning practices on these benchmarks vary between tuning on each task separately, to tuning one set of hyperparameters for all problems. In Atari (Bellemare et al., 2013), for example, it is common practice to tune hyperparameters on a subset of tasks and evaluate on the full set. This protocol can further be extended by leveraging unseen levels or games at test time as done in Obstacle Tower (Juliani et al., 2019), ProcGen (Cobbe et al., 2019), CoinRun (Cobbe et al., 2018), and Sonic (Nichol et al., 2018). We believe generalization to unseen tasks is key for learned algorithms to be useful thus our learned search space experiments mirror this setting by making use of hold out tasks.\nExisting meta-learning data sets share similar goals to our work but focus on different domains. In few shot learning there is MiniImageNet (Vinyals et al., 2016) which is built procedurally from the ImageNet dataset (Russakovsky et al., 2015). Meta-Dataset (Triantafillou et al., 2019) takes this further and also focuses on generalization by constructing few shot learning tasks using images from a number of different domains for evaluation purposes. The automated machine learning community has OpenML (Vanschoren et al., 2013) with a focus on selecting and tuning non-neural algorithms. For learning optimizers, the use of task suites has been limited and ad-hoc. Many works use a single or small number of standard machine learning tasks (Andrychowicz et al., 2016; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a). Wichrowska et al. (2017) uses a set of synthetic problems meant to emulate many different kinds of loss surfaces. While existing collections of tasks exist for optimizer evaluation, e.g. (Schneider et al., 2019), they contain too small a number of tasks to act as a comprehensive training set for learning algorithms, and many of their tasks are additionally too computationally expensive to be useful during learning." }, { "heading": "F.2 HAND DESIGNED AND LEARNED OPTIMIZERS", "text": "Optimization is core to machine learning and thus the focus of extensive work. Methods such as Nesterov momentum (Nesterov, 1983), AdaGrad (Duchi et al., 2011), RMSProp (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014) have all shown considerable improvements in both the speed of optimization and ease of use by exposing robust, and easier to tune hyperparameters than SGD (Sivaprasad et al., 2019). 
Adaptive step size methods in particular have emerged at the forefront, with many works building on them including AdamW (Loshchilov & Hutter, 2017), RAdam (Liu et al., 2019), Novograd (Ginsburg et al., 2019), and NAdam (Dozat, 2016). Recently, there has been a focus on comparing optimizers either for best performance or ease of use (Wilson et al., 2017; Choi et al., 2019; Schneider et al., 2019; Sivaprasad et al., 2019). This has proven difficult as performance is heavily dependent on the choice of search space for optimization hyperparameters (Choi et al., 2019).

Learned optimizers represent a parallel thread in the development of optimizers. By learning as opposed to hand-designing optimizers, researchers hope to not only increase performance but also ease of use (e.g. minimize the number of hyperparameters required or lower hyperparameter sensitivity) (Bengio et al., 1990; Schmidhuber, 1995; Hochreiter et al., 2001). Recently, there has been renewed interest in parameterizing learning algorithms with neural networks and learning these optimizers on neural network based losses (Andrychowicz et al., 2016; Wichrowska et al., 2017; Li & Malik, 2017; Lv et al., 2017; Metz et al., 2019a;b). Other approaches learn symbolic parameterizations for new optimizers (Bello et al., 2017). These various methods are all trained and evaluated on different distributions of tasks, making comparison across papers challenging. The dataset of tasks presented here will hopefully aid in the ability to compare and evaluate progress in learned optimizer research.

In this work, we develop a much more minimal type of "learned optimizer" than previous work, which developed new functional forms for the optimizer. Optimization involves not only the functional form of the optimizer, but also the rules for choosing hyperparameters and applying the optimizer. We focus on this second aspect of optimization and learn a hyperparameter search space to improve the performance of existing hand designed methods.

F.3 HYPERPARAMETER SEARCH

Hyperparameter search is a key component in machine learning. Considerable improvements have been made in language modeling (Melis et al., 2017), computer vision (Snoek et al., 2012), and RL (Chen et al., 2018) simply by tuning better. Often no single hyperparameter configuration works well across all tasks for existing optimization methods. Most current hyperparameter search methods involve trying a very large number of hyperparameters for every new task, which is computationally infeasible for large tasks and additionally can severely limit the number of hyperparameters that can be tuned. Many common techniques such as random search (Bergstra & Bengio, 2012; Bousquet et al., 2017), Bayesian optimization (Snoek et al., 2012; 2015), tree parzen estimators (Bergstra et al., 2011), or sequential halving (Kumar et al., 2018) require setting a hyperparameter search space by hand, which is not only difficult but often wildly inefficient.

Learning hyperparameters or search strategies by leveraging multiple tasks has been explored within the context of Bayesian optimization (Swersky et al., 2013; Perrone & Shen, 2019; Perrone et al., 2018) as well as under the term meta-learning in Chen et al. (2017), in which an LSTM is meta-trained to produce function locations to query.

The cost of hyperparameter search is often large as each evaluation requires training a model to completion.
Often multi-fidelity based approaches are used which leverage "simpler" tasks and transfer the resulting hyperparameters (Hutter et al., 2018). Common approaches include training on partial function evaluations (Swersky et al., 2014; Domhan et al., 2015; Li et al., 2016; Klein et al., 2016; Falkner et al., 2018), or leveraging simplified data and models (Petrak, 2000; Zoph & Le, 2016; Brock et al., 2017). Our dataset of tasks serves as: a "simpler" set of tasks to train on; a large and diverse enough set of problems that optimization algorithms trained on it may be expected to generalize; and a framework to test transfer across different types of problems.

G LIST OF NADAM HPARAMS

Idx Lr warmup constant Min LR mult beta1 beta2 epsilon nesterov l2 reg l2 weight decay

0 1.24e-3 0.000 0.477 1.01e-3 0.94666 0.94067 8.114e-8 False 0.000e+00 7.258e-5
1 5.33e-3 0.000 0.172 0.0 0.96047 0.99922 8.665e-8 True 0.000e+00 5.563e-3
2 2.12e-4 0.000 0.210 1.39e-3 0.62297 0.97278 1.540e-7 False 0.000e+00 5.361e-2
3 4.06e-1 0.000 0.324 0.0 0.99724 0.98680 1.079e+02 True 0.000e+00 1.562e-2
4 2.05e-2 0.000 0.885 1.57e-5 0.35731 0.86043 8.874e-5 True 0.000e+00 7.217e-2
5 5.95e-4 0.008 0.378 0.0 0.89130 0.99983 1.483e-7 True 0.000e+00 4.087e-2
6 7.53e-3 0.000 0.422 9.55e-4 0.69192 0.98434 3.593e-8 False 0.000e+00 3.060e-4
7 4.69e-3 0.000 0.509 0.0 0.99639 0.98820 2.056e-5 False 0.000e+00 3.552e-2
8 2.95e-1 0.000 0.201 0.0 0.99678 0.99981 7.498e+00 False 3.792e-4 3.463e-4
9 2.04e-3 0.000 0.527 0.0 0.49995 0.99755 5.630e-8 True 0.000e+00 2.796e-2
10 7.39e-1 0.001 0.556 3.31e-3 0.99691 0.80639 2.900e+03 False 0.000e+00 7.851e-2
11 8.12e-3 0.000 0.207 0.0 0.17785 0.96033 7.971e-2 False 0.000e+00 1.489e-2
12 3.33e-2 0.000 0.369 0.0 0.69592 0.99997 5.510e-6 True 0.000e+00 1.362e-5
13 6.95e-3 0.000 0.014 0.0 0.99412 0.99305 4.352e-7 False 0.000e+00 3.142e-5
14 1.88e-1 0.000 0.205 1.08e-1 0.98597 0.56531 3.335e+00 True 1.265e-5 3.868e-3
15 9.47e-4 0.007 0.452 0.0 0.43977 0.09422 2.120e-7 False 0.000e+00 6.902e-3
16 3.75e-3 0.000 0.184 0.0 0.87756 0.96128 3.163e-3 True 7.468e-5 2.627e-3
17 7.25e-1 0.000 0.495 0.0 0.99800 0.99781 3.608e+00 True 1.656e-5 3.911e-2
18 4.58e-3 0.000 0.107 3.66e-1 0.42294 0.99963 4.174e-6 True 0.000e+00 4.446e-3
19 3.07e-4 0.007 0.518 0.0 0.57863 0.99625 9.881e-6 False 0.000e+00 5.521e-2
20 2.94e-5 0.000 0.830 8.27e-5 0.96916 0.99896 7.782e-7 True 3.364e-4 3.416e-3
21 1.65e-4 0.002 0.457 2.70e-1 0.95280 0.04565 2.832e-6 True 0.000e+00 1.141e-2
22 9.17e-1 0.010 0.897 2.67e-2 0.45061 0.99244 4.945e-1 False 1.253e-3 0.000e+00
23 2.36e-3 0.000 0.986 0.0 0.98560 0.99997 1.080e-8 True 0.000e+00 3.023e-3
24 2.14e-2 0.000 0.128 0.0 0.98741 0.99336 1.266e-4 False 0.000e+00 5.194e-4
25 5.91e-2 0.000 0.062 0.0 0.99794 0.99383 3.447e+02 True 0.000e+00 3.935e-2
26 1.57e-3 0.000 0.251 0.0 0.91820 0.99991 4.675e-5 False 0.000e+00 4.112e-5
27 4.43e-1 0.000 0.702 0.0 0.94375 0.93551 2.335e-8 True 0.000e+00 8.325e-5
28 2.98e-3 0.008 0.046 0.0 0.68612 0.94232 6.614e-2 False 6.489e-5 0.000e+00
29 1.65e-2 0.004 0.082 4.92e-4 0.95717 0.99789 3.068e+01 True 0.000e+00 8.920e-2
30 5.58e-3 0.000 0.538 0.0 0.97559 0.99990 3.238e-8 True 0.000e+00 4.896e-4
31 8.54e-1 0.000 0.229 0.0 0.93129 0.50200 2.051e-2 False 2.068e-4 2.801e-2
32 7.38e-3 0.000 0.722 8.78e-2 0.21456 0.99752 2.862e-2 False 0.000e+00 8.439e-2
33 4.26e-4 0.001 0.923 2.06e-1 0.47239 0.99974 8.221e-5 False 1.248e-5 0.000e+00
34 6.04e-3 0.000 0.698 0.0 0.97849 0.91449 1.806e+00 False 3.183e-3 1.762e-2
35 8.86e-3 0.000 0.104 1.66e-1 0.98967 0.99720 1.493e-2 True 0.000e+00 2.253e-2
36 1.51e-2 0.000 0.431 1.99e-3 0.80488 0.97878 2.538e-8 True 0.000e+00 2.269e-5
37 2.50e-3 0.000 0.009 0.0 0.98127 0.99988 1.799e-7 False 0.000e+00 1.303e-2
38 3.42e-4 0.000 0.827 6.38e-1 0.25217 0.96572 2.928e-7 True 0.000e+00 1.318e-3
39 6.94e-5 0.000 0.085 0.0 0.98674 0.42709 2.387e-7 False 0.000e+00 2.071e-4
40 3.03e-2 0.001 0.313 0.0 0.90610 0.99997 4.449e-3 True 0.000e+00 2.813e-5
41 4.64e-3 0.000 0.495 2.26e-5 0.64658 0.54108 3.528e-8 False 0.000e+00 2.996e-5
42 2.25e-3 0.000 0.722 0.0 0.97967 0.97518 1.488e-7 True 1.812e-5 2.180e-2
43 6.66e-4 0.000 0.632 2.79e-5 0.65968 0.99997 6.848e-6 True 0.000e+00 3.130e-3
44 3.31e-3 0.000 0.146 0.0 0.90447 0.99970 6.618e-6 True 0.000e+00 2.184e-2
45 7.84e-4 0.016 0.124 0.0 0.95065 0.99685 2.141e-2 False 0.000e+00 4.024e-5
46 6.16e-3 0.016 0.623 0.0 0.98823 0.98744 1.616e-6 False 0.000e+00 1.544e-2
47 3.26e-4 0.000 0.738 1.61e-4 0.78425 0.99998 3.468e-3 False 0.000e+00 4.709e-2
48 4.12e-3 0.001 0.205 0.0 0.99561 0.75382 2.390e-6 True 0.000e+00 3.631e-2
49 6.26e-1 0.000 0.932 2.52e-3 0.99401 0.83521 2.431e+00 True 0.000e+00 1.048e-2

Top 50 hyperparameters found using the NAdamW search space. We find diverse learning rates, with very little warmup used. We additionally find most good performing optimizers make use of AdamW style weight decay. Finally, matching insight from (Choi et al., 2019), we find large values of ε.

H DESCRIPTION OF TASKS IN TASK SUITE

In this section we detail the task distribution used throughout this work. In addition to this text, a Tensorflow (Abadi et al., 2016) implementation is also released at github.com/google-research/google-research/tree/master/task_set.

H.1 SAMPLED TASKS

H.1.1 DEFAULT SAMPLED COMPONENTS

As many of the sampled tasks are neural networks, we define common sampling routines used by all the sampled tasks. A sketch of this weighted sampling follows the lists below.

Activation functions: We define a distribution over activation functions which is sampled according to the following listing of name and weight. These range from standard functions (relu, tanh) to less standard ones (cos).

• relu: 6
• tanh: 3
• cos: 1
• elu: 1
• sigmoid: 1
• swish (Ramachandran et al., 2017): 1
• leaky relu (with α = 0.4): 1
• leaky relu (with α = 0.2): 1
• leaky relu (with α = 0.1): 1

Initializations: We sample initializers according to a weighted distribution. Each initialization sample also optionally samples hyperparameters (e.g. for random normal initializers we sample the standard deviation of the underlying distribution).

• he normal (He et al., 2015): 2
• he uniform (He et al., 2015): 2
• glorot normal (Glorot & Bengio, 2010): 2
• glorot uniform (Glorot & Bengio, 2010): 2
• orthogonal: 1. We sample the "gain", or multiplier of the orthogonal matrix, logarithmically between [0.1, 10].
• random uniform: 1. This is defined between [−s, s] where s is sampled logarithmically between [0.1, 10].
• random normal: 1. The std is sampled logarithmically between (0.1, 10).
• truncated normal: 1. The std is sampled logarithmically between (0.1, 10).
• variance scaling: 1. The scale is sampled logarithmically between (0.1, 10).
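The weighted listings above can be sampled with a few lines of Python; the following sketch (our own illustration, not the released code) shows the activation case, and the initializer case is analogous:

import random

# Weights as listed above; they need not sum to 1.
ACTIVATIONS = {
    "relu": 6, "tanh": 3, "cos": 1, "elu": 1, "sigmoid": 1, "swish": 1,
    "leaky_relu_0.4": 1, "leaky_relu_0.2": 1, "leaky_relu_0.1": 1,
}

def sample_activation(rng):
    """Draw an activation name with probability proportional to its weight."""
    names, weights = zip(*ACTIVATIONS.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_activation(rng))  # "relu" roughly 6/16 of the time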
RNN Cores: We define a distribution over different types of RNN cores used by the sequential tasks. With equal probability we sample either a vanilla RNN (Elman, 1990), GRU (Chung et al., 2014), or LSTM (Hochreiter & Schmidhuber, 1997). For each cell we either sample one shared initialization method or sample a different initialization method per parameter vector, with a 4:1 ratio. We sample the core hidden dimension logarithmically between [32, 128].

H.1.2 SAMPLED DATASETS

Image Datasets: We sample uniformly from the following image datasets. Each dataset additionally has sampled parameters. For all datasets we make use of four data splits: train, valid-inner, valid-outer, test. Train is used to train models, valid-inner is used while training models to allow for modification of the training procedure (e.g. if validation loss doesn't increase, drop the learning rate). Valid-outer is used to select meta-parameters. Test should not be used during meta-training.

For all datasets, we sample a switch with low probability (10% of the time) to only use training data and thus not test generalization. This ensures that our learned optimizers are capable of optimizing a loss as opposed to a mix of optimizing and generalizing.

Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (LeCun, 1998).

Fashion Mnist: Batch size is sampled logarithmically between [8, 512]. We sample the number of training images logarithmically between [1000, 55000] (Xiao et al., 2017).

Cifar10: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).

Cifar100: Batch size is sampled logarithmically between [8, 256]. The number of training examples is sampled logarithmically between [1000, 50000] (Krizhevsky et al., 2009).

{food101_32x32, coil100_32x32, deep_weeds_32x32, sun397_32x32}: These datasets take the original set of images and resize them to 32x32 using OpenCV's (Bradski, 2000) cubic interpolation. We ignore aspect ratio for this resize. Batch size is sampled logarithmically between [8, 256] (Bossard et al., 2014; Nene et al., 1996; Olsen et al., 2019; Xiao et al., 2010).

Imagenet32x32 / Imagenet16x16: The ImageNet 32x32 and 16x16 datasets as created by Chrabaszcz et al. (2017). Batch size is sampled logarithmically between [8, 256].

H.1.3 TEXT CLASSIFICATION

IMDB sentiment classification: We use text from the IMDB movie reviews dataset (Maas et al., 2011) and tokenize into subwords using a vocab size of 8k (Sennrich et al., 2015). We then take a length s random slice from each example where s is sampled logarithmically between [8, 64]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. We sample the number of training examples logarithmically between [1000, 55000], and with 10% probability we just use training data instead of valid / test to test pure optimization as opposed to generalization.

H.1.4 CHARACTER AND WORD LANGUAGE MODELING

For the character and word language modeling datasets we make use of the following data sources: imdb movie reviews (Maas et al., 2011), amazon product reviews (ama) using the Books, Camera, Home, and Video subsets each as separate datasets, LM1B (Chelba et al., 2013), and Wikipedia (Foundation) taken from the 20190301 dump using the zh, ru, ja, hab, and en language codes. We split each article by new lines and only keep resulting examples that contain more than 5 characters.
For infrastructure reasons, we only use a million articles from each language and only 200k examples to build the tokenizer.

Byte encoding: We take length s random slices of each example where s is sampled logarithmically between [10, 160]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. With probability 0.2 we restrict the number of training examples to a number logarithmically sampled between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid / test to test pure optimization as opposed to generalization.

Subword encoding: We encode the text as subwords with a vocab size of 8k (Sennrich et al., 2015). We then take length s random slices of each example where s is sampled logarithmically between [10, 256]. These examples are then batched into a batch size logarithmically sampled between [8, 512]. With probability 0.2 we restrict the number of training examples to a number logarithmically sampled between [1000, 50000]. Finally, with 10% probability we just use training data instead of valid / test to test pure optimization as opposed to generalization.

H.2 SAMPLED TASKS

H.2.1 MLP

This task family consists of a multi layer perceptron trained on flattened image data. The number of layers is sampled uniformly from [1, 6]. Layer hidden unit sizes are sampled logarithmically between [16, 128], with a different number of hidden units per layer. One activation function is chosen for the whole network and is chosen as described in H.1.1. One shared initializer strategy is also sampled. The image dataset used is also sampled.

Two sampled configurations are shown below.

{
  "layer_sizes": [71],
  "activation": "leaky_relu2",
  "w_init": ["he_normal", null],
  "dataset": [
    "sun397_32x32",
    {"bs": 32, "just_train": false, "num_train": null},
    {
      "crop_amount": 0,
      "flip_left_right": false,
      "flip_up_down": true,
      "do_color_aug": false,
      "brightness": 0.002936489121851211,
      "saturation": 0.4308521744067503,
      "hue": 0.19648945965587863,
      "contrast": 0.036096320130911644
    }
  ],
  "center_data": false
}

{
  "layer_sizes": [68, 37, 78],
  "activation": "relu",
  "w_init": ["glorot_normal", null],
  "dataset": [
    "food101_32x32",
    {"bs": 117, "just_train": true, "num_train": null},
    null
  ],
  "center_data": true
}
We train on image datasets which are also sampled.\nA sample configurations is shown below.\n1 { 2 \"hidden_units\": [ 3 73, 4 103, 5 105, 6 104, 7 76 8 ], 9 \"activation\": \"relu\",\n10 \"w_init\": [ 11 \"glorot_uniform\", 12 null 13 ], 14 \"dataset\": [ 15 \"mnist\", 16 { 17 \"bs\": 39, 18 \"num_train\": 43753, 19 \"num_classes\": 10, 20 \"just_train\": false 21 }, 22 null 23 ], 24 \"output_type\": \"tanh\", 25 \"loss_type\": \"l2\", 26 \"reduction_type\": \"reduce_sum\" 27 }" }, { "heading": "H.2.3 MLP VAE", "text": "This task has an encoder with sampled number of layers between [1, 3]. For each layer we sample the number of hidden units logarithmically between [32, 128]. For the decoder we sample the number of layers uniformly between [1, 3]. For each layer we sample the number of hidden units logarithmically between [32, 128]. We use a gaussian prior of dimensionality logarithmically sampled between [32, 128]. A single activation function and initialization is chosen for the whole network. The output of the encoder is projected to both a mean, and a log standard deviation which parameterizes the variational distribution, q(z|x). The decoder maps samples from the latent space to a quantized gaussian distribution in which we compute data log likelihoods log p(x|z). The loss we optimize is the evidence lower bound (ELBO) which is computed by adding this likelihood to the kl divergence between our normal distribution prior and q(z|x). We use the reparameterization trick to compute gradients. This model is trained on sampled image datasets.\nA sample configuration is listsed below.\n1 { 2 \"enc_hidden_units\": [ 3 73 4 ], 5 \"dec_hidden_units\": [ 6 74 7 ], 8 \"activation\": \"relu\", 9 \"w_init\": [\n10 \"he_normal\", 11 null 12 ], 13 \"dataset\": [ 14 \"food101_32x32\", 15 { 16 \"bs\": 22, 17 \"just_train\": true, 18 \"num_train\": null 19 }, 20 null 21 ] 22 }" }, { "heading": "H.2.4 CONV POOLING", "text": "This task consists of small convolutional neural networks with pooling. We sample the number of layers uniformly between [1, 5]. We sample a stride pattern to be either all stride 2, repeating the stride pattern of 1,2,1,2... for the total number of layers, or 2,1,2,1... for the total number of layers. The hidden units are logarithmically sampled for each layer between [8, 64]. We sample one activation function and weight init for the entire network. Padding for the convolutions are sampled per layer to either be same or valid with equal probability. For the convnet we also sample whether or not to use a bias with equal probability. At the last layer of the convnet we do a reduction spatially using either the mean, max, or squared mean sampled uniformly. This reduced output is fed into a linear layer and a softmax cross entropy loss. These models are trained on a sampled image dataset.\nA sample configuration is shown below.\n1 { 2 \"strides\": [ 3 [1, 1], 4 [2, 2], 5 [1, 1], 6 [2, 2], 7 [1, 1] 8 ], 9 \"hidden_units\": [\n10 46, 11 48, 12 47, 13 29, 14 18 15 ], 16 \"activation\": \"leaky_relu4\", 17 \"w_init\": [ 18 \"glorot_normal\", 19 null 20 ], 21 \"padding\": [ 22 \"SAME\",\n23 \"SAME\", 24 \"VALID\", 25 \"SAME\", 26 \"VALID\" 27 ], 28 \"pool_type\": \"squared_mean\", 29 \"use_bias\": true, 30 \"dataset\": [ 31 \"cifar100\", 32 { 33 \"bs\": 10, 34 \"num_train\": 5269, 35 \"just_train\": true 36 }, 37 null 38 ], 39 \"center_data\": false 40 }" }, { "heading": "H.2.5 CONV FC", "text": "This task consists of small convolutional neural networks, flattened, then run through a MLP. 
We sample the number of conv layers uniformly between [1, 5]. We sample a stride pattern to be either all stride 2, or the repeating pattern 1,2,1,2... or 2,1,2,1... for the total number of layers. The hidden units are sampled logarithmically for each layer between [8, 64]. Padding for the convolutions is sampled per layer to be either same or valid with equal probability.

The output is then flattened and run through an MLP with a number of hidden layers sampled uniformly from [0, 4] and with sizes sampled logarithmically from [32, 128]. The loss is then computed via softmax cross entropy.

We sample one activation function and weight init for the entire network. For the convnet we also sample whether or not to use a bias with equal probability. These models are trained on a sampled image dataset.

An example configuration is shown below.

{
  "strides": [[2, 2], [2, 2], [2, 2], [2, 2]],
  "hidden_units": [17, 30, 13, 16],
  "activation": "relu",
  "w_init": ["glorot_uniform", null],
  "padding": ["VALID", "VALID", "VALID", "SAME"],
  "fc_hidden_units": [],
  "use_bias": true,
  "dataset": [
    "coil100_32x32",
    {"bs": 49, "just_train": false, "num_train": null},
    null
  ],
  "center_data": true
}

H.2.6 CHARACTER RNN LANGUAGE MODEL

This task takes character data and embeds it into a size s embedding vector, where s is sampled logarithmically between [8, 128], using a random normal initializer with std 1.0. With 80% probability we use all 256 tokens, and with 20% probability we only consider a number of tokens sampled logarithmically between [100, 256]. We then pass the embedded vectors to an RNN with teacher forcing; with equal probability we use a trainable initial state or zeros. A linear projection is then applied to produce logits over the vocab tokens. Losses are computed using softmax cross entropy and averaged across the sequence.

A sample configuration is shown below.

{
  "embed_dim": 30,
  "w_init": ["he_normal", null],
  "vocab_size": 256,
  "core": [
    "gru",
    {
      "core_dim": 84,
      "wh": ["glorot_uniform", null],
      "wz": ["random_normal", 0.4022641748407826],
      "wr": ["he_uniform", null],
      "uh": ["he_normal", null],
      "uz": ["glorot_normal", null],
      "ur": ["glorot_uniform", null]
    }
  ],
  "trainable_init": true,
  "dataset": [
    "lm1b/bytes",
    {"patch_length": 147, "batch_size": 63, "just_train": false, "num_train": null}
  ]
}

H.2.7 WORD RNN LANGUAGE MODEL

This task takes word data and embeds it into a size s embedding vector, where s is sampled logarithmically between [8, 128], using a random normal initializer with std 1.0. A vocab size for this embedding table is sampled logarithmically between [1000, 30000]. We then pass the embedded vectors to an RNN with teacher forcing; with equal probability we use a trainable initial state or zeros. A linear projection is then applied to produce logits over the vocab tokens.
Losses are computed using softmax cross entropy and averaged across the sequence.

A sample configuration is shown below.

{
  "embed_dim": 91,
  "w_init": ["glorot_uniform", null],
  "vocab_size": 13494,
  "core": [
    "gru",
    {
      "core_dim": 96,
      "wh": ["he_normal", null],
      "wz": ["he_normal", null],
      "wr": ["he_normal", null],
      "uh": ["he_normal", null],
      "uz": ["he_normal", null],
      "ur": ["he_normal", null]
    }
  ],
  "trainable_init": true,
  "dataset": [
    "tokenized_amazon_reviews/Video_v1_00_subwords8k",
    {"patch_length": 14, "batch_size": 59, "just_train": false, "num_train": null}
  ]
}

H.2.8 LOSG PROBLEMS

These tasks consist of a mixture of many other tasks. We sample uniformly over the following types of problems. We briefly describe them here but refer the reader to the provided source for more information. In this work we took all the base problems from (Wichrowska et al., 2017) but modified the sampling distributions to better cover the space, as opposed to narrowly sampling particular problem families. Future work will consist of evaluating which sets of problems or which sampling decisions are required.

quadratic: n dimensional quadratic problems where n is sampled logarithmically between [10, 1000]. Noise is optionally added with probability 0.5 and with scale s, where s is sampled logarithmically between [0.01, 10].

bowl: A 2d quadratic bowl problem with a condition number sampled logarithmically between [0.01, 100]. Noise is optionally added with probability 0.5 and with scale s, where s is sampled logarithmically between [0.01, 10].

sparse_softmax_regression: A synthetic random sparse logistic regression task.

optimization_test_problems: A uniform sample over the following functions: Ackley, Beale, Branin, logsumexp, Matyas, Michalewicz, Rosenbrock, StyblinskiTang.

fully_connected: A sampled random fully connected classification neural network predicting 2 classes on synthetic data. The number of input features is sampled logarithmically between 1 and 16, with a random activation function and a number of layers sampled uniformly from 2-5.

norm: A problem that finds a minimum error in an arbitrary norm. Specifically: \left(\sum (Wx - y)^p\right)^{1/p} where W \in \mathbb{R}^{N \times N}, y \in \mathbb{R}^{N \times 1}. The dimensionality N is sampled logarithmically between 3 and 1000. The power p is sampled uniformly between 0.1 and 5.0. W and y are drawn from a standard normal distribution.

dependency_chain: A synthetic problem where each parameter must be brought to zero sequentially. We sample dimensionality logarithmically between 3 and 100.

outward_snake: This loss creates a winding path to infinity. Step size should remain constant across this path. We sample dimensionality logarithmically between 3 and 100.

min_max_well: A loss based on the sum of the min and max over parameters: \max_i x_i + 1/\min_i x_i - 2. Note that the gradient is zero for all but 2 parameters. We sample dimensionality logarithmically between 10 and 1000. Noise is optionally added with probability 0.5 and with scale s, where s is sampled logarithmically between [0.01, 10].

sum_of_quadratics: A least squares loss of a dimensionality sampled logarithmically between 3 and 100 to a synthetic dataset.

projection_quadratic: A quadratic minimized by probing different directions.
In addition to these base tasks, we also provide a variety of transformations, described below. The use of these transformations is also sampled.\nsparse_problems: With probability 0.9 to 0.99 the gradient per parameter is set to zero. Additional noise is added with probability 0.5, sampled from a normal with std sampled logarithmically between [0.01, 10.0].\nrescale_problems: Rescales the loss value by a factor sampled logarithmically between 0.001 and 1000.0.\nlog_objective: Takes the log of the objective value.\nThree sample configurations are shown below.\n[\"fully_connected\", {\"n_features\": 16, \"n_classes\": 2, \"activation\": \"leaky_relu2\", \"bs\": 7, \"n_samples\": 12, \"hidden_sizes\": [32, 8, 5, 9, 8]}, 36641]\n[\"outward_snake\", {\"dim\": 9, \"bs\": 30, \"n_samples\": 249}, 79416]\n[\"rescale_problems\", {\"base\": [\"sum_of_quadratics\", {\"dim\": 36, \"bs\": 5, \"n_samples\": 1498}], \"scale\": 227.86715292020605}, 89629]" }, { "heading": "H.2.9 MASKED AUTOREGRESSIVE FLOWS", "text": "Masked autoregressive flows (MAFs) are a family of tractable density generative models; see Papamakarios et al. (2017) for more information. The MAF is defined by a sequence of bijectors. For each bijector we sample a number of layers, either 1 or 2 with equal probability, and a number of hidden units per layer sampled logarithmically between [16, 128]. We sample the number of bijectors uniformly from [1, 4] and use the same hidden layers across all bijectors. We sample the activation function and initializer once for the whole model. In this task we model image datasets, which are also sampled.\nA sample configuration is shown below.\n{\"activation\": \"relu\", \"w_init\": [\"he_uniform\", null], \"dataset\": [\"imagenet_resized/16x16\", {\"bs\": 19, \"just_train\": true, \"num_train\": null}, null], \"hidden_units\": [44, 24], \"num_bijectors\": 3}" }, { "heading": "H.2.10 NON VOLUME PRESERVING FLOWS", "text": "NVPs are a family of tractable density generative models; see Dinh et al. (2016) for more information. The NVP is defined by a sequence of bijectors. For each bijector we sample a number of layers, either 1 or 2 with equal probability, and a number of hidden units per layer sampled logarithmically between [16, 128]. We sample the number of bijectors uniformly from [1, 4] and use the same hidden layers across all bijectors. We sample the activation function and initializer once for the whole model. In this task we model image datasets, which are also sampled.\nA sample configuration is shown below.\n{\"activation\": \"cos\", \"w_init\": [\"glorot_normal\", null], \"dataset\": [\"sun397_32x32\", {\"bs\": 228, \"just_train\": false, \"num_train\": null}, null], \"hidden_units\": [21, 121], \"num_bijectors\": 4}" }, { "heading": "H.2.11 QUADRATIC LIKE PROBLEMS", "text": "This task distribution defines a synthetic problem based on a non-linear modification to a quadratic. The dimensionality of the problem is sampled logarithmically between [2, 3000].\nThe loss for this task is described by:\noutput_fn((AX − B)^2 + C)   (S31)\nwhere X = param * weight_rescale and where param is initialized by initial_dist.sample() / weight_rescale.\nThe output_fn is sampled uniformly between the identity and f(x) = log(max(0, x)).
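A minimal sketch of this quadratic-like loss (our own illustrative code, not the TaskSet source): only the quantities named above come from the text; we show the "normal" branch for A, reduce the loss to a scalar with a sum, and guard the log with a small epsilon, all of which are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def make_quadratic_like(dim):
    # "normal" branch for A: mean ~ N(0, 0.05), std ~ U(0, 0.05), elementwise draws.
    mean, std = 0.05 * rng.standard_normal(), rng.uniform(0.0, 0.05)
    A = mean + std * rng.standard_normal((dim, dim))
    B = rng.standard_normal((dim, 1))
    C, weight_rescale = 0.0, 1.0
    param0 = rng.uniform(2.4, 6.7, size=(dim, 1)) / weight_rescale  # initial_dist.sample()
    def loss(param):
        X = param * weight_rescale
        val = np.sum((A @ X - B) ** 2) + C
        return float(np.log(max(1e-12, val)))  # "log" output_fn; epsilon avoids log(0)
    return loss, param0

loss, p0 = make_quadratic_like(dim=212)
print(loss(p0))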
The loss scale is sampled logarithmically between [10^-5, 10^3].\nWe define a distribution over matrices A as a sample from one of the following: normal: we sample a mean from a normal draw with a standard deviation of 0.05 and a std from a uniform [0, 0.05]; the elements of A are drawn from the resulting distribution. The remaining options are uniform, linspace_eigen, and logspace_eigen; see the source for their exact parameterizations (the sample configuration below shows, for instance, that linspace_eigen takes a min and a max).\nWe define a distribution over B to be either normal, with mean and std sampled from N(0, 1) and U(0, 2) respectively, or uniform, with min and range sampled from U(-5, 2.5) and U(0, 5) respectively.\nWith probability 50% we add noise from a distribution whose parameters are also sampled.\nA sample configuration is shown below.\n{\"A_dist\": [\"linspace_eigen\", {\"min\": 32.09618575514275, \"max\": 122.78045861480965}], \"initial_dist\": [\"uniform\", {\"min\": 2.3911997838130956, \"max\": 6.723940057771417}], \"output_fn\": \"log\", \"dims\": 212, \"seed\": 68914, \"loss_scale\": 0.6030061302850566, \"noise\": null}" }, { "heading": "H.2.12 RNN TEXT CLASSIFICATION", "text": "This task consists of using an RNN to classify tokenized text. We first trim the vocab to a length logarithmically sampled between [100, 10000]. The text is then embedded with an embedding size logarithmically sampled between [8, 128]. These embeddings are fed into an RNN with a sampled configuration. With equal probability the initial state of the RNN is either sampled (trainable) or zeros. With equal probability we either take the last RNN prediction, the mean over features, or the per-feature max over the sequence. This batch of activations is then passed through a linear layer and a softmax cross entropy loss. The initialization for the linear projection is sampled.\nAn example configuration is shown below. In this version of TaskSet the dataset sampling contains a bug; all data used is from the imdb_reviews/subwords8k dataset.\n{\"embed_dim\": 111, \"w_init\": [\"random_normal\", 0.1193048629073732], \"dataset\": [\"imdb_reviews/subwords8kimdb_reviews/bytes\", {\"bs\": 43, \"num_train\": null, \"max_token\": 8185, \"just_train\": true, \"patch_length\": 20}], \"vocab_size\": 3570, \"core\": [\"vrnn\", {\"hidden_to_hidden\": [\"he_uniform\", null], \"in_to_hidden\": [\"he_uniform\", null], \"act_fn\": \"leaky_relu2\", \"core_dim\": 35}], \"trainable_init\": false, \"loss_compute\": \"max\"}" }, { "heading": "H.3 FIXED TASKS", "text": "In addition to sampled tasks, we also define a set of hand-designed and hand-specified tasks. These tasks are either more typical of what a researcher would do (e.g. using default initializations) or exercise specific architectural features such as bottlenecks in autoencoders, normalization, or dropout.\nIn total there are 107 fixed tasks. Each task is labeled by name with some information about the underlying task. We list all tasks and discuss groups of tasks, but will not describe each task in detail. Please see the source for exact details.\nAssociative_GRU128_BS128_Pairs10_Tokens50 Associative_GRU256_BS128_Pairs20_Tokens50 Associative_LSTM128_BS128_Pairs10_Tokens50 Associative_LSTM128_BS128_Pairs20_Tokens50 Associative_LSTM128_BS128_Pairs5_Tokens20 Associative_LSTM256_BS128_Pairs20_Tokens50 Associative_LSTM256_BS128_Pairs40_Tokens100 Associative_VRNN128_BS128_Pairs10_Tokens50 Associative_VRNN256_BS128_Pairs20_Tokens50\nThese tasks use RNNs to perform an associative memory task. Given a vocab of tokens, some number of pairs to store, and a query, the RNN's goal is to produce the desired value. For example, given the input sequence A1B2C3?B_ the RNN should produce ________B. A toy example generator is sketched below.
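A toy generator for such associative-retrieval examples, as referenced above (our own sketch, not the TaskSet source; the real tasks embed tokens and weight the loss so it is only incurred on the prediction positions):

import random

def make_example(num_pairs=3, vocab="ABCDEFGHIJ", values="0123456789",
                 rng=random.Random(0)):
    keys = rng.sample(vocab, num_pairs)
    vals = [rng.choice(values) for _ in keys]
    query = rng.randrange(num_pairs)
    # Input: key/value pairs, then "?<key>" and a blank slot for the answer.
    inp = "".join(k + v for k, v in zip(keys, vals)) + "?" + keys[query] + "_"
    # Target: padding everywhere except the final slot, which holds the queried value.
    tgt = "_" * (len(inp) - 1) + vals[query]
    return inp, tgt

print(make_example())  # e.g. ('B7J2E5?J_', '________2')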
This model embeds tokens, applies an RNN, and applies a linear layer to map back to the output space. Softmax cross entropy loss is used to compare outputs. A weight is also placed on the losses so that loss is incurred only when the RNN is supposed to predict. For RNN cells we use LSTM (Hochreiter & Schmidhuber, 1997), GRU (Chung et al., 2014), and VRNN – a vanilla RNN. The preceding tasks are defined by the corresponding RNN cell, number of units, batch size, sequence length, and number of possible tokens for the retrieval task.\nCopy_GRU128_BS128_Length20_Tokens10 Copy_GRU256_BS128_Length40_Tokens50 Copy_LSTM128_BS128_Length20_Tokens10 Copy_LSTM128_BS128_Length20_Tokens20 Copy_LSTM128_BS128_Length50_Tokens5 Copy_LSTM128_BS128_Length5_Tokens10 Copy_LSTM256_BS128_Length40_Tokens50 Copy_VRNN128_BS128_Length20_Tokens10 Copy_VRNN256_BS128_Length40_Tokens50\nThese tasks use RNNs to perform a copy task. Given a vocab of tokens and some number of tokens, the RNN's job is to read the tokens and to produce the corresponding outputs. For example, an input might be ABBC|____ and the RNN should output ____|ABBC. See the source for a complete description of the task. Each task in this set varies the RNN core as well as the dataset structure.\nThis model embeds tokens, applies an RNN, and applies a linear layer to map back to the output space. Softmax cross entropy loss is used to compare outputs. A weight is also placed on the losses so that loss is incurred only when the RNN is supposed to predict. For RNN cells we use LSTM (Hochreiter & Schmidhuber, 1997), GRU (Chung et al., 2014), and VRNN – a vanilla RNN. The preceding tasks are defined by the corresponding RNN cell, number of units, batch size, sequence lengths, and number of possible tokens." }, { "heading": "FixedImageConvAE_cifar10_32x32x32x32x32_bs128 FixedImageConvAE_cifar10_32x64x8x64x32_bs128 FixedImageConvAE_mnist_32x32x32x32x32_bs128", "text": "FixedImageConvAE_mnist_32x64x32x64x32_bs512" }, { "heading": "FixedImageConvAE_mnist_32x64x8x64x32_bs128", "text": "Convolutional autoencoders trained on different datasets and with different architectures (sizes of hidden units).\nFixedImageConvVAE_cifar10_32x64x128x64x128x64x32_bs128 FixedImageConvVAE_cifar10_32x64x128x64x128x64x32_bs512 FixedImageConvVAE_cifar10_32x64x128x64x32_bs128 FixedImageConvVAE_cifar10_64x128x256x128x256x128x64_bs128 FixedImageConvVAE_mnist_32x32x32x32x32_bs128 FixedImageConvVAE_mnist_32x64x32x64x32_bs128 FixedImageConvVAE_mnist_64x128x128x128x64_bs128\nConvolutional variational autoencoders trained on different datasets, batch sizes, and with different architectures."
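For reference, the variational autoencoder tasks above minimize a negative evidence lower bound; in standard notation (this is the textbook VAE objective, not a formula quoted from the TaskSet source):

$$\mathcal{L}(x) \;=\; -\,\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;+\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big).$$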
}, { "heading": "FixedImageConv_cifar100_32x64x128_FC64x32_tanh_variance_scaling_bs64", "text": "FixedImageConv_cifar100_32x64x64_flatten_bs128 FixedImageConv_cifar100_bn_32x64x128x128_bs128 FixedImageConv_cifar10_32x64x128_flatten_FC64x32_tanh_he_bs8 FixedImageConv_cifar10_32x64x128_flatten_FC64x32_tanh_variance_scaling_bs64 FixedImageConv_cifar10_32x64x128_he_bs64 FixedImageConv_cifar10_32x64x128_largenormal_bs64 FixedImageConv_cifar10_32x64x128_normal_bs64 FixedImageConv_cifar10_32x64x128_smallnormal_bs64 FixedImageConv_cifar10_32x64x128x128x128_avg_he_bs64 FixedImageConv_cifar10_32x64x64_bs128 FixedImageConv_cifar10_32x64x64_fc_64_bs128 FixedImageConv_cifar10_32x64x64_flatten_bs128 FixedImageConv_cifar10_32x64x64_tanh_bs64 FixedImageConv_cifar10_batchnorm_32x32x32x64x64_bs128 FixedImageConv_cifar10_batchnorm_32x64x64_bs128 FixedImageConv_coil10032x32_bn_32x64x128x128_bs128 FixedImageConv_colorectalhistology32x32_32x64x64_flatten_bs128 FixedImageConv_food10164x64_Conv_32x64x64_flatten_bs64 FixedImageConv_food101_batchnorm_32x32x32x64x64_bs128 FixedImageConv_mnist_32x64x64_fc_64_bs128 FixedImageConv_sun39732x32_bn_32x64x128x128_bs128 Mnist_Conv_32x16x64_flatten_FC32_tanh_bs32\nConvolutional neural networks doing supervised classification. These models vary in dataset, architecture, and initializations.\nFixedLM_lm1b_patch128_GRU128_embed64_avg_bs128 FixedLM_lm1b_patch128_GRU256_embed64_avg_bs128 FixedLM_lm1b_patch128_GRU64_embed64_avg_bs128 FixedLM_lm1b_patch128_LSTM128_embed64_avg_bs128 FixedLM_lm1b_patch128_LSTM256_embed64_avg_bs128\nLanguage modeling tasks on different RNN cell types and sizes." }, { "heading": "FixedMAF_cifar10_3layer_bs64 FixedMAF_mnist_2layer_bs64 FixedMAF_mnist_3layer_thin_bs64", "text": "Masked auto regressive flows models with different architectures (number of layers and sizes).\nFixedMLPAE_cifar10_128x32x128_bs128 FixedMLPAE_mnist_128x32x128_bs128 FixedMLPAE_mnist_32x32x32_bs128\nAutoencoder models based on multi layer perceptron with different number of hidden layers and dataset.\nFixedMLPVAE_cifar101_128x128x32x128x128_bs128 FixedMLPVAE_cifar101_128x32x128_bs128 FixedMLPVAE_food10132x32_128x64x32x64x128_bs64 FixedMLPVAE_mnist_128x128x8x128_bs128 FixedMLPVAE_mnist_128x64x32x64x128_bs64 FixedMLPVAE_mnist_128x8x128x128_bs128 Imagenet32x30_FC_VAE_128x64x32x64x128_relu_bs256\nVariational autoencoder models built from multi layer perceptron with different datasets, batchsizes, and architectures.\nFixedMLP_cifar10_BatchNorm_128x128x128_relu_bs128 FixedMLP_cifar10_BatchNorm_64x64x64x64x64_relu_bs128 FixedMLP_cifar10_Dropout02_128x128_relu_bs128 FixedMLP_cifar10_Dropout05_128x128_relu_bs128 FixedMLP_cifar10_Dropout08_128x128_relu_bs128 FixedMLP_cifar10_LayerNorm_128x128x128_relu_bs128 FixedMLP_cifar10_LayerNorm_128x128x128_tanh_bs128 FixedMLP_cifar10_ce_128x128x128_relu_bs128 FixedMLP_cifar10_mse_128x128x128_relu_bs128 FixedMLP_food10132x32_ce_128x128x128_relu_bs128 FixedMLP_food10132x32_mse_128x128x128_relu_bs128 FixedMLP_mnist_ce_128x128x128_relu_bs128 FixedMLP_mnist_mse_128x128x128_relu_bs128 FixedNVP_mnist_2layer_bs64\nImage classification based on multi layer perceptron. 
We vary architecture, data, batch size, normalization techniques, dropout, and loss type across problems.\nFixedNVP_mnist_3layer_thin_bs64 FixedNVP_mnist_5layer_bs64 FixedNVP_mnist_5layer_thin_bs64 FixedNVP_mnist_9layer_thin_bs16\nNon-volume-preserving flow models with different batch sizes and architectures.\nFixedTextRNNClassification_imdb_patch128_LSTM128_avg_bs64 FixedTextRNNClassification_imdb_patch128_LSTM128_bs64 FixedTextRNNClassification_imdb_patch128_LSTM128_embed128_bs64 FixedTextRNNClassification_imdb_patch32_GRU128_bs128 FixedTextRNNClassification_imdb_patch32_GRU64_avg_bs128 FixedTextRNNClassification_imdb_patch32_IRNN64_relu_avg_bs128 FixedTextRNNClassification_imdb_patch32_IRNN64_relu_last_bs128 FixedTextRNNClassification_imdb_patch32_LSTM128_E128_bs128 FixedTextRNNClassification_imdb_patch32_LSTM128_bs128 FixedTextRNNClassification_imdb_patch32_VRNN128_tanh_bs128 FixedTextRNNClassification_imdb_patch32_VRNN64_relu_avg_bs128 FixedTextRNNClassification_imdb_patch32_VRNN64_tanh_avg_bs128\nRNN text classification problems with different RNN cells, sizes, embedding sizes, and batch sizes.\nTwoD_Bowl1 TwoD_Bowl10 TwoD_Bowl100 TwoD_Bowl1000\n2D quadratic bowls with different condition numbers.\nTwoD_Rosenbrock TwoD_StyblinskiTang TwoD_Ackley TwoD_Beale\nToy 2D test functions." } ]
2020
TASKSET: A DATASET OF OPTIMIZATION TASKS
SP:4bda50ce81c790cf9b19a24d81db4c07ec3729c1
[ "The purpose of the paper seems clear: it proposes an attack to the recently proposed algorithm called Instahide (ICML 2020) which is a probabilistic algorithm for generating synthetic private data in the distributed setting. The attack proposed in this paper is considered for the case where the private data is i.i.d. Gaussian distributed, and Thm 1.1 says that one can recover k original feature vectors with O(k^2) + O(M^2) computational complexity, where M is the total number of original data elements." ]
In this work, we examine the security of InstaHide, a scheme recently proposed by Huang et al. (2020b) for preserving the security of private datasets in the context of distributed learning. To generate a synthetic training example to be shared among the distributed learners, InstaHide takes a convex combination of private feature vectors and randomly flips the sign of each entry of the resulting vector with probability 1/2. A salient question is whether this scheme is secure in any provable sense, perhaps under a plausible complexity-theoretic assumption. The answer to this turns out to be quite subtle and closely related to the averagecase complexity of a multi-task, missing-data version of the classic problem of phase retrieval that is interesting in its own right. Motivated by this connection, under the standard distributional assumption that the public/private feature vectors are isotropic Gaussian, we design an algorithm that can actually recover a private vector using only the public vectors and a sequence of synthetic vectors generated by InstaHide.
[ { "affiliations": [], "name": "SPARSE MA" }, { "affiliations": [], "name": "TRIX FACTORIZATION" }, { "affiliations": [], "name": "Sitan Chen" }, { "affiliations": [], "name": "Xiaoxiao Li" }, { "affiliations": [], "name": "Danyang Zhuo" } ]
[ { "authors": [ "Sébastien Bubeck", "Yin Tat Lee", "Eric Price", "Ilya Razenshteyn" ], "title": "Adversarial examples from computational constraints", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "T Tony Cai", "Xiaodong Li", "Zongming Ma" ], "title": "Optimal rates of convergence for noisy sparse phase retrieval via thresholded wirtinger flow", "venue": "The Annals of Statistics,", "year": 2016 }, { "authors": [ "Emmanuel J Candes", "Thomas Strohmer", "Vladislav Voroninski" ], "title": "Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming", "venue": "Communications on Pure and Applied Mathematics,", "year": 2013 }, { "authors": [ "Emmanuel J Candes", "Xiaodong Li", "Mahdi Soltanolkotabi" ], "title": "Phase retrieval via wirtinger flow: Theory and algorithms", "venue": "IEEE Transactions on Information Theory,", "year": 1985 }, { "authors": [ "Nicholas Carlini", "Samuel Deng", "Sanjam Garg", "Somesh Jha", "Saeed Mahloujifar", "Mohammad Mahmoody", "Shuang Song", "Abhradeep Thakurta", "Florian Tramer" ], "title": "An attack on instahide: Is private learning possible with instance encoding", "venue": "arXiv preprint arXiv:2011.05315,", "year": 2020 }, { "authors": [ "Sitan Chen", "Jerry Li", "Zhao Song" ], "title": "Learning mixtures of linear regressions in subexponential time via fourier moments", "venue": "In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (STOC),", "year": 2020 }, { "authors": [ "Aldo Conca", "Dan Edidin", "Milena Hering", "Cynthia Vinzant" ], "title": "An algebraic characterization of injectivity in phase retrieval", "venue": "Applied and Computational Harmonic Analysis,", "year": 2015 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition (CVPR),", "year": 2009 }, { "authors": [ "Ilias Diakonikolas", "Daniel M Kane", "Alistair Stewart" ], "title": "Statistical query lower bounds for robust estimation of high-dimensional gaussians and gaussian mixtures", "venue": "IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2017 }, { "authors": [ "Ilias Diakonikolas", "Daniel Kane", "Nikos Zarifis" ], "title": "Near-optimal sq lower bounds for agnostically learning halfspaces and relus under gaussian marginals", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Ilias Diakonikolas", "Daniel M Kane", "Vasilis Kontonis", "Nikos Zarifis" ], "title": "Algorithms and sq lower bounds for pac learning one-hidden-layer relu networks", "venue": "In Conference on Learning Theory (COLT),", "year": 2020 }, { "authors": [ "Rong Ge", "Jason D Lee", "Tengyu Ma" ], "title": "Learning one-hidden-layer neural networks with landscape design", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Surbhi Goel", "Aravind Gollakota", "Zhihan Jin", "Sushrut Karmalkar", "Adam Klivans" ], "title": "Superpolynomial lower bounds for learning one-layer neural networks using gradient descent", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Surbhi Goel", "Aravind Gollakota", "Adam Klivans" ], "title": "Statistical-query lower bounds via functional gradients", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2020 }, { "authors": [ "Moritz Hardt", "Eric 
Price" ], "title": "Tight bounds for learning a mixture of two gaussians", "venue": "In Proceedings of the forty-seventh annual ACM symposium on Theory of computing (STOC),", "year": 2015 }, { "authors": [ "Yangsibo Huang", "Zhao Song", "Danqi Chen", "Kai Li", "Sanjeev Arora" ], "title": "Texthide: Tackling data privacy in language understanding tasks", "venue": "In The Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP),", "year": 2020 }, { "authors": [ "Yangsibo Huang", "Zhao Song", "Kai Li", "Sanjeev Arora" ], "title": "Instahide: Instance-hiding schemes for private distributed learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Matthew Jagielski" ], "title": "https://colab.research.google.com/drive/ 1ONVjStz2m3BdKCE16axVHZ00hcwdivH2?usp=sharing", "venue": null, "year": 2020 }, { "authors": [ "Haotian Jiang", "Tarun Kathuria", "Yin Tat Lee", "Swati Padmanabhan", "Zhao Song" ], "title": "A faster interior point method for semidefinite programming", "venue": "In FOCS,", "year": 2020 }, { "authors": [ "Raymond Kan", "Cesare Robotti" ], "title": "On moments of folded and truncated multivariate normal distributions", "venue": "Journal of Computational and Graphical Statistics,", "year": 2017 }, { "authors": [ "Adam Klivans", "Pravesh Kothari" ], "title": "Embedding hard learning problems into gaussian space. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)", "venue": "Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik,", "year": 2014 }, { "authors": [ "Weihao Kong", "Raghav Somani", "Zhao Song", "Sham Kakade", "Sewoong Oh" ], "title": "Meta-learning for mixed linear regression", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning mixtures of linear regressions with nearly optimal complexity", "venue": "In Conference on Learning Theory (COLT),", "year": 2018 }, { "authors": [ "Yuanzhi Li", "Yang Yuan" ], "title": "Convergence analysis of two-layer neural networks with ReLU activation", "venue": "In Advances in neural information processing systems (NIPS),", "year": 2017 }, { "authors": [ "Ankur Moitra", "Gregory Valiant" ], "title": "Settling the polynomial learnability of mixtures of gaussians", "venue": "IEEE 51st Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2010 }, { "authors": [ "Praneeth Netrapalli", "Prateek Jain", "Sujay Sanghavi" ], "title": "Phase retrieval using alternating minimization", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2013 }, { "authors": [ "Matey Neykov", "Zhaoran Wang", "Han Liu" ], "title": "Agnostic estimation for misspecified phase retrieval models", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Oded Regev", "Aravindan Vijayaraghavan" ], "title": "On learning mixtures of well-separated gaussians", "venue": "IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2017 }, { "authors": [ "Xinyang Yi", "Constantine Caramanis", "Sujay Sanghavi" ], "title": "Alternating minimization for mixed linear regression", "venue": "In International Conference on Machine Learning (ICML),", "year": 2014 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 
}, { "authors": [ "Kai Zhong", "Zhao Song", "Inderjit S Dhillon" ], "title": "Learning non-overlapping convolutional neural networks with multiple kernels", "venue": "In arXiv preprint,", "year": 2017 }, { "authors": [ "Kai Zhong", "Zhao Song", "Prateek Jain", "Peter L. Bartlett", "Inderjit S. Dhillon" ], "title": "Recovery guarantees for one-hidden-layer neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Kai Zhong", "Zhao Song", "Prateek Jain", "Inderjit S Dhillon" ], "title": "Provable non-linear inductive matrix completion", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Carlini" ], "title": "2020) independently gave an attack breaking the InstaHide challenge originally released by the authors of Huang et al. (2020b). In that challenge, the public dataset was ImageNet, the private dataset consisted of npriv = 100 natural images, and kpriv = 2, kpub = 4, m = 5000", "venue": null, "year": 2020 }, { "authors": [ "Remark B" ], "title": "Here we give some interpretation to the quantitative guarantees", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "In this work, we examine the security of InstaHide, a scheme recently proposed by Huang et al. (2020b) for preserving the security of private datasets in the context of distributed learning. To generate a synthetic training example to be shared among the distributed learners, InstaHide takes a convex combination of private feature vectors and randomly flips the sign of each entry of the resulting vector with probability 1/2. A salient question is whether this scheme is secure in any provable sense, perhaps under a plausible complexity-theoretic assumption. The answer to this turns out to be quite subtle and closely related to the averagecase complexity of a multi-task, missing-data version of the classic problem of phase retrieval that is interesting in its own right. Motivated by this connection, under the standard distributional assumption that the public/private feature vectors are isotropic Gaussian, we design an algorithm that can actually recover a private vector using only the public vectors and a sequence of synthetic vectors generated by InstaHide." }, { "heading": "1 INTRODUCTION", "text": "In distributed learning, where decentralized parties each possess some private local data and work together to train a global model, a central challenge is to ensure that the security of any individual party’s local data is not compromised. Huang et al. (2020b) recently proposed an interesting approach called InstaHide for this problem. At a high level, InstaHide is a method for aggregating local data into synthetic data that can hopefully preserve the privacy of the local datasets and be used to train good models.\nInformally, given a collection of public feature vectors (e.g. a publicly available dataset like ImageNet Deng et al. (2009)) and a collection of private feature vectors (e.g. the union of all of the private datasets among learners), InstaHide produces a synthetic feature vector as follows. Let integers kpub, kpriv be sparsity parameters.\n1. Form a random convex combination of kpub public and kpriv private vectors.\n2. Multiply every coordinate of the resulting vector by an independent random sign in {±1}, and define this to be the synthetic feature vector.\nThe hope is that by removing any sign information from the vector obtained in Step 1, Step 2 makes it difficult to discern which public and private vectors were selected in Step 1. Strikingly, Huang et al. (2020b) demonstrated on real-world datasets that if one trains a ResNet-18 or a NASNet on a\n∗This work was supported in part by NSF CAREER Award CCF-1453261, NSF Large CCF-1565235 and Ankur Moitra’s ONR Young Investigator Award.\ndataset consisting of synthetic vectors generated in this fashion, one can still get good test accuracy on the underlying private dataset for modest sparsity parameters (e.g. kpub = kpriv = 2). 1\nThe two outstanding theoretical challenges that InstaHide poses are understanding:\n• Utility: What property, either of neural networks or of real-world distributions, lets one tolerate this kind of covariate shift between the synthetic and original datasets?\n• Security: Can one rigorously formulate a refutable security claim for InstaHide, under a plausible average-case complexity-theoretic assumption?\nIn this paper we consider the latter question. One informal security claim implicit in Huang et al. 
(2020b) is that given a synthetic dataset of a certain size, no efficient algorithm can recover a private image to within a certain level of accuracy (see Problem 1 for a formal statement of this recovery question). On the one hand, it is a worthwhile topic of debate whether this is a satisfactory guarantee from a security standpoint. On the other, even this kind of claim is quite delicate to pin down formally, in part because it seems impossible for such a claim to hold for arbitrary private datasets.\nKnown Attacks and the Importance of Distributional Assumptions If the private and public datasets consisted of natural images, for example, then attacks are known Jagielski (2020); Carlini et al. (2020). At a high level, the attack of Jagielski (2020) crucially leverages local Lipschitzness properties of natural images and shows that when kpriv + kpub = 2, even a single synthetic image can reveal significant information. The very recent attack of Carlini et al. (2020), which was independent of the present work and appeared a month after this submission appeared online, is more sophisticated and bears interesting similarities to the algorithms we consider. We defer a detailed discussion of these similarities to Appendix A in the supplement.\nWhile the original InstaHide paper Huang et al. (2020b) focused on image data, their general approach has the potential to be applicable to other forms of real-valued data, and it is an interesting mathematical question whether the above attacks remain viable. For instance, for distributions over private vectors where individual features are nearly independent, one cannot hope to leverage the kinds of local Lipschitzness properties that the attack of Jagielski (2020) exploits. Additionally, if the individual features are identically distributed, then it is information-theoretically impossible to discern anything from just a single synthetic vector. For instance, if a synthetic vector $\tilde{v}$ is given by the entrywise absolute value of $\frac{1}{2}v_1 + \frac{1}{2}v_2$ for private vectors $v_1, v_2$, then an equally plausible pair of private vectors generating $\tilde{v}$ would be $v_1', v_2'$, given by swapping the i-th entry of $v_1$ with that of $v_2$ for any collection of indices i ∈ [d]. In other words, there are $2^d$ pairs of private vectors which are equally likely under the Gaussian measure and give rise to the exact same synthetic vector.\nGaussian Images, and Our Results A natural candidate for probing whether such properties can make the problem of recovering private vectors more challenging is the case where the public and private vectors are sampled from the standard Gaussian distribution over Rd. While this distribution does not capture datasets in the real world, it avoids some properties of distributions over natural images that might make InstaHide more vulnerable to attack and is thus a clean testbed for stress-testing candidate security claims for InstaHide. Furthermore, in light of known hardness results for certain learning problems over Gaussian space Diakonikolas et al. (2017); Bruna et al. (2020); Diakonikolas et al. (2020b); Goel et al. (2020a); Diakonikolas et al. (2020a); Klivans & Kothari (2014); Goel et al. (2020b); Bubeck et al. (2019); Regev & Vijayaraghavan (2017), one might hope that when the vectors are Gaussian, one could rigorously establish some lower bounds, e.g. 
on the size of the synthetic dataset (information-theoretic) and/or the runtime of the attacker (computational), perhaps under an average-case assumption, or in some restricted computational model like SQ.\nOrthogonally, we note that the recovery task the attacker must solve appears to be an interesting inverse problem in its own right, namely a multi-task, missing-entry version of phase retrieval with an intriguing connection to sparse matrix factorization (see Section 2.2 and Section 3). The assumption of Gaussianity is a natural starting point for understanding the average-case complexity of this problem, and in this learning-theoretic context it is desirable to give algorithms with provable guarantees.\n1We did not describe how the labels for the synthetic vectors are assigned, but this part of InstaHide will not be important for our theoretical results and we defer discussion of labels to Section 4.\nGaussianity is often a standard starting point for developing guarantees for such inverse problems Moitra & Valiant (2010); Netrapalli et al. (2013); Candes et al. (2015); Hardt & Price (2015); Zhong et al. (2017b;a); Li & Yuan (2017); Ge et al. (2018); Li & Liang (2018); Zhong et al. (2019); Chen et al. (2020); Kong et al. (2020); Diakonikolas et al. (2020b).\nOur main result is to show that when the private and public data is Gaussian, we can use the synthetic and public vectors to recover a subset of the private vectors.\nTheorem 1.1 (Informal, see Theorem B.1). If there are npriv private vectors and npub public vectors, each of which is an i.i.d. draw from $\mathcal{N}(0, \mathrm{Id}_d)$, then as long as $d = \Omega(\mathrm{poly}(k_{\mathrm{pub}}, k_{\mathrm{priv}}) \log(n_{\mathrm{pub}} + n_{\mathrm{priv}}))$, there is some $m = o(n_{\mathrm{priv}}^{k_{\mathrm{priv}}})$ such that, given a sample of m random synthetic vectors independently generated as above, one can exactly recover kpriv + 2 private vectors in time $O(d(m^2 + n_{\mathrm{pub}}^2)) + \mathrm{poly}(n_{\mathrm{pub}})$ with probability 9/10 over the randomness of the private and public vectors and the randomness of the selection vectors.2\nWe emphasize that we can take $m = o(n_{\mathrm{priv}}^{k_{\mathrm{priv}}})$, meaning we can achieve recovery even with access to a vanishing fraction of all possible combinations of private vectors among the synthetic vectors generated. For instance, when kpriv = 2, we show that $m = O(n_{\mathrm{priv}}^{4/3})$ suffices (see Theorem B.1). See Remark B.2 for additional discussion.\nAdditionally, to ensure we are not working in an uninteresting setting where InstaHide has zero utility, we empirically verify that in the setting of Theorem 1.1, one can train on the synthetic vectors and get reasonable test accuracy on the original Gaussian dataset (see Section 4).\nQualitatively, the main takeaway of Theorem 1.1 is that to prove meaningful security guarantees for InstaHide, we must be careful about the properties we posit about the underlying distribution generating the public and private data, even in challenging settings where this data does not possess the nice properties of natural images that have made other attacks possible." }, { "heading": "1.1 CONNECTIONS AND EXTENSIONS TO PHASE RETRIEVAL", "text": "Our algorithm is based on connections and extensions to the classic problem of phase retrieval. At a high level, this can be thought of as the problem of linear regression where the signs of the linear responses are hidden. More formally, this is a setting where we get pairs $(x_1, y_1), \ldots, (x_N, y_N) \in \mathbb{C}^n \times \mathbb{R}$ for which there exists a vector $w \in \mathbb{C}^n$ satisfying $|\langle w, x_i\rangle| = y_i$ for all $i = 1, \ldots, N$, and the goal is to recover w. 
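One standard way to turn this recovery task into an optimization problem (a general formulation from the phase retrieval literature, e.g. the squared-magnitude loss minimized by Wirtinger flow in Candes et al. (2015), not a formula from this paper) is the empirical risk

$$\min_{w \in \mathbb{C}^n} \; \frac{1}{N} \sum_{i=1}^{N} \Big( |\langle w, x_i \rangle|^2 - y_i^2 \Big)^2.$$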
Without distributional assumptions on how x1, ..., xN are generated, this problem is NP-hard Yi et al. (2014), and in the last decade, there has been a huge body of work, much of it coming from the machine learning community, on giving algorithms for recovering w under the assumption that x1, ..., xN are i.i.d. Gaussian, see e.g. Candes et al. (2013; 2015); Conca et al. (2015); Netrapalli et al. (2013).\nTo see the connection between InstaHide and phase retrieval, first imagine that InstaHide only works with public vectors (in the notation of Theorem 1.1, npriv = kpriv = 0). Now, consider a synthetic vector y ∈ Rd generated by InstaHide, and let the vector w ∈ Rnpub be the one specifying the convex combination of public vectors that generated y. The basic observation is that for any feature i ∈ [d], if $p_i \in \mathbb{R}^{n_{\mathrm{pub}}}$ is the vector consisting of i-th coordinates of all the public vectors, then $|\langle w, p_i\rangle| = y_i$. In other words, if InstaHide only works with public vectors, then the problem of recovering which public vectors generated a given synthetic vector is formally equivalent to phase retrieval. In particular, if the public dataset is Gaussian, then we can leverage the existing algorithms for Gaussian phase retrieval. Huang et al. (2020b) already noted this connection but argued that if InstaHide also uses private vectors, the existing algorithms for phase retrieval fail. Indeed, consider the extreme case where InstaHide only works with private vectors (i.e. npub = 0), so that the only information we have access to is the synthetic vector (y1, ..., yd) generated by InstaHide. As noted above in the discussion about private distributions where the features are identically distributed, it is clearly information-theoretically impossible to recover anything about w or the private dataset.\nAs we will see, the key workaround is to exploit the fact that InstaHide ultimately generates multiple synthetic vectors, each of which is defined by a random sparse convex combination of public/private vectors. And as we will make formal in Section 2.2, the right algorithmic question to study in this context can be thought of as a multi-task, missing-data version of phase retrieval (see Problem 2) that we believe to be of independent interest.\nLastly, we remark that in spite of this conceptual connection to phase retrieval, and apart from one component of our algorithm (see Section B.1) which draws upon existing techniques for phase retrieval, the most involved parts of our algorithm and its analysis utilize techniques that are quite different from the existing ones in the phase retrieval literature. We elaborate upon these techniques in Section 3." }, { "heading": "2 TECHNICAL PRELIMINARIES", "text": "Miscellaneous Notation Given a subset T, let $\mathcal{C}^k_T$ denote the set of all subsets of T of size exactly k. Given a vector v ∈ Rn and a subset S ⊆ [n], let $[v]_S \in \mathbb{R}^{|S|}$ denote the restriction of v to the coordinates indexed by S.\nDefinition 2.1. Given a Gaussian distribution N (0,Σ), let $\mathcal{N}^{\mathrm{fold}}(0,\Sigma)$ denote the folded Gaussian distribution defined as follows: to sample from $\mathcal{N}^{\mathrm{fold}}(0,\Sigma)$, sample g ∼ N (0,Σ) and output |g|." }, { "heading": "2.1 THE GENERATIVE MODEL", "text": "Definition 2.2 (Image matrix notation). Let image matrix X ∈ Rd×n be a matrix whose columns consist of vectors x1, ..., xn corresponding to n images, each with d pixels taking values in F.3 It will also be convenient to refer to the rows of X as p1, ..., pd ∈ Rn. 
Definition 2.3 (Public/private notation). Let S ⊂ {1, ..., n} be some subset. We will refer to S and $S^c \triangleq \{1, \ldots, n\} \setminus S$ as the set of public and private images respectively, and given a vector w ∈ Rn, we will refer to supp(w) ∩ S and supp(w) ∩ Sc as the public and private coordinates of w respectively.\nDefinition 2.4 (Synthetic images). Given sparsity levels kpub ≤ |S|, kpriv ≤ |Sc|, an image matrix X, and a selection vector w ∈ Rn for which [w]S and [w]Sc are kpub- and kpriv-sparse respectively, the corresponding synthetic image is the vector\n$$y^{X,w} \triangleq |Xw|, \qquad (1)$$\nwhere |·| denotes entrywise absolute value. We say that X and a sequence of selection vectors w1, ..., wm ∈ Rn give rise to a synthetic dataset consisting of the images $\{y^{X,w_1}, \ldots, y^{X,w_m}\}$.\nNote that instead of the entrywise absolute value of Xw, InstaHide in Huang et al. (2020b) randomly flips the sign of every entry of Xw, but these two operations are interchangeable in terms of information; it will be slightly more convenient to work with the former.\nWe will work with the following distributional assumption on the entries of X:\nDefinition 2.5 (Gaussian images). We say that X is a random Gaussian image matrix if its entries are sampled i.i.d. from N (0, 1).\nWe will also work with the following simple notion of “random convex combination” as our model for how the selection vectors w1, . . . , wm are generated:\nDefinition 2.6 (Distribution over selection vectors). Let D be the distribution over selection vectors defined as follows. To sample once from D, draw random subsets T1 ⊂ S, T2 ⊆ Sc of size kpub and kpriv and output the unit vector whose i-th entry is $1/\sqrt{k_{\mathrm{pub}}}$ if i ∈ T1, $1/\sqrt{k_{\mathrm{priv}}}$ if i ∈ T2, and zero otherwise.4\nThe main algorithmic question we study is the following:\n3We will often refer to public/private/synthetic feature vectors as images, and their coordinates as pixels, in keeping with the original applications of InstaHide to image datasets in Huang et al. (2020b)\n4Note that any such vector does not specify a convex combination, but this choice of normalization is just to make some of the analysis later on somewhat cleaner, and our results would still hold if we chose the vectors in the support of D to have entries summing to 1.\nProblem 1 (Private (exact) image recovery). Let X ∈ Rd×n be a Gaussian image matrix. Given access to the public images $\{x_s\}_{s \in S}$ and the synthetic dataset $\{y^{X,w_1}, \ldots, y^{X,w_m}\}$, where w1, ..., wm ∼ D are unknown selection vectors, output a vector x ∈ Rd for which there exists a private image xs (where s ∈ Sc) satisfying |xi| = |(xs)i| for all i ∈ [d].\nRemark 2.7. Note that it is information-theoretically impossible to guarantee that xi = (xs)i. This is because the distribution over X and the distribution over matrices given by sampling X and multiplying every private image by -1 are both Gaussian. And if the selection vectors w1, ..., wm generated the synthetic images in the former case, then the selection vectors $w_1', \ldots, w_m'$, where $w_j'$ is obtained by multiplying the private coordinates of wj by -1, would generate the exact same synthetic images." }, { "heading": "2.2 MULTI-TASK PHASE RETRIEVAL WITH MISSING DATA", "text": "In this section we make formal the discussion in Section 1.1 and situate it in the notation above. First consider a synthetic dataset consisting of a single image $y \triangleq y^{X,w}$, where w is arbitrary and X is a random Gaussian image matrix. From Eq. (1) we know that\n$$|\langle w, p_j \rangle| = y_j \quad \forall\, j \in [d].$$
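A minimal sketch of this generative model, mirroring Definitions 2.4-2.6 (our own illustrative code; all names are hypothetical):

import numpy as np

def sample_selection_vector(rng, n, S, k_pub, k_priv):
    # Definition 2.6: k_pub public and k_priv private coordinates, with the
    # 1/sqrt(k) normalization that makes each selection vector a unit vector.
    Sc = np.setdiff1d(np.arange(n), S)
    w = np.zeros(n)
    w[rng.choice(S, size=k_pub, replace=False)] = 1.0 / np.sqrt(k_pub)
    w[rng.choice(Sc, size=k_priv, replace=False)] = 1.0 / np.sqrt(k_priv)
    return w

rng = np.random.default_rng(0)
d, n, m = 100, 50, 10
X = rng.standard_normal((d, n))   # Definition 2.5: Gaussian image matrix
S = np.arange(25)                 # first 25 images are public, the rest private
W = np.stack([sample_selection_vector(rng, n, S, 2, 2) for _ in range(m)])
Y = np.abs(X @ W.T)               # Definition 2.4: synthetic images y = |Xw|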
If S = {1, ..., n}, then the problem of recovering the selection vector w from the synthetic dataset {y} is merely that of recovering w from pairs (pj, yj), and this is exactly the problem of phase retrieval over Gaussians. More precisely, because w is assumed to be sparse, this is the problem of sparse phase retrieval over Gaussians.\nIf S ( {1, ..., n}, then it's clearly impossible to recover the private coordinates of w from yX,w alone. But it may still be possible to recover the public coordinates: formally, we can hope to recover [w]S given pairs $([p_j]_S, y_j)$, where the pj's are sampled independently from N (0, Idn). This can be thought of as a missing-data version of sparse phase retrieval where some known subset of the coordinates of the inputs, those indexed by Sc, are unobserved.\nBut recall our ultimate goal is to say something about the private images. It turns out that because we actually observe multiple synthetic images, corresponding to multiple vectors w, it becomes possible to recover xs for some s ∈ Sc (even in the extreme case where S = ∅!). This corresponds to the following inverse problem which is formally equivalent to Problem 1, but phrased in a self-contained way which may be of independent interest.\nProblem 2 (Multi-task phase retrieval with missing data). Let S ( [n] and Sc = [n]\\S. Let X ∈ Rd×n be a matrix whose entries are i.i.d. draws from N (0, 1), with rows denoted by p1, ..., pd and columns denoted by x1, ..., xn. Let w1, . . . , wm ∼ D.\nFor every j ∈ [d], we get a tuple $\big([p_j]_S,\, y_j^{(1)}, \ldots, y_j^{(m)}\big)$ satisfying\n$$|\langle w_i, p_j\rangle| = y_j^{(i)} \quad \forall\, i \in [m],\ j \in [d].$$\nUsing just these, output x ∈ Rd such that for some s ∈ Sc, |xi| = |(xs)i| for all i ∈ [d]." }, { "heading": "3 PROOF OVERVIEW", "text": "At a high level, our algorithm has three components:\n1. Learn the public coordinates of all the selection vectors w1, ..., wm used to generate the synthetic dataset.\n2. Recover the m×m rescaled Gram matrix M whose (i, j)-th entry is k · 〈wi, wj〉.\n3. Use M and the synthetic dataset to recover a private image.\nStep 1 draws upon techniques in Gaussian phase retrieval, while Step 2 follows by leveraging the correspondence between the covariance matrix of a Gaussian and the covariance matrix of its corresponding folded Gaussian (see Definition 2.1). Step 3 is the trickiest part and calls for leveraging delicate properties of the distribution D over selection vectors.\nLearning the Public Coordinates of Any Selection Vector We begin by describing how to carry out Step 1 above. First consider the case where S = {1, . . . , n}, that is, where every image is public. Recall from the discussion in Sections 1.1 and 2.2 that in this case, the question of recovering w from the synthetic image yX,w is equivalent to Gaussian phase retrieval. One way to get a reasonable approximation to w is to consider the n × n matrix\n$$\mathbf{N} \triangleq \mathbb{E}_{p,y}\big[y^2 \cdot (pp^\top - \mathrm{Id})\big], \qquad p \sim \mathcal{N}(0, \mathrm{Id}_n)\ \text{and}\ y = |\langle w, p\rangle|.$$\nIt is a standard calculation (see Lemma B.3) to show that N is a rank-one matrix proportional to $ww^\top$. And as every one of p1, . . . , pd is an independent sample from N (0, Id), and $y^{X,w}_i$ satisfies $|\langle w, p_i\rangle| = y^{X,w}_i$ for every pixel i ∈ [d], one can approximate N with the matrix\n$$\widehat{\mathbf{N}} \triangleq \frac{1}{d} \sum_{i=1}^{d} \big(y^{X,w}_i\big)^2 \cdot (p_i p_i^\top - \mathrm{Id}).$$\nThis is the basis for the spectral initialization procedure that is present in many works on Gaussian phase retrieval, see e.g. Candes et al. (2015); Netrapalli et al. (2013). 
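A minimal sketch of this spectral estimator (our own illustrative code; in the full algorithm it is followed by the sparse-PCA post-processing discussed next, which is needed when d is much smaller than n):

import numpy as np

def spectral_init(P, y):
    # P: d x n matrix with rows p_i; y: length-d vector of synthetic pixel values.
    d, n = P.shape
    # N_hat = (1/d) * sum_i y_i^2 (p_i p_i^T - Id)
    N_hat = (P.T * y**2) @ P / d - np.mean(y**2) * np.eye(n)
    vals, vecs = np.linalg.eigh(N_hat)
    return vecs[:, -1]   # top eigenvector approximates w up to sign

rng = np.random.default_rng(1)
d, n, k = 4000, 20, 2
w = np.zeros(n); w[rng.choice(n, k, replace=False)] = 1 / np.sqrt(k)
P = rng.standard_normal((d, n))
y = np.abs(P @ w)
w_hat = spectral_init(P, y)
print(np.abs(w_hat @ w))   # close to 1 when d is large relative to n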
$\widehat{\mathbf{N}}$ will not be a sufficiently good spectral approximation to N when $d \ll n$, so instead we use a standard post-processing step based on the canonical SDP for sparse PCA (see (2)). Instead of taking the top eigenvector of N̂, we can take the top eigenvector of the SDP solution and argue that as long as $d = \widetilde{\Omega}(\mathrm{poly}(k_{\mathrm{pub}}) \log n)$, this will be sufficiently close to w that we can exactly recover supp(w).\nNow what happens when S ( {1, . . . , n}? Interestingly, if one simply modifies the definition of N to be $\mathbb{E}_{p,y}\big[y^2 \cdot ([p]_S [p]_S^\top - \mathrm{Id})\big]$ and defines the corresponding empirical analogue N̂ formed from the pairs $\{([p_i]_S, y^{X,w}_i)\}_{i \in [d]}$, one can still argue (see Lemma B.3) that N is a rank-1 |S|×|S| matrix proportional to $[w]_S [w]_S^\top$ and that the top eigenvector of the solution to a suitable SDP formed from N̂ will be close to w (see Lemma B.4).\nRecovering the Gram Matrix via Folded Gaussians As we noted earlier, it is information-theoretically impossible to recover [wi]Sc for any i ∈ [m] given only yX,wi and [wi]S, but we now show it's possible to recover the inner products 〈[wi]Sc, [wj]Sc〉 for any i, j ∈ [m]. For the remainder of the overview, we will work in the extreme case where Sc = {1, ..., n}, though it is not hard (see Section B.7) to combine the algorithms we are about to discuss with the algorithm for recovering the public coordinates to handle the case of general S. For brevity, let $k \triangleq k_{\mathrm{priv}}$.\nFirst note that the m × d matrix whose rows consist of yX,w1, ..., yX,wm can be written as\n$$\mathbf{Y} \triangleq \begin{pmatrix} |\langle p_1, w_1\rangle| & \cdots & |\langle p_d, w_1\rangle| \\ \vdots & \ddots & \vdots \\ |\langle p_1, w_m\rangle| & \cdots & |\langle p_d, w_m\rangle| \end{pmatrix}.$$\nObserve that without absolute values, each column would be an independent draw from the m-variate Gaussian N (0,M), where M is the Gram matrix defined above. Instead, with the absolute values, each column of Y is actually an independent draw from the folded Gaussian N fold(0,M) (Definition 2.1). The key point is that the covariance of N fold(0,M) can be directly related to M (see Corollary B.6), so by estimating the covariance of the folded Gaussian N fold(0,M) using the columns of Y, we can obtain a good enough approximation M̃ to M that we can simply round every entry of M̃ so that the rounded matrix exactly equals M. Furthermore, we only need to entrywise approximate the covariance of N fold(0,M) for all this to work, which is why it suffices for d to grow logarithmically in m.
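For intuition, here is a standard bivariate fact consistent with this approach (derivable, e.g., from the moment formulas of Kan & Robotti (2017); it is not a formula quoted from the paper): if $(g_i, g_j)$ are jointly Gaussian with unit variances and correlation $\rho$, then

$$\mathbb{E}\big[|g_i|\,|g_j|\big] \;=\; \frac{2}{\pi}\Big(\sqrt{1-\rho^2} + \rho \arcsin \rho\Big),$$

so the entrywise absolute moments of the columns of Y determine $|\rho|$. In the normalization above, the relevant correlation is $\rho = M_{ij}/k = k\langle w_i, w_j\rangle / k \ge 0$, and $k\langle w_i, w_j\rangle = |\mathrm{supp}(w_i) \cap \mathrm{supp}(w_j)|$ is an integer, which is what makes the exact rounding step possible.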
Discerning Structure From the Gram Matrix By this point, we have access to the Gram matrix M. Equivalently, we now know for any i, j ∈ [m] whether supp(wi) ∩ supp(wj) ≠ ∅, that is, for any pair of synthetic images, we can tell whether the set of private images generating one of them overlaps with the set generating the other. Note that $\mathbf{M} = k \cdot \mathbf{W}\mathbf{W}^\top$, where W is the matrix whose i-th row is wi, so if we could factorize M in this way, we would be able to recover which private vectors generated each synthetic vector. Of course, this kind of factorization problem, even if we constrain the factors to be row-sparse like W, has multiple solutions. One reason is that any permutation of the columns of W would also be a viable solution, but this is not really an issue because the ordering of the private images is not identifiable to begin with.\nA more serious issue is that if m is too small, there might be row-sparse factorizations of M which appear to be valid, but which we could definitively rule out upon sampling more synthetic vectors. For instance, suppose the first k+1 selection vectors all satisfied $|\mathrm{supp}(w_j) \cap \mathrm{supp}(w_{j'})| = k-1$. Ignoring the fact that this is highly unlikely, in such a scenario it is impossible to distinguish between the case where the corresponding synthetic images all have the same k−1 private images in common, and the case where there is a group T ⊆ [n] of k+1 private images such that each of these synthetic images is comprised of a subset of T of size k. But if we then sampled a new selection vector wk+2 for which |supp(wj) ∩ supp(wk+2)| = 1 for all j ∈ [k + 1], we could rule out the latter. This is indicative of a more general issue, namely that one cannot always recover the identity of a collection of subsets (even up to relabeling) if one only knows the sizes of their pairwise intersections!\nThis leads to the following natural combinatorial question. What families of sets are uniquely identified (up to trivial ambiguities) by the sizes of the pairwise intersections? One answer to this question, as we show, is the family of all subsets of {1, ..., k + 2} of size k (see Lemma B.11). This leads us to the following definition:\nDefinition 3.1 (Floral Submatrices). A $\binom{k+2}{k} \times \binom{k+2}{k}$ matrix H is floral if the following holds. Fix some lexicographic ordering on $\mathcal{C}^k_{[k+2]}$ and index H according to this ordering. There is some permutation matrix Π for which the matrix $\mathbf{H}' \triangleq \Pi^\top \mathbf{H} \Pi$ satisfies that for every pair of $S, S' \in \mathcal{C}^k_{[k+2]}$, $H'_{S,S'} = |S \cap S'|$. See Example B.18 in the supplement.\nThe upshot is that if we can identify a floral submatrix of M, then we know for certain that the subsets of private images picked by those selection vectors comprise all size-k subsets of some subset of [n] of size k + 2. In summary, using the pairwise intersection size information provided by the Gram matrix M, we can pinpoint collections of selection vectors which share a nontrivial amount of common structure.\nLearning a Private Image With a Floral Submatrix What can we do with this common structure in a floral submatrix? Let $t = \binom{k+2}{k}$. Given that the selection vectors wi1, ..., wit corresponding to the rows of the floral submatrix only involve k + 2 different private images altogether, and there are t > k + 2 constraints of the form $|\langle w_{i_j}, p_\ell\rangle| = y^{X,w_{i_j}}_\ell$ for any pixel ℓ ∈ [d], we can hope that for each pixel, we can uniquely recover the k + 2 private images from solving this system of equalities, where the unknowns are the values of the k + 2 private images at that particular pixel. A priori, the fact that the number of constraints in this system exceeds the number of unknowns does not immediately guarantee that this system has a unique solution up to multiplying the solution uniformly by −1. Here we exploit the fact that X is Gaussian to show, however, that this is the case almost surely (Lemma B.10). Finally, note that this system can be solved in time $\exp(O(k^2))$ by simply enumerating over $2^t$ sign patterns. We conclude that if we could find a floral submatrix, then we would find not just one, but in fact k + 2 private images!\nExistence of Floral Submatrix, and How to Find It It remains to understand how big m has to be before we can guarantee the existence of a floral submatrix inside the Gram matrix M with high probability. Obviously if m were big enough that with high probability we see every possible synthetic image that could arise from a selection vector w in the support of D, then M will contain many floral submatrices. One surprising part of our result is that we can ensure the existence of a floral submatrix when m is much smaller.
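As an aside, checking the floral property itself is cheap by brute force when k is small; a minimal sketch (our own code, not Algorithm 3 from the paper, which is far more efficient):

import numpy as np
from itertools import combinations, permutations

def is_floral(H, k):
    subsets = list(combinations(range(k + 2), k))
    t = len(subsets)
    assert H.shape == (t, t)
    # Target matrix of pairwise intersection sizes |S ∩ S'|.
    target = np.array([[len(set(a) & set(b)) for b in subsets] for a in subsets])
    # Definition 3.1: search over permutations P for P^T H P == target.
    for perm in permutations(range(t)):
        P = np.array(perm)
        if np.array_equal(H[np.ix_(P, P)], target):
            return True
    return False

# For k = 2 we have t = 6, so the search is over 6! = 720 permutations.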
Our proof of this is quite technical, but at a high level it is based on the second moment method (see Lemma B.12).\nThe final question is: provided a floral submatrix of M exists, how do we find it? Note that naively, we could always brute-force over all $\binom{m}{O(k^2)} \le n^{O(k^3)}$ principal submatrices with exactly $\binom{k+2}{k}$ rows/columns, and for each such principal submatrix we can check in $\exp(\widetilde{O}(k))$ time whether it is floral.\nSurprisingly, we give an algorithm that can identify a floral submatrix of M in time dominated by the time it takes to write down the entries of the Gram matrix M. Note that an off-the-shelf algorithm for subgraph isomorphism would not suffice, as the size of the submatrix in question is $O(k^2)$, and furthermore such an algorithm would need to work for weighted graphs. Instead, our approach is to use the constructive nature of the proof in Lemma B.11, that the family of all subsets of {1, ..., k + 2} of size k is uniquely identified by the sizes of the pairwise intersections. By algorithmizing this proof, we give an efficient procedure for finding a floral submatrix; see Algorithm 3 and Lemma B.17. An important fact we use is that if we restrict our attention to the entries of M equal to k − 1 or k − 2, this corresponds to a graph over the selection vectors which is sparse with high probability.\nWe defer the formal specification and analysis of our algorithm to the supplement." }, { "heading": "4 EXPERIMENTS", "text": "We describe an experiment demonstrating the utility of InstaHide for Gaussian images and comparing it to the utility of another data augmentation scheme, MixUp Zhang et al. (2018). We also informally report on our implementation of LEARNPUBLIC and its empirical efficacy." }, { "heading": "4.1 CHOICE OF ARCHITECTURE AND PARAMETERS", "text": "As our empirical results are purely for proof-of-concept, we work with a fairly basic neural network architecture. We use a 4-layer neural network as a binary classifier,\n$$y = \arg\max\big(\mathrm{softmax}(W_4\,\sigma(W_3\,\sigma(W_2\,\sigma(W_1 x + b_1) + b_2) + b_3) + b_4)\big),$$\nwhere $x \in \mathbb{R}^{10}$, $W_1 \in \mathbb{R}^{100 \times 10}$, $W_2, W_3 \in \mathbb{R}^{100 \times 100}$, $W_4 \in \mathbb{R}^{2 \times 100}$, $b_1, b_2, b_3 \in \mathbb{R}^{100}$, and $b_4 \in \mathbb{R}^{2}$. We initialize the entries of each Wl and bl to be i.i.d. draws from N (ul, 1), where ul is sampled from N (0, α) at the outset. We train the neural network for 100 epochs with cross-entropy loss and an SGD optimizer with a learning rate of 0.01. We do not need to distinguish between public and private images in our experiments, so we let kpriv = 0 and kpub = k for k ∈ {1, 2, 3, 6}, and for each choice of k, we use random k-sparse selection vectors whose nonzero entries equal 1/k. In all of our experiments, we separate our original image data (before generating synthetic data) into two categories: 80% training data and 20% test data. We train on synthetic images generated by MixUp or InstaHide using the training data, and measure the “test accuracy” on the training data and the test data separately. We provide more choices of k in Appendix C." }, { "heading": "4.2 GAUSSIAN DATA", "text": "Settings We considered binary classification on Gaussian data. We generated random images $x_1, \ldots, x_{1000} \in \mathbb{R}^{10}$ from N (0, Id) and a random vector $v \in \mathbb{R}^{10}$ from N (0, Id). We then ranked all the Gaussian images based on $\sum_i |x_i v_i|$ and labeled the largest half as ‘1’ and the rest as ‘0’. The point of choosing this labeling function is that it would assign the same label to any x, x′ which agree entrywise in magnitudes. Given a synthetic image generated via MixUp or InstaHide using selection vector w, we assigned it the label which is the convex combination of the one-hot encodings of the labels of the original images indexed by w.
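A minimal sketch of this data generation and labeling (our own illustrative code; taking the "largest half" as the scores above the median is our reading of the setup):

import numpy as np

rng = np.random.default_rng(0)
n, dim = 1000, 10
X = rng.standard_normal((n, dim))          # images x_1, ..., x_1000
v = rng.standard_normal(dim)
scores = np.sum(np.abs(X * v), axis=1)     # per-image score: sum_i |x_i v_i|
labels = (scores > np.median(scores)).astype(int)

def synthetic_label(w):
    # Convex combination of one-hot labels of the images indexed by w.
    onehot = np.eye(2)[labels]              # n x 2
    return w @ onehot / np.sum(w)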
Results We compare training and test loss over epochs when training on a synthetic dataset generated by either MixUp or InstaHide, as shown in Figure 1. We use the convention in this paper of defining synthetic images under InstaHide to have all nonnegative entries (rather than imposing random sign flips), though we explore in the supplement how random sign flips can affect learnability. Compared to training on MixUp, InstaHide results in lower model performance (accuracy). As we expected, when we increase k, both MixUp and InstaHide suffer from accuracy loss. InstaHide dropped by ∼10% accuracy at k = 6 compared to classical training (k = 1), while MixUp dropped by ∼5%." }, { "heading": "4.3 IMPLEMENTATION OF LEARNPUBLIC", "text": "We implemented LEARNPUBLIC for kpriv = 2 and n ∈ {2000, 5000, 7500, 10000}. For kpub = 2, 4, 6, we respectively chose d = 1000, 1800, 2400. In particular, our choice of d is meant to work essentially for any choice of n (modulo the logarithmic dependence, which does not noticeably manifest in this regime). One heuristic modification that we made to LEARNPUBLIC was to use a diagonal thresholding approach from Cai et al. (2016) in place of solving the SDP in (2): namely, for every j ∈ [n] we computed the quantity $\frac{1}{d}\sum_{i=1}^{d} y_i^2 \cdot (x_i)_j^2$, zeroed out all but the principal submatrix of M̃ indexed by the top 25 such j, and computed the top eigenvector of the resulting matrix. For each parameter setting we found that, as expected, we were able to recover an average of at least 90% of the support. As this experiment was primarily to demonstrate that d can be much less than n, we did not explore further optimizations to the algorithm." }, { "heading": "A DISCUSSION OF OTHER ATTACKS", "text": "Attack of Jagielski (2020) It has been pointed out Jagielski (2020) that for kpriv = 2, kpub = 0, given a single synthetic image one can discern large regions of the constituent private images simply by taking the entrywise absolute value of the synthetic image. The reason is that the pixel values of a natural image are mostly continuous, i.e. nearby pixels typically have similar values, so the entrywise absolute value of the InstaHide image should be similarly continuous. That said, natural images have enough discontinuities that this breaks down if one mixes more than just two images, and as discussed above, this attack is not applicable when the individual private features are i.i.d., as in our setting.\nAttack of Carlini et al. (2020) A month after this submission, Carlini et al. Carlini et al. (2020) independently gave an attack breaking the InstaHide challenge originally released by the authors of Huang et al. (2020b). In that challenge, the public dataset was ImageNet, the private dataset consisted of npriv = 100 natural images, and kpriv = 2, kpub = 4, m = 5000. They were able to produce a visually similar copy of each private image.\nMost of their work goes towards recovering which private images contributed to each synthetic image. 
Their first step is to train a neural network on the public dataset to compute a similarity matrix with rows and columns indexed by the synthetic dataset, such that the (i, j)-th entry approximates the indicator for whether the pair of private images that are part of synthetic image i overlaps with the pair that is part of synthetic image j. Ignoring the rare event that two private images contribute to two distinct synthetic images, and ignoring the fact that the accuracy of the neural network for estimating similarity is not perfect, this similarity matrix is precisely our Gram matrix in the kpriv = 2 case.\nThe bulk of Carlini et al.’s work Carlini et al. (2020) is focused on giving a heuristic for factorizing this Gram matrix. They do so essentially by greedily decomposing the graph with adjacency matrix given by the Gram matrix into npriv cliques (plus some k-means post-processing) and regarding each clique as consisting of synthetic images which share a private image in common. They then construct anm×npriv bipartite graph as follows: for every synthetic image index i and every private image index j, connect i to j if for four randomly chosen elements i1, ..., i4 ∈ [m] of the j-th clique, the (i, i`)-th entries of the Gram matrix are nonzero. Finally, they compute a min-cost max-flow on this instance to assign every synthetic image to exactly kpriv = 2 private images.\nIt then remains to handle the contribution from the public images. Their approach is quite different from our sparse PCA-based scheme. At a high level, they simply pretend the contribution from the public images is mean-zero noise and set up a nonconvex least-squares problem to solve for the values of the constituent private images.\nComparison to Our Generative Model Before we compare our algorithmic approach to that of Carlini et al. (2020), we mention an important difference between the setting of the InstaHide challenge and the one studied in this work, namely the way in which the random subset of public/private images that get combined into a synthetic image is sampled. In our case, for each synthetic image, the subset is chosen independently and uniformly at random from the collection of all subsets consisting of kpriv private images and kpub public images. For the InstaHide challenge, batches of npriv synthetic images get sampled one at a time via the following process: for a given batch, sample two random permutations π1, π2 on npriv elements and let the t-th synthetic image in this batch be given by combining the private images indexed by π1(t) and π2(t). Note that this process ensures that every private image appears exactly 2m/npriv times, barring the rare event that π1(t) = π2(t) for some t in some batch. It remains to be seen to what extent the attack of Carlini et al. (2020) degrades in the absence of this sort of regularity property in our setting.\nComparison to Our Attack The main commonality between our approach and that of Carlini et al. (2020) is to identify the question of extracting private information from the Gram matrix as the central algorithmic challenge.\nHow we compute this Gram matrix differs. We use the relationship between covariance of a folded Gaussian and covariance of a Gaussian, while Carlini et al. (2020) use the public dataset to train a neural network on public data to approximate the Gram matrix.\nHow we use this matrix also differs significantly. 
We do not produce a candidate factorization but instead pinpoint a collection of synthetic images such that we can provably ascertain that each one comprises k_priv private images from the same set of k_priv + 2 private images. This allows us to set up an appropriate piecewise linear system of size O(k_priv) with a provably unique solution and solve for the k_priv + 2 private images.\nAn exciting future direction is to understand how well the heuristic in Carlini et al. (2020) scales with k_priv. Independent of the connection to InstaHide, it would be very interesting from a theoretical standpoint if one could show that their heuristic provably solves the multi-task phase retrieval problem defined in Problem 2 in time scaling only polynomially with k_priv (i.e. the sparsity of the vectors w_1, . . . , w_m in the notation of Problem 2)." }, { "heading": "B RECOVERING PRIVATE IMAGES FROM A GAUSSIAN DATASET", "text": "In this section we prove our main algorithmic result: Theorem B.1 (Main). Let S ⊊ [n], and let n_pub = |S| and n_priv = |S^c|. Let k = k_pub + k_priv.\n\nIf d ≥ Ω(poly(k_pub, k_priv) · log(n_pub + n_priv)) and m ≥ Ω(n_priv^{k_priv − 2/(k_priv+1)} · k^{poly(k_priv)}), then with high probability over X and the sequence of randomly chosen selection vectors w_1, . . . , w_m ∼ D, there is an algorithm which takes as input the synthetic dataset {y^{X,w_i}}_{i∈[m]} and the columns of X indexed by S, and outputs k_priv + 2 distinct images x̃_1, . . . , x̃_{k_priv+2} for which there exist k_priv + 2 distinct private images x_{i_1}, . . . , x_{i_{k_priv+2}} satisfying |x̃_j| = |x_{i_j}| for all j ∈ [k_priv + 2]. Furthermore, the algorithm runs in time\nO(d m^2 + d n_pub^2 + n_pub^{2ω+1}),\nwhere ω ≈ 2.373 is the exponent of matrix multiplication. Remark B.2. Here we give some interpretation to the quantitative guarantees of Theorem B.1:\n\n• The number of pixels d only needs to depend logarithmically on the number of public/private images and polynomially on the sparsity k_pub, k_priv, which will be some small positive integer (e.g. k_pub + k_priv = 4 or 8 in Huang et al. (2020a), k_pub + k_priv = 4 or 6 in Huang et al. (2020b), and k_pub + k_priv = 2 in the implementation of MixUp in Zhang et al. (2018)), so the regime in which Theorem B.1 applies is quite realistic.\n\n• Note that we can achieve recovery even when m = o(n_priv^{k_priv}). The reason this is significant is that as soon as m = Ω(n_priv^{k_priv}), all possible combinations of k private images are used. While it is still not immediately clear how to recover private images once this has happened, we regard the fact that we can do so well before this point as one of the most interesting aspects of our result. Finally, we remark that the runtime is largely dominated by the O(m^2) term coming from forming an m × m matrix whose (i, j)-th entry turns out to equal ⟨w_i, w_j⟩ for all i, j ∈ [m]. In fact, naive implementations of the most sophisticated part of our algorithm (see Sections B.3, B.4, B.5, and B.6) require time ω(m^2), and getting these parts of the algorithm to run in O(m^2) time turns out to be quite subtle." }, { "heading": "B.1 LEARNING THE PUBLIC COORDINATES VIA GAUSSIAN PHASE RETRIEVAL", "text": "In this section we give a procedure which, given any synthetic image y^{X,w}, recovers the entire support of [w]_S. The algorithm is inspired by existing algorithms for sparse phase retrieval, with the catch that we need to handle the fact that we only get to observe the public subset of coordinates of any of the vectors p_j.
Our algorithm, LEARNPUBLIC is given in Algorithm 1 below.\nWe first show that the population version of the matrix M̃ formed in Step 1 is a rank-1 projector whose top eigenvector is in the direction of [w]S .\nLemma B.3. Let w be a unit vector. Let M̃ ∈ Rn×n be defined as\nM̃ , 1\nd d∑ j=1 (y2j − 1) · ( [pj ]S · [pj ]>S − Id )\nAlgorithm 1: LEARNPUBLIC({([pj ]S , yj)}j∈[d]) Input: Samples ([p1]S , y1), ..., ([pd]S , yd) Output: supp([w]S) with probability at least 1− δ, provided d ≥ poly(kpub)/ log(n/δ)\n1 Form the matrix M̃ , 1d ∑d j=1(y 2 j − 1) · ( [pj ]S · [pj ]>S − Id ) . 2 Solve the semidefinite program (SDP) (this step takes n2ω+1pub via Jiang et al. (2020))\nmax Z 0 〈Z, M̃〉 subject to Tr(Z) = 1, ∑ i,j |Zi,j | ≤ kpub (2)\nCompute the top eigenvector w̃ of Z. 3 return coordinates of the k entries of w̃ with the largest magnitudes.\nThen E[M̃] = 12 [w]S [w] > S .\nProof. First, it is obvious that the expectation of M̃ can be written as\nE[M̃] = E p∼N (0,Id)\n[(〈w, p〉2 − 1) · (pSp>S − Id)].\nFor any vector v ∈ Rn with ‖v‖2 = 1, we can compute v>E[M̃]v\nv>E[M̃]v = v>E p [(〈w, p〉2 − 1) · (pSp>S − Id)]v\n= E p\n[(〈w, p〉2 − 1) · (〈[v]S , p〉2 − 1)]\n= E p\n[(〈w, p〉2 − 1) · (‖[v]S‖22〈[v]S/‖[v]S‖2, p〉2 − 1)]\n= E p\n[(〈w, p〉2 − 1) · (‖[v]S‖22〈[v]S/‖[v]S‖2, p〉2 − ‖[v]S‖22)]\n+ E p\n[(〈w, p〉2 − 1) · (‖[v]S‖22 − 1)]\n=: A1 +A2\nwhere the second step follows from ‖v‖22 = 1. For the first term in the above equation, we have\nA1 = E p\n[(〈w, p〉2 − 1) · (‖[v]S‖22〈[v]S/‖[v]S‖2, p〉2 − ‖[v]S‖22)]\n= ‖[v]S‖22 E p [(〈w, p〉2 − 1) · (〈[v]S/‖[v]S‖2, p〉2 − ‖[v]S‖22)]\n= 2‖[v]S‖22 E p [φ2(〈w, p〉) · φ2(〈[v]S/‖[v]S‖2, p〉)]\n= 2‖[v]S‖22〈w, [v]S/‖[v]S‖2〉2\n= 2〈w, [v]S〉2\nwhere the third step follows from the fact that w and [v]S/‖[v]S‖2 are unit vectors, φ2 denotes the normalized degree-2 Hermite polynomial φ2(z) , 1√2 (z\n2 − 1), and the last step follows from the standard fact that Eg∼N (0,Id)[φi(〈g, v1〉)φj(〈g, v2〉)] = 〈v1, v2〉i if i = j and 0 otherwise. For the second term, we have\nA2 = E p [(〈w, p〉2 − 1) · (‖[v]S‖22 − 1)] = (‖[v]S‖22 − 1) ·E p [〈w, p〉2 − 1] = 0.\nThus, we have\nA1 +A2 = 2〈w, [v]S〉2.\nIn particular, for v = [w]S/‖[w]S‖2, the above quantity is 2‖[w]S‖22, while for v ⊥ [w]S , the above quantity is 0. Thus we complete the proof.\nFinally, we complete the proof of correctness of LEARNPUBLIC. Here we leverage the fact that we are running an SDP (the canonical SDP for sparse PCA) to show that as long as d is at least polynomially large in kpub and logarithmically large in n, with high probability we can recover supp([w]S). Lemma B.4 (Learning the public coordinates). For any δ > 0, if d ≥ poly(kpub)/ log(n/δ), then with probability at least 1 − δ over the randomness of X, we have that the coordinates output by LEARNPUBLIC({([pj ]S , yj)}j∈[d] for yj , |〈pj , w〉| are exactly equal to supp([w]S).\nProof. Let Z be the solution to the SDP in (2), and define w∗ , [w]S/‖[w]S‖. Because w∗ is a feasible solution for the SDP, by optimality of Z we get that\n0 ≤ 〈Z − w∗w>∗ , M̃〉\n= 〈Z − w∗w>∗ ,E[M̃]〉+ 〈Z − w∗w>∗ , M̃−E[M̃]〉\n= ‖[w]S‖2 2 〈Z − w∗w>∗ , w∗w>∗ 〉︸ ︷︷ ︸\n1\n+ 〈Z − w∗w>∗ , M̃−E[M̃]〉︸ ︷︷ ︸ 2 , (3)\nwhere in the last step we used Lemma B.3.\nBecause ‖Z‖F ≤ Tr(Z) = 1 = ‖x∗‖, we may upper bound 1 by − 12‖Z − w∗w > ∗ ‖2F . For 2 , note that because the entrywise L1 norm of Z and x∗x>∗ are both upper bounded by k, by Holder’s we can upper bound 2 by 2kpub · ‖M̃ − E[M̃]‖max. Standard concentration (see e.g. Neykov et al. (2016)) implies that as long as d ≥ log(n/δ)/η2, then ‖M̃ − E[M̃]‖max ≤ η. 
We conclude from (3) that\n\n0 ≤ −(‖[w]_S‖^2/4) · ‖Z − w_* w_*^⊤‖_F^2 + 2 k_pub η,\n\nso ‖Z − w_* w_*^⊤‖_F^2 ≤ 8 k_pub η / ‖[w]_S‖^2 ≤ 8 η k_pub^2, where in the last step we used that if w has at least one public coordinate, then ‖[w]_S‖^2 ≥ 1/k_pub. By Davis-Kahan, this implies that the top eigenvector w̃ of Z satisfies ‖w̃ − w_*‖_2 ≤ 8 η k_pub^2. As the nonzero entries of w_* are at least 1/√k_pub in magnitude, by taking η = O(1/k_pub^3) we ensure that ‖w̃ − w_*‖_∞ ≤ ‖w̃ − w_*‖_2 < 1/(2√k_pub), so the largest entries of w̃ in magnitude will be in the same coordinates as the nonzero entries of w_*." }, { "heading": "B.2 RECOVERING THE GRAM MATRIX VIA FOLDED GAUSSIANS", "text": "We now turn to the second step of our overall recovery algorithm: recovering the m × m Gram matrix whose (i, j)-th entry is |supp(w_i) ∩ supp(w_j)|. For this section and the next four sections, we will assume that S = ∅, i.e. that all images are private. For brevity, let k := k_priv. This turns out to be without loss of generality. Given that in the case where S ≠ ∅ we can recover the public coordinates of any selection vector using LEARNPUBLIC, passing to the case of general S will be a simple matter of subtracting the contribution of the public coordinates from the entries of the Gram matrix obtained by GRAMEXTRACT to reduce to the case of S = ∅. We will elaborate on this in the final proof of Theorem B.1.\n\nGiven selection vectors w_1, ..., w_m, define the matrix W ∈ R^{m×n} to have rows consisting of these vectors, so that the Gram matrix we are after is simply given by WW^⊤. Recall that the m × d matrix whose rows consist of y^{X,w_1}, ..., y^{X,w_m} can be written as\n\nY := (|⟨p_j, w_i⟩|)_{i∈[m], j∈[d]},\n\nand as each entry of X is an independent standard Gaussian, the columns of Y ∈ R^{m×d}_{≥0} can be regarded as independent draws from N^fold(0, WW^⊤), where W is defined above. Let Σ^fold denote the covariance of this folded Gaussian distribution. It is known that one can recover information about the covariance WW^⊤ of the original Gaussian distribution from the covariance Σ^fold of its folded counterpart:\n\nLemma B.5 (Page 7 in Kan & Robotti (2017)). Given a Gaussian N(0, Σ), the covariance Σ^fold ∈ R^{m×m} of the corresponding folded Gaussian distribution N^fold(0, Σ) is given by Σ^fold_{i,i} = Σ_{i,i} and, for i ≠ j,\n\nΣ^fold_{i,j} = Σ_{i,j}(4Φ_2(0, 0; ρ_{i,j}) − 1) + 4 Σ_{i,i}^{1/2} Σ_{j,j}^{1/2} (1 − ρ_{i,j}^2) φ_2(0, 0; ρ_{i,j}) − (2/π) Σ_{i,i}^{1/2} Σ_{j,j}^{1/2},\n\nwhere ρ_{i,j} := Σ_{i,j}/(Σ_{i,i}^{1/2} Σ_{j,j}^{1/2}).\n\nWe can apply Lemma B.5 in our specific setting to obtain the following relationship between WW^⊤ and the covariance of N^fold(0, WW^⊤): Corollary B.6. If Σ = WW^⊤ ∈ R^{m×m} for some matrix W ∈ R^{m×n} where the rows of W are unit vectors, then the covariance Σ^fold ∈ R^{m×m} of the corresponding folded Gaussian distribution N^fold(0, Σ) is given by\n\nΣ^fold_{i,j} = 1 if i = j, and Σ^fold_{i,j} = Ψ(⟨w_i, w_j⟩) if i ≠ j,\n\nwhere Ψ(z) := (2/π)(z · arcsin(z) + √(1 − z^2) − 1).\n\nProof. Because the rows of W are unit vectors, we have that Σ_{i,j} = ρ_{i,j} = ⟨w_i, w_j⟩ for all i, j ∈ [m]. To compute the off-diagonal entries of Σ^fold, note that by definition of the CDF and PDF,\n\nφ_2(0, 0; ⟨w_i, w_j⟩) = 1/(2π √(1 − ⟨w_i, w_j⟩^2)), Φ_2(0, 0; ⟨w_i, w_j⟩) = 1/4 + arcsin⟨w_i, w_j⟩/(2π).\n\nThe claim follows.\n\nAlgorithm 2: GRAMEXTRACT({y^{X,w_i}}_{i∈[m]}, η)\nInput: InstaHide dataset {y^{X,w_i}}_{i∈[m]}, accuracy parameter η\nOutput: Matrix M equal to the Gram matrix k · WW^⊤, scaled to have integer entries (see Lemma B.7)\n1 η_* ← O(η^2).\n2 Let z_1, ..., z_d ∈ R^m be the vectors given by (z_j)_i = y^{X,w_i}_j for all i ∈ [m], j ∈ [d].
3 Form the empirical estimates\n\nμ̂ = (1/d) Σ_{i=1}^d z_i and Σ̂ = (1/d) Σ_{i=1}^d (z_i − μ̂)(z_i − μ̂)^⊤,\n\nand define Σ̂′ to be the matrix obtained by applying the function clip_{η_*} entrywise to Σ̂.\n4 Let Σ̃ be the matrix obtained by applying Ψ^{−1} entrywise to Σ̂′.\n5 Let Σ_* denote the matrix obtained by entrywise rounding every entry of Σ̃ to the nearest multiple of 1/k.\n6 return k · Σ_*.\n\nWe now show that provided the number of pixels is moderately large, we can recover the matrix exactly, regardless of the choice of selection vectors w_1, ..., w_m ∈ S^{n−1}_{≥0}. The full algorithm, GRAMEXTRACT, is given in Algorithm 2 above. Lemma B.7 (Extract Gram matrix). Suppose d = Ω(log(m/δ)/η^4). For a random Gaussian image matrix X and arbitrary w_1, ..., w_m ∈ S^{n−1}_{≥0}, let Σ̃ be the matrix computed in Step 4 of GRAMEXTRACT({y^{X,w_i}}_{i∈[m]}, η), and let Σ_* be the output. Then with probability 1 − δ over the randomness of X, we have that |Σ̃_{i,i′} − ⟨w_i, w_{i′}⟩| ≤ η for all i, i′ ∈ [m]. In particular, if η = 1/(2k), then conditioned on this happening, Σ_* = k · WW^⊤.\n\nTo prove this, we will need the following helper lemma about Ψ^{−1}. Lemma B.8. There is an absolute constant c > 0 such that for any 0 < η < 1 and ẑ, z ≥ η,\n\n|Ψ^{−1}(ẑ) − Ψ^{−1}(z)| ≤ (c/√η) · |ẑ − z|.\n\nProof. Noting that Ψ′(z) = 2 arcsin(z)/π, we get that the derivative of Ψ^{−1} at z is given by 1/Ψ′(Ψ^{−1}(z)) = π/(2 arcsin(Ψ^{−1}(z))). One can verify numerically that for 0 ≤ x ≤ 1,\n\nx^2/π ≤ Ψ(x) ≤ 1.2 x^2/π,\n\nso in particular √(πz/1.2) ≤ Ψ^{−1}(z) ≤ √(πz). The derivative of Ψ^{−1} at z is therefore upper bounded by O(1/arcsin(√(πz/1.2))) ≤ O(√(1.2/(πz))). In particular, for z ≥ η, this is at most O(1/√η). In other words, over η ≤ z ≤ 1, Ψ^{−1} is O(1/√η)-Lipschitz, as claimed.\n\nUp to this point we have not used the randomness of the process generating the selection vectors w_1, ..., w_m. Note that without leveraging this, there exist choices of W for which it is information-theoretically impossible to discern anything. Indeed, consider a situation where w_1, ..., w_m ∈ S^{n−1}_{≥0} have pairwise disjoint supports. In this case all we know is that the columns of Y are independent standard Gaussian vectors, as WW^⊤ = Id. We now proceed to the most involved component of our proof, where we exploit the randomness of the selection vectors." }, { "heading": "B.3 SOLVING A LARGE SYSTEM OF EQUATIONS", "text": "In this section we show that if we can pinpoint a collection of selection vectors corresponding to all size-k subsets of some set of k + 2 private images, then we can solve a certain system of equations to uniquely (up to sign) recover those private images. We will need the following basic notion, corresponding to the fact that this system has only one solution up to sign. Definition B.9 (Generic solution of system of equations). For any m and any vector v = (v_S)_{S∈C^k_{[m]}} ∈ R^{(m choose k)}, we say that v is generic if there are at most two solutions to the system\n\n|Σ_{i∈S} a_i| = v_S for all S ∈ C^k_{[m]},\n\nin the variables {a_i}_{i∈[m]}. Note that there are exactly two solutions {a′_i} and {a′′_i} to this system if and only if a′_i = −a′′_i for all i ∈ [m] and a′_i ≠ 0 for some i ∈ [m].\n\nWe now show that for Gaussian images, the abovementioned system of equations almost surely has a unique solution up to sign. Lemma B.10 (Vector of Gaussian subset sums is generic). Let g_1, ..., g_m be independent draws from N(0, 1). For any m satisfying m ≥ k + 2, the vector v = (v_S)_{S∈C^k_{[m]}} given by v_S := Σ_{i∈S} g_i is generic almost surely (with respect to the randomness of g_1, ..., g_m).\n\nProof. First note that the entries of v are all nonzero almost surely.
For v to not be generic, there must exist another vector v′ whose entrywise absolute value satisfies |v| = |v′| but for which v′ ≠ v, −v and for which there exist h_1, ..., h_m satisfying Σ_{i∈S} h_i = v′_S for all S ∈ C^k_{[m]}. This would imply there exist indices S, T for which v′_S = v_S and v′_T = −v_T.\n\nBy the assumption that m ≥ k + 2 (and recalling that k > 1 in our setup), we have that (m choose k) > m. In particular, the set of vectors w = (w_S)_{S∈C^k_{[m]}} for which there exist numbers {g′_i} such that w_S = Σ_{i∈S} g′_i for all S is a proper subspace U of R^{(m choose k)}. Let ℓ_1, ..., ℓ_a be a basis for the set of vectors ℓ satisfying ⟨ℓ, w⟩ = 0 for all w ∈ U. Note that there is at least one nonzero generic vector in U, for instance the vector w_* given by (w_*)_S = 1[i ∈ S] for a fixed i ∈ [m] (here we again use the fact that m ≥ k + 2).\n\nLetting D ∈ R^{(m choose k)×(m choose k)} denote the diagonal matrix whose S-th diagonal entry is equal to v_S/v′_S, note that the existence of h_1, ..., h_m above implies that v additionally satisfies ⟨Dℓ_i, v⟩ = 0 for all i ∈ [a]. But there must be some i for which Dℓ_i does not lie in the span of ℓ_1, ..., ℓ_a, or else we would conclude that for any w ∈ U, the vector w′ whose S-th entry is w_S · v_S/v′_S would also lie in U. Because of the existence of indices S, T for which v′_S = v_S and v′_T = −v_T, we know that w ≠ w′, −w′, so we would erroneously conclude that w is not generic for any w ∈ U, contradicting the fact that the vector w_* defined above is generic.\n\nWe conclude that there is some i for which Dℓ_i lies outside the span of ℓ_1, . . . , ℓ_a. But then the fact that ⟨Dℓ_i, v⟩ = 0 for this particular i implies that the variables g_i satisfy some nontrivial linear relation. This almost surely cannot be the case because g_1, ..., g_m are independent draws from N(0, 1)." }, { "heading": "B.4 LOCATING A SET OF USEFUL SELECTION VECTORS", "text": "In the previous section we showed that we just need to find a set of selection vectors from among the rows of W that correspond to size-k subsets of some set of k + 2 private images. Here we show that such a collection of selection vectors is uniquely identified, up to trivial ambiguities, by their pairwise inner products.\n\nLemma B.11 (Uniquely identifying a family of subsets). Let F = {T_S}_{S∈C^k_{[k+2]}} be a collection of subsets of [n] for which |T_S ∩ T_{S′}| = |S ∩ S′| for all S, S′ ∈ C^k_{[k+2]}. Then there is some subset U ⊆ [n] of size k + 2 for which {T_S} = C^k_U as (unordered) sets.\n\n[Table 1, illustrating the sequence of subsets S_*, S′, S′′, S′′′ constructed in the following proof, is omitted here.]\n\nProof. For the reader’s convenience, we illustrate the sequence of subsets constructed in the following proof in Table 1.\n\nSuppose without loss of generality that F contains the sets S_{1,2} := {1, ..., k} and S_{k+1,k+2} := {3, ..., k + 2} (the indexing will become clear momentarily). We will show that {T_S} = C^k_U for U = [k + 2].\n\nLet S_* := S_0 ∩ S_1, where we write S_0 := S_{1,2} and S_1 := S_{k+1,k+2}. For any S′ ∈ C^k_{[k+2]} satisfying |S_0 ∩ S′| = |S_1 ∩ S′| = k − 1, observe that S′ must contain S_* and one element from each of S_0\\S_1 = {1, 2} and S_1\\S_0 = {k + 1, k + 2}, so there are four such choices of S′, call them {S_{a,b}}_{a∈{1,2}, b∈{k+1,k+2}}, and F must contain all of them.\n\nNow consider any subset S′′ ⊂ [k + 2] for which, for some b ≠ b′ ∈ {k + 1, k + 2}, we have that |S′′ ∩ S_{1,2}| = |S′′ ∩ S_{1,b}| = |S′′ ∩ S_{2,b}| = k − 1, and |S′′ ∩ S_{k+1,k+2}| = |S′′ ∩ S_{1,b′}| = |S′′ ∩ S_{2,b′}| = k − 2. Observe that it must be that |S′′ ∩ S_*| = k − 3 and that S′′ contains {1, 2}, so there are 2 · (k−2 choose k−3) = 2k − 4 such choices of S′′, and F must contain all of them.
We can similarly consider S′′ for which, for some a 6= a′ ∈ {1, 2}, we have that |S′′ ∩ Sk+1,k+2| = |S′′ ∩ Sa,k+1| =\n|S′′ ∩ Sa,k+2| = k − 1, and |S′′ ∩ S1,2| = |S′′ ∩ S′a′,k+1| = |S′′ ∩ S′a′,k+2| = 2k − 4, for which there are again 2k − 4 choices of S′′, and F must contain all of them. Alternatively, if F contained k − 2 subsets S′′ satisfying |S′′ ∩ S1,2| = |S′′ ∩ Sb,k+1| = |S′′ ∩ Sb,k+2| = k − 1 for some b ∈ {1, 2}, then it would have to be that any such S′′ contains the k − 1 elements of {b, 3, . . . , k}, and therefore the intersection between any pair of such S′′ must be equal to k − 1, violating the constraint that |TS ∩ TS′ | = |S ∩ S′| for all S, S′ ∈ Ck[k+2]. The same reasoning applies to rule out the case where F contains k − 2 subsets S′′ satisfying |S′′ ∩ Sk+1,k+2| = |S′′ ∩ S1,b| = |S′′ ∩ S2,b| = k − 1 for some b ∈ {k + 1, k + 2}. Finally, consider the set of all subsets S′′′ distinct from the ones exhibited thus far, and for which |S′′′ ∩ S0| = |S′′′ ∩ S1| = |S′′′ ∩ Sa,b| = k − 2 for all a ∈ {1, 2}, b ∈ {k + 1, k + 2} and |S′′′ ∩ S′′ for at least one of the 4k− 8 subsets constructed two paragraphs above. Observe that any S′′′ distinct from the ones exhibited thus far which satisfies the first constraint must either contain S∗ and two elements outside of {1, ..., k + 4}, or must satisfy |S′′′ ∩ S∗| = k − 4 and contain {1, 2, k + 1, k + 2}. In the former case, such an S′′′ would violate the second constraint. As for the latter case, there are ( k−2 k−4 )\nsuch choices of S′′′, and F must therefore contain all of them. We have now produced 4k − 2 + ( k−2 k−4 ) = ( k+2 k ) unique subsets, all belonging to Ck[k+2], and F is of size(\nk+2 k\n) , concluding the proof." }, { "heading": "B.5 EXISTENCE OF A FLORAL SUBMATRIX", "text": "Recall the notion of a floral submatrix from Definition 3.1. In this section we show that with high probability M contains a floral principal submatrix. In the language of sets, this means that with high probability over a sufficiently long sequence of randomly chosen size-k subsets of [n], there is a collection of ( k+2 k ) subsets in the sequence which together comprise all size-k subsets of some U ⊆ [n] of size k + 2. Quantitatively, we have the following:\nLemma B.12 (Existence of a floral submatrix). Let m ≥ Ω(kO(k3)nk− 2\nk+1 ). If sets T1, ..., Tm are independent draws from the uniform distribution over Ckn, then with probability at least 9/10, there is some U ∈ Ck+2[n] for which every element of C k U is present among T1, ..., Tm. Proof. Let L = ( k+2 k ) = 12 (k + 2)(k + 1). Define\nZ , ∑\ni1<···<iL∈[m]\n1 [ {Ti1 , ..., TiL} = CkU for some U ∈ Ck+2[n] ] .\nBy linearity of expectation, E[Z] is equal to ( m L ) times the probability that {T1, ..., TL} = CkU for\nsome U ∈ Ck+2[n] . The latter probability is equal to ( n k+2 ) · L! · ( n k )−L , so we conclude that\nE[Z] =\n( m\nL\n) · ( n\nk + 2\n) · L! · ( n\nk )−L ≥ mL · n k+2\nnkL · L! · (k!) L LL · (k + 2)k+2\n≥ Ω ( mLnk+2−kL ) ≥ Ω(1),\nwhere in the penultimate step we used that L!·(k!) L\nLL·(k+2)k+2 is nonnegative and increasing over k ≥ 2, and in the last step we used that m ≥ Ω ( nk− 2 k+1 ) .\nWe now upper bound E[Z2]. Consider a pair of distinct summands (i1, ..., iL) and (i′1, ..., i ′ L). Without loss of generality, we may assume these are (1, ..., L) and (s+1, ..., L) for some 0 ≤ s ≤ L. In order for {T1, ..., TL} = CkU and {TL−s+1, ..., T2L−s+1} = CkU ′ for some U,U ′ ∈ C k+2 [n] , it must be that {TL−s+1, ..., TL} = CkU∩U ′ . 
Note that if |U ∩ U ′| = k + 2, then U = U ′ and therefore s must be 0. So if s > 0, it must be that |U ∩ U ′| ∈ {k, k + 1}.\nIn either case, the probability that {T1, ..., TL−s+1} = CkU\\CkU∩U ′ , {TL+1, ..., T2L−s+1} = CkU\\CkU∩U ′ , and {TL−s+1, ..., TL} = CkU∩U ′ is\n(L− s)!2 · s! · ( n\nk\n)−2L+s ≤ L!2 · (k/n)2kL−ks\nIf |U ∩ U ′| = k, then s must be 1, and there are( n\nk\n) · ( n− k − 2\n2\n) · ( n− k − 4\n2\n) ≤ nk+4\nchoices for (U,U ′). If |U ∩ U ′| = k + 1 then s must be k + 1 and there are and there are( n\nk + 1\n) · (n− k − 1) · (n− k − 2) ≤ nk+3\nchoices for (U,U ′). Finally, note that there are ( m L ) pairs of summands (i1, ..., il), (i′1, ..., i ′ L) for which s = 0 (namely\nthe ones for which ij = i′j for all j), m · ( m−1 L−1 ) · ( m−L L−1 ) ≤ Θ(m)2L−1 · L!2 pairs for which s = 1,\nand ( m k+1 ) · ( m−k−1 L−k−1 ) · ( m−L L−k−1 ) ≤ Θ(m)2L−k−1 · L!2 for which s = k + 1. Putting everything together, we conclude that\nE[Z2] = E[Z] + Θ(m)2L−1 · L!4 · nk+4 · (k/n)2kL−k + Θ(m)2L−k−1 · L!4 · nk+3 · (k/n)2kL−k(k+1)\n≤ E[Z]2 · ( 1 +O(1/m) · L!4 · k2kL−k +O(1/mk+1) · L!4 · k2kL−k(k+1) · nk 2−1 )\n≤ (1.01 E[Z])2,\nwhere in the last step we used that L ≤ k2 and that nk2−1/mk+1 ≤ 1 because m ≥ kΩ(k3)nk−1. By Paley-Zygmund, we conclude that\nP[Z > 0.01 E[Z]] ≥ 0.992 · E[Z]2\nE[Z2] ≥ 9/10,\nas desired, upon picking constant factors appropriately.\nLemma B.12 implies that with probability at least 9/10 over the randomness of the mixup vectors w1, ..., wm, if m ≥ Ω(kO(k 3)nk− 2\nk+1 ), then there is a subset of [m] for which the corresponding principal submatrix of WW> is floral. By Lemma B.7, with high probability M = k ·WW>, so this is also the case for the output of GRAMEXTRACT." }, { "heading": "B.6 FINDING A FLORAL SUBMATRIX", "text": "As mentioned in Section 3, to find a floral principal submatrix of M, one option is to enumerate over all subsets of size ( k+2 k ) of [m], which would take nO(k\n3) time. We now give a much more efficient procedure for identifying a floral principal submatrix of M, whose runtime is dominated by the time it takes to write down the entries of M. At a high level, the reason we can obtain such dramatic savings is that the underlying graph defined by the large entries of WW> is quite sparse, i.e. vertices of the graph typically have degree independent of k.\nWe will need the following basic notion:\nDefinition B.13. Given i ∈ [m] and integer 0 ≤ t ≤ k, let N ti , {j : 〈wi, wj〉 = t/k}. For any j ∈ N ti , we refer to i and j as t-neighbors (this relation is obviously commutative).\nWe will also need the following helper lemmas establishing certain deterministic regularity conditions that WW> will satisfy with high probability. Lemma B.14 (Hypergraph sparsity). For any δ > 0, if m ≥ nk−1 log(1/δ), then with probability at least 1 − 2mδ over the randomness of w1, ..., wm, we have that for every j ∈ [m], there are at most O(m · kk+1 ·n1−k) (k− 1)-neighbors of j, and at most O(m · kk+2 ·n2−k) (k− 2)-neighbors of j.\nProof. We will union bound over j ∈ [m], so without loss of generality fix j = 1 in the argument below. LetXj′ (resp. Yj′ ) denote the indicator for the event that 1 and j′ are (k−1)-neighbors (resp. (k− 2)-neighbors). As wj′ is sampled independently of w1, conditioned on w1 we know that Xj′ is a Bernoulli random variable with expectation E[Xj′ ] =\nk(n−k) (nk) ≤ n1−k · kk+1, where the factor of k(n− k) comes from the number of ways to pick supp(w1)\\supp(wj′) and supp(wj′)\\supp(w1). 
Similarly, Yj′ is a Bernoulli random variable with expectation E[Yj′ ] = (k2)( n−k\n2 ) (nk) ≤ n2−k · kk+2. By Chernoff, we conclude that ∑ j′>2Xj′ > 2n 1−k · kk+1 with probability at most\nexp ( −m ·D(Ber(2n1−k · kk+1)‖Ber(n1−k · kk+1)) ) ≤ exp(−Ω(mn1−k · kk+1)) ≤ exp(−Ω(mn1−k)),\nfrom which the first claim follows. Similarly by Chernoff, ∑ j′>2 Yj′ > 2n\n2−k · kk+2 with probability at most\nexp ( −m ·D(Ber(2n2−k · kk+2)‖Ber(n2−k · kk+2)) ) ≤ exp(−Ω(mn2−k · kk+2)) ≤ exp(−Ω(mn2−k)),\nfrom which the second claim follows.\nDefinition B.15. Given symmetric matrix M ∈ Zm×m and distinct indices i, j1, ..., j4 ∈ [m] for which j1 < j4, we say that (i; j1, . . . , j4) is a house (see Figure 2) if for all 1 ≤ a < b ≤ 4, Mja,jb = k − 1 if (a, b) ∈ {(1, 2), (2, 1), (2, 3), (3, 4), (1, 4)} and Mja,jb = k − 2 otherwise, and furthermore Mi,ja = k − 1 for all a ∈ [4]. Lemma B.16 (Upper bounding the number of houses). If m ≥ Ω(n2k/3), then with probability at least 9/10 over the randomness of w1, . . . , wm, there are at most O(k5k ·m5 · n−4k+2) houses in M.\nProof. Define Z , ∑ i,j1,...j4 distinct,j1<j4 1 [(i; j1, . . . , j4) is a house] .\nBy linearity of expectation, E[Z] is equal tom · ( m−1\n4\n) ≤ m5 times the probability that (1; 2, 3, 4, 5)\nis a house. Note that the only way for (1; 2, 3, 4, 5) to be a house is if there are disjoint subsets S1, S − 2, T ⊆ [n] of size 2, 2, and k − 2 respectively such that w1 is supported on S ∪ T and each of w2, . . . , w5 is supported on {s1, s2} ∪ T where s1 ∈ S1, s2 ∈ S2. There are\nO (( n k−2 ) · ( n 2 )2) ≤ nk+2 such choices of (S1, S2, T ), and for each is an O((nk)−5) chance that the supports of w1, . . . , w5 correspond to a given (S1, S2, T ), so we conclude that\nE[Z] = O ( m5 · nk+2 · ( n\nk\n)−5) ≤ O(k5k ·m5 · n−4k+2).\nWe now upper bound E[Z2]. Consider a pair of distinct summands (i; j1, . . . , j4) and (i′; j′1, . . . , j ′ 4). Recall that they correspond to some (S1, S2, T ) and (S ′ 1, S ′ 2, T\n′) respectively. Note that if these tuples overlap in any index (e.g. (1; 2, 3, 4, 5) and (6; 1, 7, 8, 9)), then |(S1 ∪ S2 ∪ T ) ∩ (S′1 ∪ S′2 ∪ T ′)| ≥ k. There are at most\nO\n(( n\nk\n) · ( n− k\n2\n) · ( n− k − 2\n2\n) + ( n\nk + 1\n) · ( n− k\n1\n) · ( n− k − 1\n1\n) + ( n\nk + 2\n)) ≤ O(nk+4)\npairs of sets U,U ′ ⊆ [n] of size k + 2 with intersection of size at least k, and given a set U of size k + 2, there are O (( k+2 k−2 )) ≤ poly(k) ways of partitioning U into three disjoint sets of size 2, 2, and k − 2 respectively. We conclude that any pair of distinct summands in the expansion of E[Z2] altogether contributes at most poly(k) · O(nk+4) · ( n k\n)−b ≤ k10k · n−(b−1)k+4, where 6 ≤ b ≤ 10 is the number of distinct indices within the tuples (i; j1, . . . , j4) and (i′; j′1, . . . , j ′ 4). For any b, there\nare ( m 5 ) · ( m−5 b−5 ) ≤ mb such pairs of tuples.\nIn the special case where b = 6, we will use a slightly sharper bound by noting that then, it must be that S1 ∪ S2 ∪ T and S′1 ∪ S′2 ∪ T ′ are identical, in which case we can improve the above bound of O(nk+4) for the number of pairs U,U ′ to O(nk+2).\nWe conclude that\nE[Z2] ≤ E[Z] + k10km6 · n−5k+2 + 10∑ b=7 mb · n−(b−1)k+4 ≤ O(k10k ·m10 · n−8k+4).\nwhere in the last step we used the fact that m ≥ O(n2k/3) and k ≥ 2 to bound the summands corresponding to b = 6 and b = 7. Finally, by our bounds on E[Z] and E[Z2], we conclude by Chebyshev’s that with probability at least 9/10, there are most 2 E[Z] ≤ O(k5k · m5 · n−4k+2) houses in M.\nLemma B.17 (Finding a floral submatrix). 
Supposem = Ω(nk− 2\nk+1 ). With probability at least 3/4, FINDFLORALSUBMATRIX(M) runs in timeO(n2k− 4 k+1 ·exp(poly(k))) and outputs ( k+2 k ) × ( k+2 k ) - sized subset I ⊆ [m] indexing a principal submatrix of M which is floral, together with a function F : I → Ck[k+2] such that Mj,j′ = |F (j) ∩ F (j ′)| for all j, j′ ∈ I.\nProof. The proof of correctness essentially follows immediately from the proof of Lemma B.11, while the runtime analysis will depend crucially on the sparsity of the underlying weighted graph defined by M, as guaranteed by Lemmas B.14 and B.16. Henceforth, condition on the events of those lemmas holding, which will happen with probability at least 3/4.\nFirst note that if one reaches as far as Step 20 in FINDFLORALSUBMATRIX, then by the proof of Lemma B.11, the I produced in Step 22 indexes a principal submatrix of M which is floral. The recursive call in Step 24 is applied to a submatrix of M whose size is independent of n, and it is evident that the time expended past that point is no worse than some exp(poly(k)), and inductively we know that the resulting F produced in Step 25 when the recursion is complete correctly maps indices j ∈ [m] to subsets in Ck[k+2] such that Mj,j′ = |F (j) ∩ F (j ′)| for all j, j′ ∈ I.\nTo carry out the rest of the runtime analysis, it suffices to bound the time expended leading up to the recursive call. Consider any house (i0; j1, j2, j3, j4) encountered in Step 5. First note that one can compute ⋂4 a=1N k−1 ja\nwith a basic hash table, so because the first part of Lemma B.14 tells us that with high probability, |N k−1ja | ≤ O(m · k\nk+1 · n1−k) for all a ∈ [4], Step 5 only requires O(m · kk+1 · n1−k) time. Similarly, for each of the O(1) possibilities in the loop in Step 14, it takes O(m · kk+1 · n1−k) time to enumerate over (k − 1)-neighbors of iz, iα, iβ in Step 15 and, by\nthe second part of Lemma B.14, O(m · kk+2 · n2−k) time to enumerate over (k − 2)-neighbors of i1−z, iγ , iδ , and it takes poly(k) to check that the resulting indices i′′ are not all (k − 1)-neighbors of each other. And once more, in Step 20 it takes O(m · kk+2 · n2−k) time to enumerate over all indices which are (k − 2) neighbors of i0, i1 and of every i′′ ∈ I ′′. We conclude that for every house (i0; j1, j2, j3, j4), FINDFLORALSUBMATRIX expends at most O(m · kk+2 · n2−k) time checking whether the house can be expanded into a set of indices corresponding to a floral principal submatrix of M. Note that for any (i0; j1, j2, j3, j4) encountered in Step 4 which is not a house, the algorithm expends O(1) time. As |N k−1i0 | ≤ O(m · n\n1−k · kk+1) with high probability for any i0, there are most O(m ·m4 ·n4−4k · k4k+4) ≤ O(m5 ·n4−4k · k4k+4) such tuples which are not houses.\nAnd because Lemma B.16 tells us that with high probability there are O(k5k ·m5 · n−4k+2) houses in M, FINDFLORALSUBMATRIX outputs None with low probability. In particular, given that any single house (i0; j1, j2, j3, j4) expends O(m · kk+2 · n2−k) time from Step 9 all the way potentially to Step 24, we conclude that the houses contribute a total of at most O(k5k ·m5 ·n−4k+2 ·m ·kk+2 · n2−k) ≤ O(m6 · n4−5k · k6k+2) to the runtime. Putting everything together, we conclude that FINDFLORALSUBMATRIX runs in time\nO ( m5 · n4−4k · k4k+4 +m6 · n4−5k · k6k+2 ) = O ( nk+4− 10 k+1 · kO(k) ) .\nLastly, note that k + 4− 10k+1 ≤ 2k − 4 k+1 whenever k ≥ 2, completing the proof." 
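Before moving on, a tiny brute-force reference implementation may help make the object of Lemmas B.11–B.17 concrete. This is the naive exponential-time enumeration mentioned earlier, written in the set language (supports T_i of the selection vectors) rather than over the Gram matrix M; it is only a sketch for very small instances, and all function names are ours.

```python
# Naive brute-force search for a floral family: a collection of size-k subsets
# that equals all k-subsets of some (k+2)-element set U. Not the paper's fast
# FINDFLORALSUBMATRIX; intended only as a readable reference check.
from itertools import combinations

def is_floral(family):
    """True iff `family` (a collection of k-sets) equals C^k_U for some |U| = k + 2."""
    k = len(next(iter(family)))
    U = set().union(*family)
    if len(U) != k + 2:
        return False
    return set(map(frozenset, family)) == {frozenset(s) for s in combinations(U, k)}

def find_floral_bruteforce(supports, k):
    """supports: list of frozensets (the T_i). Returns indices of a floral family, or None.

    Enumerates all (m choose binom(k+2, k)) index sets, so only usable for tiny m, k.
    """
    L = (k + 2) * (k + 1) // 2          # binom(k+2, k)
    for idx in combinations(range(len(supports)), L):
        fam = [supports[i] for i in idx]
        if len(set(fam)) == L and is_floral(fam):
            return idx
    return None
```

For k = 2 this looks for six pairwise-distinct 2-subsets covering exactly four elements, which is precisely the structure exhibited in Example B.18 below.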
}, { "heading": "B.7 PUTTING EVERYTHING TOGETHER", "text": "We are now ready to conclude the proof of correctness of our main algorithm, LEARNPRIVATEIMAGE.\nProof. By Lemma B.4, the subsets Si computed in Step 3 correctly index the public coordinates of wi. By Lemma B.7, with high probability over the randomness of X, the matrix M formed from GRAMEXTRACT in Step 1 of LEARNPRIVATEIMAGE is exactly equal to the Gram matrix WW>, so after Step 5 and Step 6, M is equal to the Gram matrix of the vectors [w1]Sc , . . . , [wm]Sc , i.e. the restrictions of the selection vectors to the private coordinates. We are now in a position to apply the results of Sections B.3, B.4, B.5, and B.6.\nBy Lemma B.17, with high probability the output I, F of FINDFLORALSUBMATRIX in Step 7 satisfies that 1) the principal submatrix of M indexed by I, a set of indices of size ( kpriv+2 kpriv ) , is floral, and 2) the function F : I → Ckpriv[kpriv+2] satisfies that |F (i) ∩ F (j)| = Mi,j for all i, j ∈ I. By Lemma B.11, because the principal submatrix indexed by I is floral, there exists some subset U ⊆ [n] of size kpriv + 2 for which the supports of the mixup vectors wj for j ∈ I are all the subsets of U of size kpriv. Finally, by Lemma B.10 and the fact that the entries of X are independent Gaussians, for every pixel index ` ∈ [d], the solution {x̃(`)i } to the system in Step 8 satisfies that there is some column x of the original private image matrix X such that for every i ∈ [kpriv + 2], x̃\n(`) i is, up to signs, equal to the `-th pixel of x.\nNote that the runtime of LEARNPRIVATEIMAGE is dominated by the operations of forming the matrix M and running FINDFLORALSUBMATRIX, which take time O(m2) by Lemma B.17." }, { "heading": "B.8 EXAMPLE OF A FLORAL SUBMATRIX", "text": "Example B.18. For k = 2, the following 6× 6 matrix, after dividing every entry by k, is floral: {1, 3} {2, 4} {1, 4} {1, 2} {3, 4} {2, 3}\n{1, 3} 2 0 1 1 1 1 {2, 4} 0 2 1 1 1 1 {1, 4} 1 1 2 1 1 0 {1, 2} 1 1 1 2 0 1 {3, 4} 1 1 1 0 2 1 {2, 3} 1 1 0 1 1 2\nAlgorithm 3: FINDFLORALSUBMATRIX(M, k, r) Input: Query access to matrix M ∈ RM×M , sparsity level k Output: ( k+2 k ) × ( k+2 k ) -sized subset I ⊆ [M ], function F : I → Ck[k+2] (Lemma B.17)\n1 Nhouses ← 0. 2 for i0 ∈ [M ] do 3 F (i0)← {1, ..., k}. 4 for j1, . . . , j4 in N k−1i0 for which j1 < j4 do 5 if (i0; j1, j2, j3, j4) is a house then 6 Nhouses ← Nhouses + 1. 7 if Nhouses ≥ Ω(k5k ·M5 · n−4k+2) then 8 return None. 9 I ′ ← {j1, j2, j3, j4}.\n10 if ⋂4 a=1N k−1 ja \\{i0} 6= ∅ then\n11 Let i1 be the (unique) element of ⋂4 a=1N k−1 ja \\{i0}. 12 I ′′ ← ∅. 13 F (i1)← {3, · · · , k + 2}. 14 for z ∈ {0, 1} and distinct α, β, γ, δ ∈ [4] for which α < β and iγ (resp. iδ) is a\n(k − 1)-neighbor of iα (resp. iβ), and for which i0, α, β are (k − 1)-neighbors and i1, γ, δ are (k − 1)-neighbors do\n15 if exactly k − 2 choices of i′′ which are (k − 1)-neighbors of iz, iα, iβ and (k − 2)-neighbors of i1−z, iγ , iδ , and which are not all (k − 1)-neighbors of each other then 16 Add to I ′′ all such i′′. 17 if |I ′′| = 4k − 8 then 18 If z = 0, set F (iα)← {1, 3, . . . , k, k + 1},\nF (iβ)← {2, 3, . . . , k, k + 1}, F (iγ)← {1, 3, . . . , k, k + 2}, and F (iδ)← {2, 3, . . . , k, k + 2}.\n19 If z = 1, set F (iα)← {1, 3, . . . , k, k + 1}, F (iβ)← {1, 3, . . . , k, k + 2}, F (iγ′)← {2, 3, . . . , k, k + 1} and F (iδ′)← {2, 3, . . . 
, k, k + 2}.\n20 if exactly ( k−2 k−4 )\nchoices of i′′′ which are (k − 2)-neighbors of i0, i1, iα, iβ , iγ , and iδ , and which are also (k − 1)-neighbors of at least one i′′ ∈ I ′′ then\n21 Let I ′′′ denote the set of such i′′′. 22 I ← {i0, i1} ∪ I ′ ∪ I ′′ ∪ I ′′′. 23 Let Msub denote the ( k−2 k−4 ) × ( k−2 k−4 )\nsubmatrix of M given by restricting to the rows and columns indexed by I ′′′ and subtracting 4 from every entry.\n24 _, G←FINDFLORALSUBMATRIX(Msub, k − 2). 25 For every i′′′ ∈ I ′′′, set F (i′′′)← G(i′′′) ∪ {1, 2, k + 1, k + 2}. 26 return I, F .\nAlgorithm 4: LEARNPRIVATEIMAGE({yX,wi}i∈[m]) Input: InstaHide dataset {yX,wi}i∈[m] Output: Vectors x̃1, ..., x̃k+2 ∈ Rd equal to k + 2 images (up to signs) from the original" }, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS", "text": "" } ]
2021
null
SP:a1c54d5c42097b8ba971ac20470de864ae87dd4e
[ "In this work the authors propose a framework to perform object detection when there is noise present in class labels as well as bounding box annotations. The authors propose a two-step process, where in the first step the bounding boxes are corrected in class-agnostic way, and in the second step knowledge distillation has been used to correct the class labels. The propose method has been evaluated on two different datasets with synthetic noise." ]
Training deep object detectors requires large amounts of human-annotated images with accurate object labels and bounding box coordinates, which are extremely expensive to acquire. Noisy annotations are much more easily accessible, but they could be detrimental for learning. We address the challenging problem of training object detectors with noisy annotations, where the noise contains a mixture of label noise and bounding box noise. We propose a learning framework which jointly optimizes object labels, bounding box coordinates, and model parameters by performing alternating noise correction and model training. To disentangle label noise and bounding box noise, we propose a two-step noise correction method. The first step performs class-agnostic bounding box correction, and the second step performs label correction and class-specific bounding box refinement. We conduct experiments on PASCAL VOC and MS-COCO dataset with both synthetic noise and machine-generated noise. Our method achieves state-of-the-art performance by effectively cleaning both label noise and bounding box noise 1.
[]
[ { "authors": [ "Eric Arazo", "Diego Ortego", "Paul Albert", "Noel E. O’Connor", "Kevin McGuinness" ], "title": "Unsupervised label noise modeling and loss correction", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Devansh Arpit", "Stanislaw Jastrzkebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron C. Courville", "Yoshua Bengio", "Simon Lacoste-Julien" ], "title": "A closer look at memorization in deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian J. Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Avrim Blum", "Tom M. Mitchell" ], "title": "Combining labeled and unlabeled data with co-training", "venue": "In COLT, pp", "year": 1998 }, { "authors": [ "Simon Chadwick", "Paul Newman" ], "title": "Training object detectors with noisy data", "venue": "In IEEE Intelligent Vehicles Symposium,", "year": 2019 }, { "authors": [ "Kai Chen", "Jiaqi Wang", "Jiangmiao Pang", "Yuhang Cao", "Yu Xiong", "Xiaoxiao Li", "Shuyang Sun", "Wansen Feng", "Ziwei Liu", "Jiarui Xu", "Zheng Zhang", "Dazhi Cheng", "Chenchen Zhu", "Tianheng Cheng", "Qijie Zhao", "Buyu Li", "Xin Lu", "Rui Zhu", "Yue Wu", "Jifeng Dai", "Jingdong Wang", "Jianping Shi", "Wanli Ouyang", "Chen Change Loy", "Dahua Lin" ], "title": "MMDetection: Open mmlab detection toolbox and benchmark", "venue": "arXiv preprint arXiv:1906.07155,", "year": 2019 }, { "authors": [ "Pengfei Chen", "Benben Liao", "Guangyong Chen", "Shengyu Zhang" ], "title": "Understanding and utilizing deep neural networks trained with noisy labels", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ramazan Gokberk Cinbis", "Jakob J. Verbeek", "Cordelia Schmid" ], "title": "Weakly supervised object localization with multi-fold multiple instance learning", "venue": null, "year": 2017 }, { "authors": [ "Thomas Deselaers", "Bogdan Alexe", "Vittorio Ferrari" ], "title": "Localizing objects while learning their appearance", "venue": "In ECCV, pp", "year": 2010 }, { "authors": [ "Thomas G. Dietterich", "Richard H. Lathrop", "Tomás Lozano-Pérez" ], "title": "Solving the multiple instance problem with axis-parallel rectangles", "venue": "Artif. Intell.,", "year": 1997 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher K.I. Williams", "John M. Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (VOC) challenge", "venue": null, "year": 2010 }, { "authors": [ "Jiyang Gao", "Jiang Wang", "Shengyang Dai", "Li-Jia Li", "Ram Nevatia" ], "title": "NOTE-RCNN: noise tolerant ensemble RCNN for semi-supervised object detection", "venue": null, "year": 2019 }, { "authors": [ "Ross B. Girshick" ], "title": "Fast R-CNN", "venue": "In ICCV, pp", "year": 2015 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In NIPS, pp", "year": 2005 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor W. 
Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Judy Hoffman", "Sergio Guadarrama", "Eric Tzeng", "Ronghang Hu", "Jeff Donahue", "Ross B. Girshick", "Trevor Darrell", "Kate Saenko" ], "title": "LSDA: large scale detection through adaptation", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Borui Jiang", "Ruixuan Luo", "Jiayuan Mao", "Tete Xiao", "Yuning Jiang" ], "title": "Acquisition of localization confidence for accurate object detection", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Ksenia Konyushkova", "Jasper R.R. Uijlings", "Christoph H. Lampert", "Vittorio Ferrari" ], "title": "Learning intelligent dialogs for bounding box annotation", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Alina Kuznetsova", "Hassan Rom", "Neil Alldrin", "Jasper Uijlings", "Ivan Krasin", "Jordi Pont-Tuset", "Shahab Kamali", "Stefan Popov", "Matteo Malloci", "Tom Duerig", "Vittorio Ferrari" ], "title": "The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale", "venue": null, "year": 2018 }, { "authors": [ "Kuang-Huei Lee", "Xiaodong He", "Lei Zhang", "Linjun Yang" ], "title": "Cleannet: Transfer learning for scalable image classifier training with label noise", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Junnan Li", "Richard Socher", "Steven C.H. Hoi" ], "title": "Dividemix: Learning with noisy labels as semisupervised learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge J. Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C. Lawrence Zitnick" ], "title": "Microsoft COCO: common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross B. Girshick", "Kaiming He", "Bharath Hariharan", "Serge J. Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Rafael Müller", "Simon Kornblith", "Geoffrey E. Hinton" ], "title": "When does label smoothing help", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Dim P. Papadopoulos", "Alasdair D.F. Clarke", "Frank Keller", "Vittorio Ferrari" ], "title": "Training object class detectors from eye tracking data", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Dim P. Papadopoulos", "Jasper R.R. Uijlings", "Frank Keller", "Vittorio Ferrari" ], "title": "We don’t need no bounding-boxes: Training object class detectors using only human verification", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Dim P. Papadopoulos", "Jasper R.R. Uijlings", "Frank Keller", "Vittorio Ferrari" ], "title": "Training object class detectors with click supervision", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Dim P. Papadopoulos", "Jasper R.R. Uijlings", "Frank Keller", "Vittorio Ferrari" ], "title": "Extreme clicking for efficient object annotation", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Scott E. 
Reed", "Honglak Lee", "Dragomir Anguelov", "Christian Szegedy", "Dumitru Erhan", "Andrew Rabinovich" ], "title": "Training deep neural networks on noisy labels with bootstrapping", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross B. Girshick", "Jian Sun" ], "title": "Faster R-CNN: towards real-time object detection with region proposal networks", "venue": "In NIPS, pp", "year": 2015 }, { "authors": [ "Olga Russakovsky", "Li-Jia Li", "Fei-Fei Li" ], "title": "Best of both worlds: Human-machine collaboration for object annotation", "venue": "In CVPR, pp", "year": 2015 }, { "authors": [ "Hao Su", "Jia Deng", "Li Fei-Fei" ], "title": "Crowdsourcing annotations for visual object detection", "venue": "In AAAI Human Computation Workshop,", "year": 2012 }, { "authors": [ "Daiki Tanaka", "Daiki Ikami", "Toshihiko Yamasaki", "Kiyoharu Aizawa" ], "title": "Joint optimization framework for learning with noisy labels", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Yuxing Tang", "Josiah Wang", "Boyang Gao", "Emmanuel Dellandréa", "Robert J. Gaizauskas", "Liming Chen" ], "title": "Large scale semi-supervised object detection using visual and semantic knowledge transfer", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Jasper R.R. Uijlings", "Stefan Popov", "Vittorio Ferrari" ], "title": "Revisiting knowledge transfer for training object class detectors", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Arash Vahdat" ], "title": "Toward robustness against label noise in training deep discriminative neural networks", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Andreas Veit", "Neil Alldrin", "Gal Chechik", "Ivan Krasin", "Abhinav Gupta", "Serge J. Belongie" ], "title": "Learning from noisy large-scale datasets with minimal supervision", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Kun Yi", "Jianxin Wu" ], "title": "Probabilistic end-to-end noise correction for learning with noisy labels", "venue": null, "year": 2019 }, { "authors": [ "Xingrui Yu", "Bo Han", "Jiangchao Yao", "Gang Niu", "Ivor W. Tsang", "Masashi Sugiyama" ], "title": "How does disagreement help generalization against label corruption", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Xiaopeng Zhang", "Yang Yang", "Jiashi Feng" ], "title": "Learning to localize objects with noisy labeled instances", "venue": "In AAAI,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The remarkable success of modern object detectors largely relies on large-scale datasets with extensive bounding box annotations. However, it is extremely expensive and time-consuming to acquire high-quality human annotations. For example, annotating each bounding box in ILSVRC requires 42s on Mechanical Turk (Su et al., 2012), whereas the recent OpenImagesV4 Kuznetsova et al. (2018) reports 7.4 seconds with extreme clicking (Papadopoulos et al., 2017b). On the other hand, there are ways to acquire annotations at lower costs, such as limiting the annotation time, reducing the number of annotators, or using machine-generated annotations. However, these methods would yield annotations with both label noise (i.e. wrong classes) and bounding box noise (i.e. inaccurate locations), which could be detrimental for learning.\nLearning with label noise has been an active area of research. Some methods perform label correction using the predictions from the model and modify the loss accordingly (Reed et al., 2015; Tanaka et al., 2018). Other methods treat samples with small loss as those with clean labels, and only allow clean samples to contribute to the loss (Jiang et al., 2018b; Han et al., 2018). However, most of those methods focus on the image classification task where the existence of an object is guaranteed.\nSeveral recent works have studied object detection with noisy annotations. Zhang et al. (2019) focus on the weakly-supervised (WS) setting where only image-level labels are available, and find reliable bounding box instances as those with low classification loss. Gao et al. (2019) study a semisupervised (SS) setting where the training data contains a small amount of fully-labeled bounding boxes and a large amount of image-level labels, and propose to distill knowledge from a detector pretrained on clean annotations. However, these methods require access to some clean annotations.\nIn this work, we address a more challenging and practical problem, where the annotation contains an unknown mixture of label noise and bounding box noise. Furthermore, we do not assume access to any clean annotations. The entanglement of label noise and bounding box noise increases the difficulty to perform noise correction. A commonly used noise indicator, namely the classification loss, is incapable to distinguish label noise from bounding box noise. Furthermore, it is problematic to correct noise directly using the model predictions, because label correction requires accurate\n1Code will be released.\nbounding box coordinates to crop the object, whereas bounding box correction requires accurate class labels to produce the regression offset.\nTo overcome these difficulties, we propose a two-step noise correction procedure. In the first step, we perform class-agnostic bounding box correction (CA-BBC), which seeks to decouple bounding box noise from label noise, and optimize the noisy ground-truth (GT) bounding box regardless of its class label. An illustration of CA-BBC is shown in Figure 1. It is based on the following intuition: if a bounding box tightly covers an object, then two diverged classifiers would agree with each other and produce the same prediction. Furthermore, both classifiers would have low scores for the background class, i.e., high objectness scores. Therefore, we directly regress the noisy GT bounding box to minimize both classifier discrepancy and background scores. 
CA-BBC also has the option to reject a bounding box as false positive if the objectness score is too low.\nIn the second step, we leverage the model’s output for label noise correction and class-specific bounding box refinement. It has been shown that co-training two models can filter different types of noise and help each other learn (Blum & Mitchell, 1998; Han et al., 2018; Yu et al., 2019; Chadwick & Newman, 2019). Therefore, we distil knowledge from the ensemble of dual detection heads for noise correction, by generating soft labels and bounding box offsets. We show that soft labels with well-adjusted temperature lead to better performance even for a clean dataset.\nTo summarize, this paper proposes a noise-resistant learning framework to train object detectors with noisy annotations. The proposed framework jointly optimizes object labels, bounding box coordinates, and model parameters by performing alternating noise correction and model training. We conduct experiments on two benchmarks: PASCAL VOC and MS-COCO, which contain different levels of synthetic noise as well as machine-generated noise. The proposed method outperforms previous methods by a large margin. We also provide qualitative results to demonstrate the efficacy of the two-step noise correction, and ablation studies to examine the effect of each component." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 CROWDSOURCING FOR OBJECT DETECTION", "text": "Crowdsourcing platforms such as Amazon Mechanical Turk (AMT) have enabled the collection of large-scale datasets. Due to the formidable cost of human annotation, many efforts have been devoted to reduce the annotation cost. However, even an efficient protocol still report 42.4s to annotate one object in an image (Su et al., 2012). Other methods have been proposed which trade off annotation quality for lower cost, by using click supervision (Papadopoulos et al., 2017a), human-inthe-loop labeling (Russakovsky et al., 2015; Papadopoulos et al., 2016; Konyushkova et al., 2018), or exploiting eye-tracking data (Papadopoulos et al., 2014). These methods focus on reducing human effort, rather than combating the annotation noise as our method does." }, { "heading": "2.2 LEARNING WITH LABEL NOISE", "text": "Deep Neural Networks (DNNs) can easily overfit to noisy labels in the training data, leading to poor generalization performance (Zhang et al., 2017). Many works have addressed learning with label noise. Some approaches correct noise by relabeling the noisy samples (Vahdat, 2017; Veit et al., 2017; Lee et al., 2018), but they rely on a small set of clean samples for noise correction. Iterative relabeling methods (Tanaka et al., 2018; Yi & Wu, 2019) have been proposed which produce hard or soft labels using the model predictions. Other approaches filter noise by reweighting or selecting training samples (Jiang et al., 2018b; Ren et al., 2018; Chen et al., 2019b; Arazo et al., 2019; Li et al., 2020). Since DNNs learn clean samples faster than noisy ones, samples with smaller classification loss are usually considered to be clean (Arpit et al., 2017). To avoid error accumulation during the noise correction process, co-teaching (Han et al., 2018) trains two networks simultaneously, where each network selects small-loss samples to train the other. Co-teaching+ (Yu et al., 2019) further keeps the two networks diverged by training on disagreement data." 
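For intuition, here is a minimal sketch (assuming PyTorch) of the small-loss selection idea behind co-teaching discussed above, where each network picks its smallest-loss samples to train the peer. The keep-rate schedule, function names, and training details are our own simplifications, not a faithful reproduction of Han et al. (2018).

```python
import torch
import torch.nn.functional as F

def coteach_step(net_a, net_b, opt_a, opt_b, x, y, keep_rate):
    """One co-teaching update on a mini-batch (x, y) with noisy labels y."""
    loss_a = F.cross_entropy(net_a(x), y, reduction="none")   # per-sample losses
    loss_b = F.cross_entropy(net_b(x), y, reduction="none")
    k = max(1, int(keep_rate * len(y)))
    idx_a = torch.argsort(loss_a)[:k]   # samples net A believes are clean
    idx_b = torch.argsort(loss_b)[:k]   # samples net B believes are clean
    # Each network is updated only on the peer's selected (likely-clean) samples.
    opt_a.zero_grad()
    loss_a[idx_b].mean().backward()
    opt_a.step()
    opt_b.zero_grad()
    loss_b[idx_a].mean().backward()
    opt_b.step()
```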
}, { "heading": "2.3 WEAKLY-SUPERVISED AND SEMI-SUPERVISED OBJECT DETECTION", "text": "Weakly-supervised object detection aims to learn object detectors with only image-level labels. Most existing works formulate it as a multiple instance learning (MIL) task (Dietterich et al., 1997), where each label is assigned to a bag of object proposals. A common pipeline is to iteratively alternate between mining object instances using a detector and training the detector using the mined instances (Deselaers et al., 2010; Cinbis et al., 2017). To address the localization noise in the object proposals, Zhang et al. (2019) propose an adaptive sampling method which finds reliable instances as those with high classification scores, and use the reliable instances to impose a similarity loss on noisy images. Different from weakly-supervised object detection which assumes that the correct object label is given, our method deals with label noise and bounding box noise at the same time.\nSemi-supervised methods train object detectors using training data with bounding box annotations for some images and only image-level labels for other images (Hoffman et al., 2014; Tang et al., 2016; Uijlings et al., 2018; Gao et al., 2019). Gao et al. (2019) propose an iterative training-mining framework consisting of detector initialization, box mining, and detector retraining. To address the annotation noise of the mined boxes, they use a detector pretrained on clean annotations for knowledge distillation. Different from all semi-supervised learning methods, our method does not need access to any clean annotations." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 OVERVIEW", "text": "Given a training dataset with images X , noisy object labels Y , and noisy bounding boxes B, our method aims to train an object detector parameterized by ⇥, by jointly optimizing Y , B and ⇥. We first warm-up ⇥ where we train the detector in a standard manner using the original noisy annotations. After the warm-up, we perform alternating optimization on the annotations and the model. Specifically, for each mini-batch of data X = {xi}, Y = {yi}, B = {bi}, we first keep ⇥ fixed and perform noise correction to update Y and B, then we used the corrected annotations to update ⇥. An overview of the algorithm is shown in Algorithm 1.\nWe use a popular two-stage object detector (i.e. Faster-RCNN (Ren et al., 2015)), which consists of a backbone feature extractor parameterized by ✓cnn, a Region Proposal Network (RPN) ✓rpn, a classification head ✓c, and a bounding box (bbox) regression head ✓b. Note that ✓c and ✓b have shared layers. Let detection head with parameters ✓d denote the union of the classification head and the bbox regression head. During training, we simultaneously train two detection heads ✓1d = {✓1c , ✓1b} and ✓2d = {✓2c , ✓2b}, which are kept diverged from each other by different (random) parameter initializations and different (random) training instance (i.e. RoI) sampling.\nDue to the entanglement of an unknown mixture of label noise and bbox noise, it is difficult to correct both types of noise in a single step. Therefore, we propose a two-step noise correction method. In the first step, we perform class-agnostic bounding box correction (CA-BBC), which disentangles bbox noise from label noise. In the second step, we utilize the outputs from dual detection heads for label noise correction and class-specific bbox refinement. Figure 2 shows an illustration of our framework. 
Next we delineate the details.\nAlgorithm 1: alternating two-step noise correction and model training.\n1 Input: model Θ = {θ_cnn, θ_rpn, θ_d^1, θ_d^2}, noisy training dataset (X, Y, B).\n2 while not MaxIters do\n3   Sample a mini-batch X = {x_i}, Y = {y_i}, B = {b_i}.\n4   for b in B do\n5     Update b → b* with CA-BBC (Eq. 2 & 3).\n6   end\n7   for (y, b*) in (Y, B*) do\n8     Update y → y* with dual-head soft label correction (Eq. 4 & 5).\n9     Update b* → b** with class-specific bbox refinement (Eq. 6).\n10  end\n11  Update Θ by SGD on L_rpn(B**), L_cls^{1+2}(Y*), L_loc^{1+2}(B**, Y*).\n12 end" }, { "heading": "3.2 CLASS-AGNOSTIC BOUNDING BOX CORRECTION", "text": "We first correct bounding box noise by updating B → B* regardless of the label noise in Y. As illustrated in Figure 1, CA-BBC uses two diverged classification heads to produce two sets of class predictions on the same image region, and updates the bounding box to minimize classifier discrepancy and maximize region objectness. The intuition is: if a bounding box tightly covers an object, then the two classifiers would agree with each other and produce the same predictions. Moreover, both predictions would have low scores on the background class.\nSpecifically, given an image x ∈ X, the backbone first extracts a convolutional feature map. For each noisy GT bounding box b ∈ B, we perform a RoI-Pooling operation on the feature map to extract a fixed-sized feature φ(x, b). Then we feed the RoI feature to the two classification heads to produce two sets of softmax predictions over C + 1 classes (including the background class), p_1(φ(x, b); θ_c^1) and p_2(φ(x, b); θ_c^2). For simplicity we denote them as p_1 and p_2. The discrepancy between the two predictions is defined as their L2 distance:\nD(p_1, p_2) = ‖p_1 − p_2‖_2^2. (1)\nMinimizing the classifier discrepancy w.r.t. the bounding box will push it to a region where the two classifiers agree on its class label. To prevent the bounding box from simply moving to a background region, we also minimize the classifiers' scores on the background class, p_1^bg and p_2^bg. In other words, we want to maximize the objectness of the region covered by the bounding box.\nTherefore, we aim to find the optimal b* that minimizes the following objective function:\nL(b) = D(p_1, p_2) + λ(p_1^bg + p_2^bg), (2)\nwhere λ controls the balance of the two terms and is set to 0.1 in our experiments.\nFor faster speed, we estimate b* by performing a single step of gradient descent to update b:\nb* = b − α ∂L(b)/∂b, (3)\nwhere α is the step size.\nSince RoI-Pooling (Ren et al., 2015) or RoI-Align (He et al., 2017) performs discrete sampling on the feature map to generate φ(x, b), L(b) is not differentiable w.r.t. b. Therefore, we adopt the Precise RoI-Pooling method (Jiang et al., 2018a), which avoids any quantization of coordinates and has a continuous gradient on b.\nIn order to handle false positive bboxes that do not cover any object, we add a reject option which removes b from the ground-truth if both classifiers give a low objectness score (high background score), i.e., p_1^bg > 0.9 and p_2^bg > 0.9.
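A minimal PyTorch-style sketch of this update, assuming a differentiable RoI pooling operator prroi_pool (e.g., Precise RoI-Pooling) and two classifier heads cls1, cls2 that map pooled features to (C + 1)-way scores with the background class last (both handles and the class ordering are assumptions here):\nimport torch\n\ndef ca_bbc_step(feat, box, cls1, cls2, prroi_pool, lam=0.1, alpha=100.0):\n    box = box.clone().detach().requires_grad_(True)  # optimize coordinates only\n    roi = prroi_pool(feat, box)                      # differentiable w.r.t. box\n    p1 = cls1(roi).softmax(-1)\n    p2 = cls2(roi).softmax(-1)\n    # Eq. 2: classifier discrepancy plus background (low-objectness) scores.\n    loss = ((p1 - p2) ** 2).sum() + lam * (p1[..., -1] + p2[..., -1]).sum()\n    loss.backward()\n    with torch.no_grad():\n        new_box = box - alpha * box.grad             # Eq. 3: one gradient step\n    # Reject option: drop the box if both heads see mostly background.\n    reject = bool((p1[..., -1] > 0.9).all() and (p2[..., -1] > 0.9).all())\n    return new_box.detach(), reject\n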
" }, { "heading": "3.3 DUAL-HEAD DISTILLATION FOR NOISE CORRECTION", "text": "In the second step, we perform class-specific self-distillation for label noise correction and bbox refinement. We simultaneously train two diverged detection heads which can filter different types of noise, and distill knowledge from their ensemble to clean the annotation noise. Using the ensemble of two heads helps alleviate the confirmation bias problem (i.e., a model confirms its own mistakes) that commonly occurs in self-training.\nSoft label correction. Given the RoI feature φ(x, b*), the two classification heads produce two sets of softmax predictions over object classes, p_1^* and p_2^*. Inspired by the bootstrapping method (Reed et al., 2015), we use the classifiers' predictions to update the noisy GT label. Let y ∈ {0, 1}^C represent the GT label as a one-hot vector over C classes. We create the soft label by first averaging the classifiers' predictions and the GT label:\nȳ = (p_1^* + p_2^* + y) / 3. (4)\nThen we apply a sharpening function on the soft label to reduce the entropy of the label distribution. The sharpening operation is defined as:\ny_c^* = ȳ_c^{1/T} / Σ_{c'=1}^{C} ȳ_{c'}^{1/T}, c = 1, 2, ..., C, (5)\nwhere ȳ_c is the score for class c. The temperature T controls the 'softness' of the label and is set to 0.4 in our experiments. A lower temperature decreases the softness and has the implicit effect of entropy minimization, which encourages the model to produce high-confidence predictions and allows a better decision boundary to be learned (Grandvalet & Bengio, 2005; Berthelot et al., 2019).\nClass-specific bounding box refinement. The two bbox regression heads produce two sets of per-class bounding box regression offsets, t_1 and t_2. Let c* denote the class with the highest score in the soft label, i.e., c* = argmax_c y_c^*, c = 1, 2, ..., C. We refine the bounding box b* by merging the class-specific outputs from both bbox regression heads:\nt = (t_1^{c*} + t_2^{c*}) / 2,\nb** = b* + ρt, (6)\nwhere t_1^{c*} and t_2^{c*} are the bounding box offsets for class c*, and ρ controls the magnitude of the refinement." }, { "heading": "3.4 MODEL TRAINING", "text": "Let Y* and B** denote a mini-batch of soft labels and refined bounding boxes, respectively. We use them as the new GT to train the model. Specifically, we update Θ = {θ_cnn, θ_rpn, θ_d^1, θ_d^2} to optimize the following losses: (1) the loss function of the RPN defined in (Ren et al., 2015), L_rpn(B**); (2) the classification loss for the two detection heads, L_cls^1(Y*) and L_cls^2(Y*), defined as the cross-entropy loss −Σ_i y_i^* log(p_i); (3) the localization loss for the two detection heads, L_loc^1(B**, Y*) and L_loc^2(B**, Y*), defined as the smooth L1 loss (Girshick, 2015)." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND IMPLEMENTATION DETAILS", "text": "Since most available datasets for object detection have been extensively verified by human annotators and contain little noise, we created noisy annotations using two popular benchmark datasets, PASCAL VOC (Everingham et al., 2010) and MS-COCO (Lin et al., 2014). First, we generated synthetic noise to simulate human mistakes of different severity, by corrupting the training annotations with a mixture of label noise and bounding box noise. For label noise, we follow previous works (Jiang et al., 2018b; Arazo et al., 2019) and generate symmetric label noise. Specifically, we randomly choose N_l% of the training samples and change each of their labels to another random label. For bounding box noise, we perturb the coordinates of all bounding boxes by a number of pixels uniformly drawn from [−wN_b%, +wN_b%] (w is the bbox width) for horizontal coordinates or [−hN_b%, +hN_b%] (h is the bbox height) for vertical coordinates. We experiment with multiple combinations of label noise ranging from 0% to 60% and bounding box noise ranging from 0% to 40%. Under 40% bbox noise, the average IoU between a noisy bbox and its corresponding clean bbox is only 0.45. 
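A minimal NumPy sketch of this corruption protocol (our notation; boxes are (x1, y1, x2, y2) arrays, labels are integer class ids, and the noise rates are given as fractions, e.g., 0.4 for 40%):\nimport numpy as np\n\ndef corrupt_annotations(labels, boxes, num_classes, Nl=0.4, Nb=0.4, rng=np.random):\n    labels = labels.copy()\n    boxes = boxes.astype(np.float64).copy()\n    # Symmetric label noise: flip Nl of the labels to another random class.\n    flip = rng.rand(len(labels)) < Nl\n    labels[flip] = (labels[flip] + rng.randint(1, num_classes, flip.sum())) % num_classes\n    # Bbox noise: jitter each coordinate uniformly within +-Nb of the box width/height.\n    w = boxes[:, 2] - boxes[:, 0]\n    h = boxes[:, 3] - boxes[:, 1]\n    scale = np.stack([w, h, w, h], axis=1) * Nb\n    boxes += rng.uniform(-1.0, 1.0, boxes.shape) * scale\n    return labels, boxes\n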
For VOC, we use the union set of trainval2007 and trainval2012 as training data, and test2007 as test data. We report mean average precision (mAP@.5) as the evaluation metric. For MS-COCO, we use train2017 as training data, and report mAP@.5 and mAP@[.5, .95] on val2017.\nWe also mined large amounts of free training data with noisy annotations by using machine-generated annotations on unlabeled images. We first train a Faster R-CNN detector on 10% of labeled data from COCO train2017, which has a validation mAP@.5 of 40.5. Then we use the trained detector to annotate unlabeled2017, which contains 123k unlabeled images. We use COCO unlabeled2017 with machine-generated annotations as our noisy training data.\nWe use the common Faster-RCNN (Ren et al., 2015) architecture with ResNet-50 (He et al., 2016) and FPN (Lin et al., 2017) as the feature extractor. We train the model using SGD with a learning rate of 0.02, a momentum of 0.9, and a weight decay of 1e−4. The hyper-parameters are set as λ = 0.1, T = 0.4, ρ = 0.5, and α ∈ {0, 100, 200}, which are determined by the validation performance on 10% of training data with clean annotations (only used for validation). We implement our framework based on the mmdetection toolbox (Chen et al., 2019a). In terms of computation time, our method increases training time by ∼24% compared to vanilla training. During inference, we only use the first detection head unless otherwise specified, which does not increase inference time." }, { "heading": "4.2 EVALUATION ON CA-BBC", "text": "First, we evaluate the effect of the proposed CA-BBC method by itself. We train a detector following the proposed learning framework, except that we only perform the first step of noise correction (i.e., CA-BBC). Table 1 shows the results on VOC with different mixtures of label noise and bounding box noise. Compared to vanilla training without any noise correction (Ren et al., 2015; Chen et al., 2019a), performing CA-BBC can significantly improve performance, especially for higher levels of bbox noise. The improvement is consistent despite the increase of label noise, which demonstrates the ability of CA-BBC to disentangle the two types of noise and effectively correct bbox noise. We also demonstrate the effect of the proposed discrepancy minimization by removing D(p_1, p_2) from the loss in Eq. 2 and only maximizing the objectness of the bbox region, which leads to lower performance. Figure 3 shows qualitative examples of CA-BBC. The noisy GT bboxes are shown in red, whereas the corrected bboxes are shown in green. CA-BBC can update the bounding boxes to more accurately capture the objects of interest." }, { "heading": "4.3 COMPARISON WITH THE STATE-OF-THE-ART", "text": "We evaluate our full learning framework with two-step noise correction and compare it with multiple existing methods for learning with noisy annotations. We implement all methods using the same network architecture. Since previous methods operate in different settings from ours, we adapt them to our problem to construct strong baselines, as described in the following:\n• Co-teaching (Han et al., 2018) simultaneously trains two models where each model acts as a teacher for the other by selecting its small-loss samples as clean data to train the other. It has been employed by Chadwick & Newman (2019) for training object detectors with noisy data. We adapt co-teaching into our dual-head network, where each detection head selects box samples with small classification loss to train the other head. 
Note that the RPN is trained on all boxes.\n• SD-LocNet (Zhang et al., 2019) proposes an adaptive sampling method that assigns a reliability weight to each box sample. Higher weights are assigned to samples with higher classification scores and lower prediction variance over consecutive training epochs.\n• NOTE-RCNN (Gao et al., 2019) uses clean seed box annotations to train the bbox regression head. It also pretrains a teacher detector on the clean annotations for knowledge distillation. Because we do not have clean annotations, we follow previous works (Han et al., 2018; Arazo et al., 2019) and consider box samples with smaller classification loss as clean ones. We first train a detector in a standard manner to mine clean samples. Then we utilize the clean samples following NOTE-RCNN (Gao et al., 2019).\nTable 2 shows the comparison results on VOC, where the training data contains different mixtures of label noise and bbox noise. Our method significantly outperforms all other methods across all noise settings. For high levels of noise (N_b = 40%, N_l ∈ {40%, 60%}), our method achieves ∼20% improvement in mAP compared to vanilla training, and >10% improvement compared to the state-of-the-art NOTE-RCNN (Gao et al., 2019).\nOn clean training data with 0% annotation noise, our method can still improve upon vanilla training by +1.9%, mostly due to the proposed soft labels. Compared to the one-hot GT labels, soft labels contain more information about an image region in cases where multiple objects co-exist in the same bounding box. Moreover, using soft labels has the effect of label smoothing, which can prevent overfitting and improve a model's generalization performance (Müller et al., 2019).\nTable 3 shows the results on COCO. Our method outperforms all baselines by a large margin. Under 40% of label and bbox noise, vanilla training results in a catastrophic degradation of 24.8% in mAP@.5 compared to training on clean data (oracle), whereas our method reduces the performance drop to 7.1%. The proposed method also achieves improvement under machine-generated noise, which validates its practical value for training detectors by utilizing free unlabeled data." }, { "heading": "4.4 ABLATION STUDY", "text": "In Table 4, we add or drop different components of our framework to examine their effects. Below we explain the results in detail. More ablation studies and qualitative results are shown in the appendix.\n• In the first row, we perform noise correction with only one detection head, by using its output to create soft labels and regress bounding boxes. Compared with the proposed dual-head network, where knowledge is distilled from the ensemble, using a single head suffers from confirmation bias: the model's prediction errors accumulate and degrade performance.\n• In the second row, we remove CA-BBC from the proposed framework and only perform the dual-head noise correction (Section 3.3). Compared with the results using the proposed two-step noise correction (the third row), the performance decreases considerably for the higher level (40%) of bounding box noise, which validates the importance of the proposed CA-BBC.\n• The third row shows the results using the proposed method.\n• In the last row, we use the ensemble of both detection heads during inference by averaging their outputs, which leads to further performance improvement (a sketch of this averaging follows the list).
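The last row corresponds to a simple score-averaging ensemble at test time; as a sketch (head1_cls, head2_cls, head1_reg, head2_reg are assumed handles to the two detection heads, and roi is a pooled RoI feature):\n# Test-time ensemble of the two detection heads (a sketch).\np = (head1_cls(roi).softmax(-1) + head2_cls(roi).softmax(-1)) / 2  # class scores\nt = (head1_reg(roi) + head2_reg(roi)) / 2                          # per-class offsets\n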
}, { "heading": "5 CONCLUSION", "text": "To conclude, this paper addresses a new challenging research problem, which aims to train object detectors from noisy annotations that contain entangled label noise and bounding box noise. We propose a noise-resistant learning framework which jointly optimizes noisy annotations and model parameters. A two-step noise correction method is proposed, where the first step performs classagnostic bbox correction to disentangle bbox noise and label noise, and the second step performs dual-head noise correction by self-distillation. Experiments on both synthetic noise and machinegenerated noise validate the efficacy of the proposed framework. We believe that our work is one step forward towards alleviating human from the tedious annotation effort." } ]
2020
null
SP:4fde35c9931ca15ab6cd53b171323e1abf0224db
[ "This paper proposes an approach to self-supervised learning from videos. The approach takes advantage of compressed videos, using the encoded residuals and motion vectors within the video codec. Using encoded videos has been shown to reduce computation time required by decoding videos. Previous works have explored compressed videos for supervised recognition, showing the potential, while this paper introduces a way to leverage compressed videos for self-supervised learning." ]
Self-supervised learning of video representations has received great attention. Existing methods typically require frames to be decoded before being processed, which increases compute and storage requirements and ultimately hinders largescale training. In this work, we propose an efficient self-supervised approach to learn video representations by eliminating the expensive decoding step. We use a three-stream video architecture that encodes I-frames and P-frames of a compressed video. Unlike existing approaches that encode I-frames and P-frames individually, we propose to jointly encode them by establishing bidirectional dynamic connections across streams. To enable self-supervised learning, we propose two pretext tasks that leverage the multimodal nature (RGB, motion vector, residuals) and the internal GOP structure of compressed videos. The first task asks our network to predict zeroth-order motion statistics in a spatio-temporal pyramid; the second task asks correspondence types between I-frames and P-frames after applying temporal transformations. We show that our approach achieves competitive performance on compressed video recognition both in supervised and self-supervised regimes.
[ { "affiliations": [], "name": "Youngjae Yu" }, { "affiliations": [], "name": "Sangho Lee" }, { "affiliations": [], "name": "Gunhee Kim" } ]
[ { "authors": [ "Relja Arandjelovic", "Andrew Zisserman" ], "title": "Look, listen and learn", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": null, "year": 2017 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Aidan Clark", "Jeff Donahue", "Karen Simonyan" ], "title": "Efficient video generation on complex datasets", "venue": "arXiv preprint arXiv:1907.06571,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Christoph Feichtenhofer", "Axel Pinz", "Andrew Zisserman" ], "title": "Convolutional two-stream network fusion for video action recognition", "venue": null, "year": 2016 }, { "authors": [ "Christoph Feichtenhofer", "Haoqi Fan", "Jitendra Malik", "Kaiming He" ], "title": "Slowfast networks for video recognition", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Basura Fernando", "Hakan Bilen", "Efstratios Gavves", "Stephen Gould" ], "title": "Self-supervised video representation learning with odd-one-out networks", "venue": null, "year": 2017 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Kristen Grauman", "Trevor Darrell" ], "title": "The pyramid match kernel: Discriminative classification with sets of image features", "venue": "In ICCV,", "year": 2005 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Phillip Isola", "Daniel Zoran", "Dilip Krishnan", "Edward H Adelson" ], "title": "Learning visual groups from co-occurrences 
in space and time", "venue": "In ICLR Workshop,", "year": 2015 }, { "authors": [ "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Slow and steady feature analysis: Higher order temporal coherence in video", "venue": null, "year": 2016 }, { "authors": [ "Longlong Jing", "Xiaodong Yang", "Jingen Liu", "Yingli Tian" ], "title": "Self-supervised spatiotemporal feature learning via video rotation prediction", "venue": null, "year": 2018 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The Kinetics Human Action Video Dataset", "venue": "arXiv preprint arXiv:1705.06950,", "year": 2017 }, { "authors": [ "Dahun Kim", "Donghyeon Cho", "In So Kweon" ], "title": "Self-supervised video representation learning with space-time cubic puzzles", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In Neurips,", "year": 2018 }, { "authors": [ "Hildegard Kuehne", "Hueihan Jhuang", "Estíbaliz Garrote", "Tomaso Poggio", "Thomas Serre" ], "title": "HMDB: A Large Video Database for Human Motion Recognition", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Vijay Kumar BG", "Gustavo Carneiro", "Ian Reid" ], "title": "Learning local image descriptors with deep siamese and triplet convolutional networks by minimising global loss functions", "venue": null, "year": 2016 }, { "authors": [ "Svetlana Lazebnik", "Cordelia Schmid", "Jean Ponce" ], "title": "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories", "venue": "In CVPR,", "year": 2006 }, { "authors": [ "Didier Le Gall" ], "title": "Mpeg: A video compression standard for multimedia applications", "venue": "Communications of the ACM,", "year": 1991 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Ishan Misra", "C Lawrence Zitnick", "Martial Hebert" ], "title": "Shuffle and learn: unsupervised learning using temporal order verification", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": null, "year": 2018 }, { "authors": [ "Andrew Owens", "Alexei A Efros" ], "title": "Audio-visual scene analysis with self-supervised multisensory features", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "AJ Piergiovanni", "Anelia Angelova", "Michael S Ryoo" ], "title": "Evolving losses for unsupervised video representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Michael S Ryoo", "AJ Piergiovanni", "Juhana Kangaspunta" ], "title": "Assemblenet++: Assembling modality representations via attention connections. 
2020a", "venue": null, "year": 2020 }, { "authors": [ "Michael S Ryoo", "AJ Piergiovanni", "Mingxing Tan", "Anelia Angelova" ], "title": "Assemblenet: Searching for multi-stream neural connectivity in video architectures", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Rodrigo Santa Cruz", "Basura Fernando", "Anoop Cherian", "Stephen Gould" ], "title": "Deeppermnet: Visual permutation learning", "venue": null, "year": 2017 }, { "authors": [ "Zheng Shou", "Xudong Lin", "Yannis Kalantidis", "Laura Sevilla-Lara", "Marcus Rohrbach", "Shih-Fu Chang", "Zhicheng Yan" ], "title": "Dmc-net: Generating discriminative motion cues for fast compressed video action recognition", "venue": null, "year": 2019 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "UCF101: A Dataset of 101 Human Action Classes", "venue": "From Videos in The Wild. CRCV-TR-12-01,", "year": 2012 }, { "authors": [ "Du Tran", "Lubomir Bourdev", "Rob Fergus", "Lorenzo Torresani", "Manohar Paluri" ], "title": "Learning Spatiotemporal Features with 3D Convolutional Networks", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Du Tran", "Heng Wang", "Lorenzo Torresani", "Jamie Ray", "Yann LeCun", "Manohar Paluri" ], "title": "A Closer Look at Spatiotemporal Convolutions for Action Recognition", "venue": null, "year": 2018 }, { "authors": [ "Carl Vondrick", "Abhinav Shrivastava", "Alireza Fathi", "Sergio Guadarrama", "Kevin Murphy" ], "title": "Tracking emerges by colorizing videos", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Jiangliu Wang", "Jianbo Jiao", "Linchao Bao", "Shengfeng He", "Yunhui Liu", "Wei Liu" ], "title": "Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance dtatistics", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Shiyao Wang", "Hongchao Lu", "Zhidong Deng" ], "title": "Fast object detection in compressed video", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Unsupervised learning of visual representations using videos", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Allan Jabri", "Alexei A Efros" ], "title": "Learning correspondence from the cycle-consistency of time", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Donglai Wei", "Joseph J Lim", "Andrew Zisserman", "William T Freeman" ], "title": "Learning and using the arrow of time", "venue": null, "year": 2018 }, { "authors": [ "Chao-Yuan Wu", "Manzil Zaheer", "Hexiang Hu", "R Manmatha", "Alexander J Smola", "Philipp Krähenbühl" ], "title": "Compressed video action recognition", "venue": null, "year": 2018 }, { "authors": [ "Dejing Xu", "Jun Xiao", "Zhou Zhao", "Jian Shao", "Di Xie", "Yueting Zhuang" ], "title": "Self-supervised spatiotemporal learning via video clip order prediction", "venue": null, "year": 2019 }, { "authors": [ "Bowen Zhang", "Limin Wang", "Zhe Wang", "Yu Qiao", "Hanli Wang" ], "title": "Real-time action recognition with enhanced motion vector cnns", "venue": null, "year": 2016 }, { "authors": [ "Jing" ], "title": "2018) is our IMRNet pretrained using the 3D rotation prediction task (we used the IMRNet + Rotation pretrained model reported in Table 2 of our main paper), and ImageNet is a ResNet152 fully-supervised with ImageNet ILSVRC-2012", "venue": "Russakovsky et al", "year": 2015 } ]
[ { "heading": null, "text": "Self-supervised learning of video representations has received great attention. Existing methods typically require frames to be decoded before being processed, which increases compute and storage requirements and ultimately hinders largescale training. In this work, we propose an efficient self-supervised approach to learn video representations by eliminating the expensive decoding step. We use a three-stream video architecture that encodes I-frames and P-frames of a compressed video. Unlike existing approaches that encode I-frames and P-frames individually, we propose to jointly encode them by establishing bidirectional dynamic connections across streams. To enable self-supervised learning, we propose two pretext tasks that leverage the multimodal nature (RGB, motion vector, residuals) and the internal GOP structure of compressed videos. The first task asks our network to predict zeroth-order motion statistics in a spatio-temporal pyramid; the second task asks correspondence types between I-frames and P-frames after applying temporal transformations. We show that our approach achieves competitive performance on compressed video recognition both in supervised and self-supervised regimes." }, { "heading": "1 INTRODUCTION", "text": "There has been significant progress on self-supervised learning of video representations. It learns from unlabeled videos by exploiting their underlying structures and statistics as free supervision signals, which allows us to leverage large amounts of videos available online. Unfortunately, training video models is notoriously difficult to scale. Typically, practitioners have to make trade-offs between compute (decode frames and store them as JPEG images for faster data loading, but at the cost of large storage) and storage (decode frames on-the-fly at the cost of high computational requirements). Therefore, large-batch training of video models is difficult without high-end compute clusters. Although these issues are generally applicable to any video-based scenarios, they are particularly problematic for self-supervised learning because large-scale training is one key ingredient (Brock et al., 2019; Clark et al., 2019; Devlin et al., 2019) but that is exactly where these issues are aggravated.\nRecently, several approaches demonstrated benefits of compressed video recognition (Zhang et al., 2016; Wu et al., 2018; Shou et al., 2019; Wang et al., 2019b). Without ever needing to decode frames, these approaches can alleviate compute and storage requirements, e.g., resulting in 3 to 10 times faster solutions than traditional video CNNs at a minimal loss on accuracy (Wu et al., 2018; Wang et al., 2019b). Also, motion vectors embedded in compressed videos provide a free alternative to optical flow which is compute-intensive; leveraging this has been shown to be two orders of magnitude faster than optical flow-based approaches (Shou et al., 2019). However, all the previous work on compressed video has focused on supervised learning and there has been no study that shows the potential of compressed videos in self-supervised learning; this is the focus of our work.\n?Equal Contribution\nIn this work, we propose a self-supervised approach to learning video representations directly in the compressed video format. We exploit two inherent characteristics of compressed videos: First, video compression packs a sequence of images into several Group of Pictures (GOP). 
Intuitively, the GOP structure provides atomic representation of motion; each GOP contains images with just enough scene changes so a video codec can compress them with minimal information loss. Because of this atomic property, we enjoy less spurious, more consistent motion information at the GOP-level than at the frame-level. Second, compressed videos naturally provide multimodal representation (i.e. RGB frames, motion vectors, and residuals) that we can leverage for multimodal correspondence learning. Based on these, we propose two novel pretext task (see Fig. 1): The first task asks our model to predict zeroth-order motion statistics (e.g.where is the most dynamic region) in a pyramidal spatio-temporal grid structure. The second involves predicting correspondence types between I-frames and P-frames after temporal transformation. Solving our tasks require implicitly locating the most salient moving objects and matching their appearance-motion correspondences between I-frames and P-frames; this encourages our model to learn discriminative representation of compressed videos.\nA compressed video contains three streams of multimodal information – i.e. RGB images, motion vectors, and residuals – with a dependency structure between an I-frame stream and the two P-frame streams punctuated by GOP boundaries. We design our architecture to encode this dependency structure; it contains one CNN encoding I-frames and two other CNNs encoding motion vectors and residuals in P-frames, respectively. Unlike existing approaches that encode I-frames and P-frames individually, we propose to jointly encode them to fully exploit the underlying structure of compressed videos. To this end, we use a three-stream CNN architecture and establish bidirectional dynamic connections going from each of the two P-frame streams into the I-frame stream, and vice versa, and put these connections layer-wise to learn the correlations between them at multiple spatial/temporal scales (see Fig. 1). These connections allow our model to fully leverage the internal GOP structure of compressed videos and effectively capture atomic representation of motion.\nIn summary, our main contributions are two-fold: (1) We propose a three-stream architecture for compressed videos with bidirectional dynamic connections to fully exploit the internal structure of compressed videos. (2) We propose novel pretext tasks to learn from compressed videos in a self-supervised manner. We demonstrate our approach by pretraining the model on Kinetics-400 (Kay et al., 2017) and finetuning it on UCF-101 (Soomro et al., 2012), HMDB-51 (Kuehne et al., 2011). Our model achieves new state-of-the-art performance in compressed video classification tasks in both supervised and self-supervised regimes, while maintaining a similar computational efficiency as existing compressed video recognition approaches (Wu et al., 2018; Shou et al., 2019)." }, { "heading": "2 APPROACH", "text": "We use videos compressed according to the MPEG-4 Part 2 specifications (Le Gall, 1991) as our input, following the previous work (Wu et al., 2018; Shou et al., 2019; Wang et al., 2019b). This compression format encodes an RGB image sequence as a series of GOPs (Group of Pictures) where each GOP starts with one I-frame followed by a variable number of P-frames. An I-frame stores RGB values of a complete image and can be decoded on its own. A P-frame holds only the changes from the previous reference frame using motion vectors and residuals. 
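In practice, these quantities can be read directly from the bitstream without full decoding; a minimal sketch follows, assuming a CoViAR-style reader (Wu et al., 2018) whose load(path, gop_idx, frame_idx, rep, accumulate) call returns the I-frame (rep = 0), motion vectors (rep = 1), or residuals (rep = 2) — the import path and signature are assumptions here:\nfrom coviar import load  # hypothetical import path\n\ndef read_clip(path, num_gops=5, gop_size=12):\n    iframes, mvs, residuals = [], [], []\n    for g in range(num_gops):\n        iframes.append(load(path, g, 0, 0, True))        # I-frame (RGB)\n        for k in range(1, gop_size, 2):                  # every other P-frame\n            mvs.append(load(path, g, k, 1, True))        # motion vectors\n            residuals.append(load(path, g, k, 2, True))  # residuals\n    return iframes, mvs, residuals\n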
The motion vectors store 2D displacements of the most similar patches between the reference and the target frames, and the residuals store pixel-wise differences to correct motion compensation errors. We use all the three modalities contained in compressed videos as our input.\nFormally, our input is T GOPs, G0, · · · , GT−1, where each Gt contains one I-frame It ∈ RH×W×3 followed by K − 1 pairs of motion vectors Mt,k ∈ RH×W×2 and residuals Rt,k ∈ RH×W×3, k ∈ [1,K). For efficiency and simplicity, we assume an identical GOP size K for all t ∈ [0, T )." }, { "heading": "2.1 IMR NETWORK FOR COMPRESSED VIDEOS", "text": "Our model consists of three CNNs, each with 3D convolutional kernels modeling spatio-temporal dynamics within each input stream {It}, {Mt,k}, {Rt,k}, t ∈ [0, T ), k ∈ [0,K); we denote these sub-networks by I-network fI , M-network fM , and R-network fR, respectively, and call our model IMR Network (IMRNet). We account for the difference in the amount of information between I-frames and P-frames by adjusting the capacity of networks accordingly. Specifically, following (Wu et al., 2018), we make the capacity of fI larger than fM and fR by setting the number of channels in each layer of fI to be γ times higher than those of fM and fR (we set γ = 64).\nExisting models for compressed videos typically perform late fusion (Wu et al., 2018; Shou et al., 2019), i.e., they combine embeddings of I-frames and P-frames only after encoding each stream. However, we find that it is critical to allow our sub-networks to share information as they encode their respective input streams. To this end, we establish layer-wise lateral connections between fI & fM and between fI & fR.\nBidirectional dynamic connections. Lateral connections have been used to combine information from different streams, e.g., RGB images and optical flow images (Feichtenhofer et al., 2016), and RGB images sampled at different frame rates (Feichtenhofer et al., 2019). In this work, we use it to combine information from I-frames and P-frames. Our approach is different from previous work in two key aspects: (1) We establish bidirectional connections between streams, instead of unidirectional connections as was typically done in the past (Feichtenhofer et al., 2016; 2019), so that information sharing is symmetrical between streams. (2) We incorporate multimodal gated attention to dynamically adjust the connections based on multimodal (I-frame and P-frames) information. We call our approach bidirectional dynamic connections to highlight these two aspects and differentiate\nours from previous work, e.g., SlowFast networks (Feichtenhofer et al., 2019) establish unidirectional lateral connections and the connections are static regardless of the content from the other stream.\nWe combine embeddings from different sub-networks via channel-wise concatenation, which requires embeddings to match their spatio-temporal dimensions. However, fI processes κ times less frames than fM and fR, producing embeddings that are κ times smaller in the temporal dimension. Therefore, we transform the embeddings with time-strided 3D (de-)convolution with (κ× 1× 1) kernels, C/8 channels, and (κ, 1, 1) temporal stride: We use convolution for fI → fM/fR to decrease the time dimension and deconvolution for fM/fR → fI to increase it. Note that simply using the (de-)conv layers will perform static transformation regardless of what is provided from the other sub-network, similar to (Feichtenhofer et al., 2019). 
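For concreteness, a minimal PyTorch sketch of this static alignment (channel sizes are illustrative; κ = 5 matches five P-frames per I-frame in our sampling):\nimport torch.nn as nn\n\nkappa, C_I, C_M = 5, 256, 32   # illustrative sizes; fusion uses C/8 channels\nC = C_I + C_M\n# P-stream -> I-stream: compress time by kappa with a time-strided 3D conv.\nto_I = nn.Conv3d(C_M, C // 8, kernel_size=(kappa, 1, 1), stride=(kappa, 1, 1))\n# I-stream -> P-stream: expand time by kappa with a 3D deconvolution.\nto_M = nn.ConvTranspose3d(C_I, C // 8, kernel_size=(kappa, 1, 1), stride=(kappa, 1, 1))\n# x_M: (B, C_M, kappa*T, H, W) -> (B, C//8, T, H, W); x_I is mapped the other way.\n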
However, we find it critical to make the transformations aware of information from both sub-networks so that the networks can dynamically adjust the connections and selectively share only the most relevant information from each sub-network.\nTo achieve this, we dynamically modulate (de-)conv layer outputs using multimodal-gated attention weights. Let xI ∈ RTI×W×H×CI and xM ∈ RTM×W×H×CM be the embeddings from fI and fM , respectively. We max-pool xI and xM and concatenate them to obtain multimodal embedding z ∈ RCZ with CZ = CI + CM . We define multimodal gate functions that take as input z and generate attention weights aI ∈ RCI/8 and aM ∈ RCM/8 as\naI = σ (W3h+ b3) , aM = σ (W4h+ b4) , h = ζ (W2ζ (W1z+ b1) + b2) (1)\nwhere σ is a sigmoid function, ζ is a Leaky ReLU function, and W1,W2 ∈ RCZ×CZ , b1, b2 ∈ RCZ ,W3 ∈ RCI/8×CZ , b3 ∈ RcI/8,W4 ∈ RCM/8×CZ , b4 ∈ RCM/8 are weight parameters. Next, we use these attention weights to modulate the (de-)conv output embeddings,\nvI→M = aM ⊗ 3d_conv(xI), vM→I = aI ⊗ 3d_deconv(xM ) (2)\nwhere ⊗ is channel-wise multiplication. We repeat the same process for fI & fR to obtain vI→R and vR→I , and combine them with the feature embeddings via channel-wise concatenation,\nx̂I = [xI ;vM→I ;vR→I ], x̂M = [xM ;vI→M ], x̂R = [xR;vI→R] (3)\nEach of these is fed into the next layer in the corresponding sub-network. We establish these lateral connections across multiple layers of our network. To obtain the final embedding, we apply average pooling on the output from the final layer of each sub-network and concatenate them channel-wise.\nNote that the design of IMRNet is orthogonal to the design of video CNNs; while we adapt 3DResNet (He et al., 2016) as the backbone in our experiments, we can use any of existing CNN architectures as the backbone, e.g., C3D (Tran et al., 2015), I3D (Carreira & Zisserman, 2017), R(2+1)D (Tran et al., 2018). What is essential, however, is that (i) there are three sub-networks, each modeling one of the three input streams, and (ii) information from different networks are combined via bidirectional dynamic connections as they are encoded." }, { "heading": "2.2 SELF-SUPERVISED LEARNING OBJECTIVES", "text": "Compressed videos have unique properties, i.e., the multimodal nature of information (RGB, motion vector, residuals) and the internal GOP structure that provides atomic representation of motion. We turn these properties into free self-supervisory signals and design two novel pretext tasks.\nPyramidal Motion Statistics Prediction (PMSP). One important desideratum of video CNNs is learning visual representation that captures salient objects and motion. We hypothesize that there is an implicit videographer bias captured in videos in-the-wild that naturally reflect visual saliency: Videos\nare purposely recorded to highlight important objects and their movements.1 Therefore, regions with the highest energy of motion can provide clues to learning the desired video representation. We can easily find those regions in compressed videos: the motion vectors in P-frames readily provide magnitude and angular information of motion, which we can harness to find the most vibrant regions.\nBased on this intuition, we design a task that asks our model to predict the zeroth-order motion statistics (i.e., the most vibrant region) in a given video. For this, we must be able to deal with a variety of object sizes because a salient moving object can appear at any location in any size. 
A classical solution to this is to perform pyramidal prediction (Grauman & Darrell, 2005; Lazebnik et al., 2006): We divide a video into spatio-temporal 3D grids at multiple scales and ask our network to predict the most vibrant region at each scale.\nSpecifically, we define a pyramidal classification task with the following loss function,\nLPMSP = − ∑ i ∑ r ∑ q y(i)q,r · logαr ( x(i)q,r ) (4)\nThis is a cross-entropy loss computed at every q-th grid in every r-th level of a spatio-temporal pyramid; i is the sample index. We define a 9-level spatio-temporal pyramid with 3 spatial and 3 temporal scales, i.e., r ∈ {(s, t)|s ∈ {[2×2], [3×3], [4×4]}, t ∈ {1, 3, 5}}. The index q iterates over all possible temporal coordinates in the r-th level of the pyramid, e.g., in Figure 3 (a), q ∈ [0, · · · , 4] with r = ([2×2], 5). y(i)q,r is a one-hot label marking the location with the highest energy of motion in the q-th grid in r-th level in the pyramid, e.g., in Figure 3 (a), y(i)q,r is a 4-dimensional one-hot vector. We provide a pseudo-code to obtain the ground-truth labels from motion vectors in Appendix. x(i)q,r is the (q, r)-th feature in a 3D grid; we concatenate output embeddings from all three sub-networks, x(i) = [x\n(i) I ;x (i) M ;x (i) R ]. Finally, αr(·) is a 2-layer MLP with a softmax classifier predicting the most\nvibrant region in the given grid; we define one such classifier for each r.\nCorrespondence Type Prediction (CTP). One idea often used in self-supervision is applying certain transformations to data and asking a network to predict the correspondence type given a pair of instances (e.g., true pair or randomly selected pair) (Owens & Efros, 2018; Chen et al., 2020; He et al., 2020; Misra & van der Maaten, 2020). The multimodal nature of compressed videos makes them an ideal data format to apply such self-supervision technique: The three frame types in compressed videos exhibit different characteristics, yet they are strongly correlated with each other. This allows us to consider I-frames as a heavily transformed version of the corresponding P-frames, and vice versa. Learning the correspondence type between I-frames and P-frames can therefore encourage our network to learn discriminative representation of videos.\n1This is, of course, a weak hypothesis. But we show some convincing empirical evidence in Appendix.\nWe define a correspondence type prediction task with the following loss function, LCTP = − ∑ i ∑ j y (i) j · log β ( x (i) I , T (x (i) M ,x (i) R , j) ) (5)\nwhere i is the sample index and j iterates over a set of transformations. y(i)j is a one-hot label indicating different correspondence types determined by the type of transformation done, and T (·, j) is a data transformation function that changes the input using the j-th transformation. We define four transformation types (see Figure 4): (1) Aligned keeps the original input (no transformation), (2) Random replaces the data with P-frames from a randomly selected video, (3) Shuffle randomly shuffles the GOP order, (4) Shift randomly divides GOPs into two groups and switch the order, e.g., [1, 2, 3, 4, 5] to [2, 3, 4, 5, 1]. Finally, β(·) is a 2-layer MLP with a softmax classifier. Note that there is a nuanced difference between random P-frames and shuffled/shifted P-frames. The former contains P-frames that come from a different clip, while the latter contains P-frames of the same clip as the I-frames, yet in a different frame order. 
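To make the four modes concrete, a minimal sketch that produces the transformed P-frame sequence together with its correspondence label (a GOP-major list is assumed; other is a P-frame sequence drawn from a different clip):\nimport random\n\ndef ctp_transform(p_frames, other, mode):\n    # p_frames: list of per-GOP P-frame groups for one clip.\n    if mode == 'aligned':\n        return p_frames, 0                      # untouched\n    if mode == 'random':\n        return other, 1                         # P-frames from another clip\n    if mode == 'shuffle':\n        g = p_frames[:]\n        random.shuffle(g)                       # random GOP order\n        return g, 2\n    if mode == 'shift':\n        s = random.randrange(1, len(p_frames))  # split the GOPs and swap the parts\n        return p_frames[s:] + p_frames[:s], 3\n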
Intuitively, the former encourages our network to learn from global (clip-level) correspondence, while the latter formulates a local (framelevel) correspondence task. Therefore, our CTP task encourages our network to learn discriminative representations at both global and local levels. We provide empirical evidence showing the importance of this global-local mixed objective in Section 3.2.\nFinal Objective. We optimize our model using a learning objective LPMSP + λLCTP with λ = 1. The classifiers αr and β are used only during self-supervised training; we detach them thereafter." }, { "heading": "3 EXPERIMENTS", "text": "Implementation Detail. We adopt 3D ResNet (He et al., 2016) as the backbone; see Appendix for architectural details. We establish bidirectional dynamic connections after {conv1, res2, res3, res4} layers. We pretrain our model end-to-end from scratch for 20 epochs, including the initial warm-up period of 5 epochs. For downstream scenarios, we finetune our model for 500 epochs for UCF-101 and for 300 epochs for HMDB-51, including the warm-up period of 30 epochs. For both the pretraining and finetuning stages, we use SGD with momentum 0.9, weight decay 10−4, and half-period cosine learning rate schedule. We use 4 NVIDIA Tesla V100 GPUs and use a batch size of 100.\nData. We pretrain our model on Kinetics-400 (Kay et al., 2017). For evaluation, we finetune the pretrained model for action recognition using UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011). We use 2-second video clips encoded in 30 FPS with a GOP size T = 12. We use all T = 5 GOPs but subsample every other P-frames within each GOP; this results in 5 I-frames and 25 P-frames. We randomly crop 224× 224 pixels from videos resized to 256 pixels in the shorter side while keeping the aspect ratio. For data augmentation, we resize the video with various scales [.975, .9, .85] and apply random horizontal flip. For test videos, we take three equidistant 224× 224 pixel crops from videos resized to 256 pixels to fully cover the spatial region. We approximate the fully-convolutional testing (Wang et al., 2018) by averaging the softmax scores for final prediction." }, { "heading": "3.1 SUPERVISED LEARNING EXPERIMENTS", "text": "We first demonstrate our proposed IMR network in the fully-supervised setup, training it without using our self-supervised pretext tasks. We use the standard training and evaluation protocols for both UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011). For fair comparisons with existing approaches (Wu et al., 2018; Shou et al., 2019), we report results both when we train the model from scratch and when we pretrain it on Kinetics-400 (Kay et al., 2017) and finetune it on downstream datasets (indicated in column Pretrain).\nTable 1 summarizes the results. When trained from scratch, our model outperforms CoViAR (Wu et al., 2018) by a large margin regardless of the chosen backbone. The performance gap is alleviated when the models are pretrained on Kinetics400, but our approach continues to outperform them even in this scenario. This suggest that CoViAR struggles to learn discriminative representations without help from a large-scale pretraining data. We believe the performance gap comes from the difference in how the two models encode compressed videos: CoViAR combines information from I-frames and P-frames only after encoding them separately, while we combine them in the early layers of CNN.\nCoViAR and DMC-Net reported improved results when they are trained using optical flow. 
Therefore, we also conduct experiments by adding an I3D network (Carreira & Zisserman, 2017) to encode optical flow images; we simply concatenate our IMRNet features with the I3D features as our final representation (no lateral connections between IMRNet and I3D). This model outperforms both CoViAR and DMC-Net trained with optical flow (bottom group, Table 1). DMC-Net improves upon CoViAR by adapting GANs (Goodfellow et al., 2014) to reconstruct optical flow from P-frames. Note that our approach (with 3D-ResNet50 backbone) outperforms DMC-Net (with ResNet152/18 backbones) on both datasets even without using optical flow during training and thus significantly simplifies the training setup (no GANs required).\nNext, we conduct an ablation study on the bidirectional dynamic connection: (a) No connection removes lateral connections and thus is similar to CoViAR, (b) Unidirectional establishes connections from M/R-Networks to I-Network, but not vice versa, i.e., Equation equation 3 becomes x̂M = xM , x̂R = xR, (c) No conv replaces (de-)conv layers with simple up/down-sampling, (d) No attention removes the multimodal-gated attention module. The results are shown in Table 1. We can see that lateral connections are critical component of our model (Ours vs. No connection) and doing so in a bidirectional fashion significantly improves performance (Ours vs. Unidirection). We can also see that using (de-)conv layers and dynamically modulating the connection with gate functions improve performance (Ours vs. No conv and No attention).\nTable 2 shows per-frame runtime speed (ms) and GFLOPs measured on an NVIDIA Tesla P100 GPU with Intel E5-2698 v4 CPUs (∗ process individual frames. † and ‡ process 16- and 25-frame sequences, respectively). Our approach has the same preprocessing time of CoViAR and DMC because all three approaches use the same video loader implementation (Wu et al., 2018). As for the inference speed, IMRNet is comparable to CoViAR and even slightly faster than DMC (we divide the total inference time by #frames following the convention of Wu et al. (2018)). This is partly because we use lighter backbones (R18/R50 vs. R152 used in CoViAR and DMC) to compensate for the expensive 3D convolutional operations, while DMC requires an OF generator network of 7 all-convolutional layers, which adds extra cost. In terms of per-frame FLOPs, ours is more efficient than CoViAR and DMC because the computation is done at the sequence-level rather than per-frame; we observe a similar trend for R(2+1)D (which uses ResNet18) vs. ResNet152. This shows that our\n3D CNN backbones do not bring any significant extra cost compared to CoViAR and DMC, and thus our model enjoys all the computational benefits of compressed video processing." }, { "heading": "3.2 SELF-SUPERVISED LEARNING EXPERIMENTS", "text": "We move to the self-supervised regime and demonstrate our pretext tasks by pretraining our IMRNet on Kinetics400 (Kay et al., 2017) and transferring it to action recognition. Because ours is the first self-supervised approach to learn compressed video representation, there exist no published baseline that we can directly compare with. Therefore, we provide results from existing self-supervised approaches that require the decoding step. 
We include approaches that learn from RGB images – AOT (Wei et al., 2018), Rotation (Jing et al., 2018), MotPred (Wang et al., 2019a), RotNet3D (Jing et al., 2018), ST-Puzzle (Kim et al., 2019), ClipOrder (Xu et al., 2019), DPC (Han et al., 2019) – as well as those that learn from audio and visual channels in videos – Multisensory (Owens & Efros, 2018), AVTS (Korbar et al., 2018), Elo (Piergiovanni et al., 2020).\nTable 3 summarizes the results. We first notice that pretraining the models with any pretext tasks improves downstream performance (the first group of results), suggesting self-supervised pretraining is effective in general. We also see that IMRNet pretrained using our pretext tasks (PMSP+CTP) outperforms the baseline pretext tasks (second group) and self-supervised methods for uncompressed videos (third group). This shows the effectiveness of our IMRNet pretrained with our pretext tasks.\nNext, we conduct an ablation study by pretraining the base models using either PMSP and CTP alone. We also test CTP (Binary) which is a simplied version of our CTP task with only two modes: Aligned and Random (see Figure 4). Note that this is a typical pair correspondence setup used in the literature (Arandjelovic & Zisserman, 2017). Table 3 (fourth group) shows the results. We can see that using either of our pretext tasks leads to a significant improvements compared to the Scratch result. The CTP (Binary) results suggests that the two additional transformation types (Shuffle and Shift in Figure 4) improves the task by making it more difficult to solve; we noticed that the loss curve of CTP (Binary) decreases significantly faster than CTP and quickly saturates thereafter." }, { "heading": "4 RELATED WORK", "text": "Self-supervised learning of video representation. Self-supervised learning has received significant attention (Kumar BG et al., 2016; Santa Cruz et al., 2017; Doersch et al., 2015; Wang & Gupta, 2015). Based on strong progress in the image domain, several works proposed to learn video representations in a self-supervised manner. One popular idea is leveraging temporal information (Wang & Gupta, 2015; Isola et al., 2015; Jayaraman & Grauman, 2016; Misra et al., 2016; Fernando et al., 2017; Wei et al., 2018). Temporal coherence of video pixels has been leveraged as a self-supervisory signal (Vondrick et al., 2018; Wang et al., 2019c). Another popular idea is learning transformationinvariant representations (Kim et al., 2019; Gidaris et al., 2018; Jing et al., 2018). Also, contrastive learning (Oord et al., 2018; Hjelm et al., 2018; He et al., 2020; Chen et al., 2020) has been successfully applied to videos (Han et al., 2019). Despite active research in this field, to the best of our knowledge, there has not been prior work on self-supervised learning from compressed videos.\nCompressed video recognition. Compressed video understanding has been tackled in a supervised setting (Zhang et al., 2016; Wu et al., 2018; Shou et al., 2019). Existing approaches encode each stream separately and perform late fusion, e.g., feature concatenation (Zhang et al., 2016; Wu et al., 2018). However, as we show in our experiments, this can miss out useful information that can only be learned by modeling the interaction across streams. Unlike previous approaches, our approach shares relevant information across streams during the encoding process. In addition, because compressed videos do not provide continuous RGB frames, it is not easy to directly apply 3D CNNs to encode I-frames. 
Therefore, existing approaches use 2D CNNs to process compressed video frames, e.g., CoViAR (Wu et al., 2018) uses 2D CNNs to process each stream and performs average pooling over P-frames, which is insufficient to model complex motion dynamics. DMC-Net (Shou et al., 2019) reconstructs the optical flow from P-frames and later uses the reconstructed signal as input to I3D (Carreira & Zisserman, 2017), but this requires ground-truth optical flow, which is compute-intensive. Instead, our IMR network adopts the gated attention (Hu et al., 2018; Ryoo et al., 2020a) and bidirectional connection (Ryoo et al., 2020b; Feichtenhofer et al., 2019) designs for its lateral connections to model complex motion dynamics with the I- and P-frames freely available in compressed videos." }, { "heading": "5 CONCLUSION", "text": "We introduced an IMR network for compressed video recognition and two pretext tasks for self-supervised learning of compressed video representations. Our work complements and extends existing work on compressed video recognition by (1) proposing the first self-supervised training approach for compressed videos, and (2) proposing a three-stream 3D CNN architecture that encodes compressed videos while dynamically modeling the interaction between I-frames and P-frames. We demonstrated that our IMRNet outperforms state-of-the-art approaches for compressed videos in both fully-supervised and self-supervised settings, and that our pretext tasks yield better performance in downstream tasks.\nAcknowledgement We thank SNUVL lab members, especially Hyeokjun Kwon and Hyungyu Park, for their helpful discussions. This research was supported by Seoul National University, Brain Research Program by National Research Foundation of Korea (NRF) (2017M3C7A1047860), and AIR Lab (AI Research Lab) in Hyundai Motor Company through HMC-SNU AI Consortium Fund, and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01082, SW StarLab)." }, { "heading": "A PMSP GROUND-TRUTH LABELS", "text": "We obtain the ground-truth labels for the pyramidal motion statistics prediction (PMSP) task directly from motion vectors provided in compressed videos. Algorithm 1 shows pseudo-code to compute the labels at multiple spatio-temporal scales, r ∈ {(s, t) | s ∈ {[2×2], [3×3], [4×4]}, t ∈ {1, 3, 5}}.\nAlgorithm 1: Self-supervision labels for Pyramidal Motion Statistics Prediction\nInput: Motion vectors {M_{0,1}, ..., M_{T,K}} with T GOPs, each having K − 1 motion vectors\nGenerate dx, dy by convolving the motion vectors with the Prewitt operators G_x, G_y\nt ← 1 (set temporal scale t to 1); Y ← [ ] (empty list for labels)\nfor i = 0 to 2 do\n  for n = 0 to t − 1 do\n    sum_dx ← sum(dx[n∗K : (n+T−t+1)∗K]); sum_dy ← sum(dy[n∗K : (n+T−t+1)∗K])\n    magnitude ← cartToPolar(sum_dx, sum_dy)\n    magnitude_[s×s] ← makeGrid(magnitude, spatial = s) for s ∈ {2, 3, 4}\n    y_[s×s] ← argmax_{q∈[1,...,s²]}(magnitude_[s×s]) for s ∈ {2, 3, 4}\n    Y ← Y ∪ [y_[2×2], y_[3×3], y_[4×4]]\n  end for\n  t ← t + 2\nend for\nOutput: PMSP labels Y at multiple scales {(s, t) | s ∈ {[2×2], [3×3], [4×4]}, t ∈ {1, 3, 5}}
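For illustration, below is a minimal NumPy sketch of this label computation. The array layout of dx and dy and the helper make_grid are assumptions made for exposition; this is not the released code.

```python
# A minimal sketch (assumed shapes/names, not the authors' released code) of the
# PMSP label computation in Algorithm 1. dx, dy: (num_frames, H, W) arrays of
# Prewitt-filtered motion-vector components; T GOPs, K frames per GOP.
import numpy as np

def make_grid(mag, s):
    """Sum the magnitude map over an s x s spatial grid -> (s*s,) cell energies."""
    H, W = mag.shape
    hs, ws = H // s, W // s
    cells = mag[:hs * s, :ws * s].reshape(s, hs, s, ws)  # crop to a divisible size
    return cells.sum(axis=(1, 3)).reshape(-1)

def pmsp_labels(dx, dy, T, K):
    labels = []
    for t in (1, 3, 5):                      # temporal scales
        for n in range(t):                   # temporal grid positions
            sdx = dx[n * K:(n + T - t + 1) * K].sum(axis=0)
            sdy = dy[n * K:(n + T - t + 1) * K].sum(axis=0)
            mag = np.sqrt(sdx ** 2 + sdy ** 2)   # cartToPolar magnitude
            labels.append([int(np.argmax(make_grid(mag, s))) for s in (2, 3, 4)])
    return labels  # one [y_2x2, y_3x3, y_4x4] triple per (t, n)
```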
" }, { "heading": "B PMSP LABEL VISUALIZATION", "text": "In Section 2.3 of the main paper, we motivated the design of our PMSP task by arguing that there is an implicit videographer bias captured in videos in-the-wild that naturally reflects visual saliency: videos are purposely recorded to highlight important objects and their movements; therefore, regions with the highest energy of motion – captured by our PMSP labels – can provide clues for learning a video representation that captures salient moving objects. We acknowledged that this is, of course, a weak hypothesis (footnote 1 in the main paper). However, in this section we provide some convincing empirical evidence.\nFigures 5-12 are generated by visualizing the regions with the highest energy of motion – i.e., the PMSP labels – at multiple spatio-temporal scales. The figures need a bit of explanation, as there is a lot going on. Each figure is organized into three rows; each row shows results with multiple spatial regions at a particular temporal scale, t ∈ {1, 3, 5}. We color-code the different spatial scales: red boxes are in a [2×2] spatial scale, green boxes are in a [3×3] spatial scale, and blue boxes are in a [4×4] spatial scale. Notice that all five I-frames in the top rows (t = 1) in each set of results always contain identical regions. This is because, at the temporal scale t = 1 (meaning, a temporal grid of size 1), we compute the regions with the highest motion energy over the entire video (5 GOPs), hence the regions are identical across all I-frames in a video. Conversely, the bottom rows (t = 5, a temporal grid of size 5) show the regions computed at each GOP, and hence the regions may differ for every I-frame (recall that each GOP contains a single I-frame). The middle rows (t = 3) show regions computed over 3 GOPs. We overlay the regions at the overlapping I-frames, e.g., the third I-frame at t = 3 contains regions computed at all three grid locations, spanning the I-frame indices [1,2,3], [2,3,4], and [3,4,5]. We order the figures at an increasing level of complexity, and provide detailed analyses of the results in the captions of the figures.\nThe results in Figures 5-12 suggest that the most vibrant regions, as computed by our PMSP labels, tend to overlap with semantically important regions, e.g., the most salient moving objects. Intuitively, training our model to detect those regions encourages it to learn visual representations that capture salient objects and motion. This allows our model to learn discriminative visual representations in a self-supervised manner." }, { "heading": "C PMSP PREDICTION RESULTS", "text": "" }, { "heading": "D VIDEO-TO-VIDEO RETRIEVAL", "text": "To demonstrate the quality of the video representations learned using our self-supervised learning objectives (Section 2.3 in the main paper), we evaluate our method on the video-to-video retrieval task.
To do this, we measure the cosine similarity between a query video and all the other videos in a candidate set, and show the top-1 retrieved video. We compare ours to two baselines: 3D Rotation (Jing et al., 2018) is our IMRNet pretrained using the 3D rotation prediction task (we used the IMRNet + Rotation pretrained model reported in Table 2 of our main paper), and ImageNet is a ResNet152 fully supervised on ImageNet ILSVRC-2012 (Russakovsky et al., 2015). We visualize the results in Figures 15-18 and analyze the results in the caption of each figure." }, { "heading": "E ARCHITECTURE DETAILS", "text": "Table 4 provides architecture details of our IMRNet. In our experiments, we used both 3D ResNet-18 and 3D ResNet-50 as the backbone; we provide the details of both models in the table. We also include the details of our bidirectional dynamic connections, which include 3D convolutional/deconvolutional layers that downsample/upsample the computed features along the temporal dimension. We establish the connections after the {conv1, res2, res3, res4} layers, each with different numbers of channels.\nStage | I Pathway | M/R Pathway | Output sizes T × S²\nraw clip | – | – | 60 × 224²\ndata layer | stride 12, 1² | stride 2, 1² | I: 5 × 224²; M/R: 25 × 224²\nconv1 | 1×7², 64, stride 1, 2² | 5×7², 8, stride 1, 2² | I: 5 × 112²; M/R: 25 × 112²\npool1 | 1×3² max, stride 1, 2² | 5×3² max, stride 1, 2² | I: 5 × 56²; M/R: 25 × 56²\nres2 | (3D ResNet-18) [1×3², 64; 1×3², 64] ×2; (3D ResNet-50) [1×1², 64; 1×3², 64; 1×1², 256] ×3 | (3D ResNet-18) [3×3², 4; 1×3², 4] ×2; (3D ResNet-50) [3×1², 4; 1×3², 4; 1×1², 16] ×3 | I: 5 × 56²; M/R: 25 × 56²\nres3 | (3D ResNet-18) [1×3², 128; 1×3², 128] ×2; (3D ResNet-50) [1×1², 128; 1×3², 128; 1×1², 512] ×4 | (3D ResNet-18) [3×3², 8; 1×3², 8] ×2; (3D ResNet-50) [3×1², 8; 1×3², 8; 1×1², 32] ×4 | I: 5 × 28²; M/R: 25 × 28²\nres4 | (3D ResNet-18) [3×3², 256; 1×3², 256] ×2; (3D ResNet-50) [3×1², 256; 1×3², 256; 1×1², 1024] ×6 | (3D ResNet-18) [3×3², 16; 1×3², 16] ×2; (3D ResNet-50) [3×1², 16; 1×3², 16; 1×1², 64] ×6 | I: 5 × 14²; M/R: 25 × 14²\nres5 | (3D ResNet-18) [3×3², 512; 1×3², 512] ×2; (3D ResNet-50) [3×1², 512; 1×3², 512; 1×1², 2048] ×3 | (3D ResNet-18) [3×3², 32; 1×3², 32] ×2; (3D ResNet-50) [3×1², 32; 1×3², 32; 1×1², 128] ×3 | I: 5 × 7²; M/R: 25 × 7²\nBidirectional connections — Stage | conv1 | res2 | res3 | res4\nI to M/R | 1×5², 8, stride 5, 1² | 1×5², 8, stride 5, 1² | 1×5², 16, stride 5, 1² | 1×5², 32, stride 5, 1²\nM/R to I | 5×7², 4, stride 5, 1² | 5×7², 4, stride 5, 1² | 5×7², 8, stride 5, 1² | 5×7², 16, stride 5, 1²\nTable 4: IMRNet architecture details. We show two versions of IMRNet with different backbones: 3D ResNet-18 and 3D ResNet-50. We denote input dimensions by {temporal size, spatial size²}, kernels by {temporal size, spatial size², channel size}, and strides by {temporal stride, spatial stride²}." } ]
2021
SELF-SUPERVISED LEARNING OF COMPRESSED VIDEO REPRESENTATIONS
SP:2d804ce6cd9917277ac5c4d6c72cceeb14bf0641
[ "The paper presents two algorithms - one for the deterministic and one for stochastic bilevel optimization. The paper claims the methods are lower cost in computational complexity for various terms and easy to implement. A finite-time convergence proof is provided for the algorithms. Empirical results are presented for meta-learning, and (in the appendix) hyperparameter optimization." ]
Bilevel optimization has arisen as a powerful tool for many machine learning problems such as meta-learning, hyperparameter optimization, and reinforcement learning. In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. For deterministic bilevel optimization, we provide a comprehensive finite-time convergence analysis for two popular algorithms respectively based on approximate implicit differentiation (AID) and iterative differentiation (ITD). For the AID-based method, we order-wisely improve the previous finite-time convergence analysis due to a more practical parameter selection as well as a warm start strategy, and for the ITD-based method we establish the first theoretical convergence rate. Our analysis also provides a quantitative comparison between the ITD- and AID-based approaches. For stochastic bilevel optimization, we propose a novel algorithm named stocBiO, which features a sample-efficient hypergradient estimator using efficient Jacobian- and Hessian-vector product computations. We provide the finite-time convergence guarantee for stocBiO, and show that stocBiO outperforms the best known computational complexities order-wisely with respect to the condition number κ and the target accuracy ε. We further validate our theoretical results and demonstrate the efficiency of bilevel optimization algorithms by experiments on meta-learning and hyperparameter optimization.
[ { "affiliations": [], "name": "BILEVEL OPTI" } ]
[ { "authors": [ "Luca Bertinetto", "Joao F Henriques", "Philip Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jerome Bracken", "James T McGill" ], "title": "Mathematical programs with optimization problems in the constraints", "venue": "Operations Research,", "year": 1973 }, { "authors": [ "Justin Domke" ], "title": "Generic methods for optimization-based modeling", "venue": "In Artificial Intelligence and Statistics (AISTATS),", "year": 2012 }, { "authors": [ "Matthias Feurer", "Frank Hutter" ], "title": "Hyperparameter optimization", "venue": "In Automated Machine Learning,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proc. International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Rémi Flamary", "Alain Rakotomamonjy", "Gilles Gasso" ], "title": "Learning constrained task similarities in graphregularized multi-task learning. Regularization, Optimization, Kernels, and Support Vector", "venue": null, "year": 2014 }, { "authors": [ "Luca Franceschi", "Michele Donini", "Paolo Frasconi", "Massimiliano Pontil" ], "title": "Forward and reverse gradient-based hyperparameter optimization", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Luca Franceschi", "Paolo Frasconi", "Saverio Salzo", "Riccardo Grazzi", "Massimiliano Pontil" ], "title": "Bilevel programming for hyperparameter optimization and meta-learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Saeed Ghadimi", "Mengdi Wang" ], "title": "Approximation methods for bilevel programming", "venue": "arXiv preprint arXiv:1802.02246,", "year": 2018 }, { "authors": [ "Stephen Gould", "Basura Fernando", "Anoop Cherian", "Peter Anderson", "Rodrigo Santa Cruz", "Edison Guo" ], "title": "On differentiating parameterized argmin and argmax problems with application to bi-level optimization", "venue": "arXiv preprint arXiv:1607.05447,", "year": 2016 }, { "authors": [ "Riccardo Grazzi", "Luca Franceschi", "Massimiliano Pontil", "Saverio Salzo" ], "title": "On the iteration complexity of hypergradient computation", "venue": "In Proc. 
International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Pierre Hansen", "Brigitte Jaumard", "Gilles Savard" ], "title": "New branch-and-bound rules for linear bilevel programming", "venue": "SIAM Journal on Scientific and Statistical Computing,", "year": 1992 }, { "authors": [ "Mingyi Hong", "Hoi-To Wai", "Zhaoran Wang", "Zhuoran Yang" ], "title": "A two-timescale framework for bilevel optimization: Complexity analysis and application to actor-critic", "venue": "arXiv preprint arXiv:2007.05170,", "year": 2020 }, { "authors": [ "Kaiyi Ji", "Jason D Lee", "Yingbin Liang", "H Vincent Poor" ], "title": "Convergence of meta-learning with task-specific adaptation over partial parameters", "venue": "arXiv preprint arXiv:2006.09486,", "year": 2020 }, { "authors": [ "Kaiyi Ji", "Junjie Yang", "Yingbin Liang" ], "title": "Multi-step model-agnostic meta-learning: Convergence and improved algorithms", "venue": "arXiv preprint arXiv:2002.07836,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Vijay R Konda", "John N Tsitsiklis" ], "title": "Actor-critic algorithms. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2000 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Gautam Kunapuli", "Kristin P Bennett", "Jing Hu", "Jong-Shi Pang" ], "title": "Classification model selection via bilevel programming", "venue": "Optimization Methods & Software,", "year": 2008 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Junyi Li", "Bin Gu", "Heng Huang" ], "title": "Improved bilevel model: Fast and optimal algorithm with theoretical guarantee", "venue": "arXiv preprint arXiv:2009.00690,", "year": 2020 }, { "authors": [ "Renjie Liao", "Yuwen Xiong", "Ethan Fetaya", "Lisa Zhang", "KiJung Yoon", "Xaq Pitkow", "Raquel Urtasun", "Richard Zemel" ], "title": "Reviving and improving recurrent back-propagation", "venue": "In Proc. 
International Conference on Machine Learning (ICML)", "year": 2018 }, { "authors": [ "Risheng Liu", "Pan Mu", "Xiaoming Yuan", "Shangzhi Zeng", "Jin Zhang" ], "title": "A generic first-order algorithmic framework for bi-level programming beyond lower-level singleton", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Jonathan Lorraine", "Paul Vicol", "David Duvenaud" ], "title": "Optimizing millions of hyperparameters by implicit differentiation", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2020 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Gregory M Moore" ], "title": "Bilevel programming algorithms for machine learning model selection", "venue": "Rensselaer Polytechnic Institute,", "year": 2010 }, { "authors": [ "Takayuki Okuno", "Akiko Takeda", "Akihiro Kawana" ], "title": "Hyperparameter learning via bilevel nonsmooth optimization", "venue": "arXiv preprint arXiv:1806.01520,", "year": 2018 }, { "authors": [ "Boris Oreshkin", "Pau Rodríguez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Fabian Pedregosa" ], "title": "Hyperparameter optimization with approximate gradient", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid learning or feature reuse? towards understanding the effectiveness of MAML", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Aravind Rajeswaran", "Chelsea Finn", "Sham M Kakade", "Sergey Levine" ], "title": "Meta-learning with implicit gradients", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Aaron Roth", "Jonathan Ullman", "Zhiwei Steven Wu" ], "title": "Watch and learn: Optimizing from revealed preferences feedback", "venue": "In Annual ACM Symposium on Theory of Computing (STOC),", "year": 2016 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. 
Berg", "Li FeiFei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Amirreza Shaban", "Ching-An Cheng", "Nathan Hatch", "Byron Boots" ], "title": "Truncated back-propagation for bilevel optimization", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Chenggen Shi", "Jie Lu", "Guangquan Zhang" ], "title": "An extended kuhn–tucker approach for linear bilevel programming", "venue": "Applied Mathematics and Computation,", "year": 2005 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Tong Yu", "Hong Zhu" ], "title": "Hyper-parameter optimization: A review of algorithms and applications", "venue": "arXiv preprint arXiv:2003.05689,", "year": 2020 }, { "authors": [ "Daniel Zügner", "Stephan Günnemann" ], "title": "Adversarial attacks on graph neural networks via meta learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Shaban" ], "title": "LDtr(w, λ) is often strongly-convex w.r.t. w. For example, for the data hyper-cleaning application proposed by Franceschi et al", "venue": null, "year": 2019 }, { "authors": [ "prolearner/hypertorch" ], "title": "AID-FP (Grazzi et al., 2020): AID with the fixed-point method. We use its implementation", "venue": null, "year": 2020 }, { "authors": [ "Arnold" ], "title": "2019), we partition these classes into 64 classes for meta-training, 16 classes for meta-validation, and 20 classes for meta-testing. Following the repository (Arnold et al., 2019), we use a four-layer CNN with four convolutional blocks, where each block sequentially consists of a 3× 3 convolution, batch normalization, ReLU activation, and 2× 2 max pooling", "venue": null, "year": 2019 }, { "authors": [ "≤ L" ], "title": "Recall that Φ(x) = f(x, y∗(x)) in eq. (2). Then, we use the following lemma to characterize the Lipschitz properties of∇Φ(x), which is adapted from Lemma 2.2 in Ghadimi & Wang (2018)", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Bilevel optimization has received significant attention recently and become an influential framework in various machine learning applications including meta-learning (Franceschi et al., 2018; Bertinetto et al., 2018; Rajeswaran et al., 2019; Ji et al., 2020a), hyperparameter optimization (Franceschi et al., 2018; Shaban et al., 2019; Feurer & Hutter, 2019), reinforcement learning (Konda & Tsitsiklis, 2000; Hong et al., 2020), and signal processing (Kunapuli et al., 2008; Flamary et al., 2014). A general bilevel optimization takes the following formulation.\nmin x∈Rp Φ(x) := f(x, y∗(x)) s.t. y∗(x) = arg min y∈Rq g(x, y), (1)\nwhere the upper- and inner-level functions f and g are both jointly continuously differentiable. The goal of eq. (1) is to minimize the objective function Φ(x) w.r.t. x, where y∗(x) is obtained by solving the lower-level minimization problem. In this paper, we focus on the setting where the lower-level function g is strongly convex with respect to (w.r.t.) y, and the upper-level objective function Φ(x) is nonconvex w.r.t. x. Such types of geometrics commonly exist in many applications including meta-learning and hyperparameter optimization, where g corresponds to an empirical loss with a strongly-convex regularizer and x are parameters of neural networks.\nA broad collection of algorithms have been proposed to solve such types of bilevel optimization problems. For example, Hansen et al. (1992); Shi et al. (2005); Moore (2010) reformulated the bilevel problem in eq. (1) into a single-level constrained problem based on the optimality conditions of the lower-level problem. However, such type of methods often involve a large number of constraints, and are hard to implement in machine learning applications. Recently, more efficient gradient-based bilevel optimization algorithms have been proposed, which can be generally categorized into the approximate implicit differentiation (AID) based approach (Domke, 2012; Pedregosa, 2016; Gould et al., 2016; Liao et al., 2018; Ghadimi & Wang, 2018; Grazzi et al., 2020; Lorraine\net al., 2020) and the iterative differentiation (ITD) based approach (Domke, 2012; Maclaurin et al., 2015; Franceschi et al., 2017; 2018; Shaban et al., 2019; Grazzi et al., 2020). However, most of these studies have focused on the asymptotic convergence analysis, and the finite-time analysis (that characterizes how fast an algorithm converges) has not been well explored except a few attempts recently. Ghadimi & Wang (2018) provided the finite-time analysis for the ITD-based approach. Grazzi et al. (2020) provided the iteration complexity for the hypergradient computation via ITD and AID, but did not characterize the finite-time convergence for the entire execution of algorithms.\n• Thus, the first focus of this paper is to develop a comprehensive and enhanced theory, which covers a broader class of bilevel optimizers via ITD and AID based techniques, and more importantly, to improve the exiting analysis with a more practical parameter selection and order-level lower computational complexity.\nThe stochastic bilevel optimization often occurs in applications where fresh data need to be sampled as the algorithms run (e.g., reinforcement learning (Hong et al., 2020)) or the sample size of training data is large (e.g., hyperparameter optimization (Franceschi et al., 2018), Stackelberg game (Roth et al., 2016)). 
Typically, the corresponding objective function is given by\n$$\min_{x\in\mathbb{R}^p}\ \Phi(x) = f(x, y^*(x)) := \begin{cases}\mathbb{E}_{\xi}[F(x, y^*(x); \xi)]\\ \frac{1}{n}\sum_{i=1}^{n}F(x, y^*(x); \xi_i)\end{cases} \quad \text{s.t.}\ y^*(x) = \arg\min_{y\in\mathbb{R}^q} g(x, y) := \begin{cases}\mathbb{E}_{\zeta}[G(x, y; \zeta)]\\ \frac{1}{m}\sum_{i=1}^{m}G(x, y; \zeta_i),\end{cases} \qquad (2)$$\nwhere f(x, y) and g(x, y) take either the expectation form w.r.t. the random variables ξ and ζ or the finite-sum form over given data $\mathcal{D}_{n,m} = \{\xi_i, \zeta_j, i = 1, ..., n; j = 1, ..., m\}$, often with large sizes n and m. During the optimization process, the algorithms sample batches of data from the distributions of ξ and ζ or from the set $\mathcal{D}_{n,m}$. For such a stochastic setting, Ghadimi & Wang (2018) proposed a bilevel stochastic approximation (BSA) method via single-sample gradient and Hessian estimates. Based on such a method, Hong et al. (2020) further proposed a two-timescale stochastic approximation (TTSA) algorithm, and showed that TTSA achieves a better trade-off between the complexities of the inner- and outer-loop optimization stages than BSA.\n• The second focus of this paper is to design a more sample-efficient algorithm for stochastic bilevel optimization, which achieves an order-level lower computational complexity than BSA and TTSA." }, { "heading": "1.1 MAIN CONTRIBUTIONS", "text": "Our main contributions lie in developing enhanced theory and provably faster algorithms for the nonconvex-strongly-convex bilevel deterministic and stochastic optimization problems, respectively. Our analysis involves several new developments, which can be of independent interest.\nWe first provide a unified finite-time convergence and complexity analysis for both the ITD- and AID-based bilevel optimizers, which we refer to as ITD-BiO and AID-BiO. Compared to the existing analysis in Ghadimi & Wang (2018) for AID-BiO, which requires a continuously increasing number of inner-loop steps to achieve its guarantee, our analysis allows a constant number of inner-loop steps, as often used in practice. In addition, we introduce a warm start initialization for the inner-loop updates and the outer-loop hypergradient estimation, which allows us to backpropagate the tracking errors to previous loops and results in an improved computational complexity. As shown in Table 1, the gradient complexities Gc(f, ε), Gc(g, ε), and the Jacobian- and Hessian-vector product complexities JV(g, ε) and HV(g, ε) of AID-BiO to attain an ε-accurate stationary point improve those of Ghadimi & Wang (2018) by the order of κ, $\kappa\epsilon^{-1/4}$, κ, and κ, respectively, where κ is the condition number. In addition, our analysis shows that AID-BiO requires fewer computations of Jacobian- and Hessian-vector products than ITD-BiO, by an order of κ and $\kappa^{1/2}$ respectively, which provides a justification for the observation in Grazzi et al. (2020) that ITD often has a larger memory cost than AID.\nWe then propose a stochastic bilevel optimizer (stocBiO) to solve the stochastic bilevel optimization problem in eq. (2). Our algorithm features a mini-batch hypergradient estimation via implicit differentiation, where the core design involves a sample-efficient hypergradient estimator via the Neumann series. As shown in Table 2, the gradient complexities of our proposed algorithm w.r.t. F
In terms of the target accuracy , our computational complexities improve those of TTSA (Hong et al., 2020) by an order of −1/2.\nWe further provide the theoretical complexity guarantee of ITD-BiO, AID-BiO and stocBiO in metalearning and hyperparameter optimization. The experiments validate our theoretical results for determinisitic bilevel optimization, and demonstrate the superior efficiency of stocBiO for stochastic bilevel optimization. Due to the space limitations, we present all theoretical and empirical results on hyperparameter optimization in the supplementary materials." }, { "heading": "1.2 RELATED WORK", "text": "Bilevel optimization approaches: Bilevel optimization was first introduced by Bracken & McGill (1973). Since then, a number of bilevel optimization algorithms have been proposed, which include but not limited to constraint-based methods (Shi et al., 2005; Moore, 2010) and gradient-based methods (Domke, 2012; Pedregosa, 2016; Gould et al., 2016; Maclaurin et al., 2015; Franceschi et al., 2018; Ghadimi & Wang, 2018; Liao et al., 2018; Shaban et al., 2019; Hong et al., 2020; Liu et al., 2020; Li et al., 2020; Grazzi et al., 2020; Lorraine et al., 2020). Among them, Ghadimi & Wang (2018); Hong et al. (2020) provided the finite-time complexity analysis for their proposed methods for the nonconvex-strongly-convex bilevel optimization problem. For such a problem, this paper develops a general and enhanced finite-time analysis for gradient-based bilevel optimizers for the deterministic setting, and proposes a novel algorithm for the stochastic setting with order-level lower computational complexity than the existing results.\nSome works have studied other types of loss geometries. For example, Liu et al. (2020); Li et al. (2020) assumed that the lower- and upper-level functions g(x, ·) and f(x, ·) are convex and stronglyconvex, and provided an asymptotic analysis for their methods. Ghadimi & Wang (2018); Hong et al. (2020) studied the setting where Φ(·) is strongly-convex or convex, and g(x, ·) is strongly-convex. Bilevel optimization in meta-learning: Bilevel optimization framework has been successfully employed in meta-learning recently (Snell et al., 2017; Franceschi et al., 2018; Rajeswaran et al., 2019; Zügner & Günnemann, 2019; Ji et al., 2020a;b). For example, Snell et al. (2017) proposed a bilevel optimization procedure for meta-learning to learn a common embedding model for all tasks. Rajeswaran et al. (2019) reformulated the model-agnostic meta-learning (MAML) (Finn et al., 2017) as a bilevel optimization problem, and proposed iMAML via implicit gradient. The paper provides a theoretical guarantee for two popular types of bilevel optimization algorithms, i.e., AID-BiO and ITD-BiO, for meta-learning.\nBilevel optimization in hyperparameter optimization: Hyperparameter optimization has become increasingly important as a powerful tool in the automatic machine learning (autoML) (Okuno et al.,\nAlgorithm 1 Deterministic bilevel optimization via AID or ITD 1: Input: Stepsizes α, β > 0, initializations x0, y0, v0. 
2: for k = 0, 1, 2, ...,K do 3: Set y0k = y T k−1 if k > 0 and y0 otherwise\n4: for t = 1, ...., T do 5: Update ytk = y t−1 k − α∇yg(xk, y t−1 k ) 6: end for 7: Hypergradient estimation via\n• AID: 1) set v0k = vNk−1 if k > 0 and v0 otherwise 2) solve vNk from∇2yg(xk, yTk )v = ∇yf(xk, yTk ) via N steps of CG starting from v0k 3) compute Jacobian-vector product∇x∇yg(xk, yTk )vNk via automatic differentiation 4) compute ∇̂Φ(xk) = ∇xf(xk, yTk )−∇x∇yg(xk, yTk )vNk • ITD: compute ∇̂Φ(xk) = ∂f(xk,y T k )\nxk via backpropagation w.r.t. xk\n8: Update xk+1 = xk − β ∂f(xk,y T k ) ∂xk 9: end for\n2018; Yu & Zhu, 2020). Recently, various bilevel optimization algorithms have been proposed in the context of hyperparameter optimization, which include implicit differentiation based methods (Pedregosa, 2016), dynamical system based methods via reverse or forward gradient computation (Franceschi et al., 2017; 2018; Shaban et al., 2019), etc. This paper demonstrates the superior efficiency of the proposed stocBiO algorithm in hyperparameter optimization." }, { "heading": "2 ALGORITHMS", "text": "In this section, we describe two popular types of deterministic bilevel optimization algorithms, and propose a new algorithms for stochastic bilevel optimization." }, { "heading": "2.1 ALGORITHMS FOR DETERMINISTIC BILEVEL OPTIMIZATION", "text": "As shown in Algorithm 1, we describe two popular types of deterministic bilevel optimizers respectively based on AID and ITD (referred to as AID-BiO and ITD-BiO) for solving the problem eq. (1).\nBoth AID-BiO and ITD-BiO update in a nested-loop manner. In the inner loop, both of them run T steps of gradient decent (GD) to find an approximation point yTk close to y\n∗(xk). Note that we choose the initialization y0k of each inner loop as the output y T k−1 of the preceding inner loop rather than a random start. Such a warm start allows us to backpropagate the tracking error ‖yTk −y∗(xk)‖ to previous loops, and yields an improved computational complexity.\nAt the outer loop, AID-BiO first solves vNk from a linear system ∇2yg(xk, yTk )v = ∇yf(xk, yTk )1 usingN steps of conjugate-gradient (CG) starting from v0k (where we also adopt a warm start scheme here by setting v0k = v N k−1), and then constructs\n∇̂Φ(xk) = ∇xf(xk, yTk )−∇x∇yg(xk, yTk )vNk (3) as an estimate of the true hypergradient∇Φ(xk), whose form is given by the following proposition. Proposition 1. Recalling the definition Φ(x) := f(x, y∗(x)), it holds that\n∇Φ(xk) =∇xf(xk, y∗(xk))−∇x∇yg(xk, y∗(xk))v∗k, (4)\nwhere v∗k is the solution of the linear system∇2yg(xk, y∗(xk))v = ∇yf(xk, y∗(xk)).\nAs shown in Domke (2012); Grazzi et al. (2020), the construction of eq. (3) involves only Hessianvector products in solving vN via CG and Jacobian-vector product ∇x∇yg(xk, yTk )vNk , which can be efficiently computed and stored via existing automatic differentiation packages.\nAs a comparison, the outer loop of ITD-BiO computes the gradient ∂f(xk,y T k (xk))\n∂xk as an approxima-\ntion of the hyper-gradient ∇Φ(xk) = ∂f(xk,y ∗(xk))\n∂xk via backpropagation, where we write yTk (xk)\n1This is equivalent to solve a quadratic programing minv 12v T∇2yg(xk, yTk )v − vT∇yf(xk, yTk ).\nAlgorithm 2 Stochastic bilevel optimizer (stocBiO) 1: Input: Inner- and outer-loop stepsizes α, β > 0, initializations x0 and y0. 
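To make the AID estimate in eq. (3) concrete, below is a minimal PyTorch-style sketch of the hypergradient computation via CG with Hessian-vector products. The function names (outer_loss, inner_loss) and the assumption that both losses depend on both x and y are illustrative; this is a sketch, not the released implementation.

```python
# A minimal sketch (not the authors' code) of the AID hypergradient in eq. (3):
# solve grad^2_y g(x,y) v = grad_y f(x,y) by CG using only Hessian-vector
# products, then form grad_x f - grad_x grad_y g v.
import torch

def hypergradient_aid(outer_loss, inner_loss, x, y, n_cg=10, v0=None):
    # x, y: flat parameter tensors with requires_grad=True
    fx, fy = torch.autograd.grad(outer_loss(x, y), (x, y))
    gy = torch.autograd.grad(inner_loss(x, y), y, create_graph=True)[0]

    def hvp(v):  # Hessian-vector product grad^2_y g(x, y) v
        return torch.autograd.grad(gy, y, grad_outputs=v, retain_graph=True)[0]

    # Conjugate gradient on grad^2_y g v = grad_y f, warm-started at v0
    v = torch.zeros_like(fy) if v0 is None else v0.clone()
    r = fy - hvp(v)
    p, rs = r.clone(), (r * r).sum()
    for _ in range(n_cg):
        hp = hvp(p)
        step = rs / (p * hp).sum()
        v, r = v + step * p, r - step * hp
        rs_new = (r * r).sum()
        p, rs = r + (rs_new / rs) * p, rs_new
    # Jacobian-vector product grad_x grad_y g(x, y) v via one more backward pass
    jvp = torch.autograd.grad(gy, x, grad_outputs=v.detach())[0]
    return fx - jvp, v.detach()  # hypergradient estimate and warm-start vector
```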
As a comparison, the outer loop of ITD-BiO computes the gradient $\frac{\partial f(x_k, y_k^T(x_k))}{\partial x_k}$ as an approximation of the hypergradient $\nabla\Phi(x_k) = \frac{\partial f(x_k, y^*(x_k))}{\partial x_k}$ via backpropagation, where we write $y_k^T(x_k)$ because the output $y_k^T$ of the inner loop depends on $x_k$ through the inner-loop iterative GD updates. The explicit form of the estimate $\frac{\partial f(x_k, y_k^T(x_k))}{\partial x_k}$ is given by the following proposition via the chain rule. For notational simplicity, let $\prod_{j=T}^{T-1}(\cdot) = I$.\nProposition 2. The gradient $\frac{\partial f(x_k, y_k^T(x_k))}{\partial x_k}$ takes the following analytical form:\n$$\frac{\partial f(x_k, y_k^T)}{\partial x_k} = \nabla_x f(x_k, y_k^T) - \alpha\sum_{t=0}^{T-1}\nabla_x\nabla_y g(x_k, y_k^t)\prod_{j=t+1}^{T-1}\big(I - \alpha\nabla_y^2 g(x_k, y_k^j)\big)\nabla_y f(x_k, y_k^T).$$\nProposition 2 shows that the differentiation involves the computation of second-order derivatives such as the Hessian $\nabla_y^2 g(\cdot, \cdot)$. Since efficient Hessian-free methods such as CG have been successfully deployed in existing automatic differentiation tools, computing these second-order derivatives reduces to the more efficient computation of Jacobian- and Hessian-vector products." }, { "heading": "2.2 ALGORITHM FOR STOCHASTIC BILEVEL OPTIMIZATION", "text": "We propose a new stochastic bilevel optimizer (stocBiO) in Algorithm 2 to solve the problem in eq. (2). It has a double-loop structure similar to Algorithm 1, but runs T steps of stochastic gradient descent (SGD) in the inner loop to obtain an approximate solution $y_k^T$.\nAlgorithm 2 Stochastic bilevel optimizer (stocBiO)\n1: Input: Inner- and outer-loop stepsizes α, β > 0, initializations $x_0$ and $y_0$.\n2: for k = 0, 1, 2, ..., K do\n3:   Set $y_k^0 = y_{k-1}^T$ if k > 0 and $y_0$ otherwise\n4:   for t = 1, ..., T do\n5:     Draw a sample batch $\mathcal{S}_{t-1}$\n6:     Update $y_k^t = y_k^{t-1} - \alpha\nabla_y G(x_k, y_k^{t-1}; \mathcal{S}_{t-1})$\n7:   end for\n8:   Draw a sample batch $\mathcal{D}_F$, and compute $v_0 = \nabla_y F(x_k, y_k^T; \mathcal{D}_F)$\n9:   Draw a sample batch $\mathcal{D}_H$, and construct $v_Q$ via Algorithm 3\n10:  Draw a sample batch $\mathcal{D}_G$, and compute the Jacobian-vector product $\nabla_x\nabla_y G(x_k, y_k^T; \mathcal{D}_G)v_Q$\n11:  Compute the gradient estimate $\widehat{\nabla}\Phi(x_k)$ via eq. (6)\n12:  Update $x_{k+1} = x_k - \beta\widehat{\nabla}\Phi(x_k)$\n13: end for\nAlgorithm 3 Construct $v_Q$ given $v_0$\n1: Input: An integer Q, data samples $\mathcal{D}_H = \{\mathcal{B}_j\}_{j=1}^{Q}$, and a constant η > 0.\n2: for j = 1, 2, ..., Q do\n3:   Sample $\mathcal{B}_j$ and compute the gradient map $G_j(y) = y - \eta\nabla_y G(x, y; \mathcal{B}_j)$\n4: end for\n5: Set $r_Q = v_0$\n6: for i = Q, ..., 1 do\n7:   $r_{i-1} = \partial(G_i(y)r_i)/\partial y = r_i - \eta\nabla_y^2 G(x, y; \mathcal{B}_i)r_i$ via automatic differentiation\n8: end for\n9: Return $v_Q = \eta\sum_{i=0}^{Q} r_i$\nBased on the output $y_k^T$ of the inner loop, stocBiO first computes a gradient $\nabla_y F(x_k, y_k^T; \mathcal{D}_F)$ over a sample batch $\mathcal{D}_F$, and then computes a vector $v_Q$ via Algorithm 3, which takes the form\n$$v_Q = \eta\sum_{q=-1}^{Q-1}\prod_{j=Q-q}^{Q}\big(I - \eta\nabla_y^2 G(x_k, y_k^T; \mathcal{B}_j)\big)\nabla_y F(x_k, y_k^T; \mathcal{D}_F), \qquad (5)$$\nwhere $\{\mathcal{B}_j, j = 1, ..., Q\}$ are mutually independent sample sets, Q and η are constants, and we let $\prod_{j=Q+1}^{Q}(\cdot) = I$ for notational simplicity. Note that our construction of $v_Q$, i.e., Algorithm 3, is motivated by the Neumann series $\sum_{k=0}^{\infty} U^k = (I - U)^{-1}$, and involves only Hessian-vector products rather than Hessians, and hence is computationally and memory efficient. Then, we construct\n$$\widehat{\nabla}\Phi(x_k) = \nabla_x F(x_k, y_k^T; \mathcal{D}_F) - \nabla_x\nabla_y G(x_k, y_k^T; \mathcal{D}_G)v_Q \qquad (6)$$\nas an estimate of the hypergradient $\nabla\Phi(x_k)$ given by Proposition 1. An important component of our algorithm is $v_Q$, which serves as an estimate of $v_k^*$ in eq. (4). Compared to the deterministic case, designing a sample-efficient hypergradient estimator in the stochastic case is more challenging. For example, instead of choosing the same batch size for all $\mathcal{B}_j$, j = 1, ..., Q in eq. (5), our analysis captures the different impact of the components $\nabla_y^2 G(x_k, y_k^T; \mathcal{B}_j)$, j = 1, ..., Q on the hypergradient estimation variance, and inspires an adaptive and more efficient choice by setting $|\mathcal{B}_{Q-j}|$ to decay exponentially with j from 0 to Q − 1. By doing so, we achieve an improved complexity.
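As an illustration, the following is a minimal PyTorch-style sketch of Algorithm 3 (the Neumann-series estimate $v_Q$). The inner_loss_fn interface and batch handling are assumptions for exposition, not the released stocBiO code.

```python
# A minimal sketch (assumed interface, not the released code) of Algorithm 3:
# v_Q = eta * sum_i r_i with r_{i-1} = (I - eta * H_i) r_i, where
# H_i = grad^2_y G(x, y; B_i) is applied only via Hessian-vector products.
import torch

def neumann_v(inner_loss_fn, x, y, v0, batches, eta):
    r = v0.detach()
    total = r.clone()                      # accumulates r_Q + ... + r_0
    for batch in reversed(batches):        # i = Q, ..., 1
        gy = torch.autograd.grad(inner_loss_fn(x, y, batch), y,
                                 create_graph=True)[0]
        hvp = torch.autograd.grad(gy, y, grad_outputs=r)[0]
        r = r - eta * hvp                  # r_{i-1} = r_i - eta * H_i r_i
        total = total + r
    return eta * total                     # v_Q, estimating H^{-1} grad_y f
```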
" }, { "heading": "3 DEFINITIONS AND ASSUMPTIONS", "text": "Let z = (x, y) denote all parameters. For simplicity, suppose the sample sets $\mathcal{S}_t$ for all t = 0, ..., T − 1, $\mathcal{D}_G$ and $\mathcal{D}_F$ have sizes S, $D_g$ and $D_f$, respectively. In this paper, we focus on the following types of loss functions for both the deterministic and stochastic cases.\nAssumption 1. The lower-level function g(x, y) is µ-strongly-convex w.r.t. y and the total objective function $\Phi(x) = f(x, y^*(x))$ is nonconvex w.r.t. x. For the stochastic setting, the same assumptions hold for G(x, y; ζ) and Φ(x), respectively.\nSince the objective function Φ(x) is nonconvex, algorithms are expected to find an ε-accurate stationary point, defined as follows.\nDefinition 1. We say $\bar{x}$ is an ε-accurate stationary point for the objective function Φ(x) in eq. (2) if $\mathbb{E}\|\nabla\Phi(\bar{x})\|^2 \le \epsilon$, where $\bar{x}$ is the output of an algorithm.\nIn order to compare the performance of different bilevel algorithms, we adopt the following metrics of computational complexity.\nDefinition 2. For a function f(x, y) and a vector v, let Gc(f, ε) be the number of evaluations of the partial gradients $\nabla_x f$ and $\nabla_y f$, and let JV(g, ε) and HV(g, ε) be the numbers of Jacobian-vector products $\nabla_x\nabla_y g(x, y)v$ and Hessian-vector products $\nabla_y^2 g(x, y)v$, respectively. For the stochastic case, similar metrics are adopted but w.r.t. the stochastic function F(x, y; ξ).\nWe make the following standard assumptions on the loss functions in eq. (2), which have been widely adopted in bilevel optimization (Ghadimi & Wang, 2018; Ji et al., 2020a).\nAssumption 2. The loss functions f(z) and g(z) satisfy\n• f(z) is M-Lipschitz, i.e., for any z, z′, $|f(z) - f(z')| \le M\|z - z'\|$.\n• The gradients $\nabla f(z)$ and $\nabla g(z)$ are L-Lipschitz, i.e., for any z, z′, $\|\nabla f(z) - \nabla f(z')\| \le L\|z - z'\|$ and $\|\nabla g(z) - \nabla g(z')\| \le L\|z - z'\|$.\nFor the stochastic case, the same assumptions hold for F(z; ξ) and G(z; ζ) for any given ξ and ζ.\nAs shown in Proposition 1, the gradient of the objective function Φ(x) involves the second-order derivatives $\nabla_x\nabla_y g(z)$ and $\nabla_y^2 g(z)$. The following assumption imposes Lipschitz conditions on such higher-order derivatives, as also made in Ghadimi & Wang (2018).\nAssumption 3. Suppose the derivatives $\nabla_x\nabla_y g(z)$ and $\nabla_y^2 g(z)$ are τ- and ρ-Lipschitz, i.e.,\n• For any z, z′, $\|\nabla_x\nabla_y g(z) - \nabla_x\nabla_y g(z')\| \le \tau\|z - z'\|$.\n• For any z, z′, $\|\nabla_y^2 g(z) - \nabla_y^2 g(z')\| \le \rho\|z - z'\|$.\nFor the stochastic case, the same assumptions hold for $\nabla_x\nabla_y G(z; \zeta)$ and $\nabla_y^2 G(z; \zeta)$ for any ζ.\nAs typically adopted in the analysis of stochastic optimization, we make the following bounded-variance assumption for the lower-level stochastic function G(z; ζ).\nAssumption 4. $\nabla G(z; \zeta)$ has bounded variance, i.e., $\mathbb{E}_\zeta\|\nabla G(z; \zeta) - \nabla g(z)\|^2 \le \sigma^2$ for some σ." }, { "heading": "4 MAIN RESULTS FOR BILEVEL OPTIMIZATION", "text": "" }, { "heading": "4.1 DETERMINISTIC BILEVEL OPTIMIZATION", "text": "We first characterize the convergence and complexity performance of the AID-BiO algorithm. Let κ = L/µ denote the condition number.\nTheorem 1 (AID-BiO). Suppose Assumptions 1, 2, 3 hold. Define the smoothness parameter $L_\Phi = L + \frac{2L^2 + \tau M^2}{\mu} + \frac{\rho LM + L^3 + \tau ML}{\mu^2} + \frac{\rho L^2 M}{\mu^3} = \Theta(\kappa^3)$, choose the stepsizes $\alpha \le \frac{1}{L}$, $\beta = \frac{1}{8L_\Phi}$, and set the inner-loop iteration number $T \ge \Theta(\kappa)$ and the CG iteration number $N \ge \Theta(\sqrt{\kappa})$, where the detailed forms of T, N can be found in Appendix E. Then, the outputs of AID-BiO satisfy\n$$\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2 \le \frac{64L_\Phi(\Phi(x_0) - \inf_x\Phi(x)) + 5\Delta_0}{K}, \qquad (7)$$\nwhere $\Delta_0 = \|y_0 - y^*(x_0)\|^2 + \|v_0^* - v_0\|^2 > 0$. In order to achieve an ε-accurate stationary point, we have\n• Gradient complexity: Gc(f, ε) = $O(\kappa^3\epsilon^{-1})$, Gc(g, ε) = $O(\kappa^4\epsilon^{-1})$. 
• Jacobian- and Hessian-vector product complexity: JV(g, ε) = $O(\kappa^3\epsilon^{-1})$, HV(g, ε) = $O(\kappa^{3.5}\epsilon^{-1})$.\nIt can be seen from Table 1 that the complexities Gc(f, ε), Gc(g, ε), JV(g, ε) and HV(g, ε) of our analysis improve those of Ghadimi & Wang (2018) (eq. (2.30) therein) by the order of κ, $\kappa\epsilon^{-1/4}$, κ, and κ. Such an improvement is achieved by a refined analysis with a constant number of inner-loop steps, and by a warm start strategy that backpropagates the tracking errors $\|y_k^T - y^*(x_k)\|$ and $\|v_k^N - v_k^*\|$ to previous loops, as also demonstrated by our meta-learning experiments. We next characterize the convergence and complexity performance of the ITD-BiO algorithm.\nTheorem 2 (ITD-BiO). Suppose Assumptions 1, 2, and 3 hold. Define the parameter $L_\Phi$ as in Theorem 1, and choose $\alpha \le \frac{1}{L}$, $\beta = \frac{1}{4L_\Phi}$ and $T \ge \Theta(\kappa\log\frac{1}{\epsilon})$, where the detailed form of T can be found in Appendix F. Then, the outputs of ITD-BiO satisfy\n$$\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2 \le \frac{16L_\Phi(\Phi(x_0) - \inf_x\Phi(x))}{K} + \frac{2\epsilon}{3}.$$\nIn order to achieve an ε-accurate stationary point, we have\n• Gradient complexity: Gc(f, ε) = $O(\kappa^3\epsilon^{-1})$, Gc(g, ε) = $O(\kappa^4\epsilon^{-1}\log\frac{1}{\epsilon})$.\n• Jacobian- and Hessian-vector product complexity: JV(g, ε) = $O(\kappa^4\epsilon^{-1}\log\frac{1}{\epsilon})$, HV(g, ε) = $O(\kappa^4\epsilon^{-1}\log\frac{1}{\epsilon})$.\nBy comparing Theorem 1 and Theorem 2, it can be seen that the complexities Gc(g, ε), JV(g, ε), and HV(g, ε) of AID-BiO are better than those of ITD-BiO by the order of $\log\frac{1}{\epsilon}$, $\kappa\log\frac{1}{\epsilon}$ and $\kappa^{0.5}\log\frac{1}{\epsilon}$, respectively. This is consistent with the comparison in Grazzi et al. (2020) showing that AID-BiO often has a lower memory cost than ITD-BiO." }, { "heading": "4.2 STOCHASTIC BILEVEL OPTIMIZATION", "text": "We first characterize the bias and variance of the important component $v_Q$ in eq. (5).\nProposition 3. Suppose Assumptions 1, 2 and 3 hold. Let the constant $\eta \le \frac{1}{L}$ and choose the batch sizes $|\mathcal{B}_{Q+1-j}| = BQ(1-\eta\mu)^{j-1}$ for j = 1, ..., Q, where $B \ge \frac{1}{Q(1-\eta\mu)^{Q-1}}$. Then, the bias satisfies\n$$\big\|\mathbb{E}v_Q - [\nabla_y^2 g(x_k, y_k^T)]^{-1}\nabla_y f(x_k, y_k^T)\big\| \le \mu^{-1}(1-\eta\mu)^{Q+1}M. \qquad (8)$$\nFurthermore, the estimation variance is given by\n$$\mathbb{E}\big\|v_Q - [\nabla_y^2 g(x_k, y_k^T)]^{-1}\nabla_y f(x_k, y_k^T)\big\|^2 \le \frac{4\eta^2L^2M^2}{\mu^2}\frac{1}{B} + \frac{4(1-\eta\mu)^{2Q+2}M^2}{\mu^2} + \frac{2M^2}{\mu^2 D_f}. \qquad (9)$$\nProposition 3 shows that if we choose Q and B at the order level of $O(\log\frac{1}{\epsilon})$ and $O(1/\epsilon)$, the bias and variance are smaller than O(ε), and the required number of samples is $\sum_{j=1}^{Q}BQ(1-\eta\mu)^{j-1} = O(\epsilon^{-1}\log\frac{1}{\epsilon})$. Note that the chosen batch size $|\mathcal{B}_{Q+1-j}|$ decays exponentially with j. In comparison, a uniform choice of all $|\mathcal{B}_j|$ would yield a worse complexity of $O(\epsilon^{-1}(\log\frac{1}{\epsilon})^2)$.\nWe next analyze stocBiO when the objective function $\Phi(x) := f(x, y^*(x))$ is nonconvex.\nTheorem 3. Suppose Assumptions 1, 2, 3 and 4 hold. Define the parameter $L_\Phi = L + \frac{2L^2 + \tau M^2}{\mu} + \frac{\rho LM + L^3 + \tau ML}{\mu^2} + \frac{\rho L^2 M}{\mu^3} = O(\kappa^3)$, choose the stepsize $\beta = \frac{1}{4L_\Phi}$, and set $\eta < \frac{1}{L}$ in Algorithm 3. Set\n$$T \ge \max\Bigg\{\frac{\log\Big(12 + \frac{48\beta^2L^2}{\mu^2}\big(L + \frac{L^2}{\mu} + \frac{M\tau}{\mu} + \frac{LM\rho}{\mu^2}\big)^2\Big)}{2\log\frac{L+\mu}{L-\mu}},\ \frac{\log\Big(\sqrt{\beta}\big(L + \frac{L^2}{\mu} + \frac{M\tau}{\mu} + \frac{LM\rho}{\mu^2}\big)\Big)}{\log\frac{L+\mu}{L-\mu}}\Bigg\}.$$\nThen, we have\n$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\|\nabla\Phi(x_k)\|^2 \le \frac{32L_\Phi\big(\Phi(x_0) - \inf_x\Phi(x) + \frac{5}{2}\|y_0 - y^*(x_0)\|^2\big)}{K} + 72\kappa^2M^2(1-\eta\mu)^{2Q} + \frac{40\big(L + \frac{L^2}{\mu} + \frac{M\tau}{\mu} + \frac{LM\rho}{\mu^2}\big)^2}{L\mu}\frac{\sigma^2}{S} + \frac{16\kappa^2M^2}{D_g} + \frac{(8 + 32\kappa^2)M^2}{D_f} + \frac{64\kappa^2M^2}{B}. \qquad (10)$$\nIn order to achieve an ε-accurate stationary point, we have\n• Gradient complexity: Gc(F, ε) = $O(\kappa^5\epsilon^{-2})$, Gc(G, ε) = $O(\kappa^9\epsilon^{-2})$.\n• Jacobian- and Hessian-vector product complexity: JV(G, ε) = $O(\kappa^5\epsilon^{-2})$, HV(G, ε) = $\widetilde{O}(\kappa^6\epsilon^{-2})$.\nTheorem 3 shows that stocBiO converges sublinearly, with the convergence error decaying exponentially w.r.t. Q and sublinearly w.r.t. the batch sizes S, $D_g$, $D_f$ for gradient estimation and B for Hessian-inverse estimation. 
In addition, it can be seen that the total number T of inner-loop steps is chosen at a nearly constant level, rather than the typical choice of $\Theta(\log\frac{1}{\epsilon})$.\nAs shown in Table 2, the gradient complexities of our proposed algorithm in terms of F and G improve those of BSA in Ghadimi & Wang (2018) by an order of κ and $\epsilon^{-1}$, respectively. In addition, the Jacobian-vector product complexity JV(G, ε) of our algorithm improves that of BSA by an order of κ. In terms of the accuracy ε, our gradient, Jacobian- and Hessian-vector product complexities improve those of TTSA in Hong et al. (2020) all by an order of $\epsilon^{-0.5}$." }, { "heading": "5 APPLICATIONS TO META-LEARNING", "text": "" }, { "heading": "5.1 META-LEARNING WITH COMMON EMBEDDING MODEL", "text": "Consider the few-shot meta-learning problem with m tasks $\{\mathcal{T}_i, i = 1, ..., m\}$ sampled from a distribution $P_\mathcal{T}$. Each task $\mathcal{T}_i$ has a loss function $\mathcal{L}(\phi, w_i; \xi)$ over each data sample ξ, where φ are the parameters of an embedding model shared by all tasks, and $w_i$ are the task-specific parameters. The goal of this framework is to find good parameters φ for all tasks; building on the embedded features, each task then adapts its own parameters $w_i$ by minimizing its loss.\nThe model training takes a bilevel procedure. In the lower-level stage, building on the embedded features, the base learner of task $\mathcal{T}_i$ searches for $w_i^*$ as the minimizer of its loss function over a training set $\mathcal{S}_i$. In the upper-level stage, the meta-learner evaluates the minimizers $w_i^*$, i = 1, ..., m on held-out test sets, and optimizes φ of the embedding model over all tasks. Specifically, let $\tilde{w} = (w_1, ..., w_m)$ denote all task-specific parameters. Then, the objective function is given by\n$$\min_\phi\ \mathcal{L}_{\mathcal{D}}(\phi, \tilde{w}^*) := \frac{1}{m}\sum_{i=1}^{m}\underbrace{\frac{1}{|\mathcal{D}_i|}\sum_{\xi\in\mathcal{D}_i}\mathcal{L}(\phi, w_i^*; \xi)}_{\mathcal{L}_{\mathcal{D}_i}(\phi, w_i^*):\ \text{task-specific upper-level loss}}\quad \text{s.t.}\ \tilde{w}^* = \arg\min_{\tilde{w}}\mathcal{L}_{\mathcal{S}}(\phi, \tilde{w}) = \arg\min_{(w_1,...,w_m)}\frac{1}{m}\sum_{i=1}^{m}\Big(\underbrace{\frac{1}{|\mathcal{S}_i|}\sum_{\xi\in\mathcal{S}_i}\mathcal{L}(\phi, w_i; \xi) + \mathcal{R}(w_i)}_{\mathcal{L}_{\mathcal{S}_i}(\phi, w_i):\ \text{task-specific lower-level loss}}\Big), \qquad (11)$$\nwhere $\mathcal{S}_i$ and $\mathcal{D}_i$ are the training and test datasets of task $\mathcal{T}_i$, and $\mathcal{R}(w_i)$ is a strongly-convex regularizer, e.g., an L2 regularizer. Note that the lower-level problem is equivalent to solving for each $w_i^*$ as a minimizer of the task-specific loss $\mathcal{L}_{\mathcal{S}_i}(\phi, w_i)$ for i = 1, ..., m. In practice, $w_i$ often corresponds to the parameters of the last linear layer of a neural network and φ are the parameters of the remaining layers (e.g., 4 convolutional layers in Bertinetto et al. (2018); Ji et al. (2020a)); hence the lower-level function is strongly-convex w.r.t. $\tilde{w}$ and the upper-level function $\mathcal{L}_{\mathcal{D}}(\phi, \tilde{w}^*(\phi))$ is generally nonconvex w.r.t. φ. In addition, due to the small sizes of the datasets $\mathcal{D}_i$ and $\mathcal{S}_i$ in few-shot learning, all updates for each task $\mathcal{T}_i$ use full gradient descent without data resampling. As a result, AID-BiO and ITD-BiO in Algorithm 1 can be applied here. In some applications where the number m of tasks is large, it is more efficient to sample a batch $\mathcal{B}$ of i.i.d. tasks from $\{\mathcal{T}_i, i = 1, ..., m\}$ at each meta (outer) iteration, and optimize the mini-batch versions $\mathcal{L}_{\mathcal{D}}(\phi, \tilde{w}; \mathcal{B}) = \frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\mathcal{L}_{\mathcal{D}_i}(\phi, w_i)$ and $\mathcal{L}_{\mathcal{S}}(\phi, \tilde{w}; \mathcal{B}) = \frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\mathcal{L}_{\mathcal{S}_i}(\phi, w_i)$ instead.
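To make this setup concrete, below is a minimal PyTorch-style sketch of one ITD-BiO meta-training step for eq. (11): T inner GD steps on the task-specific last-layer weights, followed by backpropagation through the unrolled steps to update φ. The function signature and data layout are illustrative assumptions, not the paper's released code.

```python
# A minimal sketch (illustrative, not the paper's code) of one ITD-BiO meta step:
# inner loop on last-layer weights w_i (strongly convex via the L2 regularizer),
# outer backprop through the unrolled inner loop to the embedding parameters phi.
import torch
import torch.nn.functional as F

def meta_step(embed, tasks, phi_opt, T=10, alpha=0.1, lam=0.01):
    meta_loss = 0.0
    for (x_tr, y_tr, x_te, y_te) in tasks:            # a sampled batch of tasks
        f_tr, f_te = embed(x_tr), embed(x_te)         # shared embedding phi
        num_classes = int(y_tr.max()) + 1
        w = torch.zeros(f_tr.size(1), num_classes, requires_grad=True)
        for _ in range(T):                            # inner loop: full GD
            inner = F.cross_entropy(f_tr @ w, y_tr) + lam * w.pow(2).sum()
            (gw,) = torch.autograd.grad(inner, w, create_graph=True)
            w = w - alpha * gw                        # differentiable GD step
        meta_loss = meta_loss + F.cross_entropy(f_te @ w, y_te)
    phi_opt.zero_grad()
    (meta_loss / len(tasks)).backward()               # hypergradient w.r.t. phi
    phi_opt.step()
```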
The following theorem provides the convergence analysis of ITD-BiO for this case.\nTheorem 4. Suppose Assumptions 1, 2 and 3 hold, and suppose each task loss $\mathcal{L}_{\mathcal{S}_i}(\phi, w_i)$ is µ-strongly-convex w.r.t. $w_i$. Choose the same parameters β, T as in Theorem 2. Then, we have\n$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\|\nabla\Phi(\phi_k)\|^2 \le \frac{16L_\Phi(\Phi(\phi_0) - \inf_\phi\Phi(\phi))}{K} + \frac{2\epsilon}{3} + \Big(1 + \frac{L}{\mu}\Big)^2\frac{M^2}{8|\mathcal{B}|}.$$\nTheorem 4 shows that, compared to the full-batch (i.e., without task sampling) case in eq. (11), task sampling introduces a variance term $O(\frac{1}{|\mathcal{B}|})$ due to the stochastic nature of the algorithm. Using an approach similar to that of Theorem 4, we can derive a similar result for AID-BiO." }, { "heading": "5.2 EXPERIMENTS", "text": "To validate our theoretical results for deterministic bilevel optimization, we compare the performance of the following algorithms: ITD-BiO, AID-BiO-constant (AID-BiO with a constant number of inner-loop steps, as in our analysis), AID-BiO-increasing (AID-BiO with an increasing number of inner-loop steps, as in the analysis of Ghadimi & Wang (2018)), and two popular meta-learning algorithms, MAML (Finn et al., 2017) (footnote 2) and ANIL (Raghu et al., 2019) (footnote 3). We conduct experiments over a 5-way 5-shot task on two benchmark datasets: FC100 and miniImageNet, and the results are averaged over 10 trials with different random seeds. Due to space limitations, we provide the model architectures, hyperparameter settings and additional experiments in Appendix B.\nIt can be seen from Figure 1 that, for both the miniImageNet and FC100 datasets, AID-BiO-constant converges faster than AID-BiO-increasing in terms of both training accuracy and test accuracy, and achieves a better final test accuracy than ANIL and MAML. This demonstrates the improvement of our developed analysis over the existing analysis in Ghadimi & Wang (2018) for the AID-BiO algorithm. Moreover, it can be observed that AID-BiO is slightly faster than ITD-BiO in terms of training accuracy and test accuracy. This is also consistent with our theoretical results." }, { "heading": "6 CONCLUSION", "text": "In this paper, we develop a general and enhanced finite-time analysis for nonconvex-strongly-convex bilevel deterministic optimization, and propose a novel algorithm for the stochastic setting whose computational complexity outperforms the best known results order-wisely. We also provide theoretical guarantees for various bilevel optimizers in meta-learning and hyperparameter optimization. The experiments validate our theoretical results and demonstrate the effectiveness of the proposed algorithm. We anticipate that the finite-time analysis that we develop will be useful for analyzing other bilevel optimization problems with different loss geometries, and that the proposed algorithms will be useful for other applications such as reinforcement learning and Stackelberg games.\n(Footnote 2: MAML consists of an inner loop for task adaptation and an outer loop for meta initialization training.)\n(Footnote 3: ANIL refers to "almost no inner loop", an efficient MAML variant with task-specific adaptation only on the last layer of parameters.)" }, { "heading": "Supplementary Materials", "text": "" }, { "heading": "A APPLICATION TO HYPERPARAMETER OPTIMIZATION", "text": "" }, { "heading": "A.1 HYPERPARAMETER OPTIMIZATION", "text": "The goal of hyperparameter optimization (Franceschi et al., 2018; Feurer & Hutter, 2019) is to search for representation or regularization parameters λ that minimize the validation error evaluated at the learner's parameters $w^*$, where $w^*$ is the minimizer of the regularized training error in the lower-level problem. Mathematically, the objective function is given by\n$$\min_\lambda\ \mathcal{L}_{\mathcal{D}_{val}}(\lambda) = \frac{1}{|\mathcal{D}_{val}|}\sum_{\xi\in\mathcal{D}_{val}}\mathcal{L}(w^*(\lambda); \xi)\quad \text{s.t.}\ w^*(\lambda) = \arg\min_w\ \mathcal{L}_{\mathcal{D}_{tr}}(w, \lambda) := \frac{1}{|\mathcal{D}_{tr}|}\sum_{\xi\in\mathcal{D}_{tr}}\big(\mathcal{L}(w, \lambda; \xi) + \mathcal{R}(w, \lambda)\big), \qquad (12)$$\nwhere $\mathcal{D}_{val}$ and $\mathcal{D}_{tr}$ are validation and training data, $\mathcal{L}$ is the loss, and $\mathcal{R}(w, \lambda)$ is a regularizer.\nIn practice, the lower-level function $\mathcal{L}_{\mathcal{D}_{tr}}(w, \lambda)$ is often strongly-convex w.r.t. w. For example, for the data hyper-cleaning application proposed by Franceschi et al. (2018); Shaban et al. (2019), the predictor is modeled by a linear classifier, the loss function $\mathcal{L}(w; \xi)$ is convex w.r.t. w, and $\mathcal{R}(w, \lambda)$ is a strongly-convex regularizer, e.g., L2 regularization. In addition, the sample sizes of $\mathcal{D}_{val}$ and $\mathcal{D}_{tr}$ are often large, and stochastic algorithms are preferred for better efficiency. As a result, the above hyperparameter optimization problem falls into the stochastic bilevel optimization framework we study in eq. (2), and we can apply the proposed stocBiO algorithm here; Theorem 3 establishes its finite-time performance guarantee." }, { "heading": "A.2 EXPERIMENTS", "text": "We compare our proposed stocBiO with the following baseline bilevel optimization algorithms.\n• BSA (Ghadimi & Wang, 2018): an implicit-gradient-based stochastic bilevel optimizer via single-sample data sampling.\n• TTSA (Hong et al., 2020): a two-timescale stochastic optimizer via single-sample data sampling.\n• HOAG (Pedregosa, 2016): a hyperparameter optimization algorithm with approximate gradient. We use the implementation in the repository https://github.com/fabianp/hoag.\n• reverse (Franceschi et al., 2017): an iterative-differentiation-based method that approximates the hypergradient via backpropagation. We use its implementation in https://github.com/prolearner/hypertorch.\n• AID-FP (Grazzi et al., 2020): AID with the fixed-point method. We use its implementation in https://github.com/prolearner/hypertorch.\n• AID-CG (Grazzi et al., 2020): AID with the conjugate gradient method. We use its implementation in https://github.com/prolearner/hypertorch.\nWe demonstrate the effectiveness of the proposed stocBiO algorithm on two experiments: data hyper-cleaning and logistic regression.\nLogistic Regression on 20 Newsgroup: We compare the performance of our algorithm stocBiO with the existing baseline algorithms reverse, AID-FP, AID-CG and HOAG on a logistic regression problem on the 20 Newsgroup dataset (Grazzi et al., 2020). The objective function of this problem is given by\n$$\min_\lambda\ \mathcal{E}(\lambda, w^*) = \frac{1}{|\mathcal{D}_{val}|}\sum_{(x_i, y_i)\in\mathcal{D}_{val}}\mathcal{L}(x_i w^*, y_i)\quad \text{s.t.}\ w^* = \arg\min_{w\in\mathbb{R}^{p\times c}}\Big(\frac{1}{|\mathcal{D}_{tr}|}\sum_{(x_i, y_i)\in\mathcal{D}_{tr}}\mathcal{L}(x_i w, y_i) + \frac{1}{cp}\sum_{i=1}^{c}\sum_{j=1}^{p}\exp(\lambda_j)w_{ij}^2\Big),$$\nwhere $\mathcal{L}$ is the cross-entropy loss, c = 20 is the number of topics, and p = 101631 is the feature dimension. Following Grazzi et al. (2020), we use SGD as the optimizer for the outer-loop update for all algorithms. For reverse, AID-FP and AID-CG, we use the suggested and well-tuned hyperparameter settings in their implementations (https://github.com/prolearner/hypertorch) for this application. Specifically, they choose the inner- and outer-loop stepsizes as 100, the number of inner loops as 10, and the number of CG steps as 10. For HOAG, we use the same parameters as reverse, AID-FP and AID-CG. For stocBiO, we use the same parameters as reverse, AID-FP and AID-CG, and choose η = 0.5, Q = 10. We use stocBiO-B as shorthand for stocBiO with a batch size of B.
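For concreteness, a minimal PyTorch-style sketch of these two objectives is given below. The dense tensor layout (X as feature matrices, $w \in \mathbb{R}^{p\times c}$, $\lambda \in \mathbb{R}^p$) is an illustrative assumption; the actual experiment works with the sparse 20 Newsgroup features.

```python
# A minimal sketch (assumed tensors/names, not the exact experiment code) of the
# inner and outer objectives: per-feature regularization weights exp(lambda_j)
# are the hyperparameters optimized by the outer loop.
import torch
import torch.nn.functional as F

def inner_loss(w, lam, X_tr, y_tr):   # strongly convex in w for fixed lambda
    # (1 / (c * p)) * sum_{i,j} exp(lambda_j) * w_{j,i}^2
    reg = (torch.exp(lam)[:, None] * w ** 2).mean()
    return F.cross_entropy(X_tr @ w, y_tr) + reg

def outer_loss(w, X_val, y_val):      # validation error driving lambda
    return F.cross_entropy(X_val @ w, y_val)
```

These two functions can then be plugged into the stocBiO update (Algorithm 2), drawing mini-batches from Dtr for the inner SGD steps and the Hessian-vector products, and from Dval for the outer gradient.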
As shown in Figure 2, the proposed stocBiO achieves the fastest convergence rate as well as the best test accuracy among all comparison algorithms, which demonstrates the practical advantage of our proposed algorithm. Note that we do not include BSA and TTSA in this comparison, because they converge too slowly, with a large variance, and are much worse than the other competing algorithms. In addition, we investigate the impact of the batch size on the performance of our stocBiO in Figure 3. It can be seen that stocBiO outperforms HOAG under the batch sizes of 100, 500, 1000, 2000. This shows that the performance of stocBiO is not very sensitive to the batch size, and hence tuning the batch size is easy to handle in practice.\nData Hyper-Cleaning on MNIST. We first compare the performance of our proposed algorithm stocBiO with the other baseline algorithms BSA, TTSA, and HOAG (footnote 4) on a hyperparameter optimization problem: data hyper-cleaning (Shaban et al., 2019) on a dataset derived from MNIST (LeCun et al., 1998), which consists of 20000 images for training, 5000 images for validation, and 10000 images for testing. Data hyper-cleaning trains a classifier in a corrupted setting where each label of the training data is replaced by a random class number with a probability p (the corruption rate). The objective function is given by\n$$\min_\lambda\ \mathcal{E}(\lambda, w^*) = \frac{1}{|\mathcal{D}_{val}|}\sum_{(x_i, y_i)\in\mathcal{D}_{val}}\mathcal{L}(w^* x_i, y_i)\quad \text{s.t.}\ w^* = \arg\min_w\ \mathcal{L}(w, \lambda) := \frac{1}{|\mathcal{D}_{tr}|}\sum_{(x_i, y_i)\in\mathcal{D}_{tr}}\sigma(\lambda_i)\mathcal{L}(w x_i, y_i) + C_r\|w\|^2,$$\nwhere $\mathcal{L}$ is the cross-entropy loss, σ(·) is the sigmoid function, and $C_r$ is a regularization parameter. Following Shaban et al. (2019), we choose $C_r = 0.001$. All results are averaged over 10 trials with different random seeds. We adopt Adam (Kingma & Ba, 2014) as the optimizer for the outer-loop update for all algorithms. For the stochastic algorithms, we set the batch size as 50 for stocBiO, and as 1 for BSA and TTSA because they use single-sample data sampling. For all algorithms, we use a grid search to choose the inner-loop stepsize from {0.01, 0.1, 1, 10}, the outer-loop stepsize from $\{10^i, i = -4, -3, -2, -1, 0, 1, 2, 3, 4\}$, and the number T of inner-loop steps from {1, 10, 50, 100, 200, 1000}, where the values that achieve the lowest loss after a fixed running time are selected. For stocBiO, BSA, and TTSA, we choose η from $\{0.5\times 2^i, i = -3, -2, -1, 0, 1, 2, 3\}$, and Q from $\{3\times 2^i, i = 0, 1, 2, 3\}$.\n(Footnote 4: We do not include reverse, AID-CG and AID-FP here because they perform similarly to HOAG.)\nIt can be seen from Figure 4 that our proposed stocBiO algorithm achieves the fastest convergence rate among all competing algorithms in terms of both the training loss and the test loss. In addition, it is observed that this improvement is more significant when the corruption rate p is smaller. We note that the stochastic algorithm TTSA converges very slowly with a large variance. This is because TTSA updates the costly outer loop more frequently than the other algorithms, and has a larger variance due to its single-sample data sampling. As a comparison, our stocBiO achieves a much lower variance for the hypergradient estimation as well as a much faster convergence rate. This verifies our theoretical results in Theorem 3." }, { "heading": "B FURTHER SPECIFICATIONS ON META-LEARNING EXPERIMENTS", "text": "" }, { "heading": "B.1 DATASETS AND MODEL ARCHITECTURES", "text": "FC100 (Oreshkin et al., 2018) is a dataset derived from CIFAR-100 (Krizhevsky & Hinton, 2009), and contains 100 classes, with each class consisting of 600 images of size 32. Following Oreshkin et al. (2018), these 100 classes are split into 60 classes for meta-training, 20 classes for meta-validation, and 20 classes for meta-testing. 
For all comparison algorithms, we use a 4-layer convolutional neural networks (CNN) with four convolutional blocks, in which each convolutional block\ncontains a 3× 3 convolution (padding = 1, stride = 2), batch normalization, ReLU activation, and 2× 2 max pooling. Each convolutional layer has 64 filters. The miniImageNet dataset (Vinyals et al., 2016) is generated from ImageNet Russakovsky et al. (2015), and consists of 100 classes with each class containing 600 images of size 84×84. Following the repository Arnold et al. (2019), we partition these classes into 64 classes for meta-training, 16 classes for meta-validation, and 20 classes for meta-testing. Following the repository (Arnold et al., 2019), we use a four-layer CNN with four convolutional blocks, where each block sequentially consists of a 3× 3 convolution, batch normalization, ReLU activation, and 2× 2 max pooling. Each convolutional layer has 32 filters.\nB.2 IMPLEMENTATIONS AND HYPERPARAMETER SETTINGS\nWe adopt the existing implementations in the repository (Arnold et al., 2019) for ANIL and MAML. For all algorithms, we adopt Adam (Kingma & Ba, 2014) as the optimizer for the outer-loop update.\nParameter selection for the experiments in Figure 1(a): For ANIL and MAML, we adopt the suggested hyperparameter selection in the repository (Arnold et al., 2019). In specific, for ANIL, we choose the inner-loop stepsize as 0.1, the outer-loop (meta) stepsize as 0.002, the task sampling size as 32, and the number of inner-loop steps as 5L. For MAML, we choose the inner-loop stepsize as 0.5, the outer-loop stepsize as 0.003, the task sampling sizeas 32, and the number of inner-loop steps as 3. For ITD-BiO, AID-BiO-constant and AID-BiO-increasing, we use a grid search to choose the inner-loop stepsize from {0.01, 0.1, 1, 10}, the task sampling size from {32, 128, 256}, and the outer-loop stepsize from {10i, i = −3,−2,−1, 0, 1, 2, 3}, where values that achieve the lowest loss after a fixed running time are selected. For ITD-BiO and AID-BiO-constant, we choose the number of inner-loop steps from {5, 10, 15, 20, 50}, and for AID-BiO-increasing, we choose the number of inner-loop steps as dc(k + 1)1/4e as adopted by the analysis in Ghadimi & Wang (2018), where we choose c from {0.5, 2, 5, 10, 50}. For both AID-BiO-constant and AID-BiO-increasing, we choose the number N of CG steps for solving the linear system from {5, 10, 15}. Parameter selection for the experiments in Figure 1(b): For ANIL and MAML, we adopt the suggested hyperparameter selection in the repository (Arnold et al., 2019). Specifically, for ANIL, we choose the inner-loop stepsize as 0.1, the outer-loop (meta) stepsize as 0.001, the task sampling size as 32 and the number of inner-loop steps as 10. For MAML, we choose the inner-loop stepsize as 0.5, the outer-loop stepsize as 0.001, the task samling size as 32, and the number of inner-loop steps as 3. For ITD-BiO, AID-BiO-constant and AID-BiO-increasing, we adopt the same procedure as in the experiments in Figure 1(a)." }, { "heading": "B.3 ADDITIONAL RESULTS FOR META LEARNING", "text": "In this subsection, we compare the robustness between bilevel optimizer ITD-BiO (AID-BiO performs similarly to ITD-BiO in terms of the convergence rate) and ANIL (ANIL outperforms MAML in general) to the number of inner-loop steps. For the experiments in Figure 5, we choose the innerloop stepsize as 0.05, the outer-loop (meta) stepsize as 0.002, the mini-batch size as 32, and the number T of inner-loop steps as 10 for both ANIL and ITD-BiO. 
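Both this run and the Figure 6 run below use the four-block convolutional backbone described in Appendix B.1. A minimal PyTorch sketch of that backbone follows; we assume stride-1 convolutions (the standard Conv-4 choice, since a stride-2 convolution combined with 2×2 pooling would collapse 32×32 inputs after three blocks), and the class name and `hidden` argument are illustrative rather than taken from the released code.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 conv (padding=1) -> BatchNorm -> ReLU -> 2x2 max pooling,
    # as described in Appendix B.1.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class Conv4Backbone(nn.Module):
    # hidden=64 filters for FC100, hidden=32 for miniImageNet.
    def __init__(self, in_ch=3, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_ch, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
        )

    def forward(self, x):
        # Flatten the pooled feature map into the embedding used by the head.
        return self.features(x).flatten(1)
```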
For the experiments in Figure 6, we choose the inner-loop stepsize as 0.1, the outer-loop (meta) stepsize as 0.001, the mini-batch size as 32, and the number T of inner-loop steps as 20 for both ANIL and ITD-BiO.\nIt can be seen from Figure 5 and Figure 6 that when the number of inner-loop steps become larger, i.e., T = 10 for miniImageNet and T = 20 for FC100, the bilevel optimizer ITD-BiO converges stably with a small variance, whereas ANIL suffers from a sudden descent at 1500s on miniImageNet and even diverges after 2000s on FC100." }, { "heading": "C SUPPORTING LEMMAS", "text": "In this section, we provide some auxiliary lemmas used for proving the main convergence results.\nFirst note that the Lipschitz properties in Assumption 2 imply the following lemma. Lemma 1. Suppose Assumption 2 holds. Then, the stochastic derivatives ∇F (z; ξ), ∇G(z; ξ), ∇x∇yG(z; ξ) and∇2yG(z; ξ) have bounded variances, i.e., for any z and ξ,\n• Eξ ‖∇F (z; ξ)−∇f(z)‖2 ≤M2.\n• Eξ ‖∇x∇yG(z; ξ)−∇x∇yg(z)‖2 ≤ L2.\n• Eξ ∥∥∇2yG(z; ξ)−∇2yg(z)∥∥2 ≤ L2.\nRecall that Φ(x) = f(x, y∗(x)) in eq. (2). Then, we use the following lemma to characterize the Lipschitz properties of∇Φ(x), which is adapted from Lemma 2.2 in Ghadimi & Wang (2018). Lemma 2. Suppose Assumptions 1, 2 and 3 hold. Then, we have, for any x, x′ ∈ Rp,\n‖∇Φ(x)−∇Φ(x′)‖ ≤ LΦ‖x− x′‖, where the constant LΦ is given by\nLΦ = L+ 2L2 + τM2 µ + ρLM + L3 + τML µ2 + ρL2M µ3 . (13)" }, { "heading": "D PROOF OF PROPOSITIONS IN SECTION 2", "text": "In this section, we provide the proofs for Proposition 1 and Proposition 2 in Section 2." }, { "heading": "D.1 PROOF OF PROPOSITION 1", "text": "Using the chain rule over the gradient∇Φ(xk) = ∂f(xk,y ∗(xk))\n∂xk , we have\n∇Φ(xk) = ∇xf(xk, y∗(xk)) + ∂y∗(xk)\n∂xk ∇yf(xk, y∗(xk)). (14)\nBased on the optimality of y∗(xk), we have ∇yg(xk, y∗(xk)) = 0, which, using the implicit differentiation w.r.t. xk, yields\n∇x∇yg(xk, y∗(xk)) + ∂y∗(xk)\n∂xk ∇2yg(xk, y∗(xk)) = 0. (15)\nLet v∗k be the solution of the linear system ∇2yg(xk, y∗(xk))v = ∇yf(xk, y∗(xk)). Then, multiplying v∗k at the both sides of eq. (15), yields\n−∇x∇yg(xk, y∗(xk))v∗k = ∂y∗(xk)\n∂xk ∇2yg(xk, y∗(xk))v∗k =\n∂y∗(xk)\n∂xk ∇yf(xk, y∗(xk)),\nwhich, in conjunction with eq. (14) , yields the proof." }, { "heading": "D.2 PROOF OF PROPOSITION 2", "text": "Based on the iterative update of line 5 in Algorithm 1, we have yTk = y 0 k − α ∑T−1 t=0 ∇yg(xk, ytk), which, combined with the fact that ∇yg(xk, ytk) is differentiable w.r.t. xk, indicates that the inner output yTk is differentiable w.r.t. xk. Then, based on the chain rule, we have\n∂f(xk, y T k )\n∂xk = ∇xf(xk, yTk ) + ∂yTk ∂xk ∇yf(xk, yTk ). (16)\nBased on the iterative updates that ytk = y t−1 k − α∇yg(xk, y t−1 k ) for t = 1, ..., T , we have\n∂ytk ∂xk = ∂yt−1k ∂xk − α∇x∇yg(xk, yt−1k )− α ∂yt−1k ∂xk ∇2yg(xk, yt−1k )\n= ∂yt−1k ∂xk (I − α∇2yg(xk, yt−1k ))− α∇x∇yg(xk, y t−1 k ).\nTelescoping the above equality over t from 1 to T yields\n∂yTk ∂xk = ∂y0k ∂xk T−1∏ t=0 (I − α∇2yg(xk, ytk))− α T−1∑ t=0 ∇x∇yg(xk, ytk) T−1∏ j=t+1 (I − α∇2yg(xk, y j k))\n(i) = − α T−1∑ t=0 ∇x∇yg(xk, ytk) T−1∏ j=t+1 (I − α∇2yg(xk, y j k)). (17)\nwhere (i) follows from the fact that ∂y 0 k\n∂xk = 0. Combining eq. (16) and eq. (17) finishes the proof." 
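Before turning to the convergence proofs, note that Proposition 1 maps directly onto an autograd implementation: a Hessian-vector-product oracle drives a conjugate-gradient solve of ∇²y g(x, y*) v* = ∇y f(x, y*), after which a single Jacobian-vector product yields the correction term of eq. (3). The sketch below is a hedged illustration, assuming x and y are 1-D tensors with requires_grad=True and that f and g are scalar-valued functions of both arguments; the function names are ours.

```python
import torch

def hvp(gy, y, v):
    # Hessian-vector product (grad^2_y g) v via double backward on gy = grad_y g.
    return torch.autograd.grad(gy, y, grad_outputs=v, retain_graph=True)[0]

def conjugate_gradient(mvp, b, steps=10):
    # Standard CG for the positive-definite system A v = b, with A given
    # only through the matrix-vector-product callable mvp.
    v = torch.zeros_like(b)
    r = b.clone()
    p = r.clone()
    rs = r.dot(r)
    for _ in range(steps):
        Ap = mvp(p)
        alpha = rs / p.dot(Ap)
        v = v + alpha * p
        r = r - alpha * Ap
        rs_new = r.dot(r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

def implicit_hypergrad(f, g, x, y, cg_steps=10):
    # Proposition 1: grad Phi(x) = grad_x f - (grad_x grad_y g) v*,
    # where (grad^2_y g) v* = grad_y f, all evaluated at (x, y).
    gy = torch.autograd.grad(g(x, y), y, create_graph=True)[0]
    fx, fy = torch.autograd.grad(f(x, y), (x, y), retain_graph=True)
    v = conjugate_gradient(lambda p: hvp(gy, y, p), fy.detach(), cg_steps)
    # Jacobian-vector product grad_x grad_y g(x, y) v, reusing the graph of gy.
    jvp = torch.autograd.grad(gy, x, grad_outputs=v, retain_graph=True)[0]
    return fx - jvp
```

The retain_graph flags keep the graph of ∇y g alive so it can be differentiated repeatedly: once per CG step for the Hessian-vector products, and once more for the final Jacobian-vector product.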
}, { "heading": "E CONVERGENCE PROOFS FOR AID-BIO IN SECTION 4.1", "text": "For notation simplification, we define the following quantities.\nΓ =3L2 + 3τ2M2\nµ2 + 6L2\n( 1 + √ κ )2( κ+ ρM µ2 )2 , δT,N = Γ(1− αµ)T + 6L2κ (√κ− 1√ κ+ 1 )2N Ω =8 ( βκ2 + 2βML\nµ2 +\n2βLMκ\nµ2\n)2 , ∆0 = ‖y0 − y∗(x0)‖2 + ‖v∗0 − v0‖2. (18)\nWe first provide some supporting lemmas. The following lemma characterizes the Hypergradient estimation error ‖∇̂Φ(xk)−∇Φ(xk)‖, where ∇̂Φ(xk) is given by eq. (3) via implicit differentiation. Lemma 3. Suppose Assumptions 1, 2 and 3 hold. Then, we have\n‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤Γ(1− αµ)T ‖y∗(xk)− y0k‖2 + 6L2κ (√κ− 1√\nκ+ 1\n)2N ‖v∗k − v0k‖2.\nwhere Γ is given by eq. (18).\nProof of Lemma 3. Based on the form of∇Φ(xk) given by Proposition 1, we have\n‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤3‖∇xf(xk, y∗(xk))−∇xf(xk, yTk )‖2 + 3‖∇x∇yg(xk, yTk )‖2‖v∗k − vNk ‖2\n+ 3‖∇x∇yg(xk, y∗(xk))−∇x∇yg(xk, yTk )‖2‖v∗k‖2,\nwhich, in conjunction with Assumptions 1, 2 and 3, yields\n‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤ 3L2‖y∗(xk)− yTk ‖2 + 3L2‖v∗k − vNk ‖2 + 3τ2‖v∗k‖2‖yTk − y∗(xk)‖2\n(i) ≤3L2‖y∗(xk)− yTk ‖2 + 3L2‖v∗k − vNk ‖2 + 3τ2M2\nµ2 ‖yTk − y∗(xk)‖2. (19)\nwhere (i) follows from the fact that ‖v∗k‖ ≤ ‖(∇2yg(xk, y∗(xk)))−1‖‖∇yf(xk, y∗(xk))‖ ≤ Mµ .\nFor notation simplification, let v̂k = (∇2yg(xk, yTk ))−1∇yf(xk, yTk ). We next upper-bound ‖v∗k − vNk ‖ in eq. (19). Based on the convergence result of CG for the quadratic programing, e.g., eq. (17)\nin Grazzi et al. (2020), we have ‖vNk − v̂k‖ ≤ √ κ (√ κ−1√ κ+1 )N ‖v0k − v̂k‖. Based on this inequality, we further have\n‖v∗k − vNk ‖ ≤‖v∗k − v̂k‖+ ‖vNk − v̂k‖ ≤ ‖v∗k − v̂k‖+ √ κ (√κ− 1√\nκ+ 1\n)N ‖v0k − v̂k‖\n≤ ( 1 + √ κ (√κ− 1√\nκ+ 1\n)N) ‖v∗k − v̂k‖+ √ κ (√κ− 1√\nκ+ 1\n)N ‖v∗k − v0k‖. (20)\nNext, based on the definitions of v∗k and v̂k, we have\n‖v∗k − v̂k‖ =‖(∇2yg(xk, yTk ))−1∇yf(xk, yTk )− (∇2yg(xk, y∗(xk))−1∇yf(xk, y∗(xk))‖ ≤ ( κ+ ρM\nµ2\n) ‖yTk − y∗(xk)‖. (21)\nCombining eq. (19), eq. (20), eq. (21) yields ‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤ ( 3L2 + 3τ2M2\nµ2\n) ‖y∗(xk)− yTk ‖2 + 6L2κ (√κ− 1√ κ+ 1 )2N ‖v∗k − v0k‖2\n+ 6L2 ( 1 + √ κ (√κ− 1√\nκ+ 1\n)N)2( κ+ ρM\nµ2\n)2 ‖yTk − y∗(xk)‖2,\nwhich, in conjunction with ‖yTk −y∗(xk)‖ ≤ (1−αµ) T 2 ‖y0k−y∗(xk)‖ and the notations in eq. (18), finishes the proof.\nLemma 4. Suppose Assumptions 1, 2 and 3 hold. Choose\nT ≥ log (36κ(κ+ ρM µ2 )2 + 16(κ2 + 4LMκ µ2 )2β2Γ)/ log 1 1− α = Θ(κ)\nN ≥1 2 log(8κ+ 48(κ2 + 2ML µ2 + 2LMκ µ2 )2β2L2κ)/ log √ κ+ 1√ κ− 1 = Θ( √ κ), (22)\nwhere Γ is given by eq. (18). Then, we have\n‖y0k − y∗(xk)‖2+‖v∗k − v0k‖2 ≤ (1\n2\n)k ∆0 + Ω k−1∑ j=0 (1 2 )k−1−j ‖∇Φ(xj)‖2, (23)\nwhere Ω and ∆0 are given by eq. (18).\nProof of Lemma 4. Recall that y0k = yTk−1. Then, we have\n‖y0k − y∗(xk)‖2 ≤2‖yTk−1 − y∗(xk−1)‖2 + 2‖y∗(xk)− y∗(xk−1)‖2\n(i) ≤2(1− αµ)T ‖y0k−1 − y∗(xk−1)‖2 + 2κ2β2‖∇̂Φ(xk−1)‖2\n≤2(1− αµ)T ‖y0k−1 − y∗(xk−1)‖2 + 4κ2β2‖∇Φ(xk−1)− ∇̂Φ(xk−1)‖2\n+ 4κ2β2‖∇Φ(xk−1)‖2\n(ii) ≤ ( 2(1− αµ)T + 4κ2β2Γ(1− αµ)T ) ‖y∗(xk−1)− y0k−1‖2\n+ 24κ4L2β2 (√κ− 1√\nκ+ 1\n)2N ‖v∗k−1 − v0k−1‖2 + 4κ2β2‖∇Φ(xk−1)‖2. (24)\nwhere (i) follows from Lemma 2.2 in Ghadimi & Wang (2018) and (ii) follows from Lemma 3. In addition, note that\n‖v∗k − v0k‖2 =‖v∗k − vNk−1‖2 ≤ 2‖v∗k−1 − vNk−1‖2 + 2‖v∗k − v∗k−1‖2\n(i) ≤4 ( 1 + √ κ )2( κ+ ρM\nµ2\n)2 (1− αµ)T ‖y0k−1 − y∗(xk−1)‖2\n+ 4κ (√κ− 1√\nκ+ 1\n)2N ‖v∗k−1 − v0k−1‖2 + 2‖v∗k − v∗k−1‖2, (25)\nwhere (i) follows from eq. (20). Combining eq. 
(25) with ‖v∗k−v∗k−1‖ ≤ (κ2+ 2MLµ2 + 2LMκ µ2 )‖xk− xk−1‖, we have\n‖v∗k − v0k‖2 (i) ≤ ( 16κ ( κ+ ρM\nµ2\n)2 + 4 ( κ2 + 4LMκ\nµ2\n)2 β2Γ ) (1− αµ)T ‖y0k−1 − y∗(xk−1)‖2\n+ ( 4κ+ 48 ( κ2 + 2ML\nµ2 +\n2LMκ\nµ2\n)2 β2L2κ )(√κ− 1√ κ+ 1 )2N ‖v∗k−1 − v0k−1‖2\n+ 4 ( κ2 + 2ML\nµ2 +\n2LMκ\nµ2\n)2 β2‖∇Φ(xk−1)‖2, (26)\nwhere (i) follows from Lemma 3. Combining eq. (24) and eq. (26) yields\n‖y0k − y∗(xk)‖2 + ‖v∗k − v0k‖2 ≤ ( 18κ ( κ+ ρM\nµ2\n)2 + 8 ( κ2 + 4LMκ\nµ2\n)2 β2Γ ) (1− αµ)T ‖y0k−1 − y∗(xk−1)‖2\n+ ( 4κ+ 24 ( κ2 + 2ML\nµ2 +\n2LMκ\nµ2\n)2 β2L2κ )(√κ− 1√ κ+ 1 )2N ‖v∗k−1 − v0k−1‖2\n+ 8 ( κ2 + 2ML\nµ2 +\n2LMκ\nµ2\n)2 β2‖∇Φ(xk−1)‖2,\nwhich, in conjunction with eq. (22),\n‖y0k − y∗(xk)‖2 + ‖v∗k − v0k‖2 ≤ 1\n2 (‖y0k−1 − y∗(xk−1)‖2 + ‖v∗k−1 − v0k−1‖2)\n+ 8 ( βκ2 + 2βML\nµ2 +\n2βLMκ\nµ2\n)2 ‖∇Φ(xk−1)‖2. (27)\nTelescoping eq. (27) over k and using the notations in eq. (18), we finish the proof.\nLemma 5. Under the same setting as in Lemma 4, we have\n‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤δT,N (1\n2\n)k ∆0 + δT,NΩ k−1∑ j=0 (1 2 )k−1−j ‖∇Φ(xj)‖2.\nwhere δT,N , Ω and ∆0 are given by eq. (18).\nProof of Lemma 5. Based on Lemma 3, eq. (18) and using ab+cd ≤ (a+c)(b+d) for any positive a, b, c, d, we have\n‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤δT,N (‖y∗(xk)− y0k‖2 + ‖v∗k − v0k‖2),\nwhich, in conjunction with Lemma 4, finishes the proof." }, { "heading": "E.1 PROOF OF THEOREM 1", "text": "In this subsection, provide the proof for Theorem 1 based on the supporting Lemma 5. Based on the smoothness of the function Φ(x) established in Lemma 2, we have\nΦ(xk+1) ≤Φ(xk) + 〈∇Φ(xk), xk+1 − xk〉+ LΦ 2 ‖xk+1 − xk‖2\n≤Φ(xk)− β〈∇Φ(xk), ∇̂Φ(xk)−∇Φ(xk)〉 − β‖∇Φ(xk)‖2 + β2LΦ‖∇Φ(xk)‖2\n+ β2LΦ‖∇Φ(xk)− ∇̂Φ(xk)‖2 ≤Φ(xk)− (β\n2 − β2LΦ\n) ‖∇Φ(xk)‖2 + (β 2 + β2LΦ ) ‖∇Φ(xk)− ∇̂Φ(xk)‖2, (28)\nwhich, combined with Lemma 5, yields Φ(xk+1) ≤Φ(xk)− (β\n2 − β2LΦ\n) ‖∇Φ(xk)‖2 + (β 2 + β2LΦ ) δT,N (1 2 )k ∆0\n+ (β\n2 + β2LΦ\n) δT,NΩ k−1∑ j=0 (1 2 )k−1−j ‖∇Φ(xj)‖2. (29)\nTelescoping eq. (29) over k from 0 to K − 1 yields(β 2 − β2LΦ )K−1∑ k=0 ‖∇Φ(xk)‖2 ≤ Φ(x0)− inf x Φ(x) + (β 2 + β2LΦ ) δT,N∆0\n+ (β\n2 + β2LΦ\n) δT,NΩ K−1∑ k=1 k−1∑ j=0 (1 2 )k−1−j ‖∇Φ(xj)‖2,\nwhich, using the fact that ∑K−1 k=1 ∑k−1 j=0 ( 1 2 )k−1−j ‖∇Φ(xj)‖2 ≤ ∑K−1 k=0 1 2k ∑K−1 k=0 ‖∇Φ(xk)‖\n2 ≤ 2 ∑K−1 k=0 ‖∇Φ(xk)‖\n2, yields(β 2 − β2LΦ − ( βΩ + 2Ωβ2LΦ ) δT,N )K−1∑ k=0 ‖∇Φ(xk)‖2\n≤ Φ(x0)− inf x\nΦ(x) + (β\n2 + β2LΦ\n) δT,N∆0. (30)\nChoose N and T such that ( Ω + 2ΩβLΦ ) δT,N ≤ 1\n4 , δT,N ≤ 1. (31)\nNote that based on the definition of δT,N in eq. (18), it suffices to choose T ≥ Θ(κ) andN ≥ Θ( √ κ) to satisfy eq. (31). Then, substituting eq. (31) into eq. (30) yields(β 4 − β2LΦ )K−1∑ k=0 ‖∇Φ(xk)‖2 ≤ Φ(x0)− inf x Φ(x) + (β 2 + β2LΦ ) ∆0, (32)\nwhich, in conjunction with β ≤ 18LΦ , yields\n1\nK K−1∑ k=0 ‖∇Φ(xk)‖2 ≤ 64LΦ(Φ(x0)− infx Φ(x)) + 5∆0 K . (33)\nIn order to achieve an -accurate stationary point, we obtain from eq. (33) that AID-BiO requires at most the total number K = O(κ3 −1) of outer iterations. Then, based on eq. (3), we have the following complexity results.\n• Gradient complexity: Gc(f, ) = 2K = O(κ3 −1),Gc(g, ) = KT = O ( κ4 −1 ) .\n• Jacobian- and Hessian-vector product complexities: JV(g, ) = K = O ( κ3 −1 ) ,HV(g, ) = KN = O ( κ3.5 −1 ) .\nThen, the proof is complete." }, { "heading": "F CONVERGENCE PROOFS FOR ITD-BIO IN SECTION 4.1", "text": "We first characterize an important estimation property of the outer-loop gradient estimator ∂f(xk,y T k ) ∂xk in ITD-BiO for approximating the true gradient∇Φ(xk) based on Proposition 2. Lemma 6. Suppose Assumptions 1, 2 and 3 hold. Choose α ≤ 1L . 
Then, we have∥∥∥∂f(xk, yTk )\n∂xk −∇Φ(xk) ∥∥∥ ≤(L(L+ µ)(1− αµ)T2 µ + 2M (τµ+ Lρ) µ2 (1− αµ) T−1 2 ) ‖y0k − y∗(xk)‖\n+ LM(1− αµ)T\nµ . Lemma 6 shows that the gradient estimation error ∥∥∂f(xk,yTk )\n∂xk − ∇Φ(xk) ∥∥ decays exponentially w.r.t. the number T of the inner-loop steps. We note that Grazzi et al. (2020) proved a similar result via a fixed point based approach. As a comparison, our proof of Lemma 6 directly characterizes the rate of the sequence ( ∂ytk ∂xk , t = 0, ..., T ) converging to ∂y ∗(xk) ∂xk\nvia the differentiation over all corresponding points along the inner-loop GD path as well as the optimality of the point y∗(xk).\nProof of Lemma 6. Using∇Φ(xk) = ∇xf(xk, y∗(xk)) + ∂y ∗(xk) ∂xk ∇yf(xk, y∗(xk)) and eq. (16) , and using the triangle inequality, we have∥∥∥∂f(xk, yTk )\n∂xk −∇Φ(xk) ∥∥∥ =‖∇xf(xk, yTk )−∇xf(xk, y∗(xk))‖+ ∥∥∥∥∂yTk∂xk − ∂y ∗(xk) ∂xk\n∥∥∥∥ ‖∇yf(xk, yTk )‖ + ∥∥∥∂y∗(xk)\n∂xk ∥∥∥∥∥∇yf(xk, yTk )−∇yf(xk, y∗(xk))∥∥ (i)\n≤L‖yTk − y∗(xk)‖+M ∥∥∥∥∂yTk∂xk − ∂y ∗(xk) ∂xk ∥∥∥∥+ L∥∥∥∂y∗(xk)∂xk ∥∥∥‖yTk − y∗(xk)‖, (34)\nwhere (i) follows from Assumption 2. Our next step is to upper-bound ∥∥∥∂yTk∂xk − ∂y∗(xk)∂xk ∥∥∥ in eq. (34).\nBased on the updates ytk = y t−1 k −α∇yg(xk, y t−1 k ) for t = 1, ..., T in ITD-BiO and using the chain rule, we have ∂ytk ∂xk = ∂yt−1k ∂xk − α ( ∇x∇yg(xk, yt−1k ) + ∂yt−1k ∂xk ∇2yg(xk, yt−1k ) ) . (35) Based on the optimality of y∗(xk), we have ∇yg(xk, y∗(xk)) = 0, which, in conjunction with the implicit differentiation theorem, yields\n∇x∇yg(xk, y∗(xk)) + ∂y∗(xk)\n∂xk ∇2yg(xk, y∗(xk)) = 0. (36)\nSubstituting eq. (36) into eq. (35) yields ∂ytk ∂xk − ∂y ∗(xk) ∂xk\n= ∂yt−1k ∂xk − ∂y ∗(xk) ∂xk − α\n( ∇x∇yg(xk, yt−1k ) +\n∂yt−1k ∂xk\n∇2yg(xk, yt−1k ) )\n+ α ( ∇x∇yg(xk, y∗(xk)) + ∂y∗(xk)\n∂xk ∇2yg(xk, y∗(xk)) ) = ∂yt−1k ∂xk − ∂y ∗(xk) ∂xk − α ( ∇x∇yg(xk, yt−1k )−∇x∇yg(xk, y ∗(xk)) )\n− α ( ∂yt−1k ∂xk − ∂y ∗(xk) ∂xk ) ∇2yg(xk, yt−1k ) + α ∂y∗(xk)\n∂xk\n( ∇2yg(xk, y∗(xk))−∇2yg(xk, yt−1k ) ) . (37)\nCombining eq. (36) and Assumption 2 yields∥∥∥∥∂y∗(xk)∂xk ∥∥∥∥ = ∥∥∥∇x∇yg(xk, y∗(xk)) [∇2yg(xk, y∗(xk))]−1∥∥∥ ≤ Lµ . (38)\nThen, combining eq. (37) and eq. (38) yields∥∥∥ ∂ytk ∂xk − ∂y ∗(xk) ∂xk ∥∥∥ (i)≤∥∥∥I − α∇2yg(xk, yt−1k )∥∥∥∥∥∥∂yt−1k∂xk − ∂y ∗(xk) ∂xk ∥∥∥ + α ( τ + Lρ\nµ\n) ‖yt−1k − y ∗(xk)‖\n(ii) ≤ (1− αµ) ∥∥∥∂yt−1k ∂xk − ∂y ∗(xk) ∂xk ∥∥∥+ α(τ + Lρ µ ) ‖yt−1k − y ∗(xk)‖, (39)\nwhere (i) follows from Assumption 3 and (ii) follows from the strong-convexity of g(x, ·). Based on the strong-convexity of the lower-level function g(x, ·), we have\n‖yt−1k − y ∗(xk)‖ ≤ (1− αµ) t−1 2 ‖y0k − y∗(xk)‖. (40)\nSubstituting eq. (40) into eq. (39) and telecopting eq. (39) over t from 1 to T , we have∥∥∥∂yTk ∂xk − ∂y ∗(xk) ∂xk ∥∥∥ ≤(1− αµ)T∥∥∥ ∂y0k ∂xk − ∂y ∗(xk) ∂xk ∥∥∥ + α ( τ + Lρ\nµ ) T−1∑ t=0 (1− αµ)T−1−t(1− αµ) t2 ‖y0k − y∗(xk)‖\n=(1− αµ)T ∥∥∥ ∂y0k ∂xk − ∂y ∗(xk) ∂xk ∥∥∥+ 2 (τµ+ Lρ) µ2 (1− αµ) T−1 2 ‖y0k − y∗(xk)‖\n≤L(1− αµ) T\nµ +\n2 (τµ+ Lρ)\nµ2 (1− αµ)\nT−1 2 ‖y0k − y∗(xk)‖, (41)\nwhere the last inequality follows from ∂y 0 k\n∂xk = 0 and eq. (38). Then, combining eq. (34), eq. (38),\neq. (40) and eq. (41) completes the proof." }, { "heading": "F.1 PROOF OF THEOREM 2", "text": "Based on the characterization on the estimation error of the gradient estimate ∂f(xk,y T k )\n∂xk in Lemma 6,\nwe now prove Theorem 2.\nRecall the notation that ∇̂Φ(xk) = ∂f(xk,y T k )\n∂xk . Using an approach similar to eq. 
(28), we have Φ(xk+1) ≤Φ(xk)− (β\n2 − β2LΦ\n) ‖∇Φ(xk)‖2 + (β 2 + β2LΦ ) ‖∇Φ(xk)− ∇̂Φ(xk)‖2, (42)\nwhich, in conjunction with Lemma 6 and use ‖y0k − y∗(xk)‖2 ≤ ∆, yields Φ(xk+1) ≤Φ(xk)− (β\n2 − β2LΦ\n) ‖∇Φ(xk)‖2\n+ 3∆ (β\n2 + β2LΦ )(L2(L+ µ)2 µ2 (1− αµ)T + 4M 2 (τµ+ Lρ) 2 µ4 (1− αµ)T−1 ) + 3 (β\n2 + β2LΦ )L2M2(1− αµ)2T µ2 . (43)\nTelescoping eq. (43) over k from 0 to K − 1 yields\n1\nK K−1∑ k=0 (1 2 − βLΦ ) ‖∇Φ(xk)‖2 ≤ Φ(x0)− infx Φ(x) βK + 3 (1 2 + βLΦ )L2M2(1− αµ)2T µ2\n+3∆ (1\n2 + βLΦ )(L2(L+ µ)2 µ2 (1− αµ)T + 4M 2 (τµ+ Lρ) 2 µ4 (1− αµ)T−1 ) . (44)\nSubstuting β = 14LΦ and T = log ( max { 3LM µ , 9∆L 2(1+ Lµ ) 2, 36∆M 2(τµ+Lρ)2 (1−αµ)µ4 } 9 2 ) / log 11−αµ = Θ(κ log 1 ) in eq. (44) yields\n1\nK K−1∑ k=0 ‖∇Φ(xk)‖2 ≤ 16LΦ(Φ(x0)− infx Φ(x)) K + 2 3 . (45)\nIn order to achieve an -accurate stationary point, we obtain from eq. (45) that ITD-BiO requires at most the total number K = O(κ3 −1) of outer iterations. Then, based on the gradient form given by Proposition 2, we have the following complexity results.\n• Gradient complexity: Gc(f, ) = 2K = O(κ3 −1),Gc(g, ) = KT = O ( κ4 −1 log 1 ) .\n• Jacobian- and Hessian-vector product complexities:\nJV(g, ) = KT = O ( κ4 −1 log 1 ) ,HV(g, ) = KT = O ( κ4 −1 log 1 ) .\nThen, the proof is complete." }, { "heading": "G PROOFS OF MAIN RESULTS FOR STOCHASTIC CASE IN SECTION 4.2", "text": "In this section, we provide proofs for the convergence and complexity results of the proposed algorithm for the stochastic case." }, { "heading": "G.1 PROOF OF PROPOSITION 3", "text": "Based on the definition of vQ in eq. (5) and conditioning on xk, yTk , we have\nEvQ =Eη Q−1∑ q=−1 Q∏ j=Q−q (I − η∇2yG(xk, yTk ;Bj))∇yF (xk, yTk ;DF ),\n= η Q∑ q=0 (I − η∇2yg(xk, yTk ))q∇yf(xk, yTk )\n= η ∞∑ q=0 (I − η∇2yg(xk, yTk ))q∇yf(xk, yTk )− η ∞∑ q=Q+1 (I − η∇2yg(xk, yTk ))q∇yf(xk, yTk )\n= η(η∇2yg(xk, yTk ))−1∇yf(xk, yTk )− η ∞∑\nq=Q+1\n(I − η∇2yg(xk, yTk ))q∇yf(xk, yTk ),\nwhich, in conjunction with the strong-convexity of function g(x, ·), yields\n∥∥EvQ − [∇2yg(xk, yTk )]−1∇yf(xk, yTk )∥∥ ≤ η ∞∑ q=Q+1 (1− ηµ)qM ≤ (1− ηµ) Q+1M µ . (46)\nThis finishes the proof for the estimation bias. We next prove the variance bound. Note that\nE ∥∥∥∥η Q−1∑\nq=−1 Q∏ j=Q−q (I − η∇2yG(xk, yTk ;Bj))∇yF (xk, yTk ;DF )− (∇2yg(xk, yTk ))−1∇yf(xk, yTk ) ∥∥∥∥2\n(i) ≤2E ∥∥∥∥ η Q−1∑\nq=−1 Q∏ j=Q−q (I − η∇2yG(xk, yTk ;Bj))− (∇2yg(xk, yTk ))−1 ∥∥∥∥2M2 + 2M2µ2Df\n≤4E ∥∥∥∥η Q−1∑\nq=−1 Q∏ j=Q−q (I − η∇2yG(xk, yTk ;Bj))− η Q∑ q=0 (I − η∇2yg(xk, yTk ))q ∥∥∥∥2M2\n+ 4E ∥∥∥∥η Q∏\nq=0\n(I − η∇2yg(xk, yTk ))q)− (∇2yg(xk, yTk ))−1 ∥∥∥∥2M2 + 2M2µ2Df\n(ii) ≤ 4η2E ∥∥∥∥ Q∑ q=0 Q∏ j=Q+1−q (I − η∇2yG(xk, yTk ;Bj))− Q∑ q=0 (I − η∇2yg(xk, yTk ))q ∥∥∥∥2M2 + 4(1− ηµ)2Q+2M2µ2 + 2M2µ2Df\n(iii) ≤ 4η2M2QE Q∑ q=0 ∥∥∥∥ Q∏ j=Q+1−q (I − η∇2yG(xk, yTk ;Bj))− (I − η∇2yg(xk, yTk ))q ∥∥∥∥2︸ ︷︷ ︸\nMq\n+ 4(1− ηµ)2Q+2M2\nµ2 +\n2M2 µ2Df (47)\nwhere (i) follows from Lemma 1, (ii) follows from eq. (46), and (iii) follows from the CauchySchwarz inequality.\nOur next step is to upper-bound Mq in eq. (47). For simplicity, we define a general quantity Mi for by replacing q in Mq with i. 
Then, we have\nEMi =E ∥∥∥∥(I − η∇2yg(xk, yTk )) Q∏\nj=Q+2−i (I − η∇2yG(xk, yTk ;Bj))− (I − η∇2yg(xk, yTk ))i\n∥∥∥∥2\n+ E ∥∥∥∥η(∇2yg(xk, yTk )−∇2yG(xk, yTk ;BQ+1−i)) Q∏\nj=Q+2−i (I − η∇2yG(xk, yTk ;Bj))\n∥∥∥∥2\n+ 2E 〈 (I − η∇2yg(xk, yTk )) Q∏\nj=Q+2−i (I − η∇2yG(xk, yTk ;Bj))− (I − η∇2yg(xk, yTk ))i,\nη(∇2yg(xk, yTk )−∇2yG(xk, yTk ;BQ+1−i)) Q∏\nj=Q+2−i (I − η∇2yG(xk, yTk ;Bj)) 〉 (i) =E ∥∥∥∥(I − η∇2yg(xk, yTk )) Q∏\nj=Q+2−i (I − η∇2yG(xk, yTk ;Bj))− (I − η∇2yg(xk, yTk ))i\n∥∥∥∥2\n+ E ∥∥∥∥η(∇2yg(xk, yTk )−∇2yG(xk, yTk ;BQ+1−i)) Q∏\nj=Q+2−i (I − η∇2yG(xk, yTk ;Bj)) ∥∥∥∥2 (ii)\n≤ (1− ηµ)2EMi−1 + η2(1− ηµ)2i−2E‖∇2yg(xk, yTk )−∇2yG(xk, yTk ;BQ+1−i)‖2\n(iii) ≤ (1− ηµ)2EMi−1 + η2(1− ηµ)2i−2 L2\n|BQ+1−i| , (48)\nwhere (i) follows from that fact that EBQ+1−i∇2yG(xk, yTk ;BQ+1−i) = ∇2yg(xk, yTk ), (ii) follows from the strong-convexity of function G(x, ·; ξ), and (iii) follows from Lemma 1.\nThen, telescoping eq. (48) over i from 2 to q yields\nEMq ≤ L2η2(1− ηµ)2q−2 q∑ j=1\n1\n|BQ+1−j | ,\nwhich, in conjunction with the choice of |BQ+1−j | = BQ(1− ηµ)j−1 for j = 1, ..., Q, yields\nEMq ≤η2(1− ηµ)2q−2 q∑ j=1 L2 BQ ( 1 1− ηµ )j−1\n= η2L2\nBQ (1− ηµ)2q−2\n( 1\n1−ηµ\n)q−1 − 1\n1 1−ηµ − 1\n≤ ηL 2 (1− ηµ)µ 1 BQ (1− ηµ)q. (49)\nSubstituting eq. (49) into eq. (47) yields\nE ∥∥∥∥η Q−1∑\nq=−1 Q∏ j=Q−q (I − η∇2yG(xk, yTk ;Bj))∇yF (xk, yTk ;DF )− (∇2yg(xk, yTk ))−1∇yf(xk, yTk ) ∥∥∥∥2\n≤4η2M2Q Q∑ q=0\nηL2\n(1− ηµ)µ 1 BQ (1− ηµ)q + 4(1− ηµ) 2Q+2M2 µ2 + 2M2 µ2Df\n≤4η 2L2M2 µ2 1 B + 4(1− ηµ)2Q+2M2 µ2 + 2M2 µ2Df , (50)\nwhere the last inequality follows from the fact that ∑S q=0 x q ≤ 11−x . Then, the proof is complete." }, { "heading": "G.2 AUXILIARY LEMMAS FOR PROVING THEOREM 3", "text": "We first use the following lemma to characterize the first-moment error of the gradient estimate ∇̂Φ(xk), whose form is given by eq. (6). Lemma 7. Suppose Assumptions 1, 2 and 3 hold. Then, conditioning on xk and yTk , we have∥∥E∇̂Φ(xk)−∇Φ(xk)∥∥2 ≤ 2(L+ L2\nµ + Mτ µ + LMρ µ2\n)2 ‖yTk − y∗(xk)‖2 + 2L2M2(1− ηµ)2Q\nµ2 .\nProof of Lemma 7. To simplify notations, we define\n∇̃ΦT (xk) = ∇xf(xk, yTk )−∇x∇yg(xk, yTk ) [ ∇2yg(xk, yTk ) ]−1∇yf(xk, yTk ). (51) Based on the definition of ∇̂Φ(xk) in eq. (6) and conditioning on xk and yTk , we have\nE∇̂Φ(xk) =∇xf(xk, yTk )−∇x∇yg(xk, yTk )EvQ =∇̃ΦT (xk)−∇x∇yg(xk, yTk )(EvQ − [∇2yg(xk, yTk )]−1∇yf(xk, yTk )),\nwhich further implies that∥∥E∇̂Φ(xk)−∇Φ(xk)∥∥2 ≤2E‖∇̃ΦT (xk)−∇Φ(xk)‖2 + 2‖E∇̂Φ(xk)− ∇̃ΦT (xk)‖2\n≤2E‖∇̃ΦT (xk)−∇Φ(xk)‖2 + 2L2‖EvQ − [∇2yg(xk, yTk )]−1∇yf(xk, yTk )‖2\n≤2E‖∇̃ΦT (xk)−∇Φ(xk)‖2 + 2L2M2(1− ηµ)2Q+2\nµ2 , (52)\nwhere the last inequality follows from Proposition 3. Our next step is to upper-bound the first term at the right hand side of eq. (52). Using the fact that ∥∥∇2yg(x, y)−1∥∥ ≤ 1µ and based on Assumptions 2\nand 3, we have\n‖∇̃ΦT (xk)−∇Φ(xk)‖ ≤‖∇xf(xk, yTk )−∇xf(xk, y∗(xk))‖\n+ L2\nµ ‖yTk − y∗(xk)‖+\nMτ\nµ ‖yTk − y∗(xk)‖\n+ LM ∥∥∇2yg(xk, yTk )−1 −∇2yg(xk, y∗(xk))−1∥∥\n≤ ( L+ L2\nµ + Mτ µ + LMρ µ2\n) ‖yTk − y∗(xk)‖, (53)\nwhere the last inequality follows from the inequality ‖M−11 −M −1 2 ‖ ≤ ‖M −1 1 M −1 2 ‖‖M1 −M2‖ for any two matrices M1 and M2. Combining eq. (52) and eq. (53) yields∥∥E∇̂Φ(xk)−∇Φ(xk)∥∥2 ≤ 2(L+ L2 µ + Mτ µ + LMρ µ2 )2 ‖yTk − y∗(xk)‖2 + 2L2M2(1− ηµ)2Q µ2 ,\nwhich completes the proof.\nThen, we use the following lemma to characterize the variance of the estimator ∇̂Φ(xk). Lemma 8. Suppose Assumptions 1, 2 and 3 hold. 
Then, we have\nE‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤ 4L2M2 µ2Dg + (8L2 µ2 + 2 )M2 Df + 16η2L4M2 µ2 1 B + 16L2M2(1− ηµ)2Q µ2\n+ ( L+ L2\nµ + Mτ µ + LMρ µ2\n)2 E‖yTk − y∗(xk)‖2.\nProof of Lemma 8. Based on the definitions of ∇Φ(xk) and ∇̃ΦT (xk) in eq. (4) and eq. (51) and conditioning on xk and yTk , we have\nE‖∇̂Φ(xk)−∇Φ(xk)‖2\n(i) =E‖∇̂Φ(xk)− ∇̃ΦT (xk)‖2 + ‖∇̃ΦT (xk)−∇Φ(xk)‖2\n(ii) ≤ 2E ∥∥∇x∇yG(xk, yTk ;DG)vQ −∇x∇yg(xk, yTk )[∇2yg(xk, yTk )]−1∇yf(xk, yTk )∥∥2 + 2M2Df\n+ ( L+ L2\nµ + Mτ µ + LMρ µ2\n)2 ‖yTk − y∗(xk)‖2\n(iii) ≤ 4M 2\nµ2 E‖∇x∇yG(xk, yTk ;DG)−∇x∇yg(xk, yTk )‖2 + 4L2E‖vQ −\n[ ∇2yg(xk, yTk ) ]−1∇yf(xk, yTk )‖2 + ( L+ L2\nµ + Mτ µ + LMρ µ2\n)2 ‖yTk − y∗(xk)‖2 + 2M2\nDf , (54)\nwhere (i) follows from the fact that EDG,DH ,DF ∇̂Φ(xk) = ∇̃ΦT (xk), (ii) follows from Lemma 1 and eq. (53), and (iii) follows from the Young’s inequality and Assumption 2.\nUsing Lemma 1 and Proposition 3 in eq. (54), yields\nE‖∇̂Φ(xk)−∇Φ(xk)‖2 ≤ 4L2M2\nµ2Dg +\n16η2L4M2\nµ2 1 B +\n16(1− ηµ)2QL2M2\nµ2 +\n8L2M2\nµ2Df + ( L+ L2\nµ + Mτ µ + LMρ µ2\n)2 ‖yTk − y∗(xk)‖2 + 2M2\nDf , (55)\nwhich, unconditioning on xk and yTk , completes the proof.\nIt can be seen from Lemmas 7 and 8 that the upper bounds on both the estimation error and bias depend on the tracking error ‖yTk − y∗(xk)‖2. The following lemma provides an upper bound on such tracking error ‖yTk − y∗(xk)‖2.\nLemma 9. Suppose Assumptions 1, 2 and 4 hold. Define constants λ = (L− µ L+ µ )2T( 2 + 4β2L2 µ2 ( L+ L2 µ + Mτ µ + LMρ µ2 )2) ∆ = 4L2M2\nµ2Dg + (8L2 µ2 + 2 )M2 Df + 16η2L4M2 µ2 1 B + 16L2M2(1− ηµ)2Q µ2\nω = 4β2L2\nµ2 (L− µ L+ µ )2T . (56)\nChoose T such that λ < 1 and set inner-loop stepsize α = 2L+µ . Then, we have\nE‖yTk − y∗(xk)‖2 ≤λk ((\nL− µ L+ µ\n)2T ‖y0 − y∗(x0)‖2 + σ2\nLµS\n) + ω\nk−1∑ j=0 λk−1−jE‖∇Φ(xj)‖2 + ω∆ + σ 2 LµS 1− λ .\nProof of Lemma 9. First note that for an integer t ≤ T ‖yt+1k − y ∗(xk)‖2\n=‖yt+1k − y t k‖2 + 2〈yt+1k − y t k, y t k − y∗(xk)〉+ ‖ytk − y∗(xk)‖2\n=α2‖∇yG(xk, ytk;St)‖2 − 2α〈∇yG(xk, ytk;St), ytk − y∗(xk)〉+ ‖ytk − y∗(xk)‖2. (57) Conditioning on ytk and taking expectation in eq. (57), we have\nE‖yt+1k − y ∗(xk)‖2\n(i) ≤α2 (σ2 S + ‖∇yg(xk, ytk)‖2 ) − 2α〈∇yg(xk, ytk), ytk − y∗(xk)〉\n+ ‖ytk − y∗(xk)‖2\n(ii) ≤ α 2σ2\nS + α2‖∇yg(xk, ytk)‖2 − 2α\n( Lµ\nL+ µ ‖ytk − y∗(xk)‖2 +\n‖∇yg(xk, ytk)‖2\nL+ µ ) + ‖ytk − y∗(xk)‖2\n= α2σ2\nS − α\n( 2\nL+ µ − α\n) ‖∇yg(xk, ytk)‖2 + ( 1− 2αLµ\nL+ µ\n) ‖ytk − y∗(xk)‖2 (58)\nwhere (i) follows from the third item in Assumption 2, (ii) follows from the strong-convexity and smoothness of the function g. Since α = 2L+µ , we obtain from eq. (58) that\nE‖yt+1k − y ∗(xk)‖2 ≤ ( L− µ L+ µ )2 ‖ytk − y∗(xk)‖2 +\n4σ2\n(L+ µ)2S . (59)\nUnconditioning on ytk in eq. (59) and telescoping eq. (59) over t from 0 to T − 1 yields E‖yTk − y∗(xk)‖2 ≤ ( L− µ L+ µ )2T E‖y0k − y∗(xk)‖2 + σ2 LµS\n= ( L− µ L+ µ )2T E‖yTk−1 − y∗(xk)‖2 + σ2 LµS , (60)\nwhere the last inequality follows from Algorithm 2 that y0k = y T k−1. Note that\nE‖yTk−1 − y∗(xk)‖2 ≤2E‖yTk−1 − y∗(xk−1)‖2 + 2E‖y∗(xk−1)− y∗(xk)‖2\n(i) ≤2E‖yTk−1 − y∗(xk−1)‖2 + 2L2\nµ2 E‖xk − xk−1‖2\n≤2E‖yTk−1 − y∗(xk−1)‖2 + 2β2L2\nµ2 E‖∇̂Φ(xk−1)‖2\n≤2E‖yTk−1 − y∗(xk−1)‖2 + 4β2L2\nµ2 E‖∇Φ(xk−1)‖2\n+ 4β2L2\nµ2 E‖∇̂Φ(xk−1)−∇Φ(xk−1)‖2, (61)\nwhere (i) follows from Lemma 2.2 in Ghadimi & Wang (2018). Using Lemma 8 in eq. (61) yields\nE‖yTk−1 − y∗(xk)‖2 ≤ ( 2 + 4β2L2\nµ2\n( L+ L2\nµ + Mτ µ + LMρ µ2\n)2) E‖yTk−1 − y∗(xk−1)‖2 + 4β2L2\nµ2 E‖∇Φ(xk−1)‖2\n+ 4β2L2\nµ2\n( 4L2M2 µ2Dg + (8L2 µ2 + 2 )M2 Df + 16η2L4M2 µ2 1 B + 16L2M2(1− ηµ)2Q µ2 ) . (62)\nCombining eq. (60) and eq. 
(62) yields\nE‖yTk − y∗(xk)‖2 ≤ (L− µ L+ µ )2T( 2 + 4β2L2 µ2 ( L+ L2 µ + Mτ µ + LMρ µ2 )2) E‖yTk−1 − y∗(xk−1)‖2\n+ (L− µ L+ µ )2T 4β2L2 µ2 ( 4L2M2 µ2Dg + (8L2 µ2 + 2 )M2 Df + 16η2L4M2 µ2 1 B + 16L2M2(1− ηµ)2Q µ2 ) + 4β2L2\nµ2 (L− µ L+ µ )2T E‖∇Φ(xk−1)‖2 + σ2 LµS . (63)\nBased on the definitions of λ, ω,∆ in eq. (56), we obtain from eq. (63) that\nE‖yTk − y∗(xk)‖2 ≤λE‖yTk−1 − y∗(xk−1)‖2 + ω∆ + σ2\nLµS + ωE‖∇Φ(xk−1)‖2. (64)\nTelescoping eq. (64) over k yields\nE‖yTk − y∗(xk)‖2\n≤λkE‖yT0 − y∗(x0)‖2 + ω k−1∑ j=0 λk−1−jE‖∇Φ(xj)‖2 + ω∆ + σ 2 LµS 1− λ\n≤λk ((\nL− µ L+ µ\n)2T ‖y0 − y∗(x0)‖2 + σ2\nLµS\n) + ω\nk−1∑ j=0 λk−1−jE‖∇Φ(xj)‖2 + ω∆ + σ 2 LµS 1− λ ,\nwhich completes the proof." }, { "heading": "G.3 PROOF OF THEOREM 3", "text": "In this subsection, we provide the proof for Theorem 3, based on the supporting lemmas we develop in Appendix G.2.\nBased on the smoothness of the function Φ(x) in Lemma 2, we have\nΦ(xk+1) ≤Φ(xk) + 〈∇Φ(xk), xk+1 − xk〉+ LΦ 2 ‖xk+1 − xk‖2\n≤Φ(xk)− β〈∇Φ(xk), ∇̂Φ(xk)〉+ β2LΦ‖∇Φ(xk)‖2 + β2LΦ‖∇Φ(xk)− ∇̂Φ(xk)‖2. For simplicity, let Ek = E(· |xk, yTk ). Note that we choose β = 14Lφ . Then, taking expectation over the above inequality, we have\nEΦ(xk+1) ≤EΦ(xk)− βE〈∇Φ(xk),Ek∇̂Φ(xk)〉+ β2LΦE‖∇Φ(xk)‖2\n+ β2LΦE‖∇Φ(xk)− ∇̂Φ(xk)‖2\n(i) ≤EΦ(xk) + β\n2 E‖Ek∇̂Φ(xk)−∇Φ(xk)‖2 −\nβ 4 E‖∇Φ(xk)‖2 + β 4 E‖∇Φ(xk)− ∇̂Φ(xk)‖2\n(ii)\n≤ EΦ(xk)− β\n4 E‖∇Φ(xk)‖2 +\nβL2M2(1− ηµ)2Q\nµ2\n+ β\n4\n( 4L2M2 µ2Dg + (8L2 µ2 + 2 )M2 Df + 16η2L4M2 µ2 1 B + 16L2M2(1− ηµ)2Q µ2 ) + 5β\n4\n( L+ L2\nµ + Mτ µ + LMρ µ2\n)2 E‖yTk − y∗(xk)‖2 (65)\nwhere (i) follows from Cauchy-Schwarz inequality, and (ii) follows from Lemma 7 and Lemma 8. To simplify notations, Let\nν = 5\n4\n( L+ L2\nµ + Mτ µ + LMρ µ2\n)2 . (66)\nThen, applying Lemma 9 in eq. (65) and using the definitions of ω,∆, λ in eq. (56), we have\nEΦ(xk+1) ≤EΦ(xk)− β\n4 E‖∇Φ(xk)‖2 +\nβL2M2(1− ηµ)2Q\nµ2\n+ β\n4 ∆ + βνλk (( L− µ L+ µ )2T ‖y0 − y∗(x0)‖2 + σ2 LµS )\n+ βνω k−1∑ j=0 λk−1−jE‖∇Φ(xj)‖2 + βν(ω∆ + σ 2 LµS ) 1− λ ,\nTelescoping the above inequality over k from 0 to K − 1 yields\nEΦ(xK) ≤ Φ(x0)− β\n4 K−1∑ k=0 E‖∇Φ(xk)‖2 + βνω K−1∑ k=1 k−1∑ j=0 λk−1−jE‖∇Φ(xj)‖2\n+ Kβ∆ 4 + ((L− µ L+ µ )2T ‖y0 − y∗(x0)‖2 + σ2 LµS ) βν 1− λ\n+ KβL2M2(1− ηµ)2Q µ2 + Kβν(ω∆ + σ\n2\nLµS )\n1− λ ,\nwhich, using the fact that\nK−1∑ k=1 k−1∑ j=0 λk−1−jE‖∇Φ(xj)‖2 ≤ ( K−1∑ k=0 λk ) K−1∑ k=0 E‖∇Φ(xk)‖2 < 1 1− λ K−1∑ k=0 E‖∇Φ(xk)‖2,\nyields\n(1 4 − νω 1− λ ) 1 K K−1∑ k=0 E‖∇Φ(xk)‖2\n≤Φ(x0)− infx Φ(x) βK\n+ ν ( (L−µL+µ ) 2T ‖y0 − y∗(x0)‖2 + σ 2 LµS ) K(1− λ) + ∆ 4 + L2M2(1− ηµ)2Q µ2\n+ ν(ω∆ + σ\n2\nLµS )\n1− λ . (67)\nSince β = 14LΦ and T ≥ log ( 12+ 48β 2L2 µ2 (L+L 2 µ + Mτ µ + LMρ µ2 )2 ) 2 log(L+µL−µ ) , we have λ ≤ 16 , and hence eq. (67) is further simplified to\n(1 4 −6 5 νω ) 1 K K−1∑ k=0 E‖∇Φ(xk)‖2\n≤Φ(x0)− infx Φ(x) βK\n+ 2ν ( (L−µL+µ ) 2T ‖y0 − y∗(x0)‖2 + σ 2 LµS ) K + ∆ 4 + L2M2(1− ηµ)2Q µ2\n+ 2ν ( ω∆ + σ2\nLµS\n) . (68)\nBy the definitions of ω in eq. (56) and ν in eq. (66) and T ≥ log ( 12+ 48β 2L2 µ2 (L+L 2 µ + Mτ µ + LMρ µ2 )2 )\n2 log(L+µL−µ ) , we\nhave\nνω = 5β2L2\nµ2 (L− µ L+ µ )2T( L+ L2 µ + Mτ µ + LMρ µ2 )2 < 5β2L2 µ2 ( L+ L 2 µ + Mτ µ + LMρ µ2 )2 12 + 48β 2L2\nµ2 (L+ L2 µ + Mτ µ + LMρ µ2 )\n2 ≤ 5 48 . (69)\nIn addition, since T > log (√ β ( L+L 2 µ + Mτ µ + LMρ µ2 )) log(L+µL−µ ) , we have\nν (L− µ L+ µ )2T = 5 4 (L− µ L+ µ )2T( L+ L2 µ + Mτ µ + LMρ µ2 )2 < 5 4β . (70)\nSubstituting eq. (69) and eq. (70) in eq. 
(68) yields\n1\nK K−1∑ k=0 E‖∇Φ(xk)‖2 ≤ 8(Φ(x0)− infx Φ(x) + 52‖y0 − y ∗(x0)‖2) βK + ( 1 + 1 K )16νσ2 LµS\n+ 11\n3 ∆ +\n8L2M2\nµ2 (1− ηµ)2Q, (71)\nwhich, in conjunction with eq. (56) and eq. (66), yields eq. (10) in Theorem 3.\nThen, based on eq. (10), in order to achieve an -accurate stationary point, i.e., E‖∇Φ(x̄)‖2 ≤ with x̄ chosen from x0, ..., xK−1 uniformly at random, it suffices to choose\nK = 32LΦ(Φ(x0)− infx Φ(x) + 52‖y0 − y ∗(x0)‖2) = O (κ3 ) , T = Θ(κ)\nQ =κ log κ2 , S = O\n(κ5 ) , Dg = O ( κ2 ) , Df = O ( κ2 ) , B = O ( κ2 ) .\nNote that the above choices of Q and B satisfy the condition that B ≥ 1 Q(1−ηµ)Q−1 required in Proposition 3.\nThen, the gradient complexity is given by Gc(F, ) = KDf = O(κ5 −2),Gc(G, ) = KTS = O(κ9 −2). In addition, the Jacobian- and Hessian-vector product complexities are given by JV(G, ) = KDg = O(κ5 −2) and\nHV(G, ) = K Q∑ j=1 BQ(1− ηµ)j−1 = KBQ ηµ ≤ O ( κ6 2 log κ2 ) .\nThen, the proof is complete." }, { "heading": "H PROOF OF THEOREM 4 ON META-LEARNING", "text": "To prove Theorem 4, we first establish the following lemma to characterize the estimation variance EB ∥∥∂LD(φk,w̃Tk ;B)\n∂φk − ∂LD(φk,w̃ T k ) ∂φk ∥∥2, where w̃Tk is the output of T inner-loop steps of gradient descent at the kth outer loop.\nLemma 10. Suppose Assumptions 2 and 3 are satisfied and suppose each task loss LSi(φ,wi) is µ-strongly-convex w.r.t. wi. Then, we have\nEB ∥∥∥∂LD(φk, w̃Tk ;B)\n∂φk − ∂LD(φk, w̃ T k ) ∂φk ∥∥∥2 ≤ (1 + L µ )2M2 |B| .\nProof. Let w̃Tk = (w T 1,k, ..., w T m,k) be the output of T inner-loop steps of gradient descent at the k th outer loop. Using Proposition 2, we have, for task Ti,∥∥∥∂LDi(φk, wTi,k) ∂φk\n∥∥∥ ≤‖∇φLDi(φk, wTi,k)‖ + ∥∥∥α T−1∑\nt=0 ∇φ∇wiLSi(φk, wti,k) T−1∏ j=t+1 (I − α∇2wiLSi(φk, w j i,k))∇wiLDi(φk, w T i.k) ∥∥∥\n(i) ≤M + αLM T−1∑ t=0 (1− αµ)T−t−1 = M + LM µ , (72)\nwhere (i) follows from assumptions 2 and strong-convexity of LSi(φ, ·). Then, using the definition of LD(φ, w̃;B) = 1|B| ∑ i∈B LDi(φ,wi), we have\nEB ∥∥∥∂LD(φk, w̃Tk ;B)\n∂φk − ∂LD(φk, w̃ T k ) ∂φk ∥∥∥2 = 1|B|Ei∥∥∥∂LDi(φk, wTi,k)∂φk − ∂LD(φk, w̃ T k ) ∂φk ∥∥∥2 (i)\n≤ 1 |B|\nEi ∥∥∥∂LDi(φk, wTi,k)\n∂φk ∥∥∥2 (ii)\n≤ ( 1 + L\nµ )2M2 |B| . (73)\nwhere (i) follows from Ei ∂LDi (φk,w T i,k)\n∂φk = ∂LD(φk,w̃Tk ) ∂φk\nand (ii) follows from eq. (72). Then, the proof is complete.\nProof of Theorem 4. Recall Φ(φ) := LD(φ, w̃∗(φ)) be the objective function, and let ∇̂Φ(φk) = ∂LD(φk,w̃Tk )\n∂φk . Using an approach similar to eq. (42), we have\nΦ(φk+1) ≤Φ(φk) + 〈∇Φ(φk), φk+1 − φk〉+ LΦ 2 ‖φk+1 − φk‖2\n≤Φ(φk)− β 〈 ∇Φ(φk),\n∂LD(φk, w̃Tk ;B) ∂φk\n〉 + β2LΦ\n2 ∥∥∥∂LD(φk, w̃Tk ;B) ∂φk ∥∥∥2. (74) Taking the expectation of eq. (74) yields\nEΦ(φk+1) (i) ≤EΦ(φk)− βE 〈 ∇Φ(φk), ∇̂Φ(φk) 〉 + β2LΦ\n2 E‖∇̂Φ(φk)‖2\n+ β2LΦ 2 E ∥∥∥∇̂Φ(φk)− ∂LD(φk, w̃Tk ;B)\n∂φk ∥∥∥2 (ii)\n≤ EΦ(φk)− βE 〈 ∇Φ(φk), ∇̂Φ(φk) 〉 + β2LΦ\n2 E‖∇̂Φ(φk)‖2 + β2LΦ 2\n( 1 + L\nµ )2M2 |B|\n≤EΦ(φk)− (β\n2 − β2LΦ\n) E‖∇Φ(φk)‖2 + (β 2 + β2LΦ ) E‖∇Φ(φk)− ∇̂Φ(φk)‖2\n+ β2LΦ\n2\n( 1 + L\nµ )2M2 |B| , (75)\nwhere (i) follows from EBLD(φk, w̃Tk ;B) = LD(φk, w̃Tk ) and (ii) follows from Lemma 10. Using Lemma 6 in eq. (75) and rearranging the terms, we have\n1\nK K−1∑ k=0 (1 2 − βLΦ ) E‖∇Φ(φk)‖2\n≤Φ(φ0)− infφ Φ(φ) βK\n+ 3 (1\n2 + βLΦ )L2M2(1− αµ)2T µ2 + βLΦ 2 ( 1 + L µ )2M2 |B|\n+ 3∆ (1\n2 + βLΦ )(L2(L+ µ)2 µ2 (1− αµ)T + 4M 2 (τµ+ Lρ) 2 µ4 (1− αµ)T−1 ) ,\nwhere ∆ = maxk ‖w̃0k− w̃∗(φk)‖2 <∞. Choose the same parameters β, T as in Theorem 2. 
Then, we have\n1\nK K−1∑ k=0 E‖∇Φ(φk)‖2 ≤ 16LΦ(Φ(φ0)− infφ Φ(φ)) K + 2 3 + ( 1 + L µ )2 M2 8|B| .\nThen, the proof is complete." } ]
2020
null
SP:2c5537aa2c173582e193c903eb85dd63aabc7366
[ "In this paper, the authors propose a novel manifold learning method, via adding a locally isometric smoothness constraint, which preserves topological and geometric properties of data manifold. Empirical results demonstrate the efficacy of their approach. The authors also show that the reliability of tangent space approximated by its local neighborhood is essential to the success of manifold learning approaches." ]
It is widely believed that a dimension reduction (DR) process drops information inevitably in most practical scenarios. Thus, most methods try to preserve some essential information of data after DR, as well as manifold based DR methods. However, they usually fail to yield satisfying results, especially in high-dimensional cases. In the context of manifold learning, we think that a good low-dimensional representation should preserve the topological and geometric properties of data manifolds, which involve exactly the entire information of the data manifolds. In this paper, we define the problem of information-lossless NLDR with the manifold assumption and propose a novel two-stage NLDR method, called invertible manifold learning (inv-ML), to tackle this problem. A local isometry constraint of preserving local geometry is applied under this assumption in inv-ML. Firstly, a homeomorphic sparse coordinate transformation is learned to find the lowdimensional representation without losing topological information. Secondly, a linear compression is performed on the learned sparse coding, with the trade-off between the target dimension and the incurred information loss. Experiments are conducted on seven datasets with a neural network implementation of inv-ML, called i-ML-Enc, which demonstrate that the proposed inv-ML not only achieves invertible NLDR in comparison with typical existing methods but also reveals the characteristics of the learned manifolds through linear interpolation in latent space. Moreover, we find that the reliability of tangent space approximated by the local neighborhood on real-world datasets is key to the success of manifold based DR algorithms. The code will be made available soon.
[]
[ { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical japanese literature", "venue": "arXiv preprint arXiv:1812.01718,", "year": 2018 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: non-linear independent components estimation", "venue": "In 3rd International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real NVP", "venue": "In 5th International Conference on Learning Representations (ICLR). OpenReview.net,", "year": 2017 }, { "authors": [ "David L. Donoho" ], "title": "Compressed sensing", "venue": "IEEE Trans. Inf. Theory,", "year": 2006 }, { "authors": [ "Simon Hawe", "Matthias Seibert", "Martin Kleinsteuber" ], "title": "Separable dictionary learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2013 }, { "authors": [ "Matthias Hein", "Jean-Yves Audibert" ], "title": "Intrinsic dimensionality estimation of submanifolds", "venue": "in r. pp", "year": 2005 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Jörn-Henrik Jacobsen", "Arnold W.M. Smeulders", "Edouard Oyallon" ], "title": "i-revnet: Deep invertible networks", "venue": "In Proceedings of 6th International Conference on Learning Representations (ICLR). OpenReview.net,", "year": 2018 }, { "authors": [ "William B. Johnson", "JohnsonJoram Lindenstrauss" ], "title": "Extensions of lipschitz maps into a hilbert space", "venue": "Contemporary Mathematics, 26:189–206,", "year": 1984 }, { "authors": [ "Samuel Kaski", "Jarkko Venna" ], "title": "Visualizing gene interaction graphs with local multidimensional scaling", "venue": "In European Symposium on Artificial Neural Networks,", "year": 2006 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of 3rd International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In 2nd International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Stan Z. 
Li", "Zelin Zhang", "Lirong Wu" ], "title": "Markov-lipschitz deep learning", "venue": "arXiv preprint arXiv:2006.08256,", "year": 2020 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "James McQueen", "Marina Meila", "Dominique Joncas" ], "title": "Nearly isometric embedding by relaxation", "venue": "In Proceedings of the 29th Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Jiaqiang Mei" ], "title": "Introduction to Manifold and Geometry", "venue": "Beijing Science Press,", "year": 2013 }, { "authors": [ "Michael Moor", "Max Horn", "Bastian Rieck", "Karsten Borgwardt" ], "title": "Topological autoencoders", "venue": "In Proceedings of the 37th International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research,", "year": 2020 }, { "authors": [ "John Nash" ], "title": "The imbedding problem for riemannian manifolds", "venue": "Annals of Mathematics,", "year": 1956 }, { "authors": [ "Sameer Nene", "Shree Nayar", "H. Murase" ], "title": "Columbia object image library (coil-100)", "venue": "Technical report,", "year": 1996 }, { "authors": [ "Sameer A. Nene", "Shree K. Nayar", "Hiroshi Murase" ], "title": "Columbia object image library (coil-20)", "venue": "Technical report, Columbia University,", "year": 1996 }, { "authors": [ "The-Gia Leo Nguyen", "Lynton Ardizzone", "Ullrich Köthe" ], "title": "Training invertible neural networks as autoencoders", "venue": "In Proceedings of 41st German Conference of Pattern Recognition (GCPR),", "year": 2019 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg", "Jake Vanderplas", "Alexandre Passos", "David Cournapeau", "Matthieu Brucher", "Matthieu Perrot", "Édouard Duchesnay" ], "title": "Scikit-learn: Machine learning in python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Sam T Roweis", "Lawrence K Saul" ], "title": "Nonlinear dimensionality reduction by locally linear embedding", "venue": "science, 290:2323–2326,", "year": 2000 }, { "authors": [ "Harish Seshadri", "Kaushal Verma" ], "title": "The embedding theorems of whitney and nash", "venue": "Resonance, pp", "year": 2016 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Xian Wei", "Martin Kleinsteuber", "Hao Shen" ], "title": "Invertible nonlinear dimensionality reduction via joint dictionary learning", "venue": "Notes in Computer Science,", "year": 2015 }, { "authors": [ "Xian Wei", "Hao Shen", "Yuanxiang Li", "Xuan Tang", "Fengxiang Wang", "Martin Kleinsteuber", "Yi Lu Murphey" ], "title": "Reconstructible nonlinear dimensionality reduction via joint dictionary learning", "venue": "IEEE Trans. Neural Networks Learn. 
Syst.,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Zhenyue Zhang", "Jing Wang" ], "title": "Mlle: Modified locally linear embedding using multiple weights", "venue": "In Advances in Neural Information Processing systems,", "year": 2007 }, { "authors": [ "Zhenyue Zhang", "Hongyuan Zha" ], "title": "Principal manifolds and nonlinear dimensionality reduction via tangent space alignment", "venue": "SIAM journal on scientific computing,", "year": 2004 }, { "authors": [ "Li" ], "title": "2020) are compared for reconstructible manifold learning. We report the L-th layer of i-ML-Enc (the first stage) for the NLDR quality and the (L− 1)-th layer", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "In real-world scenarios, it is widely believed that the loss of data information is inevitable after dimension reduction (DR), though the goal of DR is to preserve as much information as possible in the low-dimensional space. In the case of linear DR, compressed sensing (Donoho, 2006) breaks this common sense with practical sparse conditions of the given data. In the case of nonlinear dimension reduction (NLDR), however, it has not been clearly discussed, e.g. what is the structure within data and how to maintain these structures after NLDR? From the perspective of manifold learning, the manifold assumption is widely adopted, but classical manifold based DR methods usually fail to yield good results in the many practical case. Therefore, what is the gap between theoretical and real-world applications of manifold based DR? Here, we give the first detailed discussion of these two problems in the context of manifold learning. We think that a good low-dimensional representation should preserve the topology and geometry of input data, which require the NLDR transformation to be homeomorphic. Thus, we propose an invertible NLDR process, called inv-ML, combining sparse coordinate transformation and local isometry constraint which preserve the property of topology and geometry, to explain the information-lossless NLDR in manifold learning theoretically. We instantiate inv-ML as a neural network called i-ML-Enc via a cascade of equidimensional layers and a linear transform layer. Sufficient experiments are conduct to validate invertible NLDR abilities of i-ML-Enc and analyze learned representations to reveal inherent difficulties of classical manifold learning.\nTopology preserving dimension reduction. To start, we first make out the theoretical definition of information-lossless DR on a manifold. The topological property is what is invariant under a homeomorphism, and thus what we want to achieve is to construct a homeomorphism for dimension\nreduction, removing the redundant dimensions while preserving invariant topology. To be more specific, f :Md0 → Rm is a smooth mapping of a differential manifold into another, and if f is a homeomorphism ofMd0 intoMd1 = f(Md0) ⊂ Rm, we call f is an embedding ofMd0 into Rm. Assume that the data set X = {xj |1 ≤ j ≤ n} sampled from the compact manifoldMd1 ⊂ Rm which we call the data manifold and is homeomorphic toMd0. For the sample points we get are represented in the coordinate after inclusion mapping i1, we can only regard them as points from Euclidean space Rm without any prior knowledge, and learn to approximate the data manifold in the latent space Z. According to the Whitney Embedding Theorem (Seshadri & Verma, 2016),Md0 is can be embedded smoothly into R2d by a homeomorphism g. Rather than to find the f−1 :Md1 →Md0, our goal is to seek a smooth map h :Md1 → Rs ⊂ R2d, where h = g ◦ f−1 is a homeomorphism ofMd1 intoMd2 = h(Md1) and d ≤ s ≤ 2d m, and thus the dim(h(X )) = s, which achieves the DR while preserving the topology. Owing to the homeomorphism h we seek as a DR mapping, the data manifoldMd1 is reconstructible viaMd1 = h−1 ◦ h(Md1), by which we mean h a topology preserving DR as well as information-lossless DR.\nGeometry preserving dimension reduction. While the topology of the data manifoldMd1 can be preserved by the homeomorphism h discussed above, it may distort the geometry. 
To preserve the local geometry of the data manifold, the map should be isometric on the tangent space TpMd1 for every p ∈ Md1, indicating that dMd1 (u, v) = dMd2 (h(u), h(v)), ∀u, v ∈ TpM d 1. By Nash’s Embedding Theorem (Nash, 1956), any smooth manifold of class Ck with k ≥ 3 and dimension d can be embedded isometrically in the Euclidean space Rs with s polynomial in d.\nNoise perturbation. In the real-world scenarios, sample points are not lied on the ideal manifold strictly due to the limitation of sampling, e.g. non-uniform sampling noises. When the DR method is very robust to the noise, it is reasonable to ignore the effects of the noise and learn the representation Z from the given data. Therefore, the intrinsic dimension of X is approximate to d, resulting in the lowest isometric embedding dimension is larger than s." }, { "heading": "2 RELATED WORK", "text": "Manifold learning. Most classical linear or nonlinear DR methods aim to preserve the geometric properties of manifolds. The Isomap (Tenenbaum et al., 2000) based methods aim to preserve the global metric between every pair of sample points. For example, McQueen et al. (2016) can be regarded as such methods based on the push-forward Riemannian metric. For the other aspect, LLE (Roweis & Saul, 2000) based methods try to preserve local geometry after DR, whose derivatives like LTSA (Zhang & Zha, 2004), MLLE (Zhang & Wang, 2007), etc. have been widely used but usually fail in the high-dimensional case. Recently, based on local properties of manifolds, MLDL (Li et al., 2020) was proposed as a robust NLDR method implemented by a neural network, preserving the local geometry but abandoning the retention of topology. In contrast, our method takes the preservation of both geometry and topology into consideration, trying to maintain these properties of manifolds even in cases of excessive dimension reduction when the target dimension s′ is smaller than s.\nInvertible model. From AutoEncoder (AE) (Hinton & Salakhutdinov, 2006), the fundamental neural network based model, having achieved DR and cut information loss by minimizing the reconstruction loss, some AE based generative models like VAE (Kingma & Welling, 2014) and manifold-based NLDR models like TopoAE (Moor et al., 2020) has emerged. These methods cannot avoid information loss after NLDR, and thus, some invertible models consist of a series of\nequidimensional layers have been proposed, some of which aim to generate samples by density estimation through layers (Dinh et al., 2015) (Dinh et al., 2017) (Behrmann et al., 2019), and the other of which are established for other targets, e.g. validating the mutual information bottleneck (Jacobsen et al., 2018). Different from methods mentioned above, our proposed i-ML-Enc is a neural network based encoder, with NLDR as well as maintaining structures of raw data points based on manifold assumption via a series of equidimensional layers.\nCompressed sensing. The JohnsonLindenstrauss Theorem (Johnson & Lindenstrauss, 1984) provides the lower bound of target dimension for linear DR with the pairwise distance loss. Given a small constant ∈ (0, 1) and n samples {xi}ni=1 in Rm, a linear projection W : Rm → Rs, s > O( logm 2 ) can be found, which embeds samples into a s-dimensional space with (1 + ) distortion of any sample pairs (xi,xj). It adopts a prior assumption that the given samples in high-dimensional space have a relevant low-dimensional structure constraint which can be maintained by keeping the pairwise distance. 
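As a quick numerical sanity check of this statement (where, in the classical form of the lemma, the target dimension scales like log n/ε²), one can draw a Gaussian random projection and measure the worst pairwise distortion directly. The NumPy sketch below uses our own function names and the standard 1/√s scaling of a Gaussian JL construction; it is an illustration, not a construction prescribed by the cited theorem.

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix via the Gram-matrix identity
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>.
    sq = (X * X).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))

def jl_distortion(X, s, seed=0):
    # Gaussian random projection R^m -> R^s with 1/sqrt(s) scaling; returns
    # the extreme ratios of projected to original pairwise distances.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], s)) / np.sqrt(s)
    iu = np.triu_indices(X.shape[0], k=1)
    ratio = pairwise_dists(X @ W)[iu] / pairwise_dists(X)[iu]
    return ratio.min(), ratio.max()
```

For example, projecting n = 100 points from m = 10,000 down to s = 1,000 dimensions typically keeps every pairwise distance within roughly ten percent of its original value.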
Further, compressed sensing (CS) provides strict sparse conditions of linear DR with great probability to recover the compressed signal, which usually cooperates with sparse dictionary learning (Hawe et al., 2013). The core of CS is Restricted Isometry Property (RIP) condition, which reads\n(1− )‖x1 − x2‖2 ≤ ‖W (x1 − x2)‖2 ≤ (1 + )‖x1 − x2‖2, (1) where ∈ (0, 1) is a rather small constant and W is a linear measurement of signal x1 and x2. Given a signal x ∈ Rm with s-sparse representation α = Φx on an m-dimensional orthogonal basis Φ, α can be recovered from the linear measurement y = Wα with great probability by the sparse optimization if Wm×s satisfies the RIP condition: arg minã ||α̃||0, s.t. y = Wα̃. The linear measurement is rewritten as y = ΨΦα = Ψx where Ψ is a low-dimensional orthogonal basis and Φ can be found by the nonlinear dictionary learning. Some reconstructible CS-based NLDR methods (Wei et al., 2015) (Wei et al., 2019) are proposed, which are achieved by preserving local geometry on AE-based networks, but usually with unsatisfying embedding qualities." }, { "heading": "3 PROPOSED METHOD", "text": "We will specifically discuss the proposed two-stage invertible NLDR process inv-ML as the first stage in Sec 3.1, in which a s-dimensional representation is learned by a homeomorphism transformation while keeping all topological and geometric structure of the data manifold; then give applicable conditions in real-world scenarios as the second stage in Sec 3.2, in which the dimension is further compressed to s′. We instantiate the proposed inv-ML as a neural network i-ML-Enc in Sec 3.3." }, { "heading": "3.1 TOPOLOGY AND GEOMETRY PRESERVATION", "text": "Canonical embedding for homeomorphism. To seek the smooth homeomorphism h, we turn to the theorem of local canonical form of immersion (Mei, 2013). Let f :M→N an immersion, and for any p ∈ M, there exist local coordinate systems (U, φ) around p and (V, ψ) around f(p) such that ψ ◦ f ◦ φ−1 : φ(U)→ ψ(V ) is a canonical embedding, which reads\nψ ◦ f ◦ φ−1(x1, x2, · · · , xd) = (x1, x2, · · · , xd, 0, 0, · · · , 0). (2)\nIn our case, letM = Md2, and N = Md1, any point z = (z1, z2, · · · , zs) ∈ Md1 ⊂ Rs can be mapped to a point in Rm by the canonical embedding\nψ ◦ h−1(z1, z2, · · · , zs) = (z1, z2, · · · , zs, 0, 0, · · · , 0). (3)\nFor the point z is regarded as a point in Rs, φ = I is an identity mapping, and for h = g ◦ f−1 is a homeomorphism, h−1 is continuous. The Eq. (3) can be written as\n(z1, z2, · · · , zs) = h ◦ ψ−1(z1, z2, · · · , zs, 0, 0, · · · , 0) = h(x1, x2, · · · , xm). (4)\nTherefore, to reduce dim(X ) = m to s, we can decompose h into ψ and h ◦ ψ−1, by firstly finding a homeomorphic coordinate transformation ψ to map x = (x1, x2, · · · , xm) into ψ(x) = (z1, z2, · · · , zs, 0, 0, · · · , 0), which is called a sparse coordinate transformation, and h ◦ ψ−1 can be easily obtained by Eq. (3). We denote h ◦ ψ−1 by h0 and call it a sparse compression. The theorem holds for any manifold, while in our case, we aims to find the mapping of X ⊂ Rm into Rs, so the local coordinate systems can be extended to the whole space of Rm.\nLocal isometry constraint. The prior local isometry constraint is applied under the manifold assumption, which aims to preserve distances (or some other metrics) locally so that dMd1 (u, v) = dMd2 (h(u), h(v)), ∀u, v ∈ TpM d 1." }, { "heading": "3.2 LINEAR COMPRESSION", "text": "With the former discussed method, manifold-based NLDR can be achieved with topology and geometry preserved, i.e. 
s-sparse representation in Rm. However, the target dimension s′ may be even less than s, further compression can be performed through the linear compression h′0 : Rm → Rs ′ instead of sparse compression, where h′0(z) = Wm×s′z, with minor information loss. In general, the sparse compression is a particular case of linear compression with h0(z) = h′0(z) = Λz, where Λ = (δi,j)m×s and δi,j is the Kronecker delta. We discusses the information loss caused by a linear compression under different target dimensions s′ as following cases.\nIdeal case. In the case of d ≤ s ≤ s′, based on compressed sensing, we can reconstruct the raw input data after NLDR process without loss of any information by solving the sparse optimization problem mentioned in Sec. 2 when the transformation matrix Wm×s′ has full rank of the column. In the case of d ≤ s′ < s, it is inevitable to drop the topological properties because the two spaces before and after NLDR are not homeomorphic, and it is reduced to local geometry preservation by LIS constraint. However, in the case of s′ ≤ d < s, both topological and geometric information is lost to varying degrees. Therefore, we can only try to retain as much geometric structure as possible.\nPractical case. In real-world scenarios, the target dimension s′ is usually lower than s, even lower than d. Meanwhile, the data sampling rate is quite low, and the clustering effect is extremely significant, indicating that it is possible to approximateM1 by low-dimensional hyperplane in the Euclidean space. In the case of s′ < s, we can retain as the prior Euclidean topological structure as additional topological information of raw data points. It is reduced to replace the global topology with some relative structures between each cluster." }, { "heading": "3.3 NETWORK FOR IMPLEMENTATION", "text": "Based on Sec 3.1 and Sec 3.2, we propose a neural network i-ML-Enc which achieves two-stage NLDR preserving both topology and geometry, as shown in Fig. 2. In this section, we will introduce the function of network structures and loss functions respectively, including the orthogonal loss, padding loss and extra heads for the first stage, and the LIS loss, push-away loss for the second stage.\nCascade of homeomorphisms. Since the sparse coordinate transformation ψ (and its inverse) can be highly nonlinear and complex, we decompose it into a cascade of L−1 isometric homeomorphisms ψ = ψ(L−1) ◦ · · · ◦ψ(2) ◦ψ(1), which can be achieved by L−1 equidimensional network layers. For each ψ(l), it is a sparse coordinate transformation, where ψl(z1,(l), z2,(l), · · · , zsl,(l), 0, · · · , 0) = (z1,(l+1), z2,(l+1), · · · , zsl+1,(l+1), 0, · · · , 0) with sl+1 < sl and sL−1 = s. The layer-wise transformation Z(l+1) = ψ(l)(Z(l)) and its inverse can be written as\nZ(l+1) = σ(WlX (l)), Z(l)\n′\n= W−1l (σ −1(Z(l+1)\n′\n)), (5) in which Wl is the l-th weight matrix of the neural network to be learned, and σ(.) is a nonlinear activation. The bias term is removed here to facilitate its simple inverse structure.\nOrthogonal loss. Each layer-wise transformation is thought to be a homeomorphism between Z(l) and Z(l+1), and we want it to be a nearly isometric. We force each Wl to be an orthogonal matrix, which allows simple calculation of the inverse of Wl. Based on RIP condition, the orthogonal constraint of the weight matrix in the first L− 1 layers can be obtained as\nLorth = L−1∑ l=1 α(l)ρ(WTl Wl − I), (6)\nwhere {α(l)} are the loss weights. 
Notice that ρ(W) = sup_{z∈R^m, z≠0} ‖Wz‖ / ‖z‖ is the spectral norm of W, and the loss term can be written as ρ(W_l^T W_l − I) = sup_{z∈R^m, z≠0} |‖W_l z‖² − ‖z‖²| / ‖z‖², which is equivalent to the RIP condition in Eq. (1).

Padding loss. To force sparsity from the second to the (L−1)-th layer, we add a zero padding loss to each of these layers. For the l-th layer whose target dimension is s_l, we pad the last m − s_l elements of z^{(l+1)} with zeros and penalize these elements with an L1 norm loss:
L_pad = \sum_{l=2}^{L−1} β^{(l)} \sum_{i=s_l}^{m} |z_i^{(l+1)}|, (7)
where {β^{(l)}} are loss weights. The target dimensions s_l can be set heuristically.

Linear transformation head. We use the linear transformation head to achieve the linear compression step of our NLDR process, which is a transformation between the high-dimensional and low-dimensional orthogonal bases. Thus, we apply a row-orthogonality constraint to W_L.

LIS loss. Since the linear DR is applied at the end of the NLDR process, we apply the locally isometric smoothness (LIS) constraint (Li et al., 2020) to preserve the local geometric properties. Taking the LIS loss in the l-th layer as an example:
L_LIS = \sum_{i=1}^{n} \sum_{j∈N_i^k} ‖d_X(x_i, x_j) − d_Z(z_i^{(l)}, z_j^{(l)})‖, (8)
where N_i^k is the set of x_i's k-nearest neighbors in the input space, and d_X and d_Z are the distances in the input and latent spaces, which can be approximated by the Euclidean distance in local open sets.

Push-away loss. In the practical case discussed in Sec 3.2, the latent space of the (L−1)-th layer can be approximated by a hyperplane in Euclidean space, so we introduce a push-away loss to repel the non-adjacent sample points of each x_i within its B-radius neighborhood in the latent space. It deflates the manifold locally when acting together with L_LIS in the linear DR. Similarly, L_push is applied after the linear transformation in the l-th layer:
L_push = − \sum_{i=1}^{n} \sum_{j∉N_i^k} 1_{d_Z(z_i^{(l)}, z_j^{(l)}) < B} \log(1 + d_Z(z_i^{(l)}, z_j^{(l)})), (9)
where 1(·) ∈ {0, 1} is the indicator function for the bound B.

Extra heads. In order to force the first L−1 layers of the network to achieve NLDR gradually, we introduce auxiliary DR branches, called extra heads, at the layers from the second to the (L−1)-th. The structure of each extra head is the same as the linear transformation head, and it is discarded after training. L_extra is written as
L_extra = \sum_{l=1}^{L−1} γ^{(l)} (L_LIS + μ^{(l)} L_push), (10)
where {γ^{(l)}} and {μ^{(l)}} are loss weights which can be set based on {s_l}.

Inverse process. The inverse process is the decoder directly obtained from the first L−1 layers of the encoder via Eq. (5), and it is not involved in the training process. When the target dimension s′ is equal to s, the inverse of the L-th layer can be solved by existing methods such as compressed sensing or eigenvalue decomposition." }, { "heading": "4 EXPERIMENT", "text": "In this section, we first evaluate the proposed invertible NLDR achieved by i-ML-Enc in Sec 4.1, then investigate the properties of data manifolds with i-ML-Enc in Sec 4.2. The properties of i-ML-Enc are further studied in Sec 4.3. We carry out experiments on seven datasets: (i) Swiss roll (Pedregosa et al., 2011), (ii) Spheres (Moor et al., 2020) and Half Spheres, (iii) USPS (Hull, 1994), (iv) MNIST (LeCun et al., 1998), (v) KMNIST (Clanuwat et al., 2018), (vi) FMNIST (Xiao et al., 2017), (vii) COIL-20 (Nene et al., 1996b). The implementation is based on the PyTorch 1.3.0 library running on an NVIDIA V100 GPU.
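To make the LIS and push-away terms of Eqs. (8)–(9) concrete, a minimal sketch follows; the distance-matrix layout, neighbor indices, and the non-adjacent push set are our assumed framing (matching the textual description of repelling non-adjacent points), not the released code:

```python
import torch

def lis_loss(d_x, d_z, knn_idx):
    """Eq. (8): match input distances d_x and latent distances d_z
    over each point's k-NN set. d_x, d_z: [n, n]; knn_idx: [n, k]."""
    rows = torch.arange(knn_idx.shape[0]).unsqueeze(1)
    return (d_x[rows, knn_idx] - d_z[rows, knn_idx]).abs().sum()

def push_away_loss(d_z, knn_idx, bound):
    """Eq. (9): repel non-adjacent points that fall inside the
    B-radius neighborhood in latent space (bound = B)."""
    n = d_z.shape[0]
    adjacent = torch.zeros(n, n, dtype=torch.bool)
    rows = torch.arange(n).unsqueeze(1)
    adjacent[rows, knn_idx] = True
    adjacent.fill_diagonal_(True)
    near = (~adjacent) & (d_z < bound)   # indicator 1_{d_Z < B} on non-neighbors
    return -torch.log1p(d_z[near]).sum()
```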
The following settings of i-ML-Enc are used for all datasets: LeakyReLU with α = 0.1; the Adam optimizer (Kingma & Ba, 2015) with learning rate lr = 0.001 for 8000 epochs; the local neighborhood determined by kNN with k = 15; and an L-layer neural network as shown in Fig. 2." }, { "heading": "4.1 METHODS COMPARISON", "text": "To verify the invertible NLDR ability of i-ML-Enc and analyze the different cases of NLDR, we compare it with several typical methods in NLDR and inverse scenarios on both synthetic (Swiss roll, Spheres and Half Spheres) and real-world datasets (USPS, MNIST, FMNIST and COIL-20). Six methods are compared for manifold learning: MLLE (Zhang & Wang, 2007), t-SNE (Maaten & Hinton, 2008) and ML-Enc (Li et al., 2020) for NLDR, and three AE-based methods, VAE (Kingma & Welling, 2014), TopoAE (Moor et al., 2020) and ML-AE (Li et al., 2020), for reconstructible manifold learning. Three inverse models, INN (Nguyen et al., 2019), i-RevNet (Jacobsen et al., 2018), and i-ResNet (Behrmann et al., 2019), are compared for the bijective property. Among them, i-RevNet and i-ResNet are supervised algorithms while the rest are unsupervised. For a fair comparison, we adopt an 8-layer neural network for all the network-based methods except i-RevNet and i-ResNet. Hyperparameter values of i-ML-Enc and configurations of these datasets, such as the input and target dimensions, are provided in Appendix A.2.

Evaluation metrics. We evaluate an invertible NLDR algorithm from three aspects: (1) Invertible property. Reconstruction MSE (RMSE) and maximum norm error (MNE) measure the difference between the input data and the reconstruction results by norm-based errors. (2) NLDR quality. Trustworthiness (Trust) and Continuity (Cont) (Kaski & Venna, 2006), latent MSE (l-MSE), and the minimum (Kmin) and maximum (Kmax) local Lipschitz constants (Li et al., 2020) are used to evaluate the quality of the low-dimensional representation. (3) Generalization ability of the representation. The mean accuracy (Acc) of linear classification on the representation measures a model's generalization ability to downstream tasks. Their exact definitions and purposes are given in Appendix A.1.

Conclusion. Table 1 compares i-ML-Enc with the related methods on MNIST; more results and detailed analysis on other datasets are given in Appendix A.2. The process of invertible NLDR of i-ML-Enc and the comparison results of typical methods are visualized in Fig. 4. We can conclude: (1) i-ML-Enc achieves invertible NLDR in the first stage with great NLDR and generalization qualities. The representation in the (L−1)-th layer of i-ML-Enc mostly outperforms all compared methods on both invertible and NLDR metrics without losing information of the data manifold, while the other methods drop geometric and topological information to some extent. (2) i-ML-Enc keeps more geometric and topological structure in the second stage in the case of s′ < d ≤ s. Though the representation of the L-th layer of i-ML-Enc achieves only the second-best NLDR metrics, it shows high consistency with the (L−1)-th layer in the visualization results." }, { "heading": "4.2 LATENT SPACE INTERPOLATION", "text": "Since the first stage of i-ML-Enc is nearly a homeomorphism, we carry out linear interpolation experiments on the discrete data points in both the input space and the (L−1)-th layer latent space to analyze the intrinsic continuous manifold, and verify the latent results by the inverse process.
A good low-dimensional representation of the manifold should not only preserve the local properties, but also be flatter and denser than the high-dimensional input, with lower curvature. Thus, we expect the local linear interpolation results in the latent space to be more reliable than those in the input space. The complexity of the data manifolds increases from USPS(256), MNIST(256), MNIST(784), KMNIST(784) to FMNIST(784), which is analyzed in Appendix A.3.1.

K-nearest neighbor interpolation. We first verify the reliability of the low-dimensional representation in a small local system by kNN interpolation. Given a sample x_i, we randomly select x_j in x_i's k-nearest neighborhood in the latent space to form a sample pair (x_i, x_j), perform linear interpolation of the latent representations of the pair, and obtain reconstruction results for evaluation as x̂_{i,j}^t = ψ^{−1}(tψ(x_i) + (1 − t)ψ(x_j)), t ∈ [0, 1]. The experiment is performed on i-ML-Enc with L = 6 and k = 15, training with 8000 samples for USPS and MNIST(256), and 20000 samples for MNIST(784), KMNIST, and FMNIST.

Evaluation. (1) Calculate the MSE loss between the reconstruction results of the latent interpolation x̂_{i,j}^t and the input-space result x_{i,j}^t, which is the corresponding interpolation in the local neighborhood of the input space with x_{i,j}^t = tx_i + (1 − t)x_j. Fig. 5 shows the results for k = 1, 2, ..., 10. (2) Visualize typical results of the input space and the latent space for comparison, as shown in Fig. 6. More results and detailed analysis are given in Appendix A.3.2.

Geodesic interpolation. Building on the kNN interpolation, we further employ a more reasonable method to generate sampling points between distant sample pairs in the latent space. Given a sample pair (x_i, x_j) with k ≥ 45 from different clusters, we select the three intermediate sample pairs (x_i, x_{i1}), (x_{i1}, x_{i2}), (x_{i2}, x_j) with k ≤ 20 along the geodesic path in latent space for piece-wise linear interpolation in both spaces. Visualization results are given in Appendix A.3.2.

Conclusion. Comparing the results of the kNN and geodesic interpolation, we can conclude: (1) Because of the sparsity of the high-dimensional latent space, noise is inevitable in the latent results, indicating the limitation of the linear approximation. Empirically, the reliability of the latent interpolation decreases as the local neighborhood expands on the same dataset. (2) We get worse latent results in the following cases: for similar manifolds, when the sampling rate is lower or the input dimension is higher, as indicated by USPS(256), MNIST(256) and MNIST(784); and for the same sampling rate and input dimension, when the manifold is more complex, as indicated by MNIST(784), KMNIST and FMNIST. These results indicate that the confidence of the tangent space estimated from the local neighborhood decreases on more complex manifolds under sparse sampling. (3) The interpolation between two samples in the latent space is smoother than that in the input space, validating the flatness and density of the lower-dimensional representation learned by i-ML-Enc. Overall, we infer that the unreliable approximation of the local tangent space by the local neighborhood is the basic reason why manifold learning fails in real-world cases, because the geometry should be preserved in the first place. To cope with this common situation, it is necessary to import other prior assumptions or knowledge when the sampling rate of the data manifold is quite low, e.g. the Euclidean space assumption or semantic information from downstream tasks."
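The kNN interpolation procedure above is straightforward to reproduce; the following sketch assumes that the encoder ψ (the first L−1 layers) and its inverse are available as callables `encode`/`decode` operating on single samples — our framing, not the released code:

```python
import torch

@torch.no_grad()
def knn_interpolation_gap(x, encode, decode, knn_idx, ts=(0.25, 0.5, 0.75)):
    """MSE between latent interpolation x_hat^t = decode(t*psi(x_i) + (1-t)*psi(x_j))
    and input-space interpolation x^t = t*x_i + (1-t)*x_j (Sec. 4.2).
    x: [n, m] samples; knn_idx: [n, k] latent-space neighbor indices."""
    z = encode(x)
    gaps = []
    for i in range(x.shape[0]):
        # pick a random neighbor j from x_i's latent k-NN
        j = knn_idx[i, torch.randint(knn_idx.shape[1], (1,))].item()
        for t in ts:
            x_in = t * x[i] + (1 - t) * x[j]           # input-space interpolation
            x_lat = decode(t * z[i] + (1 - t) * z[j])  # latent-space interpolation
            gaps.append(((x_lat - x_in) ** 2).mean())
    return torch.stack(gaps).mean()
```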
}, { "heading": "4.3 ABLATION STUDY", "text": "Analysis on loss terms. We perform an ablation study on MNIST, USPS, KMNIST, FMNIST and COIL-20 to evaluate the effects of the proposed network structure and loss terms in i-ML-Enc for invertible manifold learning. Based on ML-Enc, three proposed parts are added: the extra head (Ex), the orthogonal loss Lorth (Orth), the zero padding loss Lpad (Pad). Besides the previous 8 indicators, we introduce the rank of the output matrix of the layer L− 1 as r(ZL−1), to measure the sparsity of the high-dimensional representation. We conclude that the combination Ex+Orth+Pad is the best to achieve invertible NLDR of s-sparse by a series of equidimensional layers. The detailed analysis of experiment results are given in Appendix A.4.1.\nOrthogonality and sparsity. We further discuss the orthogonality of weight matrices and learned s-sparse representations in the first stage of i-ML-Enc. We find that the first L− 1 layers of i-ML-Enc are nearly strict orthogonal mappings and the output from the L − 1-th layer can be converted to s-dimensional representation without information loss. The detailed analysis are provided in Appendix A.4.2. Thus, we conclude that an invertible NLDR of data manifolds can be learned by i-ML-Enc in the sparse coordinate transformation." }, { "heading": "5 CONCLUSION", "text": "A novel invertible NLDR process inv-ML and a neural network implementation inv-ML-Enc are proposed to tackle two problems of manifold-based DR in practical scenarios, i.e., the condition for information-lossless NLDR and the key issue of manifold learning. Firstly, the sparse coordinate transformation is learned to find a flatter and denser low-dimensional representation with preservation of geometry and topology of data manifolds. Secondly, we discuss the information loss with different target dimensions in linear compression. Experiment results of i-ML-Enc on seven datasets validate its invertibility. Further, the interpolation experiments reveal that finding a reliable tangent space by the local neighborhood on real-world datasets is the inherent defect of manifold based DR methods." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DEFINITIONS OF PERFORMANCE METRICS", "text": "As for NLDR tasks, We adopt the performance metrics used in MLDL (Li et al., 2020) and TopoAE (Moor et al., 2020) to measure topology-based manifold learning, and add a new indicator to evaluate the generalization ability of the latent space. Essentially, the related indicators are defined based on comparisons of the local neighborhood of the input space and the latent representation. As for the invertible property, we adopted the norm-based reconstruction metrics, i.e. the L2 and L∞ norm errors, which are also based on the inputs. The following notations are used in the definitions d(l)i,j is the pairwise distance in space Z(l); N (l)i,k is the set of indices to the k-nearest neighbors (k-NN) of z(l)i in latent space, and Ni,k is the set of indices to the k-NN of xi in input space; r (l) i,j is the closeness rank of z(l)j in the k-NN of z (l) i . The evaluation metrics are defined below:\n(1) RMSE (invertible quality). This indicator is commonly used to measure reconstruction quality. Based on the input x and the reconstruction output x̂, the mean square error (MSE) of the L2 norm is defined as:\nRMSE = ( 1\nN2 N∑ i=1 (xi − zi)2) 1 2 .\n(2) MNE (invertible quality). This indicator is designed to evaluate the bijective property of a L layers neural network model. 
Specifically, for each invertible unit in the network, we calculate the L∞ norm error between the input and the reconstruction output of the corresponding layer, and take the maximum value among all units. If a model is bijective, this indicator reflects the stability of the model:
MNE = \max_{1 \le l \le L−1} ‖z^{(l)} − ẑ^{(l)}‖_∞.
(3) Trust (embedding quality). This indicator measures how well neighbors are preserved between the two spaces, i.e. to what extent the k nearest neighbors of a point are preserved when going from the input space X to the space Z^{(l)}:
Trust = \frac{1}{k_2 − k_1 + 1} \sum_{k=k_1}^{k_2} \Big(1 − \frac{2}{Mk(2M − 3k − 1)} \sum_{i=1}^{M} \sum_{j ∈ N_{i,k}^{(l)},\, j ∉ N_{i,k}} (r_{i,j}^{(l)} − k)\Big),
where k_1 and k_2 are the bounds on the number of nearest neighbors, so the score is averaged over different k-NN sizes.
(4) Cont (embedding quality). This indicator is the asymmetric counterpart of Trust. It checks to what extent neighbors are preserved when going from the latent space Z^{(l)} to the input space X:
Cont = \frac{1}{k_2 − k_1 + 1} \sum_{k=k_1}^{k_2} \Big(1 − \frac{2}{Mk(2M − 3k − 1)} \sum_{i=1}^{M} \sum_{j ∈ N_{i,k},\, j ∉ N_{i,k}^{(l)}} (r_{i,j}^{(l)} − k)\Big).
(5) Kmin and Kmax (embedding quality). These indicators are the minimum and maximum of the local bi-Lipschitz constant of the homeomorphism between the input space and the l-th layer, with respect to the given neighborhood system:
K_min = \min_{1 \le i \le M} \max_{j ∈ N_{i,k}^{(l)}} K_{i,j}, \quad K_max = \max_{1 \le i \le M} \max_{j ∈ N_{i,k}^{(l)}} K_{i,j},
where k is the k-NN size used in defining N_i and
K_{i,j} = \max\big\{ d_{i,j}^{(l)} / d_{i,j}^{(l′)},\; d_{i,j}^{(l′)} / d_{i,j}^{(l)} \big\}.
(6) l-MSE (embedding quality). This indicator evaluates the distance disturbance between the input space and the latent space with an L2 norm-based error:
l-MSE = \big(\frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} ‖d_X(x_i, x_j) − d_Z(h(x_i), h(x_j))‖\big)^{1/2}.
(7) ACC (generalization ability). In general, a good representation should have good generalization ability to downstream tasks. To measure this ability, logistic regression (Pedregosa et al., 2011) is performed on the learned latent representation. We report the mean accuracy on the test set under 10-fold cross-validation." }, { "heading": "A.2 METHOD COMPARISON", "text": "Configurations of datasets. The NLDR performance and its inverse process are verified on both synthetic and real-world datasets. As shown in Table 2, we list the type of each dataset, the number of classes, the input dimension m, the target dimension s′, the intrinsic dimension d (which is only an approximation for the real-world datasets), the number of train and test samples, and the logistic classification performance on the raw input space. Among them, Swiss roll serves as an ideal example of information-lossless NLDR; Spheres, whose target dimension s′ is lower than the intrinsic dimension, serves as an extreme case of NLDR compared to Half-spheres; and four image datasets with increasing difficulty are used to analyze complex situations in real-world scenarios. Additionally, the lower and upper bounds of the intrinsic dimension of the real-world datasets are approximated by the method of Hein & Audibert (2005) and by an AE-based INN (Nguyen et al., 2019), respectively. Specifically, the upper bound can be found by a grid search over different bottleneck sizes of the INN, and we report the bottleneck size of each dataset at which the reconstruction MSE loss is almost unchanged.

Results on toy datasets. Table 3 compares i-ML-Enc with other methods on 9 performance metrics on the Swiss roll and Half-spheres datasets in the case of s = s′.
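Several of these metrics can be reproduced with standard tooling; for instance, Trust is essentially scikit-learn's trustworthiness score averaged over neighborhood sizes. A hedged sketch (the rank convention in scikit-learn may differ slightly from the definition in Appendix A.1):

```python
import numpy as np
from sklearn.manifold import trustworthiness

def mean_trust(x, z, k1=5, k2=10):
    """Trust averaged over neighborhood sizes k1..k2, cf. Appendix A.1.
    x: inputs [n, m]; z: latent codes [n, s]."""
    return float(np.mean([trustworthiness(x, z, n_neighbors=k)
                          for k in range(k1, k2 + 1)]))
```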
Eight methods are compared for manifold learning: Isomap (Tenenbaum et al., 2000), t-SNE (Maaten & Hinton, 2008), RR (McQueen et al., 2016), and ML-Enc (Li et al., 2020) for NLDR, and four AE-based methods, AE (Hinton & Salakhutdinov, 2006), VAE (Kingma & Welling, 2014), TopoAE (Moor et al., 2020), and ML-AE (Li et al., 2020), for reconstructible manifold learning. We report the L-th layer of i-ML-Enc (the second stage) for the NLDR quality and the (L−1)-th layer (the first stage) for the invertible NLDR ability. ML-Enc performs best in Trust, Kmin, Kmax, and l-MSE on Swiss roll, which shows its strong embedding ability. Based on ML-Enc, i-ML-Enc achieves great embedding results in the second stage on Half-spheres, which shows its advantage in preserving topological and geometric structures in the high-dimensional case. Moreover, i-ML-Enc outperforms the other methods in the invertible NLDR property of its first stage.

Results on real-world datasets. Table 4 compares i-ML-Enc with other methods on 9 performance metrics on the USPS, FMNIST and COIL-20 datasets in the case of s > s′. Six methods are compared for manifold learning: Isomap, t-SNE, and ML-Enc for NLDR, and three AE-based methods, AE, ML-AE, and TopoAE, for reconstructible manifold learning. Three inverse models, INN (Nguyen et al., 2019), i-RevNet (Jacobsen et al., 2018), and i-ResNet (Behrmann et al., 2019), are compared for the bijective property. The visualization of the NLDR and inverse processes of i-ML-Enc is shown in Fig. 7, together with the NLDR results of Isomap, t-SNE, and ML-Enc. The target dimension for visualization is s′ = 2, and the high-dimensional latent spaces are visualized by PCA. Compared with the NLDR algorithms, the representation of the L-th layer of i-ML-Enc nearly achieves the best NLDR metrics on FMNIST, and ranks second on USPS and third on COIL-20. The drop in performance between the (L−1)-th and L-th layers of i-ML-Enc is caused by the sub-optimal linear transformation layer, since the representation of the first stage is quite reliable. Compared with the other inverse models, i-ML-Enc performs best on all the NLDR and inverse metrics in the first stage, which indicates that a great low-dimensional representation of data manifolds can be learned by a series of equidimensional layers. However, i-ML-Enc shows a larger MNE on FMNIST and COIL-20 compared with the inverse models, which indicates that i-ML-Enc is less stable when dealing with complex datasets in the first stage. Besides, we visualize the reconstruction samples of six image datasets, including COIL-100 (Nene et al., 1996a), to show the inverse quality of i-ML-Enc in Fig. 8." }, { "heading": "A.3 LATENT SPACE INTERPOLATION", "text": "" }, { "heading": "A.3.1 DATASETS COMPARISON", "text": "Here is a brief introduction to the four interpolation datasets. We analyze the difficulty of each dataset roughly according to dimension, sample size, image entropy, texture, and the performance of classification tasks: (1) Sampling ratio. The input dimension and sample number reflect the sampling ratio. Generally, the sample number has an exponential relationship with the input dimension in the case of sufficient sampling. Thus, the sampling ratio of USPS is higher than that of the others. (2) Image entropy. The Shannon entropy of the histogram measures the information content of an image; it reaches its maximum when the density estimated by the histogram is a uniform distribution.
We report the mean entropy of each dataset. We find that USPS has a richer grayscale range than MNIST(256), while the information content of MNIST(784), KMNIST, and FMNIST shows an increasing trend. (3) Texture. The standard deviation (std) of the histogram reflects the texture information in an image, and we report the mean std of each dataset. Combined with visual inspection, the texture features become progressively rougher from USPS and MNIST to KMNIST, while FMNIST contains complex but regular textures. (4) Classification tasks. We report the mean accuracy of 10-fold cross-validation using a kNN and a logistic classifier (Pedregosa et al., 2011) for each dataset. The credibility of the neighborhood system decreases gradually from USPS, MNIST, KMNIST to FMNIST. Combined with the visualization results of each dataset in Fig. 7, it is obvious that KMNIST has the worst linear separability. Overall, we can roughly order the difficulty of manifold learning on these datasets as: USPS < MNIST(256) < MNIST(784) < KMNIST < FMNIST." }, { "heading": "A.3.2 MORE INTERPOLATION RESULTS", "text": "kNN interpolation. We verify the reliability of the low-dimensional representation by kNN interpolation. Comparing the results for different values of k, as shown in Fig. 9, we conclude: (1) Because the high-dimensional latent space is still quite sparse, there is some noise caused by the linear approximation in the latent results. The MSE loss and the noise of the latent results increase as the local neighborhood expands on the same dataset, reflecting the reliability of the local neighborhood system. (2) For the same sampling rate, the MSE loss and noise of the latent results grow from MNIST(784) and KMNIST to FMNIST, which indicates that the confidence in the local homeomorphism property of the latent space decreases gradually on more difficult manifolds. (3) For similar data manifolds, USPS(256) and MNIST(256) show better latent interpolation results than MNIST(784), which demonstrates that it is harder to preserve the geometric properties in higher input dimensions. (4) Though the latent results introduce some noise, the input-space results show unnatural transitions such as pseudo-contours and overlapping. Thus, the latent-space results are smoother than those of the input space, which validates that the latent space learned by i-ML-Enc is flatter and denser than the input space. In a nutshell, we infer that the difficulty of preserving the geometric properties, due to the approximation of the local tangent space by the local neighborhood, is the key reason why manifold learning fails in real-world cases.

Geodesic interpolation. We further perform latent interpolation along the geodesic path between sample pairs when k is large, to generate reliable intermediate samples. This may reflect the topological structure of the data manifolds when the two samples of a pair lie in different clusters. Comparing the results on MNIST, KMNIST, and FMNIST, as shown in Fig. 10, we conclude: (1) The latent results are more reliable than those in the input space, as they can generate synthetic samples between two different clusters. (2) Across MNIST, KMNIST, and FMNIST, the latent results on the more complex datasets are more ambiguous and noisy, which indicates that it is more difficult to find a low-dimensional representation of more complex data manifolds with all geometric structure preserved."
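The piece-wise scheme above is straightforward to script; the following sketch assumes encoder/decoder callables for the first L−1 layers (our framing, not the released code):

```python
import torch

@torch.no_grad()
def geodesic_interpolation(x_path, encode, decode, steps=8):
    """Piece-wise linear interpolation along latent waypoints, as in A.3.2.
    x_path: waypoint samples [x_i, x_i1, x_i2, x_j] along a geodesic path."""
    z_path = [encode(x) for x in x_path]
    samples = []
    for z_a, z_b in zip(z_path[:-1], z_path[1:]):
        for t in torch.linspace(0.0, 1.0, steps):
            samples.append(decode((1 - t) * z_a + t * z_b))
    return torch.stack(samples)  # reconstructed frames between the waypoints
```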
}, { "heading": "A.4 ABLATION STUDY", "text": "" }, { "heading": "A.4.1 ANALYSIS OF THE LOSS TERMS", "text": "We further conduct ablation study of the extra head (+Ex), the orthogonal loss Lorth (+Orth), and the zero padding loss Lpad (+Pad) on MNIST, USPS, KMNIST, FMNIST and COIL-20. The Table 6 reports ablation results in the 8 indicators and the r(ZL−1). We analyze and conclude: (1) The combination of Ex and Orth nearly achieve the best inverse and DR performance on MNIST, USPS, FMNIST, and COIL-20, which indicates that it is the basic factor for invertible NLDR in the first L−1 layers. (2) When only use Orth, the NLDR in the first L− 1 layer of the network will degenerate into the identity mapping, and DR is achieved with the linear project on layer L. (3) Combined with all three items Ex, Orth and Pad, i-ML-Enc obtains a sparse coordinate representation, but achieves little worse embedding quality on USPS and COIL-20 than using Ex and Orth. (4) Besides the proposed loss items, ML-AE overperforms the other combinations in the Acc metric indicating the reconstruction loss helps improve the generation ability of ML-Enc. Above all, the Ex+Orth+Pad combination, i.e. i-ML-Enc, can achieve the proposed invertible NLDR." }, { "heading": "A.4.2 ORTHOGONALITY AND SPARSITY", "text": "Orthogonal analysis. We first analyze the orthogonality of weight matrices in the first stage by evaluating the orthogonal loss ||WTl Wl − I||. Using the same experimental settings as Sec 4.1, the maximum of non-diagonal elements and the minimum of diagonal elements of each layer are calculated in the first stage of i-ML-Enc on different datasets. We find that the margin of the maximum\nvalue and the minimum value is at least 4 orders of magnitude apart, as shown in Fig. 11. We can conclude that the first L− 1 layers in i-ML-Enc are close to strict orthogonal mappings.\nDiscussion on the s-sparse representation. We first provide a possible way to decompose the learned low-dimensional representation in the L−1-th layer of i-ML-Enc, i.e. decomposing the output matrix by PCA to construct a linear subspace. Taking the 8-th layer of i-ML-Enc on the test set of MNIST which is 784-D as an example, we can construct a 171-D orthogonal base vectors in the linear\nsubspace from the data matrix after dimension reduction and reconstruct to the original space (784-D) without losing information by PCA (Pedregosa et al., 2011) and its inverse transform, as shown in Fig. 11. Compared to the matrix rank 125-D in the 784-D space, the extra 46-D in the subspace can be regarded as the machine error in the process of performing PCA because of the large margin between the first 125 eigenvalues and the rest. We notice that the s-sparse achieved by the first stage of i-ML-Enc is higher than the approximate intrinsic dimension d on each dataset, e.g. 116-sparse on USPS and 125-sparse on MNIST. We found the following reasons: (1) Because the data manifolds are usually quite complex but sampling sparsely, the lowest isometric embedding dimension are between d to 2d according to Nash Embedding Theorem and the hyper-plane hypothesis. The s obtained by i-ML-Enc on each dataset is nearly in the interval of [d, 2d], which is not the true intrinsic dimension of the manifolds. (2) The proposed i-ML-Enc is not optimized enough which serves as a simple network implementation of inv-ML. We need to design a better implementation model if we want to approach the lower embedding dimension with the preservation of both geometry and topology." } ]
2,020
null
SP:26c214e61671b012baa8824a39772738a861e44b
[ "This paper introduces a Transformer-based image recognition model that is fully built on the Transformer layers (multi-head self-attention + point-wise MLP) without any standard convolution layers. Basically, it splits an image into patches and takes as input the set of linear embeddings of the patches and their positions. For classification, a learnable class token is added to the input and a classification head (MLP) is attached to the class token output of the final Transformer layer. Extensive experiments of transfer learning show that when pretrained on a sufficiently large dataset (100~300M), the proposed Vision Transformer can outperform state-of-the-art convolutional networks with less training cost as well as less number of parameters." ]
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.1
[ { "affiliations": [], "name": "Alexey Dosovitskiy" }, { "affiliations": [], "name": "Lucas Beyer" }, { "affiliations": [], "name": "Alexander Kolesnikov" }, { "affiliations": [], "name": "Dirk Weissenborn" }, { "affiliations": [], "name": "Xiaohua Zhai" }, { "affiliations": [], "name": "Thomas Unterthiner" }, { "affiliations": [], "name": "Mostafa Dehghani" }, { "affiliations": [], "name": "Matthias Minderer" }, { "affiliations": [], "name": "Georg Heigold" }, { "affiliations": [], "name": "Sylvain Gelly" }, { "affiliations": [], "name": "Jakob Uszkoreit" }, { "affiliations": [], "name": "Neil Houlsby" } ]
[ { "authors": [ "Samira Abnar", "Willem Zuidema" ], "title": "Quantifying attention flow in transformers", "venue": "In ACL,", "year": 2020 }, { "authors": [ "Alexei Baevski", "Michael Auli" ], "title": "Adaptive input representations for neural language modeling", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "I. Bello", "B. Zoph", "Q. Le", "A. Vaswani", "J. Shlens" ], "title": "Attention augmented convolutional networks", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Lucas Beyer", "Olivier J. Hénaff", "Alexander Kolesnikov", "Xiaohua Zhai", "Aäron van den Oord" ], "title": "Are we done with imagenet? arXiv, 2020", "venue": null, "year": 2020 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2020 }, { "authors": [ "Nicolas Carion", "Francisco Massa", "Gabriel Synnaeve", "Nicolas Usunier", "Alexander Kirillov", "Sergey Zagoruyko" ], "title": "End-to-end object detection with transformers", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Mark Chen", "Alec Radford", "Rewon Child", "Jeff Wu", "Heewoo Jun" ], "title": "Generative pretraining from pixels", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey E. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Yen-Chun Chen", "Linjie Li", "Licheng Yu", "Ahmed El Kholy", "Faisal Ahmed", "Zhe Gan", "Yu Cheng", "Jingjing Liu" ], "title": "UNITER: UNiversal Image-TExt Representation Learning", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": null, "year": 2019 }, { "authors": [ "Jean-Baptiste Cordonnier", "Andreas Loukas", "Martin Jaggi" ], "title": "On the relationship between selfattention and convolutional layers", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L. Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "Josip Djolonga", "Jessica Yung", "Michael Tschannen", "Rob Romijnders", "Lucas Beyer", "Alexander Kolesnikov", "Joan Puigcerver", "Matthias Minderer", "Alexander D’Amour", "Dan Moldovan", "Sylvan Gelly", "Neil Houlsby", "Xiaohua Zhai", "Mario Lucic" ], "title": "On robustness and transferability of convolutional neural networks. 
arXiv, 2020", "venue": null, "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Jonathan Ho", "Nal Kalchbrenner", "Dirk Weissenborn", "Tim Salimans" ], "title": "Axial attention in multidimensional transformers", "venue": null, "year": 2019 }, { "authors": [ "Han Hu", "Jiayuan Gu", "Zheng Zhang", "Jifeng Dai", "Yichen Wei" ], "title": "Relation networks for object detection", "venue": null, "year": 2018 }, { "authors": [ "Han Hu", "Zheng Zhang", "Zhenda Xie", "Stephen Lin" ], "title": "Local relation networks for image recognition", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Zilong Huang", "Xinggang Wang", "Yunchao Wei", "Lichao Huang", "Humphrey Shi", "Wenyu Liu", "Thomas S. Huang" ], "title": "Ccnet: Criss-cross attention for semantic segmentation", "venue": "In ICCV,", "year": 2020 }, { "authors": [ "Olivier J. Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "S.M. Ali Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": null, "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Big transfer (BiT): General visual representation learning", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Y. LeCun", "B. Boser", "J. Denker", "D. Henderson", "R. Howard", "W. Hubbard", "L. 
Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "Dmitry Lepikhin", "HyoukJoong Lee", "Yuanzhong Xu", "Dehao Chen", "Orhan Firat", "Yanping Huang", "Maxim Krikun", "Noam Shazeer", "Zhifeng Chen" ], "title": "Gshard: Scaling giant models with conditional computation and automatic sharding", "venue": null, "year": 2020 }, { "authors": [ "Liunian Harold Li", "Mark Yatskar", "Da Yin", "Cho-Jui Hsieh", "Kai-Wei Chang" ], "title": "VisualBERT: A Simple and Performant Baseline for Vision and Language", "venue": "In Arxiv,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Dirk Weissenborn", "Thomas Unterthiner", "Aravindh Mahendran", "Georg Heigold", "Jakob Uszkoreit", "Alexey Dosovitskiy", "Thomas Kipf" ], "title": "Object-centric learning with slot attention", "venue": null, "year": 2020 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks", "venue": "In NeurIPS", "year": 2019 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": null, "year": 2018 }, { "authors": [ "M. Nilsback", "A. Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "In ICVGIP,", "year": 2008 }, { "authors": [ "Omkar M. Parkhi", "Andrea Vedaldi", "Andrew Zisserman", "C.V. Jawahar" ], "title": "Cats and dogs", "venue": "In CVPR,", "year": 2012 }, { "authors": [ "B.T. Polyak", "A.B. Juditsky" ], "title": "Acceleration of stochastic approximation by averaging", "venue": "SIAM Journal on Control and Optimization,", "year": 1992 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding with unsupervised learning", "venue": "Technical Report,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "Technical Report,", "year": 2019 }, { "authors": [ "Prajit Ramachandran", "Niki Parmar", "Ashish Vaswani", "Irwan Bello", "Anselm Levskaya", "Jon Shlens" ], "title": "Stand-alone self-attention in vision models", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": null, "year": 2019 }, { "authors": [ "Hugo Touvron", "Andrea Vedaldi", "Matthijs Douze", "Herve Jegou" ], "title": "Fixing the train-test resolution discrepancy", "venue": "In NeurIPS", "year": 2019 }, { "authors": [ "Hugo Touvron", "Andrea Vedaldi", "Matthijs Douze", "Herve Jegou" ], "title": "Fixing the train-test resolution discrepancy: Fixefficientnet", "venue": "arXiv preprint arXiv:2003.08237,", "year": 2020 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Marvin Ritter", "Aravindh Mahendran", "Neil Houlsby", "Sylvain Gelly", "Mario Lucic" ], "title": "Self-supervised learning of video-induced visual invariances", 
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "Huiyu Wang", "Yukun Zhu", "Bradley Green", "Hartwig Adam", "Alan Yuille", "Liang-Chieh Chen" ], "title": "Axial-deeplab: Stand-alone axial-attention for panoptic segmentation", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Huiyu Wang", "Yukun Zhu", "Bradley Green", "Hartwig Adam", "Alan Yuille", "Liang-Chieh Chen" ], "title": "Axial-deeplab: Stand-alone axial-attention for panoptic segmentation", "venue": "arXiv preprint arXiv:2003.07853,", "year": 2020 }, { "authors": [ "Qiang Wang", "Bei Li", "Tong Xiao", "Jingbo Zhu", "Changliang Li", "Derek F. Wong", "Lidia S. Chao" ], "title": "Learning deep transformer models for machine translation", "venue": null, "year": 2019 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Dirk Weissenborn", "Oscar Täckström", "Jakob Uszkoreit" ], "title": "Scaling autoregressive video models", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Bichen Wu", "Chenfeng Xu", "Xiaoliang Dai", "Alvin Wan", "Peizhao Zhang", "Masayoshi Tomizuka", "Kurt Keutzer", "Peter Vajda" ], "title": "Visual transformers: Token-based image representation and processing for computer vision", "venue": "arxiv,", "year": 2020 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V. Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": "In ICCV,", "year": 2020 }, { "authors": [ "Xiaohua Zhai", "Joan Puigcerver", "Alexander Kolesnikov", "Pierre Ruyssen", "Carlos Riquelme", "Mario Lucic", "Josip Djolonga", "Andre Susano Pinto", "Maxim Neumann", "Alexey Dosovitskiy" ], "title": "A large-scale study of representation learning with the visual task adaptation benchmark", "venue": "arXiv preprint arXiv:1910.04867,", "year": 2019 }, { "authors": [ "Hengshuang Zhao", "Jiaya Jia", "Vladlen Koltun" ], "title": "Exploring self-attention for image recognition", "venue": "In CVPR,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Self-attention-based architectures, in particular Transformers (Vaswani et al., 2017), have become the model of choice in natural language processing (NLP). The dominant approach is to pre-train on a large text corpus and then fine-tune on a smaller task-specific dataset (Devlin et al., 2019). Thanks to Transformers’ computational efficiency and scalability, it has become possible to train models of unprecedented size, with over 100B parameters (Brown et al., 2020; Lepikhin et al., 2020). With the models and datasets growing, there is still no sign of saturating performance.\nIn computer vision, however, convolutional architectures remain dominant (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016). Inspired by NLP successes, multiple works try combining CNN-like architectures with self-attention (Wang et al., 2018; Carion et al., 2020), some replacing the convolutions entirely (Ramachandran et al., 2019; Wang et al., 2020a). The latter models, while theoretically efficient, have not yet been scaled effectively on modern hardware accelerators due to the use of specialized attention patterns. Therefore, in large-scale image recognition, classic ResNetlike architectures are still state of the art (Mahajan et al., 2018; Xie et al., 2020; Kolesnikov et al., 2020).\nInspired by the Transformer scaling successes in NLP, we experiment with applying a standard Transformer directly to images, with the fewest possible modifications. To do so, we split an image into patches and provide the sequence of linear embeddings of these patches as an input to a Transformer. Image patches are treated the same way as tokens (words) in an NLP application. We train the model on image classification in supervised fashion.\nWhen trained on mid-sized datasets such as ImageNet without strong regularization, these models yield modest accuracies of a few percentage points below ResNets of comparable size. This seemingly discouraging outcome may be expected: Transformers lack some of the inductive biases\n1Fine-tuning code and pre-trained models are available at https://github.com/ google-research/vision_transformer\ninherent to CNNs, such as translation equivariance and locality, and therefore do not generalize well when trained on insufficient amounts of data.\nHowever, the picture changes if the models are trained on larger datasets (14M-300M images). We find that large scale training trumps inductive bias. Our Vision Transformer (ViT) attains excellent results when pre-trained at sufficient scale and transferred to tasks with fewer datapoints. When pre-trained on the public ImageNet-21k dataset or the in-house JFT-300M dataset, ViT approaches or beats state of the art on multiple image recognition benchmarks. In particular, the best model reaches the accuracy of 88.55% on ImageNet, 90.72% on ImageNet-ReaL, 94.55% on CIFAR-100, and 77.63% on the VTAB suite of 19 tasks." }, { "heading": "2 RELATED WORK", "text": "Transformers were proposed by Vaswani et al. (2017) for machine translation, and have since become the state of the art method in many NLP tasks. 
Large Transformer-based models are often pre-trained on large corpora and then fine-tuned for the task at hand: BERT (Devlin et al., 2019) uses a denoising self-supervised pre-training task, while the GPT line of work uses language modeling as its pre-training task (Radford et al., 2018; 2019; Brown et al., 2020).
Naive application of self-attention to images would require that each pixel attends to every other pixel. With quadratic cost in the number of pixels, this does not scale to realistic input sizes. Thus, to apply Transformers in the context of image processing, several approximations have been tried in the past. Parmar et al. (2018) applied the self-attention only in local neighborhoods for each query pixel instead of globally. Such local multi-head dot-product self-attention blocks can completely replace convolutions (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020). In a different line of work, Sparse Transformers (Child et al., 2019) employ scalable approximations to global self-attention in order to be applicable to images. An alternative way to scale attention is to apply it in blocks of varying sizes (Weissenborn et al., 2019), in the extreme case only along individual axes (Ho et al., 2019; Wang et al., 2020a). Many of these specialized attention architectures demonstrate promising results on computer vision tasks, but require complex engineering to be implemented efficiently on hardware accelerators.
Most related to ours is the model of Cordonnier et al. (2020), which extracts patches of size 2 × 2 from the input image and applies full self-attention on top. This model is very similar to ViT, but our work goes further to demonstrate that large scale pre-training makes vanilla transformers competitive with (or even better than) state-of-the-art CNNs. Moreover, Cordonnier et al. (2020) use a small patch size of 2 × 2 pixels, which makes the model applicable only to small-resolution images, while we handle medium-resolution images as well.
There has also been a lot of interest in combining convolutional neural networks (CNNs) with forms of self-attention, e.g. by augmenting feature maps for image classification (Bello et al., 2019) or by further processing the output of a CNN using self-attention, e.g. for object detection (Hu et al., 2018; Carion et al., 2020), video processing (Wang et al., 2018; Sun et al., 2019), image classification (Wu et al., 2020), unsupervised object discovery (Locatello et al., 2020), or unified text-vision tasks (Chen et al., 2020c; Lu et al., 2019; Li et al., 2019).
Another recent related model is image GPT (iGPT) (Chen et al., 2020a), which applies Transformers to image pixels after reducing image resolution and color space. The model is trained in an unsupervised fashion as a generative model, and the resulting representation can then be fine-tuned or probed linearly for classification performance, achieving a maximal accuracy of 72% on ImageNet.
Our work adds to the increasing collection of papers that explore image recognition at larger scales than the standard ImageNet dataset. The use of additional data sources allows achieving state-of-the-art results on standard benchmarks (Mahajan et al., 2018; Touvron et al., 2019; Xie et al., 2020). Moreover, Sun et al. (2017) study how CNN performance scales with dataset size, and Kolesnikov et al. (2020); Djolonga et al. (2020) perform an empirical exploration of CNN transfer learning from large scale datasets such as ImageNet-21k and JFT-300M.
We focus on these two latter datasets as well, but train Transformers instead of the ResNet-based models used in prior works." }, { "heading": "3 METHOD", "text": "In model design we follow the original Transformer (Vaswani et al., 2017) as closely as possible. An advantage of this intentionally simple setup is that scalable NLP Transformer architectures – and their efficient implementations – can be used almost out of the box." }, { "heading": "3.1 VISION TRANSFORMER (VIT)", "text": "An overview of the model is depicted in Figure 1. The standard Transformer receives as input a 1D sequence of token embeddings. To handle 2D images, we reshape the image x ∈ R^{H×W×C} into a sequence of flattened 2D patches x_p ∈ R^{N×(P²·C)}, where (H, W) is the resolution of the original image, C is the number of channels, (P, P) is the resolution of each image patch, and N = HW/P² is the resulting number of patches, which also serves as the effective input sequence length for the Transformer. The Transformer uses a constant latent vector size D through all of its layers, so we flatten the patches and map them to D dimensions with a trainable linear projection (Eq. 1). We refer to the output of this projection as the patch embeddings.
Similar to BERT's [class] token, we prepend a learnable embedding to the sequence of embedded patches (z_0^0 = x_class), whose state at the output of the Transformer encoder (z_L^0) serves as the image representation y (Eq. 4). Both during pre-training and fine-tuning, a classification head is attached to z_L^0. The classification head is implemented by an MLP with one hidden layer at pre-training time and by a single linear layer at fine-tuning time.
Position embeddings are added to the patch embeddings to retain positional information. We use standard learnable 1D position embeddings, since we have not observed significant performance gains from using more advanced 2D-aware position embeddings (Appendix D.3). The resulting sequence of embedding vectors serves as input to the encoder.
The Transformer encoder (Vaswani et al., 2017) consists of alternating layers of multiheaded self-attention (MSA, see Appendix A) and MLP blocks (Eq. 2, 3). Layernorm (LN) is applied before every block, and residual connections after every block (Wang et al., 2019; Baevski & Auli, 2019). The MLP contains two layers with a GELU non-linearity.
z_0 = [x_class; x_p^1 E; x_p^2 E; · · · ; x_p^N E] + E_pos,   E ∈ R^{(P²·C)×D}, E_pos ∈ R^{(N+1)×D}   (1)
z′_ℓ = MSA(LN(z_{ℓ−1})) + z_{ℓ−1},   ℓ = 1 . . . L   (2)
z_ℓ = MLP(LN(z′_ℓ)) + z′_ℓ,   ℓ = 1 . . . L   (3)
y = LN(z_L^0)   (4)
Inductive bias. We note that the Vision Transformer has much less image-specific inductive bias than CNNs. In CNNs, locality, two-dimensional neighborhood structure, and translation equivariance are baked into each layer throughout the whole model. In ViT, only the MLP layers are local and translationally equivariant, while the self-attention layers are global. The two-dimensional neighborhood structure is used very sparingly: in the beginning of the model by cutting the image into patches, and at fine-tuning time for adjusting the position embeddings for images of different resolution (as described below). Other than that, the position embeddings at initialization time carry no information about the 2D positions of the patches, and all spatial relations between the patches have to be learned from scratch.
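For concreteness, here is a minimal sketch of Eqs. (1)–(4) built on torch.nn.TransformerEncoder; it is an illustrative reading of the equations rather than the authors' released code (the hyper-parameter defaults are placeholders, C = 3 is assumed, and norm_first requires a recent PyTorch):

```python
import torch
import torch.nn as nn

class ViTSketch(nn.Module):
    """Minimal Vision Transformer following Eqs. (1)-(4)."""
    def __init__(self, img=224, patch=16, dim=768, depth=12, heads=12, classes=1000):
        super().__init__()
        n = (img // patch) ** 2                          # N = HW / P^2
        self.patch = patch
        self.proj = nn.Linear(patch * patch * 3, dim)    # E in Eq. (1)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))        # x_class
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))    # E_pos
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim,
                                           activation="gelu", norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)     # Eqs. (2)-(3)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                                # x: [B, 3, H, W]
        b, p = x.shape[0], self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)            # cut into P x P patches
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, 3 * p * p)
        z = torch.cat([self.cls.expand(b, -1, -1), self.proj(x)], dim=1) + self.pos
        z = self.encoder(z.transpose(0, 1)).transpose(0, 1)  # [S, B, E] convention
        return self.head(self.norm(z[:, 0]))             # y = LN(z_L^0), Eq. (4)
```

Note that torch's encoder layer applies dropout and its own initialization, so the sketch only mirrors the data flow of the equations.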
Hybrid Architecture. As an alternative to raw image patches, the input sequence can be formed from feature maps of a CNN (LeCun et al., 1989). In this hybrid model, the patch embedding projection E (Eq. 1) is applied to patches extracted from a CNN feature map. As a special case, the patches can have spatial size 1x1, which means that the input sequence is obtained by simply flattening the spatial dimensions of the feature map and projecting to the Transformer dimension. The classification input embedding and position embeddings are added as described above." }, { "heading": "3.2 FINE-TUNING AND HIGHER RESOLUTION", "text": "Typically, we pre-train ViT on large datasets, and fine-tune to (smaller) downstream tasks. For this, we remove the pre-trained prediction head and attach a zero-initialized D × K feedforward layer, where K is the number of downstream classes. It is often beneficial to fine-tune at higher resolution than pre-training (Touvron et al., 2019; Kolesnikov et al., 2020). When feeding images of higher resolution, we keep the patch size the same, which results in a larger effective sequence length. The Vision Transformer can handle arbitrary sequence lengths (up to memory constraints); however, the pre-trained position embeddings may no longer be meaningful. We therefore perform 2D interpolation of the pre-trained position embeddings, according to their location in the original image. Note that this resolution adjustment and patch extraction are the only points at which an inductive bias about the 2D structure of the images is manually injected into the Vision Transformer." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate the representation learning capabilities of ResNet, Vision Transformer (ViT), and the hybrid. To understand the data requirements of each model, we pre-train on datasets of varying size and evaluate many benchmark tasks. When considering the computational cost of pre-training the model, ViT performs very favourably, attaining state of the art on most recognition benchmarks at a lower pre-training cost. Lastly, we perform a small experiment using self-supervision, and show that self-supervised ViT holds promise for the future." }, { "heading": "4.1 SETUP", "text": "Datasets. To explore model scalability, we use the ILSVRC-2012 ImageNet dataset with 1k classes and 1.3M images (we refer to it as ImageNet in what follows), its superset ImageNet-21k with 21k classes and 14M images (Deng et al., 2009), and JFT (Sun et al., 2017) with 18k classes and 303M high-resolution images. We de-duplicate the pre-training datasets w.r.t. the test sets of the downstream tasks following Kolesnikov et al. (2020). We transfer the models trained on these datasets to several benchmark tasks: ImageNet on the original validation labels and the cleaned-up ReaL labels (Beyer et al., 2020), CIFAR-10/100 (Krizhevsky, 2009), Oxford-IIIT Pets (Parkhi et al., 2012), and Oxford Flowers-102 (Nilsback & Zisserman, 2008). For these datasets, pre-processing follows Kolesnikov et al. (2020).
We also evaluate on the 19-task VTAB classification suite (Zhai et al., 2019b). VTAB evaluates low-data transfer to diverse tasks, using 1 000 training examples per task. The tasks are divided into three groups: Natural – tasks like the above (Pets, CIFAR, etc.); Specialized – medical and satellite imagery; and Structured – tasks that require geometric understanding, like localization.
Model Variants. We base ViT configurations on those used for BERT (Devlin et al., 2019), as summarized in Table 1. The “Base” and “Large” models are directly adopted from BERT, and we add the larger “Huge” model.
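For reference, the three configurations of Table 1 (layers, hidden size D, MLP size, heads, parameter counts) can be written down directly:

```python
# ViT model variants, mirroring Table 1 of the paper
VIT_CONFIGS = {
    "ViT-Base":  dict(layers=12, hidden=768,  mlp=3072, heads=12, params="86M"),
    "ViT-Large": dict(layers=24, hidden=1024, mlp=4096, heads=16, params="307M"),
    "ViT-Huge":  dict(layers=32, hidden=1280, mlp=5120, heads=16, params="632M"),
}
```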
In what follows we use brief notation to indicate the model size and the input patch size: for instance, ViT-L/16 means the “Large” variant with 16×16 input patch size. Note that the Transformer's sequence length is inversely proportional to the square of the patch size; thus models with smaller patch size are computationally more expensive.
For the baseline CNNs, we use ResNet (He et al., 2016), but replace the Batch Normalization layers (Ioffe & Szegedy, 2015) with Group Normalization (Wu & He, 2018) and use standardized convolutions (Qiao et al., 2019). These modifications improve transfer (Kolesnikov et al., 2020), and we denote the modified model “ResNet (BiT)”. For the hybrids, we feed the intermediate feature maps into ViT with a patch size of one “pixel”. To experiment with different sequence lengths, we either (i) take the output of stage 4 of a regular ResNet50 or (ii) remove stage 4, place the same number of layers in stage 3 (keeping the total number of layers), and take the output of this extended stage 3. Option (ii) results in a 4x longer sequence length and a more expensive ViT model.
Training & Fine-tuning. We train all models, including ResNets, using Adam (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999, a batch size of 4096, and apply a high weight decay of 0.1, which we found to be useful for transfer of all models (Appendix D.1 shows that, in contrast to common practices, Adam works slightly better than SGD for ResNets in our setting). We use a linear learning rate warmup and decay, see Appendix B.1 for details. For fine-tuning we use SGD with momentum, batch size 512, for all models, see Appendix B.1.1. For the ImageNet results in Table 2, we fine-tuned at higher resolution: 512 for ViT-L/16 and 518 for ViT-H/14, and also used Polyak & Juditsky (1992) averaging with a factor of 0.9999 (Ramachandran et al., 2019; Wang et al., 2020b).
Metrics. We report results on downstream datasets either through few-shot or fine-tuning accuracy. Fine-tuning accuracies capture the performance of each model after fine-tuning it on the respective dataset. Few-shot accuracies are obtained by solving a regularized least-squares regression problem that maps the (frozen) representation of a subset of training images to {−1, 1}^K target vectors. This formulation allows us to recover the exact solution in closed form. Though we mainly focus on fine-tuning performance, we sometimes use linear few-shot accuracies for fast on-the-fly evaluation where fine-tuning would be too costly." }, { "heading": "4.2 COMPARISON TO STATE OF THE ART", "text": "We first compare our largest models – ViT-H/14 and ViT-L/16 – to state-of-the-art CNNs from the literature. The first comparison point is Big Transfer (BiT) (Kolesnikov et al., 2020), which performs supervised transfer learning with large ResNets. The second is Noisy Student (Xie et al., 2020), which is a large EfficientNet trained using semi-supervised learning on ImageNet and JFT-300M with the labels removed. Currently, Noisy Student is the state of the art on ImageNet and BiT-L on the other datasets reported here. All models were trained on TPUv3 hardware, and we report the number of TPUv3-core-days taken to pre-train each of them, that is, the number of TPU v3 cores (2 per chip) used for training multiplied by the training time in days.
Table 2 shows the results. The smaller ViT-L/16 model pre-trained on JFT-300M outperforms BiT-L (which is pre-trained on the same dataset) on all tasks, while requiring substantially fewer computational resources to train.
The larger model, ViT-H/14, further improves the performance, especially on the more challenging datasets – ImageNet, CIFAR-100, and the VTAB suite. Interestingly, this model still took substantially less compute to pre-train than prior state of the art. However, we note that pre-training efficiency may be affected not only by the architecture choice, but also other parameters, such as training schedule, optimizer, weight decay, etc. We provide a controlled study of performance vs. compute for different architectures in Section 4.4. Finally, the ViT-L/16 model pre-trained on the public ImageNet-21k dataset performs well on most datasets too, while taking fewer resources to pre-train: it could be trained using a standard cloud TPUv3 with 8 cores in approximately 30 days.\nFigure 2 decomposes the VTAB tasks into their respective groups, and compares to previous SOTA methods on this benchmark: BiT, VIVI – a ResNet co-trained on ImageNet and YouTube (Tschannen et al., 2020), and S4L – supervised plus semi-supervised learning on ImageNet (Zhai et al., 2019a). ViT-H/14 outperforms BiT-R152x4, and other methods, on the Natural and Structured tasks. On the Specialized tasks, the performance of the top two models is similar." }, { "heading": "4.3 PRE-TRAINING DATA REQUIREMENTS", "text": "The Vision Transformer performs well when pre-trained on a large JFT-300M dataset. With fewer inductive biases for vision than ResNets, how crucial is the dataset size? We perform two series of experiments.\nFirst, we pre-train ViT models on datasets of increasing size: ImageNet, ImageNet-21k, and JFT-300M. To boost the performance on the smaller datasets, we optimize three basic regularization parameters – weight decay, dropout, and label smoothing. Figure 3 shows the results after fine-tuning to ImageNet (results on other datasets are shown in Table 5²). When pre-trained on the smallest dataset, ImageNet, ViT-Large models underperform compared to ViT-Base models, despite (moderate) regularization. With ImageNet-21k pre-training, their performances are similar. Only with JFT-300M do we see the full benefit of larger models. Figure 3 also shows the performance region spanned by BiT models of different sizes. The BiT CNNs outperform ViT on ImageNet, but with the larger datasets, ViT overtakes.\n²Note that the ImageNet pre-trained models are also fine-tuned, but again on ImageNet. This is because the resolution increase during fine-tuning improves the performance.\nSecond, we train our models on random subsets of 9M, 30M, and 90M as well as the full JFT-300M dataset. We do not perform additional regularization on the smaller subsets and use the same hyper-parameters for all settings. This way, we assess the intrinsic model properties, and not the effect of regularization. We do, however, use early-stopping, and report the best validation accuracy achieved during training. To save compute, we report few-shot linear accuracy instead of full fine-tuning accuracy. Figure 4 contains the results. Vision Transformers overfit more than ResNets with comparable computational cost on smaller datasets. For example, ViT-B/32 is slightly faster than ResNet50; it performs much worse on the 9M subset, but better on 90M+ subsets. The same is true for ResNet152x2 and ViT-L/16. 
This result reinforces the intuition that the convolutional inductive bias is useful for smaller datasets, but for larger ones, learning the relevant patterns directly from data is sufficient, even beneficial.\nOverall, the few-shot results on ImageNet (Figure 4), as well as the low-data results on VTAB (Table 2) seem promising for very low-data transfer. Further analysis of few-shot properties of ViT is an exciting direction of future work." }, { "heading": "4.4 SCALING STUDY", "text": "We perform a controlled scaling study of different models by evaluating transfer performance from JFT-300M. In this setting, data size does not bottleneck the models’ performances, and we assess performance versus pre-training cost of each model. The model set includes: 7 ResNets, R50x1, R50x2, R101x1, R152x1, R152x2, pre-trained for 7 epochs, plus R152x2 and R200x3 pre-trained for 14 epochs; 6 Vision Transformers, ViT-B/32, B/16, L/32, L/16, pre-trained for 7 epochs, plus L/16 and H/14 pre-trained for 14 epochs; and 5 hybrids, R50+ViT-B/32, B/16, L/32, L/16 pre-trained for 7 epochs, plus R50+ViT-L/16 pre-trained for 14 epochs (for hybrids, the number at the end of the model name stands not for the patch size, but for the total downsampling ratio in the ResNet backbone).\nFigure 5 contains the transfer performance versus total pre-training compute (see Appendix D.4 for details on computational costs). Detailed results per model are provided in Table 6 in the Appendix. A few patterns can be observed. First, Vision Transformers dominate ResNets on the performance/compute trade-off. ViT uses approximately 2-4× less compute to attain the same performance (average over 5 datasets). Second, hybrids slightly outperform ViT at small computational budgets, but the difference vanishes for larger models. This result is somewhat surprising, since one might expect convolutional local feature processing to assist ViT at any size. Third, Vision Transformers appear not to saturate within the range tried, motivating future scaling efforts.\n4.5 INSPECTING VISION TRANSFORMER\nTo begin to understand how the Vision Transformer processes image data, we analyze its internal representations. The first layer of the Vision Transformer linearly projects the flattened patches into a lower-dimensional space (Eq. 1). Figure 7 (left) shows the top principal components of the learned embedding filters. The components resemble plausible basis functions for a low-dimensional representation of the fine structure within each patch.\nAfter the projection, a learned position embedding is added to the patch representations. Figure 7 (center) shows that the model learns to encode distance within the image in the similarity of position embeddings, i.e. closer patches tend to have more similar position embeddings. Further, the row-column structure appears; patches in the same row/column have similar embeddings. Finally, a sinusoidal structure is sometimes apparent for larger grids (Appendix D). That the position embeddings learn to represent 2D image topology explains why hand-crafted 2D-aware embedding variants do not yield improvements (Appendix D.3).\nSelf-attention allows ViT to integrate information across the entire image even in the lowest layers. We investigate to what degree the network makes use of this capability. Specifically, we compute the average distance in image space across which information is integrated, based on the attention weights (Figure 7, right). 
This “attention distance” is analogous to receptive field size in CNNs.\nWe find that some heads attend to most of the image already in the lowest layers, showing that the ability to integrate information globally is indeed used by the model. Other attention heads have consistently small attention distances in the low layers. This highly localized attention is less pronounced in hybrid models that apply a ResNet before the Transformer (Figure 7, right), suggesting that it may serve a similar function as early convolutional layers in CNNs. Further, the attention distance increases with network depth. Globally, we find that the model attends to image regions that are semantically relevant for classification (Figure 6)." }, { "heading": "4.6 SELF-SUPERVISION", "text": "Transformers show impressive performance on NLP tasks. However, much of their success stems not only from their excellent scalability but also from large-scale self-supervised pre-training (Devlin et al., 2019; Radford et al., 2018). We also perform a preliminary exploration on masked patch prediction for self-supervision, mimicking the masked language modeling task used in BERT. With self-supervised pre-training, our smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant improvement of 2% over training from scratch, but still 4% behind supervised pre-training. Appendix B.1.2 contains further details. We leave exploration of contrastive pre-training (Chen et al., 2020b; He et al., 2020; Bachman et al., 2019; Hénaff et al., 2020) to future work." }, { "heading": "5 CONCLUSION", "text": "We have explored the direct application of Transformers to image recognition. Unlike prior works using self-attention in computer vision, we do not introduce image-specific inductive biases into the architecture apart from the initial patch extraction step. Instead, we interpret an image as a sequence of patches and process it by a standard Transformer encoder as used in NLP. This simple, yet scalable, strategy works surprisingly well when coupled with pre-training on large datasets. Thus, Vision Transformer matches or exceeds the state of the art on many image classification datasets, whilst being relatively cheap to pre-train.\nWhile these initial results are encouraging, many challenges remain. One is to apply ViT to other computer vision tasks, such as detection and segmentation. Our results, coupled with those in Carion et al. (2020), indicate the promise of this approach. Another challenge is to continue exploring self-supervised pre-training methods. Our initial experiments show improvement from self-supervised pre-training, but there is still a large gap between self-supervised and large-scale supervised pre-training. Finally, further scaling of ViT would likely lead to improved performance." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The work was performed in Berlin, Zürich, and Amsterdam. We thank many colleagues at Google for their help, in particular Andreas Steiner for crucial help with the infrastructure and the open-source release of the code; Joan Puigcerver and Maxim Neumann for help with the large-scale training infrastructure; Dmitry Lepikhin, Aravindh Mahendran, Daniel Keysers, Mario Lučić, Noam Shazeer, and Colin Raffel for useful discussions." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A MULTIHEAD SELF-ATTENTION", "text": "Standard qkv self-attention (SA, Vaswani et al. (2017)) is a popular building block for neural architectures. 
For each element in an input sequence $\mathbf{z} \in \mathbb{R}^{N \times D}$, we compute a weighted sum over all values $\mathbf{v}$ in the sequence. The attention weights $A_{ij}$ are based on the pairwise similarity between two elements of the sequence and their respective query $\mathbf{q}_i$ and key $\mathbf{k}_j$ representations.\n$[\mathbf{q}, \mathbf{k}, \mathbf{v}] = \mathbf{z}\,\mathbf{U}_{qkv}, \qquad \mathbf{U}_{qkv} \in \mathbb{R}^{D \times 3D_h}$ (5)\n$A = \mathrm{softmax}\left(\mathbf{q}\mathbf{k}^{\top}/\sqrt{D_h}\right), \qquad A \in \mathbb{R}^{N \times N}$ (6)\n$\mathrm{SA}(\mathbf{z}) = A\mathbf{v}$ (7)\nMultihead self-attention (MSA) is an extension of SA in which we run k self-attention operations, called “heads”, in parallel, and project their concatenated outputs. To keep compute and number of parameters constant when changing k, Dh (Eq. 5) is typically set to D/k.\n$\mathrm{MSA}(\mathbf{z}) = [\mathrm{SA}_1(\mathbf{z}); \mathrm{SA}_2(\mathbf{z}); \cdots; \mathrm{SA}_k(\mathbf{z})]\,\mathbf{U}_{msa}, \qquad \mathbf{U}_{msa} \in \mathbb{R}^{k \cdot D_h \times D}$ (8)" }, { "heading": "B EXPERIMENT DETAILS", "text": "" }, { "heading": "B.1 TRAINING", "text": "Table 3 summarizes our training setups for our different models. We found strong regularization to be key when training models from scratch on ImageNet. Dropout, when used, is applied after every dense layer except for the qkv-projections, and directly after adding positional embeddings to the patch embeddings. Hybrid models are trained with the exact same setup as their ViT counterparts. Finally, all training is done on resolution 224." }, { "heading": "B.1.1 FINE-TUNING", "text": "We fine-tune all ViT models using SGD with a momentum of 0.9. We run a small grid search over learning rates, see learning rate ranges in Table 4. To do so, we use small sub-splits from the training set (10% for Pets and Flowers, 2% for CIFAR, 1% for ImageNet) as a development set and train on the remaining data. For final results we train on the entire training set and evaluate on the respective test data. For fine-tuning ResNets and hybrid models we use the exact same setup, with the only exception of ImageNet, where we add another value, 0.06, to the learning rate sweep. Additionally, for ResNets we also run the setup of Kolesnikov et al. (2020) and select the best results across this run and our sweep. Finally, if not mentioned otherwise, all fine-tuning experiments run at 384 resolution (running fine-tuning at a different resolution than training is common practice (Kolesnikov et al., 2020)).\nWhen transferring ViT models to another dataset, we remove the whole head (two linear layers) and replace it by a single, zero-initialized linear layer outputting the number of classes required by the target dataset. We found this to be a little more robust than simply re-initializing the very last layer.\nFor VTAB we follow the protocol in Kolesnikov et al. (2020), and use the same hyperparameter setting for all tasks. We use a learning rate of 0.01 and train for 2500 steps (Tab. 4). We chose this setting by running a small sweep over two learning rates and two schedules, and selecting the setting with the highest VTAB score on the 200-example validation sets. We follow the pre-processing used in Kolesnikov et al. (2020), except that we do not use task-specific input resolutions. Instead we find that Vision Transformer benefits most from a high resolution (384× 384) for all tasks." }, { "heading": "B.1.2 SELF-SUPERVISION", "text": "We employ the masked patch prediction objective for preliminary self-supervision experiments. To do so, we corrupt 50% of patch embeddings by either replacing their embeddings with a learnable [mask] embedding (80%), a random other patch embedding (10%) or just keeping them as is (10%). This setup is very similar to the one used for language by Devlin et al. (2019). 
Finally, we predict the 3-bit mean color (i.e., 512 colors in total) of every corrupted patch using their respective patch representations.\nWe trained our self-supervised model for 1M steps (ca. 14 epochs) with batch size 4096 on JFT. We use Adam, with a base learning rate of $2 \cdot 10^{-4}$, warmup of 10k steps and cosine learning rate decay. As prediction targets for pretraining we tried the following settings: 1) predicting only the mean, 3-bit color (i.e., 1 prediction of 512 colors), 2) predicting a 4 × 4 downsized version of the 16 × 16 patch with 3-bit colors in parallel (i.e., 16 predictions of 512 colors), 3) regression on the full patch using L2 (i.e., 256 regressions on the 3 RGB channels). Surprisingly, we found that all worked quite well, though L2 was slightly worse. We report final results only for option 1) because it has shown the best few-shot performance. We also experimented with a 15% corruption rate as used by Devlin et al. (2019) but results were also slightly worse on our few-shot metrics.\nLastly, we would like to remark that our instantiation of masked patch prediction doesn’t require such an enormous amount of pretraining nor a large dataset such as JFT in order to lead to similar performance gains on ImageNet classification. That is, we observed diminishing returns on downstream performance after 100k pretraining steps, and see similar gains when pretraining on ImageNet." }, { "heading": "C ADDITIONAL RESULTS", "text": "We report detailed results corresponding to the figures presented in the paper. Table 5 corresponds to Figure 3 from the paper and shows transfer performance of different ViT models pre-trained on datasets of increasing size: ImageNet, ImageNet-21k, and JFT-300M. Table 6 corresponds to Figure 5 from the paper and shows the transfer performance of ViT, ResNet, and hybrid models of varying size, as well as the estimated computational cost of their pre-training." }, { "heading": "D ADDITIONAL ANALYSES", "text": "" }, { "heading": "D.1 SGD VS. ADAM FOR RESNETS", "text": "ResNets are typically trained with SGD and our use of Adam as an optimizer is quite unconventional. Here we show the experiments that motivated this choice. Namely, we compare the fine-tuning performance of two ResNets – 50x1 and 152x2 – pre-trained on JFT with SGD and Adam. For SGD, we use the hyperparameters recommended by Kolesnikov et al. (2020). Results are presented in Table 7. Adam pre-training outperforms SGD pre-training on most datasets and on average. This justifies the choice of Adam as the optimizer used to pre-train ResNets on JFT. Note that the absolute numbers are lower than those reported by Kolesnikov et al. (2020), since we pre-train only for 7 epochs, not 30." }, { "heading": "D.2 TRANSFORMER SHAPE", "text": "We ran ablations on scaling different dimensions of the Transformer architecture to find out which are best suited for scaling to very large models. Figure 8 shows 5-shot performance on ImageNet for different configurations. All configurations are based on a ViT model with 8 layers, $D = 1024$, $D_{MLP} = 2048$ and a patch size of 32, the intersection of all lines. We can see that scaling the depth results in the biggest improvements, which are clearly visible up until 64 layers. However, diminishing returns are already visible after 16 layers. Interestingly, scaling the width of the network seems to result in the smallest changes. Decreasing the patch size and thus increasing the effective sequence length shows surprisingly robust improvements without introducing parameters. 
These findings suggest that compute might be a better predictor of performance than the number of parameters, and that scaling should, if anything, emphasize depth over width. Overall, we find that scaling all dimensions proportionally results in robust improvements." }, { "heading": "D.3 POSITIONAL EMBEDDING", "text": "We ran ablations on different ways of encoding spatial information using positional embedding. We tried the following cases:\n• Providing no positional information: Considering the inputs as a bag of patches.\n• 1-dimensional positional embedding: Considering the inputs as a sequence of patches in the raster order (default across all other experiments in this paper).\n• 2-dimensional positional embedding: Considering the inputs as a grid of patches in two dimensions. In this case, two sets of embeddings are learned, each for one of the axes, X-embedding and Y-embedding, each with size D/2. Then, based on the coordinate of the patch in the input, we concatenate the X and Y embedding to get the final positional embedding for that patch (a short code sketch of this construction follows at the end of this section).\n• Relative positional embeddings: Considering the relative distance between patches to encode the spatial information, instead of their absolute position. To do so, we use 1-dimensional Relative Attention, in which we define the relative distance between all possible pairs of patches. Thus, for every given pair (one as query, and the other as key/value in the attention mechanism), we have an offset pq − pk, where each offset is associated with an embedding. Then, we simply run extra attention, where we use the original query (the content of query), but use relative positional embeddings as keys. We then use the logits from the relative attention as a bias term and add it to the logits of the main attention (content-based attention) before applying the softmax.\nIn addition to different ways of encoding spatial information, we also tried different ways of incorporating this information in our model. For the 1-dimensional and 2-dimensional positional embeddings, we tried three different cases: (1) add positional embeddings to the inputs right after the stem of the model and before feeding the inputs to the Transformer encoder (default across all other experiments in this paper); (2) learn and add a new positional embedding to the inputs at the beginning of each layer; (3) add a learned positional embedding to the inputs at the beginning of each layer (shared between layers).\nTable 8 summarizes the results from this ablation study on a ViT-B/16 model. As we can see, while there is a large gap between the performances of the model with no positional embedding and models with positional embedding, there is little to no difference between different ways of encoding positional information. We speculate that since our Transformer encoder operates on patch-level inputs, as opposed to pixel-level, the differences in how to encode spatial information are less important. More precisely, in patch-level inputs, the spatial dimensions are much smaller than the original pixel-level inputs, e.g., 14 × 14 as opposed to 224 × 224, and learning to represent the spatial relations in this resolution is equally easy for these different positional encoding strategies. Even so, the specific pattern of position embedding similarity learned by the network depends on the training hyperparameters (Figure 9)."
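To make the 2-dimensional variant above concrete, here is a minimal PyTorch sketch of how such an embedding table could be built. The module name, the use of nn.Embedding, and the meshgrid-based coordinate enumeration are our own illustrative choices, not details from the paper; only the split into X- and Y-tables of size D/2 and their concatenation per patch coordinate follow the description above.

```python
import torch
import torch.nn as nn

class TwoDPositionalEmbedding(nn.Module):
    """Learned 2-D positional embedding: separate X- and Y-tables of size D/2,
    concatenated per patch coordinate (a sketch of the D.3 ablation variant)."""
    def __init__(self, grid_size: int, dim: int):
        super().__init__()
        assert dim % 2 == 0, "embedding dim must be even to split across axes"
        self.x_emb = nn.Embedding(grid_size, dim // 2)  # one entry per column
        self.y_emb = nn.Embedding(grid_size, dim // 2)  # one entry per row
        self.grid_size = grid_size

    def forward(self) -> torch.Tensor:
        coords = torch.arange(self.grid_size)
        ys, xs = torch.meshgrid(coords, coords, indexing="ij")  # row/column grids
        # Concatenate the X and Y embeddings for each patch coordinate.
        pos = torch.cat(
            [self.x_emb(xs.reshape(-1)), self.y_emb(ys.reshape(-1))], dim=-1
        )
        return pos  # shape: (grid_size**2, dim), added to the patch embeddings

# e.g., a 14x14 grid of patches with D = 768 (requires PyTorch >= 1.10 for `indexing`):
pos = TwoDPositionalEmbedding(grid_size=14, dim=768)()
print(pos.shape)  # torch.Size([196, 768])
```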
}, { "heading": "D.4 EMPIRICAL COMPUTATIONAL COSTS", "text": "We are also interested in real-world speed of the architectures on our hardware, which is not always well predicted by theoretical FLOPs due to details like lane widths and cache sizes. For this purpose, we perform timing of inference speed for the main models of interest, on a TPUv3 accelerator; the difference between inference and backprop speed is a constant model-independent factor.\nFigure 11 (left) shows how many images one core can handle per second, across various input sizes. Every single point refers to the peak performance measured across a wide range of batch-sizes. As can be seen, the theoretical bi-quadratic scaling of ViT with image size only barely starts happening for the largest models at the largest resolutions.\nAnother quantity of interest is the largest batch-size each model can fit onto a core, larger being better for scaling to large datasets. Figure 11 (right) shows this quantity for the same set of models. This shows that large ViT models have a clear advantage in terms of memory-efficiency over ResNet models." }, { "heading": "D.5 AXIAL ATTENTION", "text": "Axial Attention (Huang et al., 2020; Ho et al., 2019) is a simple, yet effective technique to run selfattention on large inputs that are organized as multidimensional tensors. The general idea of axial attention is to perform multiple attention operations, each along a single axis of the input tensor, instead of applying 1-dimensional attention to the flattened version of the input. In axial attention, each attention mixes information along a particular axis, while keeping information along the other axes independent. Along this line, Wang et al. (2020b) proposed the AxialResNet model in which all the convolutions with kernel size 3 × 3 in a ResNet50 are replaced by axial self-attention, i.e. a row and column attention, augmented by relative positional encoding. We have implemented AxialResNet as a baseline model.3.\nMoreover, we have modified ViT to process inputs in the 2-dimensional shape, instead of a 1- dimensional sequence of patches, and incorporate Axial Transformer blocks, in which instead of\n3Our implementation is based on the open-sourced PyTorch implementation in https://github.com/ csrhddlam/axial-deeplab. In our experiments, we reproduced the scores reported in (Wang et al., 2020b) in terms of accuracy, however, our implementation, similar to the open-source implementation, is very slow on TPUs. Therefore, we were not able to use it for extensive large-scale experiments. These may be unlocked by a carefully optimized implementation.\na self-attention followed by an MLP, we have a a row-self-attention plus an MLP followed by a column-self-attention plus an MLP.\nFigure 12, present the performance of Axial ResNet, Axial-ViT-B/32 and Axial-ViT-B/16 on ImageNet 5shot linear, when pretrained on JFT dataset, verses the pretraining compute, both in terms of number of FLOPs and inference time (example per seconds). As we can see, both Axial-ViT-B/32 and Axial-ViT-B/16 do better than their ViT-B counterpart in terms of performance, but it comes at the cost of more compute. This is because in Axial-ViT models, each Transformer block with global self-attention is replaced by two Axial Transformer blocks, one with row and one with column selfattention and although the sequence length that self-attention operates on is smaller in axial case, there is a extra MLP per Axial-ViT block. 
For the AxialResNet, although it looks reasonable in terms of accuracy/compute trade-off (Figure 12, left), the naive implementation is extremely slow on TPUs (Figure 12, right)." }, { "heading": "D.6 ATTENTION DISTANCE", "text": "To understand how ViT uses self-attention to integrate information across the image, we analyzed the average distance spanned by attention weights at different layers (Figure 10). This “attention distance” is analogous to receptive field size in CNNs. Average attention distance is highly variable across heads in lower layers, with some heads attending to much of the image, while others attend to small regions at or near the query location. As depth increases, attention distance increases for all heads. In the second half of the network, most heads attend widely across tokens." }, { "heading": "D.7 ATTENTION MAPS", "text": "To compute maps of the attention from the output token to the input space (Figures 6 and 13), we used Attention Rollout (Abnar & Zuidema, 2020). Briefly, we averaged attention weights of ViT-L/16 across all heads and then recursively multiplied the weight matrices of all layers. This accounts for the mixing of attention across tokens through all layers (a short sketch of this computation appears below, after Section D.8).\nD.8 VTAB BREAKDOWN\nTable 9 shows the scores attained on each of the VTAB-1k tasks." } ]
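As referenced in Section D.7, the following is a minimal sketch of the Attention Rollout computation. The residual-connection correction (adding the identity and re-normalizing rows) is part of Abnar & Zuidema's (2020) method rather than something stated in the brief description above, and the function name and the random attention tensors in the usage example are our own illustrative assumptions.

```python
import torch

def attention_rollout(attentions):
    """Attention Rollout (Abnar & Zuidema, 2020): average attention over heads,
    then recursively multiply the per-layer attention matrices so that token
    mixing through all layers is accounted for.

    attentions: list of per-layer tensors of shape (heads, tokens, tokens).
    Returns a (tokens, tokens) matrix; row 0 maps the [class] token to inputs.
    """
    rollout = None
    for attn in attentions:
        a = attn.mean(dim=0)                    # average over heads
        a = a + torch.eye(a.size(-1))           # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)     # re-normalize rows
        rollout = a if rollout is None else a @ rollout
    return rollout

# e.g., 24 layers of ViT-L/16 with 16 heads and 197 tokens (196 patches + [class]):
attns = [torch.softmax(torch.randn(16, 197, 197), dim=-1) for _ in range(24)]
mask = attention_rollout(attns)[0, 1:]  # [class]-token attention over the 196 patches
print(mask.reshape(14, 14).shape)
```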
2021
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
SP:87507439ef121d5d243502d2cb45eafec175f2bc
[ "Temporal smoothness is a recurring feature of real-world data that has been unaccounted for when training neural networks. Much of the random sampling in training neural networks is done to remove the temporal correlations originally present when the data is collected. This work aims to propose a method to train on this 'less processed' form of data." ]
Events in the real world are correlated across nearby points in time, and we must learn from this temporally “smooth” data. However, when neural networks are trained to categorize or reconstruct single items, the common practice is to randomize the order of training items. What are the effects of temporally smooth training data on the efficiency of learning? We first tested the effects of smoothness in training data on incremental learning in feedforward nets and found that smoother data slowed learning. Moreover, sampling so as to minimize temporal smoothness produced more efficient learning than sampling randomly. If smoothness generally impairs incremental learning, then how can networks be modified to benefit from smoothness in the training data? We hypothesized that two simple brain-inspired mechanisms – leaky memory in activation units and memory-gating – could enable networks to rapidly extract useful representations from smooth data. Across all levels of data smoothness, these brain-inspired architectures achieved more efficient category learning than feedforward networks. This advantage persisted, even when leaky memory networks with gating were trained on smooth data and tested on randomly-ordered data. Finally, we investigated how these brain-inspired mechanisms altered the internal representations learned by the networks. We found that networks with multi-scale leaky memory and memory-gating could learn internal representations that “un-mixed” data sources which vary on fast and slow timescales across training samples. Altogether, we identified simple mechanisms enabling neural networks to learn more quickly from temporally smooth data, and to generate internal representations that separate timescales in the training signal.
[]
[ { "authors": [ "Christopher Baldassano", "Uri Hasson", "Kenneth A Norman" ], "title": "Representation of real-world event schemas during narrative perception", "venue": "Journal of Neuroscience,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Alberto Bernacchia", "Hyojung Seo", "Daeyeol Lee", "Xiao-Jing Wang" ], "title": "A reservoir of time constants for memory traces in cortical neurons", "venue": "Nature neuroscience,", "year": 2011 }, { "authors": [ "Ian M Bright", "Miriam LR Meister", "Nathanael A Cruzado", "Zoran Tiganj", "Elizabeth A Buffalo", "Marc W Howard" ], "title": "A temporal record of the past with a spectrum of time constants in the monkey entorhinal cortex", "venue": "Proceedings of the National Academy of Sciences,", "year": 2027 }, { "authors": [ "Hsiang-Yun Sherry Chien", "Christopher J Honey" ], "title": "Constructing and forgetting temporal context in the human cerebral cortex", "venue": null, "year": 2020 }, { "authors": [ "Sarah DuBrow", "Nina Rouhani", "Yael Niv", "Kenneth A Norman" ], "title": "Does mental context drift or shift", "venue": "Current opinion in behavioral sciences,", "year": 2017 }, { "authors": [ "Murat Dundar", "Balaji Krishnapuram", "Jinbo Bi", "R Bharat Rao" ], "title": "Learning classifiers when the training data is not iid", "venue": "In IJCAI,", "year": 2007 }, { "authors": [ "Jeffrey L Elman" ], "title": "Learning and development in neural networks: The importance of starting small", "venue": null, "year": 1993 }, { "authors": [ "Robert M French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in cognitive sciences,", "year": 1999 }, { "authors": [ "Tianxiang Gao", "Vladimir Jojic" ], "title": "Sample importance in training deep neural networks. 
2016", "venue": null, "year": 2016 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Raia Hadsell", "Dushyant Rao", "Andrei A Rusu", "Razvan Pascanu" ], "title": "Embracing change: Continual learning in deep neural networks", "venue": "Trends in Cognitive Sciences,", "year": 2020 }, { "authors": [ "Charles R Harris", "K Jarrod Millman", "Stéfan J van der Walt", "Ralf Gommers", "Pauli Virtanen", "David Cournapeau", "Eric Wieser", "Julian Taylor", "Sebastian Berg", "Nathaniel J Smith" ], "title": "Array programming with numpy", "venue": null, "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Robbe LT Goris", "Eero P Simoncelli" ], "title": "Perceptual straightening of natural videos", "venue": "Nature neuroscience,", "year": 2019 }, { "authors": [ "Christopher J Honey", "Thomas Thesen", "Tobias H Donner", "Lauren J Silbert", "Chad E Carlson", "Orrin Devinsky", "Werner K Doyle", "Nava Rubin", "David J Heeger", "Uri Hasson" ], "title": "Slow cortical dynamics and the accumulation of information over long timescales", "venue": null, "year": 2012 }, { "authors": [ "Christopher J Honey", "Ehren L Newman", "Anna C Schapiro" ], "title": "Switching between internal and external modes: a multiscale learning principle", "venue": "Network Neuroscience,", "year": 2017 }, { "authors": [ "Bernd Illing", "Wulfram Gerstner", "Johanni Brea" ], "title": "Biologically plausible deep learning—but how far can we go with shallow networks", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Kazunori Iwata", "Naohiro Ishii" ], "title": "Discrepancy as a quality measure for avoiding classification bias", "venue": "In Proceedings of the IEEE Internatinal Symposium on Intelligent Control,", "year": 2002 }, { "authors": [ "Marta Kauffman", "David Crane", "Kevin S Bright" ], "title": "Friends. season", "venue": "Produced by Warner Home Video, [Online; Retrieved on 10th September,", "year": 1994 }, { "authors": [ "M Pawan Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Dong-Hyun Lee", "Saizheng Zhang", "Asja Fischer", "Yoshua Bengio" ], "title": "Difference target propagation", "venue": "In Joint european conference on machine learning and knowledge discovery in databases,", "year": 2015 }, { "authors": [ "Yong Jae Lee", "Kristen Grauman" ], "title": "Learning the easy things first: Self-paced visual category discovery", "venue": null, "year": 2011 }, { "authors": [ "Timothy P Lillicrap", "Adam Santoro" ], "title": "Backpropagation through time and the brain", "venue": "Current opinion in neurobiology,", "year": 2019 }, { "authors": [ "Shivangi Mahto", "Vy A. Vo", "Javier S. Turek", "Alexander G. 
Huth" ], "title": "Multi-timescale representation learning in lstm language models, 2020", "venue": null, "year": 2020 }, { "authors": [ "Siddhartha Mishra", "T Konstantin Rusch" ], "title": "Enhancing accuracy of deep learning algorithms by training with low-discrepancy sequences", "venue": "arXiv preprint arXiv:2005.12564,", "year": 2020 }, { "authors": [ "Melanie Mitchell" ], "title": "On crashing the barrier of meaning in artificial intelligence", "venue": "AI Magazine,", "year": 2020 }, { "authors": [ "Michael C Mozer" ], "title": "Induction of multiscale temporal structure", "venue": "In Advances in neural information processing systems,", "year": 1992 }, { "authors": [ "John D Murray", "Alberto Bernacchia", "David J Freedman", "Ranulfo Romo", "Jonathan D Wallis", "Xinying Cai", "Camillo Padoa-Schioppa", "Tatiana Pasternak", "Hyojung Seo", "Daeyeol Lee" ], "title": "A hierarchy of intrinsic timescales across primate cortex", "venue": "Nature neuroscience,", "year": 2014 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Ryan V Raut", "Abraham Z Snyder", "Marcus E Raichle" ], "title": "Hierarchical dynamics as a macroscopic organizing principle of the human brain", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { "authors": [ "Greg J Stephens", "Christopher J Honey", "Uri Hasson" ], "title": "A place for time: the spatiotemporal structure of neural dynamics during natural audition", "venue": "Journal of neurophysiology,", "year": 2019 }, { "authors": [ "Ilya Sutskever" ], "title": "Training recurrent neural networks", "venue": "University of Toronto Toronto, Canada,", "year": 2013 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Nachum Ulanovsky", "Liora Las", "Dina Farkas", "Israel Nelken" ], "title": "Multiple time scales of adaptation in auditory cortex neurons", "venue": "Journal of Neuroscience,", "year": 2004 }, { "authors": [ "Laurenz Wiskott", "Terrence J Sejnowski" ], "title": "Slow feature analysis: Unsupervised learning of invariances", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Events in the world are correlated in time: the information that we receive at one moment is usually similar to the information that we receive at the next. For example, when having a conversation with someone, we see multiple samples of the same face from different angles over the course of several seconds. However, when we train neural networks for categorization or reconstruction tasks, we commonly ignore temporal ordering of samples and use randomly ordered data. Given that humans can learn robustly and efficiently when learning incrementally from sequentially correlated, it is important to examine what kinds of architectures and inductive biases may support such learning (Hadsell et al., 2020). Therefore, we asked how does the sequential correlation structure in the data affect learning in neural networks that are performing categorization or reconstruction of one input at a time? Moreover, we asked: which mechanisms can a network employ to exploit the temporal autocorrelation (“smoothness”) of data, without needing to perform backpropagation through time (BPTT) (Sutskever, 2013)?\nWe investigated this question in three stages. In the first stage, we examined the effects of temporally smooth training data on feedforward neural networks performing category learning. Here we confirmed that autocorrelation in training data slows learning in feeforward nets.\nIn the second stage, we investigated conditions under which these classifier networks might take advantage of smooth data. We hypothesized that human brains may possess mechanisms (or inductive biases) that maximize the benefits of learning from temporally smooth data. We therefore tested two network mechanisms inspired by properties of cortical circuits: leaky memory (associated with\nautocorrelated brain dynamics), and memory gating (associated with rapid changes of brain states at event boundaries). We compared the performance of these mechanisms relative to memoryless networks and also against a long short-term memory (LSTM) architecture trained using BPTT.\nFinally, having demonstrated that leaky memory can speed learning from temporally smooth data, we studied the internal representations learned by these neural networks. In particular, we showed that networks with multi-scale leaky memory and resetting could learn internal representations that separate fast-changing and slow-changing data sources." }, { "heading": "2 RELATED WORK", "text": "Effects of sampling strategies on incremental learning. The ordering of training examples affects the speed and quality of learning. For example, learning can be sped by presenting “easier” examples earlier, and then gradually increasing difficulty (Elman, 1993; Bengio et al., 2009; Kumar et al., 2010; Lee & Grauman, 2011). Similarly, learning can be more efficient if training data is organized so that the magnitude of weight updates increases over training samples (Gao & Jojic, 2016).\nHere, we do not manipulate the order based on item difficulty or proximity to category boundaries; we only explore the effects of ordering similar items nearby in time. We aim to identify mechanisms that can aid efficient learning across many levels of temporal autocorrelation, adapting to what is present in the data. 
This ability to adapt to the properties of the data is important in real-world settings, where a learner may lack control over the training order, or prior knowledge of item difficulty is unavailable.\nPotential costs and benefits of training with smooth data. In machine learning research, it is often assumed that the training samples are independent and identically distributed (iid) (Dundar et al., 2007). When training with random sampling, one can approximately satisfy iid assumptions because shuffling samples eliminates any sequential correlations. However, in many real-world situations, the iid assumption is violated and consecutive training samples are strongly correlated.\nTemporally correlated data may slow learning in feedforward neural networks. If consecutive items are similar, then the gradients induced by them will be related, especially early in training. If we consider the average of the gradients induced by the whole training set as the “ideal” gradient, then subsets of similar samples provide a higher-variance (i.e. noisier) estimate of this ideal. Moreover, smoothness in data may slow learning due to catastrophic forgetting (French, 1999). Suppose that, for smoother training, we sample multiple times from a category before moving to another category. This means that the next presentation of each category will be, on average, farther apart from its previous appearance. This increased distance could lead to greater forgetting for that category, thus slowing learning overall. On the other hand, smoother training data might also benefit learning. For example, there may be some category-diagnostic features that will not reliably be extracted by a learning algorithm unless multiple weight updates occur for that feature nearby in time; smoother training data would be more liable to present such features nearby in time." }, { "heading": "3 RESEARCH QUESTIONS AND HYPOTHESES", "text": "1. How does training with temporally smooth data affect learning in feedforward networks? In light of the work reviewed above, we hypothesized that temporally smooth data would slow learning in feedforward nets.\n2. How can neural networks benefit from temporally smooth data, in terms of either learning efficiency or learning more meaningfully structured representations? We hypothesized that a combination of two brain-inspired mechanisms — leaky memory and memory-resetting — could enable networks to learn more efficiently from smooth data, even without BPTT." }, { "heading": "4 EFFECTS OF TEMPORAL SMOOTHNESS IN TRAINING DATA ON LEARNING IN FEEDFORWARD NEURAL NETWORKS", "text": "We first explored how smoothness of data affects the speed and accuracy of category learning (classification) in feedforward networks. See Appendix A.1 for similar results with unsupervised learning." }, { "heading": "4.1 METHODS", "text": "" }, { "heading": "4.1.1 MANIPULATING SMOOTHNESS IN TRAINING DATA", "text": "We manipulated smoothness in training data by varying the number of consecutive samples drawn from the same category. We began each training session by generating a random “category order”, which was a permutation of the numbers from 1 to N (e.g. the ordering in Figure 1.B is 2-1-3). The same category order was used for all smoothness conditions in that training session.\nTo sample with minimum smoothness, we sampled exactly one exemplar from each category, before sampling from the next category in the category order (1 repeat) (Figure 1.B). 
This condition is called “minimum smoothness” because all consecutive items were from different categories, and no further examples were drawn from a category until all other categories had been sampled. We increased smoothness by increasing the number of consecutive samples drawn from each category (3 repeats and 5 repeats in Figure 1.B). Finally, we also used the standard random sampling method, in which items were sampled at random, without replacement, from the training set (Figure 1.B). The training set was identical across all conditions, as was the order in which samples were drawn from within a category (Figure 1.B)." }, { "heading": "4.1.2 FEEDFORWARD NEURAL NETWORK", "text": "Dataset. We tested MNIST, Fashion-MNIST, and synthetic datasets containing low category overlap (LeCun et al., 2010; Xiao et al., 2017). An example synthetic dataset is shown in Appendix A.2. For creating synthetic datasets, we used Numpy (Harris et al., 2020). For creating and testing the models, we used PyTorch (Paszke et al., 2019). Learning rule and objective function. We used backpropagation with both mean squared error (MSE) and cross-entropy (CE) loss functions. The results reported here use MSE, primarily for the ease of comparison with later reconstruction error measures in this manuscript. However, the same pattern was observed using CE loss, as shown in Appendix A.3. Also, it has been shown that MSE loss provides comparable performance to commonly utilized classification models with a CE loss function (Illing et al., 2019). To test incremental learning, we employed stochastic gradient descent (SGD), updating weights for each training sample. Optimization, initialization, and activation function. We tested the model both with and without RMSprop optimization, along with Xavier initialization (Tieleman & Hinton, 2012; Glorot & Bengio, 2010). We applied ReLU to hidden units and Softmax or Sigmoid to the output units. Hyperparameters. For MNIST and Fashion-MNIST, we used a 3-layer fully connected network with (784, 392, 10) dimensions and a learning rate of 0.01. The learning rate was not tuned for a specific condition. We used the same learning rate across all conditions; only smoothness varied across conditions. To compensate for the potential advantage of a specific set of hyperparameters for a specific condition, we ran 5 runs, each with a different random weight initialization, and reported the averaged results. For hyperparameters used with the synthetic dataset, see Appendix A.2. When RMSprop was implemented, β1 and β2 were set to 0.9 and 0.99, respectively (Ruder, 2016).\n²Photos in this section are taken from the FRIENDS TV series, Warner Brothers (Kauffman et al., 1994)." }, { "heading": "4.2 RESULTS", "text": "Smooth training data slowed incremental learning (Figure 2.A). Moreover, minimum smoothness yielded more efficient learning than random sampling (Figure 2.A). These observations generalized across all tested datasets and across MSE and CE loss, with and without RMSprop optimization." }, { "heading": "4.3 DISCUSSION", "text": "The superiority of minimum smoothness over other conditions suggests that any level of smoothness slows incremental learning, even the smoothness that can occur by chance in random sampling (Figure 2.A). Therefore, given a fixed time budget for training, a sampling strategy that minimizes smoothness can reach a higher performance than random sampling.\nSampling with minimum smoothness may be advantageous because it reduces the representation overlap across consecutive training items. 
Catastrophic forgetting can be reduced by decreasing the overlap between learned representations, for example, via orthogonalization (French, 1999). Though we did not explicitly seek to reduce interference by sampling with minimum smoothness, this method does likely reduce the representational overlap of nearby items. In addition, training with minimum smoothness may improve learning by maintaining a near-uniform distribution of sampled categories. Training with “low-discrepancy” sequences, such as those with uniformly distributed data, avoids classification bias and enhances learning (Iwata & Ishii, 2002; Mishra & Rusch, 2020)." }, { "heading": "5 EXPLOITING TEMPORAL SMOOTHNESS IN TRAINING DATA FOR LEARNING IN NEURAL NETWORKS", "text": "Although temporally-correlated data slows learning in feedforward nets, it appears that humans are able to rapidly extract meaningful representations in such settings, even while learning incrementally. How might our brains maximize the benefits of temporally smooth training sets? Two properties of cortical population dynamics appear especially relevant to incremental learning: (i) all cortical dynamics exhibit autocorrelation on the scale of milliseconds to seconds, so that correlation in consecutive internal states is unavoidable (Murray et al., 2014; Honey et al., 2012; Bright et al., 2020); (ii) neural circuits appear to shift state suddenly at event boundaries, and this appears to be associated with “resetting” of context representations (DuBrow et al., 2017; Chien & Honey, 2020; Baldassano et al., 2018). We hypothesized that these two neural properties represent an inductive bias in cortical learning. In particular, we hypothesized that (i) data sampled from a slowly-changing environment may contain important features that are stable over time, which can be better extracted by mixing current input with a memory of recent input; and (ii) the interference of irrelevant prior information can be reduced by “resetting” memory at boundaries between events. Therefore, we examined how neural network learning was affected by two brain-inspired mechanisms: (i) leaky memory in internal representations; (ii) a memory gating mechanism that resets internal representations at transitions between categories." }, { "heading": "5.1 BRAIN-INSPIRED NEURAL ARCHITECTURE FOR SUPERVISED LEARNING", "text": "Can brain-inspired architectural tweaks – leaky memory and memory gating – increase the efficiency of learning in supervised classification tasks?" }, { "heading": "5.1.1 METHODS", "text": "Leaky memory: We added leaky memory to the internal representations (hidden units) by linearly mixing them across consecutive time points. Hidden unit activations were updated according to the following function:\n$H(n) = \alpha H(n-1) + (1-\alpha)\,\mathrm{ReLU}(W_{IH}\,I(n))$ (1)\nwhere $H(n)$ is the state of the hidden units for trial n, $I(n)$ is the state of the input units for trial n, α is a leak parameter, $W_{IH}$ are the connections from the input layer to the hidden layer, and ReLU is a rectified linear activation. We set α = 0.5 in these experiments.\nMemory Gating: In order to reduce the interference between items from different categories in the leaky memory, we employed a gating mechanism to reset memory at the transitions between categories. Therefore, if sample n was drawn from a category other than the category of sample n−1, then we set α = 0 in Eq.(1) on that trial n (Figure 2.C). 
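For concreteness, here is a minimal PyTorch sketch of the hidden-state update in Eq. (1) together with the category-triggered reset. The class and variable names are our own, the layer sizes follow the MNIST setup of Section 4.1.2, and the detach() on the previous state is our reading of "the gradient computation did not account for the fact that the neurons were leaky", i.e. gradients flow only through the current step, not through time.

```python
import torch
import torch.nn as nn

class LeakyMemoryNet(nn.Module):
    """3-layer classifier with a leaky hidden state (Eq. 1) and category-gated reset.
    A sketch: gradients are taken only through the current step, never through time."""
    def __init__(self, n_in=784, n_hid=392, n_out=10, alpha=0.5):
        super().__init__()
        self.w_ih = nn.Linear(n_in, n_hid, bias=False)   # W_IH in Eq. (1)
        self.w_ho = nn.Linear(n_hid, n_out, bias=False)
        self.alpha = alpha
        self.h = torch.zeros(n_hid)                      # leaky hidden state

    def forward(self, x, new_category: bool):
        alpha = 0.0 if new_category else self.alpha      # gating: reset at category change
        # Eq. (1): mix the (detached) previous state with the current input drive.
        self.h = alpha * self.h.detach() + (1 - alpha) * torch.relu(self.w_ih(x))
        return torch.softmax(self.w_ho(self.h), dim=-1)
```

In training, the weights would be updated after every sample, e.g. with MSE loss against a one-hot target, matching the incremental SGD setup described in Section 4.1.2.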
For the learning rule, we used backpropagation; however, the gradient computation did not account for the fact that the neurons were leaky. Therefore, the update rule in the [leaky memory + reset] model is different from the common update rule in recurrent models (e.g. LSTM). An LSTM uses backpropagation through time (BPTT), which is implausible for biological settings. In learning with BPTT, the same neurons must store and retrieve their entire activation history (Sutskever, 2013; Lillicrap & Santoro, 2019). In contrast, in the [leaky memory + reset] model, neurons only use local information from their most recent history. Therefore, it is computationally much simpler, because it does not require maintaining the whole history or computing the gradient relative to all of that history.\nOptimization and initialization methods, and the hyperparameters, were identical to those used in training and testing feedforward neural networks." }, { "heading": "5.1.2 RESULTS", "text": "Smoothness in training data increased learning efficiency in learners with leaky memory, as shown in Figure 2.B. This result is in contrast to the detrimental effects of smoothness in memoryless learners (Figure 2.A). Moreover, adding a gating mechanism to the leaky memory units further increased their learning (Figure 2.C). In learners with leaky memory and gating, all levels of smoothness significantly outperformed random sampling and sampling with minimum smoothness (1 repeat) (Figure 2.C). These findings generalized across MNIST, Fashion-MNIST, and synthetic datasets [Appendix A.4]." }, { "heading": "5.1.3 DISCUSSION", "text": "When data sampled at a given moment shares category-relevant features with recent samples, learners with leaky memory were able to exploit this property for more efficient category learning (Figure 2.B, C). Importantly, the resetting mechanism prevented the mixing of hidden representations from samples of different categories, allowing the system to benefit most from the data smoothness, while not suffering from between-category interference.\nWhy does averaging of current and prior states produce more efficient learning from sequentially correlated data streams? Our working hypothesis is that averaging across multiple members of the same category increases (in some datasets) the proportion of variance in the hidden units that is associated with category-diagnostic features. This hypothesis predicts that if consecutive items in the data stream do not share any local features, then the benefits of leaky memory will be eliminated. We confirmed this prediction empirically (Appendix A.7).\nImportantly, networks with leaky memory and resetting surpassed the performance of feedforward networks, for all levels of smoothness (Figure 2.A, C). Also, leaky memory networks trained with smooth data surpassed feedforward networks, even when tested on data streams that were not smooth. This finding is notable because it indicates that the leaky memory networks learned better single-exemplar representations, because they could generalize to novel temporal contexts. Moreover, the superior learning was obtained without BPTT, only using a linear mixture of activations over time-steps, which is easy to implement in brain dynamics (Honey et al., 2012; Murray et al., 2014).\nHow does the [leaky memory + reset] net compare with a more flexible recurrent net trained with BPTT? 
The leaky-memory model with reset is trained without BPTT, but it is important to compare this to the performance of a more flexible model that can directly learn from task-relevant temporal structure. We found that an LSTM trained with BPTT was able to benefit from training with smooth data, learning more slowly at first, but ultimately achieving the lowest test error of all models (see Appendices A.8 and A.9). However, the performance advantage of the LSTM trained with BPTT was not preserved when the models were tested out-of-domain. In particular, when models were trained on data that contained temporal smoothness, but tested on data with minimum smoothness (1 repetition per category), the leaky-memory-with-reset model showed the best performance of all models (see Appendix A.10). We interpret these results as evidence that the LSTM has a much more flexible architecture, and via BPTT it can be calibrated to the exact structure of the training data stream (e.g. there are precisely 5 repetitions in a block). Conversely, the leaky memory model with reset is more biologically plausible: it is trained without BPTT, it showed performance competitive with the LSTM in this setting, and it generalized better across different levels of temporal smoothness. Note that we found these results despite the fact that in the LSTM, the gradient updates are mathematically optimized for the task (via BPTT), whereas, in the [leaky memory + reset] model, the gradient computations do not account for the recurrence in the network at all.\nAre the benefits of leaky memory due to a form of gradient averaging, analogous to mini-batching? Leaky-memory networks average activations over time, while batching averages gradients over time. The two mechanisms appear to differ, because leaky-memory effects can be reversed when the training categories contain non-overlapping features (Appendix A.7) and leaky memory and mini-batching affect performance in different ways as a function of the amount of category repetition (Appendix A.5 and A.6)." }, { "heading": "5.2 BRAIN-INSPIRED ARCHITECTURES FOR UNSUPERVISED LEARNING ACROSS TEMPORAL SCALES", "text": "In the real world, we may need to learn from data with multiple levels of smoothness. For instance, returning to the example of having a face-to-face conversation: the features around a person’s mouth change quickly, while their face’s outline changes more slowly (Figure 3.A). Moreover, there are no pre-defined labels to support the learning of representations in this setting. We hypothesized that neural networks equipped with multi-scale (i.e. fast and slow) leaky memory could learn more meaningful representations in this setting, by separating their representations of structures that vary on fast and slow timescales." }, { "heading": "5.2.1 METHODS", "text": "Dataset. To test the un-mixing abilities of our networks, we synthesized simplified training datasets which contained three levels of temporal structure. The input to the model at each time point consisted of 3 subcomponents (top, middle, bottom), and each subcomponent had two elements. Each subcomponent was generated to express a different level of smoothness over time: for example, the top, middle and bottom rows changed feature-category every 1, 3, or 5 iterations, respectively (Figure 3.B). The individual features sampled at each time were generated as the sum of (i) an underlying binary state variable (which would switch every 1, 3 or 5 iterations) and (ii) uniformly-distributed noise (Appendix A.12). 
As a result, the model was provided with features that varied at 3 timescales: fast (top row), medium (middle row), and slow (bottom row). For creating the dataset, and designing and analyzing the models, we used Numpy (Harris et al., 2020). Architectures. We used the same brain-inspired mechanisms for unsupervised learning models: leaky memory and gating mechanisms. To evaluate the effectiveness of the added mechanisms, we compared 5 types of autoencoder (AE) models (see Figure 3.C): i) feedforward AE; ii) AE with leaky memory in internal representations; iii) AE with multi-scale leaky memory in internal representations, inspired by evidence showing that levels of processing in the brain can integrate information at different time-scales (Honey et al., 2012; Murray et al., 2014; Bright et al., 2020), and that multiple time-scales are present even within a single circuit (Bernacchia et al., 2011; Ulanovsky et al., 2004); iv) AE with leaky memory in internal representations and boundary-sensitive gating, motivated by evidence showing that processing in cortical circuits is sensitive to event-boundaries and these boundaries can shift learned representations (DuBrow et al., 2017; Chien & Honey, 2020); and v) AE with multi-scale leaky memory in internal representations and boundary-sensitive gating. The gating mechanism was sensitive to change in the input stream: it used information from the current and previous input, resetting memory when the change passed a threshold (see Appendix A.11) (Chien & Honey, 2020). Learning algorithm, optimization, and initialization. We used backpropagation with MSE loss, both with and without the RMSprop optimization method, and Xavier initialization (Tieleman & Hinton, 2012; Glorot & Bengio, 2010). We applied ReLU and Sigmoid as activation functions for hidden and output units, respectively. Hyperparameters. To implement leaky memory at multiple scales, we varied the time constants across the nodes in the hidden layer. Thus, the leak parameter α in Eq.(1) was set to 0, 0.3, and 0.6 for “no-memory”, “short-memory”, and “long-memory” nodes, respectively. The networks were 3-layer, fully connected autoencoders with (6, 3, 6) dimensions. The learning rate was 0.01. In cases where RMSprop was implemented, β1 and β2 were set to 0.9 and 0.99. For leaky memory in internal representations, α in Eq.(1) was set to 0.5 (see Figure 3.C). Un-mixing Measures. We measured the network’s ability to “un-mix” the time-scales of its input. By un-mixing, we mean learning representations that selectively track distinct latent sources that generated features within each training sample. In particular, we tested whether no-memory, short-memory, and long-memory nodes in the network would track the fast-, medium-, and slow-changing features in the data. To this end we measured the Pearson correlation between each hidden unit (no-memory, short-memory, and long-memory) and all of the data features (fast, medium and slowly changing). We then quantified the “timescale-selectivity” — e.g. whether the slow-changing feature was more correlated with the long-memory node than with the other nodes (no-memory and short-memory) (see Figure 3.E). Learning Efficiency Measure. Learning speed was measured using the reconstruction error of the test data, computed as the MSE across all 3 subcomponents of each data sample." }, { "heading": "5.2.2 RESULTS", "text": "We first confirmed that all of the autoencoder (AE) models could learn to reconstruct the input (Figure 3.D). 
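Before continuing with the detailed results, here is a minimal NumPy sketch of the multi-scale hidden-state update with change-sensitive gating described in Section 5.2.1. The per-node leak values match the hyperparameters above; the L2 norm and the threshold value used for change detection are illustrative assumptions, since the exact criterion is specified in Appendix A.11 rather than here, and all function and variable names are our own.

```python
import numpy as np

# Per-node leak values alpha (Eq. 1): no-memory, short-memory, long-memory nodes.
ALPHA = np.array([0.0, 0.3, 0.6])

def step(h_prev, x, x_prev, w_ih, threshold=1.0):
    """One update of the 3-unit hidden state of the multi-scale leaky autoencoder.
    Memory is reset (alpha -> 0) when the input changes by more than a threshold;
    the L2 norm and threshold value here are placeholders, not the paper's exact rule."""
    alpha = ALPHA if np.linalg.norm(x - x_prev) <= threshold else np.zeros(3)
    drive = np.maximum(w_ih @ x, 0.0)                  # ReLU of the feedforward drive
    return alpha * h_prev + (1.0 - alpha) * drive

# e.g., 6-dim inputs and a (6, 3, 6) autoencoder, as in the hyperparameters above:
rng = np.random.default_rng(0)
w_ih = rng.normal(scale=0.5, size=(3, 6))
h, x_prev = np.zeros(3), np.zeros(6)
for _ in range(5):
    x = rng.uniform(size=6)
    h = step(h, x, x_prev, w_ih)
    x_prev = x
```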
The most efficient architectures were the [leaky + resetting] AE, the [multi-scale leaky + resetting] AE, and the memoryless AE.\nBoth networks with memory and resetting could successfully un-mix fast and slow data sources. The individual hidden state units in these AE models were selectively more correlated with their corresponding data features (i.e. the slow-changing feature was more correlated with the long-memory node than with the other nodes; Figure 3.E).\nThese findings generalized across synthesized datasets and across learning rates (See Appendix A.12)." }, { "heading": "5.2.3 DISCUSSION", "text": "The two autoencoder models that had both memory and resetting mechanisms were most successful in learning internal representations that tracked distinct timescales of the input. Slowly (or quickly) varying features were extracted by slowly (or quickly) varying subsets of the network, analogous to a matched filter (see also Mozer (1992)). Features that change on different timescales may correspond to different levels of structure in the world (Wiskott & Sejnowski, 2002). Thus, by adding leaky memory and memory-gating to a simple feedforward AE model, we equipped it with an ability to separate different levels of structure in the environment. Moreover, because intrinsic dynamics vary on multiple scales in the human brain (Stephens et al., 2013; Murray et al., 2014; Honey et al., 2012; Raut et al., 2020), this implies that slowly-varying brain circuits may be biased to extract slowly-varying structure from the world (Honey et al., 2017).\nWhy did the no-memory (feedforward) model produce slightly lower reconstruction error than models with memory? In models with memory, there is a (small) cost in the overall test error, because slowly-changing internal states are ineffective for reconstructing quickly-changing features. However, the error introduced by nodes reconstructing input from a mismatched timescale is small, and it is accompanied by a significant benefit: learning more meaningful, un-mixed representations of a multi-scale data stream. Indeed, if a model’s “slow” hidden units (i.e. the short- and long-memory units) were correlated with the fast-changing features in the data, the model’s per-feature error was worse (Appendix A.14, Figure A.14)." }, { "heading": "6 CONCLUSION", "text": "Inspired by temporal properties of the training signal and the learning architectures in primate brains, we investigated how the smoothness of training data affects incremental learning in neural networks.\nFirst, we examined the speed of learning. We found that data smoothness slowed learning in memoryless learners (feedforward neural nets), but sped learning in systems with leaky memory (Figure 2). Moreover, adding a simple gating mechanism to leaky-memory networks enabled them to flexibly adapt to the smoothness in the data, so that they could benefit from repeating structure while avoiding interference from unrelated prior information.\nSecond, we examined the representations learned when unlabeled data contained temporal structure on multiple smoothness levels. Neural networks with memory and feature-sensitive gating learned representations that un-mixed features varying on different timescales. 
If distinct timescales in data reflect distinct data generators, these “un-mixed” representations may provide a more “meaningful” description of the input data (Mitchell, 2020; Mahto et al., 2020).\nLeaky memory networks exhibited more efficient learning and more interpretable representations, even though they were trained with a learning rule that did not employ any temporal information. In particular, all networks were trained incrementally using backpropagation and a loss function that only depended on the immediate state of the network. Architectures with leaky memory and gating can thus exploit temporal structure in a way that is computationally simpler and more biologically plausible than backpropagation through time (Sutskever, 2013; Lillicrap & Santoro, 2019). With respect to biological plausibility, we note that the leaky-memory-plus-gating system works well even for autoencoders (Figure 3), for which there are simple activation-based learning rules that do not require the propagation of partial derivatives (Lee et al., 2015). On the computational side, we highlight that the gradients computed for the leaky memory networks were, in a sense, “inaccurate”, because the update rule was unaware of the recurrent leak connections, and yet learning in the leaky nets was still faster than in feedforward nets, for which the gradients should be more accurate.\nFuture work should test whether these results generalize to larger architectures and more realistic datasets, and should include a broader search of the hyperparameter space. The results may have some generality, because we used simple architectures and made few domain-specific assumptions, but at present the results serve as demonstrations of the basic phenomena. We expect the method to work best for datasets in which important or diagnostic data features persist over time. It will also be interesting to investigate the broader consequences of learning with leaky memory: for example, human internal representations of natural sensory input sequences appear to be smooth in time, in contrast to the representations of most feedforward nets (Hénaff et al., 2019), and training with smooth data and leaky memory could potentially reduce this difference.\nIn sum, we identified simple mechanisms which enabled neural networks (i) to learn quickly from temporally smooth data and (ii) to generate internal representations that separated distinct timescales of the data, without propagating gradients in time." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 EFFECTS OF SMOOTHNESS ON CATEGORIZATION VERSUS RECONSTRUCTION TASKS", "text": "Classification networks (performing categorization) and autoencoder networks (performing reconstruction) were similarly affected by temporal smoothness of training data (Figure A.1). Increased smoothness decreased learning efficiency. Also, “minimum smoothness” sampling exhibited the best performance across both types of networks. We used 3-layer fully connected networks for both classification and reconstruction. The network dimension for classification was (16, 8, 4) and for reconstruction was (16, 4, 16). The learning rate was 0.01 for all conditions in classification, and 0.005 for all conditions in reconstruction." }, { "heading": "A.2 SYNTHETIC DATASET", "text": "We synthesized a dataset with low between-category overlap. The dataset consisted of 4 categories, each with 300 training items. Each item was a 1-by-16 vector. 
Different exemplars of a category were created by adding uniform noise to the template of the category." }, { "heading": "A.3 SMOOTHNESS EFFECTS FOR CLASSIFICATION USING CROSS-ENTROPY LOSS", "text": "Similar to the results obtained with mean-square error (MSE) loss, we found that temporally smooth data slowed category learning when training with cross-entropy (CE) loss. Figure A.3 shows this effect for the MNIST dataset. We used the same neural architecture and hyperparameters as those used with MSE, explained in section 4.1.2." }, { "heading": "A.4 CATEGORIZATION OF SYNTHETIC DATA BY LEAKY MEMORY NETWORKS WITH GATING", "text": "Figure A.4 shows the effects of temporal smoothness in training data for neural network models equipped with leaky memory and gating for the synthetic dataset. Similar to the pattern observed in Figure 2.B, we can see that, in the network with leaky memory, higher levels of smoothness generate better performance. Moreover, adding a gating mechanism enhanced learning, such that all levels of smoothness surpassed the “minimum smoothness” (1 repetition) condition, as was observed in Figure 2.C." }, { "heading": "A.5 COMPARING THE LEAKY MEMORY APPROACH AGAINST MINI-BATCH TRAINING", "text": "[Figure A.5: learning curves comparing mini-batching and leaky memory with reset]" }, { "heading": "A.6 EFFECTS OF SMOOTHNESS IN DATA ON MINI-BATCH TRAINING", "text": "We explored how smooth data affects learning when weights were updated using mini-batch training. We used the MNIST dataset and trained each network with batches of size 16. Network dimension and other hyperparameters were identical to those used in incremental SGD. We found that the level of smoothness in the training did not influence mini-batch training similarly to SGD training. Early in the training, minimum smoothness showed the fastest learning and higher levels of smoothness showed slower learning. However, later in the training, another pattern was observed: the condition with the smoothness level equal to the batch size (e.g. 16 repetitions for a batch of 16) showed the greatest learning efficiency compared to both lower levels of smoothness (e.g. 10 repetitions) and higher levels of smoothness (e.g. 24 repetitions).\nIn connecting the mini-batch data to the results reported in Figure 2, consider that “smoothness” can happen at 2 levels: samples can be similar to one another within a batch (“smooth within a batch”) and the composition of samples can be similar across consecutive mini-batches (“smooth across batches”). It seems that early in the training, the conditions with minimum “within-batch smoothness” have the highest learning speed; this makes sense as the composition of each mini-batch is most reflective of the overall composition of the test data. However, later in the training, the condition with minimum “across-batch smoothness” has the best learning speed. Minimum across-batch smoothness refers to the condition where each batch consists of items from only one category (e.g. 16 repetitions for a batch of 16). Note that when each individual batch contains items all from one category, this also implies that consecutive batches will not contain any items from the same categories, leading to a “minimum smoothness at the batch level”. 
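To make this batch-level notion concrete, here is a small sketch of the minimum across-batch smoothness composition (single-category batches, with consecutive batches drawn from different categories); all names are our assumptions:

```python
import numpy as np

def minimum_across_batch_smoothness(items, labels, batch_size=16, seed=0):
    # Every mini-batch holds items from one category; consecutive batches
    # come from different categories, so across-batch smoothness is minimal.
    rng = np.random.default_rng(seed)
    by_cat = {c: list(rng.permutation(np.flatnonzero(labels == c)))
              for c in np.unique(labels)}
    batches, last_cat = [], None
    while any(len(v) >= batch_size for v in by_cat.values()):
        choices = [c for c, v in by_cat.items()
                   if len(v) >= batch_size and c != last_cat]
        if not choices:
            break
        c = rng.choice(choices)
        batches.append(items[[by_cat[c].pop() for _ in range(batch_size)]])
        last_cat = c
    return batches  # gradients are averaged within each single-category batch
```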
Thus, the advantage for minimum across-batch smoothness may be analogous to what is observed in Figure 2 at the single-item level, but occurring at the batch level.\nThese results are, of course, only preliminary, and future work should elaborate on how the smoothness of the training data interacts with mini-batch training." }, { "heading": "A.7 SYNTHETIC DATA STREAMS IN WHICH LEAKY MEMORY IS DISADVANTAGEOUS", "text": "We hypothesized that the averaging mechanism in leaky memory models increases the proportional signal variance allocated to category-diagnostic features, by emphasizing the features that are shared across multiple members of a category. In order to demonstrate this, we trained our model on a data structure in which the consecutive items did not share any local features.\nWe used a synthesized dataset and organized the data so that consecutive items did not have shared features. For the resetting mechanism, we tried a range of reset intervals (e.g. every 2 items, every 3 items, etc.). In this setting, the leaky memory advantage was eliminated, and leaky memory, with or without reset, was always less effective than learning without memory." }, { "heading": "A.8 EFFECTS OF TEMPORAL SMOOTHNESS ON CATEGORY-LEARNING IN AN LSTM TRAINED WITH BPTT", "text": "We explored how training with smooth data affects category-learning in an LSTM trained with backpropagation through time. The dataset, network dimension, learning rate, and optimization method were identical to the ones used in section 5.1.1.\nWe found that higher smoothness in training data resulted in better classification performance in the LSTM.\nA.9 COMPARING LSTM AND [LEAKY-MEMORY WITH RESET] MODELS\nWe compared the LSTM, the [leaky memory + reset] model, and the no-memory model. We first tested both the LSTM and the [leaky memory + reset] model on the same data structure that they were trained on (e.g. training with 5-repetitions and testing on 5-repetitions). This means that we included memory in the testing process and tested the models in the same order that they were trained on. In this setting, the LSTM was the best model, and the leaky-memory model was second-best, better than the no-memory model." }, { "heading": "A.10 GENERALIZATION OF LSTM AND [LEAKY MEMORY + RESET] MODELS TO DATASETS WITH DIFFERENT TEMPORAL STRUCTURE", "text": "We compared the LSTM and the [leaky memory + reset] model on their generalizability. To do so, we first trained and tested both models on the same sequence of samples (e.g. trained on 5-repetitions and tested on 5-repetitions). Then we tested them on a different sequence of samples from the one they were trained on (e.g. trained on 5-repetitions and tested on 1-repetition). We found that the LSTM outperformed the [leaky memory + reset] model when tested on the same sequence used for training. However, the [leaky memory + reset] model was superior to the LSTM when tested on a data stream with different temporal structure from training. Differences between the LSTM and the [leaky memory + reset] model suggest that the two do not exploit the temporal structure of the data stream in the same way. The LSTM makes use of information about the specific task structure (e.g. there are precisely 5 repetitions in a block) and its performance is reduced when this assumption is violated in generalization data. Conversely, the [leaky memory + reset] model simply uses the temporal smoothness in the training data to learn more useful internal representations."
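For reference, the ordered streams behind these in-domain and out-of-domain comparisons can be constructed roughly as below; the template construction, noise amplitude, and names are our assumptions (cf. Appendix A.2):

```python
import numpy as np

def make_items(n_cat=4, n_per=300, dim=16, noise=0.25, seed=0):
    # Category templates plus uniform noise, as in Appendix A.2; the random
    # templates and noise range here are our assumptions.
    rng = np.random.default_rng(seed)
    templates = rng.uniform(0.0, 1.0, (n_cat, dim))
    items = templates[:, None, :] + rng.uniform(-noise, noise, (n_cat, n_per, dim))
    return items.reshape(-1, dim), np.repeat(np.arange(n_cat), n_per)

def stream_with_reps(items, labels, reps, seed=0):
    # Order samples so each visited category repeats `reps` times in a row
    # (reps=1 is the minimum-smoothness condition used for out-of-domain tests).
    rng = np.random.default_rng(seed)
    pools = {c: list(rng.permutation(np.flatnonzero(labels == c)))
             for c in np.unique(labels)}
    order = []
    while any(pools.values()):
        c = rng.choice([k for k, v in pools.items() if v])
        order += [pools[c].pop() for _ in range(min(reps, len(pools[c])))]
    return items[order]

items, labels = make_items()
train_stream = stream_with_reps(items, labels, reps=5)  # smooth training data
test_stream = stream_with_reps(items, labels, reps=1)   # out-of-domain test
```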
}, { "heading": "A.11 UNSUPERVISED LOCAL RESETTING MECHANISM", "text": "For learning multiscale data we have implemented a “resetting” mechanism in a straightforward and unsupervised way using only local computations, while preserving the same gains in learning efficiency. To do so, we have used the comparison between the difference and the average of the following input items as the resetting criterion, but other sorts of computations are also possible. Our implemented method is consistent with neurophysiological studies that demonstrate a sudden shift in memory representations in the face of a surprise in the input stimuli (DuBrow et al., 2017; Chien & Honey, 2020). The bioplasible event-related resetting: Reset the memory when the difference between the consecutive inputs is larger than their average. For instance, the memory of the hidden node with long memory will be reset based on the amount of change in the slow-changing feature of the input. [ t represents the iteration number during training, It is the current state, It−1 previous state ] |It − It−1| > |(It + It−1)/2|" }, { "heading": "A.12 GENERALIZABILITY OF FINDING IN LEARNING FROM MULTISCALE DATA FOR A", "text": "DIFFERENT LEARNING RATE AND A DIFFERENT DATASET\nTo investigate the generalizability of the findings from section 5.2, we examined the performance of the model for a range of learning rates. Part A in figure A.12 shows the results for learning rate of 0.003, in the same dataset reported in section 5.2.2. To investigate the generalizability of the findings\nfrom section 5.2, we examined the performance of the model for different synthesized datasets. Part B in Figure A.12 shows the results for a different synthesized dataset (learning rate = 0.005)." }, { "heading": "A.13 DOES FASTER CONVERGENCE IN THE NO-MEMORY MODEL FROM 5.2 CONTRADICT THE BENEFIT OF THE MEMORY-RESET MODEL FROM 5.1?", "text": "The findings from section 5.1 and 5.2 are complementary rather than contradictory. Consider that the multiscale data stream in part-2 is composed of three different subcomponents (top, middle and bottom rows of the input). Thus, the multi-scale stream can be understood as a combination of the 1-rep condition from part-1 (feature changes with every sample), the 3-rep condition from part-1( feature changes at a medium speed across samples), and the 5-rep condition from part-1 (the feature changes slowly across samples).In part-1, we showed that the [leaky memory + reset] model performs better when the data has higher smoothness (e.g. in Figure 2.C, category learning is more efficient for 5-repetitions than for 1-repetition). If the same pattern holds in the context of multi-scale autoencoders, for models with memory and reset, we should see that the slow-changing\nfeature (5-repetitions) is more quickly learned than the fast-changing feature (1-repetition). To test this, we measured the per feature error (e.g. test error for reconstructing a specific feature) and we compared the fast and slow-changing features. 
Consistent with the pattern observed in part-1 of the paper, we saw that both the [leaky memory + reset] model and the [multi-scale leaky memory + reset] model exhibited the predicted pattern (Figure A.13): the reconstruction error for the slow-changing feature was lower than the reconstruction error for the fast-changing feature." }, { "heading": "A.14 WHY DOES THE [NO-MEMORY] MODEL OUTPERFORM THE [MULTISCALE MEMORY + RESET] MODEL IN TEST-ERROR (IN SECTION 5.2)", "text": "The multi-scale model faces a challenge that was not faced by the single-scale models that were tested in the category learning components of this study. Because all nodes in the hidden layer of the multi-scale model project to all nodes in the reconstruction layer, the slowly changing hidden states of the model (i.e. the nodes with longer memory) are contributing to the reconstruction of quickly-changing features in the data stream. There is a (small) cost in the overall test error, because slowly-changing internal states are ineffective for reconstructing quickly-changing features. We emphasize that the quantity of noise introduced is small, and that it is accompanied by a significant benefit in learning more interpretable, un-mixed representations of a multi-scale data stream. To demonstrate that these slow units are indeed the source of poorer learning, we tested the hypothesis that (i) a higher correlation between hidden units with memory (units with short or long memory) and the fast-changing part of the output would result in worse performance in reconstructing the fast-changing feature; whereas (ii) a higher correlation between the no-memory hidden unit and the fast-changing part of the output would not result in worse performance in reconstructing the fast-changing feature. These hypotheses were confirmed in our analyses (Figure A.14)." }, { "heading": "A.15 DYNAMICS OF HIDDEN UNITS IN DIFFERENT AUTOENCODER MODELS THAT ARE LEARNING TO RECONSTRUCT MULTI-TIMESCALE INPUTS", "text": "" }, { "heading": "A.16 OPEN-SOURCE", "text": "The source code will be shared on the authors’ GitHub after the reviewing process." } ]
2020
null
SP:d460957c05007cafe286b0590ffed111c806dd48
[ "The authors study the problem of global non-convex optimization with access only to function valuations. Specifically, they propose an approach to automatically control the hyper-parameters of Directional Gaussian Smoothing (DGS) a recently proposed solution for the problem. Their proposed solution trade-offs some additional function evaluations per parameter update to perform a line search for the optimal learning rate. Then the tuned learning rate informs the update for the Gaussian smoothing parameter. The proposed automated tuning approach is supported by a large set of experiments that include standard global optimization benchmarks as well as practical applications." ]
The local gradient points to the direction of the steepest slope in an infinitesimal neighborhood. An optimizer guided by the local gradient is often trapped in local optima when the loss landscape is multi-modal. A directional Gaussian smoothing (DGS) approach was recently proposed in (Zhang et al., 2020) and used to define a truly nonlocal gradient, referred to as the DGS gradient, for high-dimensional black-box optimization. Promising results show that replacing the traditional local gradient with the DGS gradient can significantly improve the performance of gradient-based methods in optimizing highly multi-modal loss functions. However, the optimal performance of the DGS gradient may rely on fine tuning of two important hyper-parameters, i.e., the smoothing radius and the learning rate. In this paper, we present a simple, yet ingenious and efficient adaptive approach for optimization with the DGS gradient, which removes the need for hyper-parameter fine tuning. Since the DGS gradient generally points to a good search direction, we perform a line search along the DGS direction to determine the step size at each iteration. The learned step size in turn informs us of the scale of the function landscape in the surrounding area, based on which we adjust the smoothing radius accordingly for the next iteration. We present experimental results on high-dimensional benchmark functions, an airfoil design problem, and a game content generation problem. The AdaDGS method has shown superior performance over several state-of-the-art black-box optimization methods.
[ { "affiliations": [], "name": "SMOOTHING GRADIENT" } ]
[ { "authors": [ "Youhei Akimoto", "Nikolaus Hansen" ], "title": "Projection-based restricted covariance matrix adaptation for high dimension", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference 2016,", "year": 2016 }, { "authors": [ "Larry Armijo" ], "title": "Minimization of functions having lipschitz continuous first partial derivatives", "venue": "Pacific J. Math.,", "year": 1966 }, { "authors": [ "A. Auger", "N. Hansen" ], "title": "A restart cma evolution strategy with increasing population size", "venue": "IEEE Congress on Evolutionary Computation,", "year": 2005 }, { "authors": [ "Krishnakumar Balasubramanian", "Saeed Ghadimi" ], "title": "Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "El Houcine Bergou", "Eduard Gorbunov", "Peter Richtrik", "Peter Richtárik" ], "title": "Stochastic three points method for unconstrained smooth minimization", "venue": null, "year": 1902 }, { "authors": [ "Sbastien Bubeck", "Nicol Cesa-Bianchi" ], "title": "Regret analysis of stochastic and nonstochastic multiarmed bandit problems", "venue": "Foundations and Trends in Machine Learning,", "year": 2012 }, { "authors": [ "Marco Ceze", "Marcelo Hayashi", "Ernani Volpe" ], "title": "A Study of the CST Parameterization Characteristics. doi: 10.2514/6.2009-3767", "venue": null, "year": 2009 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Xiangyi Chen", "Sijia Liu", "Kaidi Xu", "Xingguo Li", "Xue Lian Lin", "Mingyi Hong", "David E. Cox" ], "title": "Zo-adamm: Zeroth-order adaptive momentum method for black-box optimization", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Krzysztof Choromanski", "Mark Rowland", "Vikas Sindhwani", "Richard E Turner", "Adrian Weller" ], "title": "Structured evolution with compact architectures for scalable policy optimization", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Krzysztof M Choromanski", "Aldo Pacchiano", "Jack Parker-Holder", "Yunhao Tang", "Vikas Sindhwani" ], "title": "From complexity to simplicity: Adaptive es-active subspaces for blackbox optimization", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "A. Dereventsov", "C.G. Webster", "J.D. Daws Jr." ], "title": "An adaptive stochastic gradient-free approach for high-dimensional blackbox optimization", "venue": null, "year": 2006 }, { "authors": [ "Mark Drela" ], "title": "Xfoil: An analysis and design system for low reynolds number airfoils", "venue": "Low Reynolds Number Aerodynamics,", "year": 1989 }, { "authors": [ "John C. Duchi", "Michael I. Jordan", "Martin J. 
Wainwright", "Andre Wibisono" ], "title": "Optimal rates for zero-order convex optimization: The power of two function evaluations", "venue": "IEEE Transactions on Information Theory,", "year": 2015 }, { "authors": [ "Mohammed El-Abd" ], "title": "Black-box optimization benchmarking for noiseless function testbed using artificial bee colony algorithm", "venue": "In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation,", "year": 2010 }, { "authors": [ "David Eriksson", "Michael Pearce", "Jacob Gardner", "Ryan D Turner", "Matthias Poloczek" ], "title": "Scalable global optimization via local bayesian optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Abraham D. Flaxman", "Adam Tauman Kalai", "H. Brendan McMahan" ], "title": "Online convex optimization in the bandit setting: gradient descent without a gradient", "venue": "Proceedings of the 16th Annual ACM-SIAM symposium on Discrete Algorithms,", "year": 2005 }, { "authors": [ "Matthew C. Fontaine", "Ruilin Liu", "Ahmed Khalifa", "Julian Togelius", "Amy K. Hoover", "Stefanos Nikolaidis" ], "title": "Illuminating mario scenes in the latent space of a generative adversarial network", "venue": null, "year": 2007 }, { "authors": [ "Daniel Golovin", "John Karro", "Greg Kochanski", "Chan-Soo Lee", "Xingyou Song", "Qiuyi Zhang" ], "title": "Gradientless descent: High-dimensional zeroth-order optimization", "venue": "ArXiv, abs/1911.06317,", "year": 2020 }, { "authors": [ "I. Goodfellow", "J. Pouget-Abadie", "M. Mirza", "B. Xu", "D. Warde-Farley", "S. Ozair", "A. Courville", "Y. Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Nikolaus Hansen" ], "title": "CMA-ES with Two-Point Step-Size Adaptation", "venue": "Research Report RR-6527,", "year": 2008 }, { "authors": [ "Kevin G. Jamieson", "Robert D. Nowak", "Benjamin Recht" ], "title": "Query complexity of derivative-free optimization", "venue": "In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2,", "year": 2012 }, { "authors": [ "Momin Jamil", "Xin-She Yang" ], "title": "A literature survey of benchmark functions for global optimisation problems", "venue": "IJMNO, 4:150–194,", "year": 2013 }, { "authors": [ "Brenda Kulfan" ], "title": "A Universal Parametric Geometry Representation Method - ”CST", "venue": null, "year": 2008 }, { "authors": [ "Jeffrey Larson", "Matt Menickelly", "Stefan M. Wild" ], "title": "Derivative-free optimization methods", "venue": "Acta Numerica,", "year": 2019 }, { "authors": [ "Sijia Liu", "Jie Chen", "Pin-Yu Chen", "Alfred O. Hero" ], "title": "Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "I. Loshchilov", "T. Glasmachers", "H. 
Beyer" ], "title": "Large scale black-box optimization by limitedmemory matrix adaptation", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2019 }, { "authors": [ "Alvaro Maggiar", "Andreas Wachter", "Irina S Dolinskaya", "Jeremy Staum" ], "title": "A derivative-free trustregion algorithm for the optimization of functions smoothed via gaussian convolution using adaptive multiple importance sampling", "venue": "SIAM Journal on Optimization,", "year": 2018 }, { "authors": [ "Niru Maheswaranathan", "Luke Metz", "George Tucker", "Dami Choi", "Jascha Sohl-Dickstein" ], "title": "Guided evolutionary strategies: Augmenting random search with surrogate gradients", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search of static linear policies is competitive for reinforcement learning", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Florian Meier", "Asier Mujika", "Marcelo Matheus Gauy", "Angelika Steger" ], "title": "Improving gradient estimation in evolutionary strategies with past descent directions. Optimization Foundations for Reinforcement Learning Workshop at NeurIPS", "venue": null, "year": 2019 }, { "authors": [ "Yurii Nesterov", "Vladimir Spokoiny" ], "title": "Random gradient-free minimization of convex functions", "venue": "Foundations of Computational Mathematics,", "year": 2017 }, { "authors": [ "J. Nocedal", "S. Wright" ], "title": "Numerical Optimization", "venue": null, "year": 2006 }, { "authors": [ "E. Real", "S. Moore", "A. Selle", "S. Saxena", "Y.L. Suematsu", "J. Tan", "Q.V. Le", "A. Kurakin" ], "title": "Largescale evolution of image classifiers", "venue": "International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Luis Miguel Rios", "Nikolaos V. Sahinidis" ], "title": "Derivative-free optimization: a review of algorithms and comparison of software implementations", "venue": "J Glob Optim,", "year": 2009 }, { "authors": [ "Mark Rowland", "Krzysztof Choromanski", "François Chalus", "Aldo Pacchiano", "Tamás Sarlós", "Richard E. Turner", "Adrian Weller" ], "title": "Geometrically coupled monte carlo sampling", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 1952 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Ozan Sener", "Vladlen Koltun" ], "title": "Learning to guide random search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mor Sinay", "Elad Sarafian", "Yoram Louzoun", "Noa Agmon", "Sarit Kraus" ], "title": "Explicit gradient learning", "venue": null, "year": 2021 }, { "authors": [ "P. Wolfe" ], "title": "Convergence conditions for ascent methods", "venue": "SIAM Review,", "year": 1969 }, { "authors": [ "ES", "e.g", "(Akimoto", "2016 Hansen", "Loshchilov" ], "title": "2019), underperform ASEBO in optimizing the benchmark functions. We use the code published at https://github.com/jparkerholder/ASEBO by the authors of the ASEBO method with default hyper-parameters provided in the code", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "We consider the problem of black-box optimization, where we search for the optima of a loss function F : Rd → R given access to only its function queries. This type of optimization finds applications in many machine learning areas where the loss function’s gradient is inaccessible, or unuseful, for example, in optimizing neural network architecture (Real et al., 2017), reinforcement learning (Salimans et al., 2017), design of adversarial attacks (Chen et al., 2017), and searching the latent space of a generative model (Sinay et al., 2020).\nThe local gradient, i.e.,∇F (x), is the most commonly used quantities to guide optimization. When ∇F (x) is inaccessible, we usually reformulate∇F (x) as a functional of F (x). One class of methods for reformulation is Gaussian smoothing (GS) (Salimans et al., 2017; Liu et al., 2017; Mania et al., 2018). GS first smooths the loss landscape with d-dimensional Gaussian convolution and represents∇F (x) by the gradient of the smoothed function. Monte Carlo (MC) sampling is used to estimate the Gaussian convolution. It is known that the local gradient∇F (x) points to the direction of the steepest slope in an infinitesimal neighborhood around the current state x. An optimizer guided by the local gradient is often trapped in local optima when the loss landscape is non-convex or multimodal. Despite the improvements (Maggiar et al., 2018; Choromanski et al., 2018; 2019; Sener & Koltun, 2020; Maheswaranathan et al., 2019; Meier et al., 2019), GS did not address the challenge of applying the local gradient to global optimization, especially in high-dimensional spaces.\nThe nonlocal Directional Gaussian Smoothing (DGS) gradient, originally developed in (Zhang et al., 2020), shows strong potential to alleviate such challenge. The key idea of the DGS gradient is to conduct 1D nonlocal explorations along d orthogonal directions in Rd, each of which defines a non-\nlocal directional derivative as a 1D integral. Then, the d directional derivatives are assembled to form the DGS gradient. Compared with the traditional GS approach, the DGS gradient can use large smoothing radius to achieve long-range exploration along the orthogonal directions This enables the DGS gradient to provide better search directions than the local gradient, making it particularly suitable for optimizing multi-modal functions. However, the optimal performance of the DGS gradient may rely on fine tuning of two important hyper-parameters, i.e., the smoothing radius and the learning rate, which limits its applicability in practice.\nIn this work, we propose AdaDGS, an adaptive optimization method based on the DGS gradient. Instead of designing a schedule for updating the learning rate and the smoothing radius as in (Zhang et al., 2020), we learn their update rules automatically from a backtracking line search (Nocedal & Wright, 2006). Our algorithm is based on a simple observation: while the DGS gradient generally points to a good search direction, the best candidate solution along that direction may not locate in nearby neighborhood. More importantly, relying on a single candidate in the search direction based on a prescribed learning rate is simply too susceptible to highly fluctuating landscapes. Therefore, we allow the optimizer to perform more thorough search along the DGS gradient and let the line search determine the step size for the best improvement possible. 
Our experiments show that the introduction of the line search into the DGS setting requires a small but worthwhile extra number of function queries per iteration. After each line search, we update the smoothing radius according to the learned step size, because this quantity now represents an estimate of the distance to an important mode of the loss function, which we retain in the smoothing process. The performance of AdaDGS and its comparison to other methods are demonstrated herein through three medium- and high-dimensional test problems, in particular, a high-dimensional benchmark test suite, an airfoil design problem, and a level generation problem for Super Mario Bros.\nRelated works. The literature on black-box optimization is extensive. We only review methods closely related to this work (see (Rios & Sahinidis, 2009; Larson et al., 2019) for overviews).\nRandom search. These methods randomly generate the search direction and either estimate the directional derivative using the GS formula or perform a direct search for the next candidates. Examples are two-point approaches (Flaxman et al., 2005; Nesterov & Spokoiny, 2017; Duchi et al., 2015; Bubeck & Cesa-Bianchi, 2012), three-point approaches (Bergou et al., 2019), coordinate-descent algorithms (Jamieson et al., 2012), and binary search with adaptive radius (Golovin et al., 2020).\nZeroth order methods based on local gradient surrogate. This family mimics first-order methods but approximates the gradient via function queries (Liu et al., 2017; Chen et al., 2019; Balasubramanian & Ghadimi, 2018). An exemplary type of these methods is the particular class of Evolution Strategy (ES) based on the traditional GS, first developed by (Salimans et al., 2017). MC is overwhelmingly used for gradient approximation, and strategies for enhancing MC estimators are an active area of research, see, e.g., (Maggiar et al., 2018; Rowland et al., 2018; Maheswaranathan et al., 2019; Meier et al., 2019; Sener & Koltun, 2020). Nevertheless, these efforts focus only on the local regime, rather than the nonlocal regime considered in this work.\nOrthogonal exploration. It has been investigated in black-box optimization, e.g., finite difference explores orthogonal directions. (Choromanski et al., 2018) introduced orthogonal MC sampling into GS for approximating the local gradient; (Zhang et al., 2020) introduced orthogonal exploration and the Gauss-Hermite quadrature to define and approximate a nonlocal gradient.\nAdaptive methods. Another adaptive method based on the DGS gradient can be found in (Dereventsov et al., 2020). Our work is dramatically different in that our update rules for the learning rate and smoothing radius are drawn from line search instead of from Lipschitz constant estimation. The long-range line search can better exploit the DGS direction and thus significantly reduce the number of function evaluations and iterations. Line search is a classical method for selecting the learning rate (Nocedal & Wright, 2006) and has also been used in the adaptation of some nonlocal search techniques, see, e.g., (Hansen, 2008). In this work, we apply backtracking line search along the DGS direction. We do not employ popular termination conditions such as the Armijo (Armijo, 1966) and Wolfe (Wolfe, 1969) conditions and always conduct the full line search, as this requires only a small extra cost compared to high-dimensional searching."
}, { "heading": "2 THE DIRECTIONAL GAUSSIAN SMOOTHING (DGS) GRADIENT", "text": "We are concerned with solving the following optimization problem\nmin x∈Rd F (x),\nwhere x = (x1, . . . , xd) ∈ Rd consists of d parameters, and F : Rd → R is a d-dimensional loss function. The traditional GS method defines the smoothed loss function as Fσ(x) = Eu∼N (0,Id) [F (x+ σu)] , whereN (0, Id) is the d-dimensional standard Gaussian distribution, and σ > 0 is the smoothing radius. When the local gradient ∇F (x) is unavailable, the traditional GS uses ∇Fσ(x) = 1σEu∼N (0,Id) [F (x+ σu)u] (Flaxman et al., 2005) to approximate ∇F by exploiting limσ→0∇Fσ(x) = ∇F (x) (i.e., setting σ small). Hence, the traditional GS is unsuitable for defining a nonlocal gradient where a large smoothing radius σ is needed.\nIn (Zhang et al., 2020), the DGS gradient was proposed to circumvent this hurdle. The key idea was to apply the 1D Gaussian smoothing along d orthogonal directions, so that only 1D numerical integration is needed. In particular, define a 1D cross section of F (x)G(y |x, ξ) = F (x+y ξ), y ∈ R, where x is the current state of F and ξ is a unit vector in Rd. Then, the Gaussian smoothing of F (x) along ξ is represented as Gσ(y |x, ξ) := (1/ √ 2π) ∫ RG(y+ σv |x, ξ) exp(−v\n2/2)dv. The derivative of the smoothed F (x) along ξ is a 1D expectation\nD [Gσ(0 |x, ξ)] = 1\nσ Ev∼N (0,1) [G(σv |x, ξ) v] ,\nwhere D [·] denotes the differential operator. Intuitively, the DGS gradient is formed by assembling these directional derivatives on d orthogonal directions. Let Ξ := (ξ1, . . . , ξd) be an orthonormal system, the DGS gradient is defined as\n∇σ,Ξ[F ](x) := [ D [Gσ(0 |x, ξ1)], · · · ,D [Gσ(0 |x, ξd)] ] Ξ,\nwhere Ξ and σ can be adjusted during an optimization process.\nSince each component of ∇σ,Ξ[F ](x) only involves a 1D integral, (Zhang et al., 2020) proposed to use the Gauss-Hermite (GH) quadrature rule (Abramowitz & Stegun, 1972), where each component D [Gσ(0 |x, ξ) is approximated as\nDM [Gσ(0 |x, ξ)] = 1√ πσ M∑ m=1 wm F (x+ √ 2σvmξ) √ 2vm. (1)\nHere {vm}Mm=1 are the roots of the M -th order Hermite polynomial and {wm}Mm=1 are quadrature weights, the values of which can be found in (Abramowitz & Stegun, 1972). It was theoretically proved in (Abramowitz & Stegun, 1972) that the error of the GH estimator is ∼M !/(2M (2M)!) that is much smaller than the MC’s error ∼ 1/ √ M . Applying the GH quadrature to each component of ∇σ,Ξ[F ](x), the following estimator is defined for the DGS gradient:\n∇Mσ,Ξ[F ](x) = [ DM [Gσ(0 |x, ξ1)], · · · ,DM [Gσ(0 |x, ξd)] ] Ξ. (2)\nThen, the DGS gradient is readily integrated to first-order schemes to replace the local gradient." }, { "heading": "3 THE ADADGS ALGORITHM", "text": "In this section, we describe an adaptive procedure to remove manually designing and tuning the update schedules for the learning rate and the smoothing radius of the DGS-based gradient descent (Zhang et al., 2020). Our intuitions are: (i) for multimodal landscapes, choosing one candidate solution along the search direction according to a single learning rate may make insufficient progress, and (ii) the optimal step size, if known, is a good indicator for the width of optima that dominates the surrounding area and could be used to inform smoothing radius update. Following this rationale, AdaDGS first uses backtracking line search to estimate the optimal learning rate, and then uses the acquired step size to update the smoothing radius. 
AdaDGS is straightforward to implement, and we find that this strategy overcomes the sensitivity to hyper-parameter selection that affects the original DGS method. As we shall see, the most important hyper-parameters in AdaDGS control how aggressively we conduct the line search. Our key advantage in high-dimensional optimization is that with a modest budget for line search (compared to that for computing the DGS gradient), we can still get a very generous number of function evaluations along the DGS direction and approximate the optimal learning rate. We suggest default values for these hyper-parameters, which proved to work well throughout our tests. However, one can adjust them for a more aggressive line search. For example, even doubling or tripling the number of points visited along the DGS direction will increase the total number of function evaluations by only a small fraction (5% and 10%, respectively).\nRecall the gradient descent scheme with DGS\nxt+1 = xt − λt ∇Mσ,Ξ[F](xt),\nwhere xt and xt+1 are the candidate solutions at iterations t and t + 1, and λt is the learning rate. The details of the AdaDGS algorithm are described below.\nLearning rate update via line search. At iteration t, we perform the line search along ∇Mσ,Ξ[F](xt) within the interval [xt − Lmin ∇Mσ,Ξ[F](xt)/‖∇Mσ,Ξ[F](xt)‖, xt − Lmax ∇Mσ,Ξ[F](xt)/‖∇Mσ,Ξ[F](xt)‖], where Lmax and Lmin are the maximum and minimum exploration distances, respectively. We visit S points in the interval, equally spaced on a log scale, and choose the best candidate. The corresponding contraction factor is ρ = min{0.9, (Lmin/Lmax)^{1/(S−1)}}. More rigorously, the selected learning rate is\nλt := Lmax ρ^J / ‖∇Mσ,Ξ[F](xt)‖, where J = argmin_{j ∈ {0,...,S−1}} F(xt − Lmax ρ^j ∇Mσ,Ξ[F](xt)/‖∇Mσ,Ξ[F](xt)‖). (3)\nThe default value of Lmax is the length of the diagonal of the search domain. This value could be refined by running some test iterations, but our algorithm is not sensitive to such refining. The default value of Lmin is Lmin = 0.005 Lmax. The default value for S is S = max{12, 0.05Md}, where Md is the number of samples required by the DGS gradient.\nAlgorithm 1: The AdaDGS algorithm\n1: Hyper-parameters: M: # GH quadrature points; Lmax: the maximum exploration distance; Lmin: the minimum exploration distance; S: # function evaluations per line search; σ0: initial smoothing radius; γ: tolerance for triggering random exploration\n2: Input: The initial state x0\n3: Output: The final state xT\n4: Set Ξ = Id (or a random orthonormal matrix)\n5: for t = 0, . . . , T − 1 do\n6: Evaluate {G(√2 σt vm | xt, ξi)} for m = 1, . . . , M and i = 1, . . . , d\n7: for i = 1, . . . , d do\n8: Compute DM[Gσt(0 | xt, ξi)] in Eq. (1)\n9: end for\n10: Assemble ∇Mσ,Ξ[F](xt) in Eq. (2)\n11: Update λt according to Eq. (3)\n12: Set xt+1 = xt − λt ∇Mσ,Ξ[F](xt)\n13: Set σt+1 = (σt + λt)/2 according to Eq. (4)\n14: if |F(xt) − F(xt−1)|/|F(xt−1)| < γ then\n15: Generate a random rotation Ξ\n16: Set σt+1 = σ0\n17: end if\n18: end for\nThis means that when d is high, we spend roughly 5% of the function-evaluation budget on the line search. Note that when S is large, ρ = 0.9 and the actual minimum exploration distance is Lmax 0.9^{S−1} < Lmin. As long as the DGS gradient points to a good search direction, the line search along a 1D ray is much more cost-effective than searching in d-dimensional spaces.\nSmoothing radius update. The smoothing radius σt is adjusted based on the learning rate learned from the line search. 
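Putting the pieces together, one iteration of Algorithm 1 (without the random-exploration branch) can be sketched as follows, reusing the dgs_gradient sketch above; Eq. (4), stated next, supplies the radius update, and all remaining names are ours:

```python
import numpy as np

def adadgs_step(F, x, Xi, sigma, L_max, L_min=None, S=12):
    # One AdaDGS iteration: DGS gradient, log-spaced line search (Eq. (3)),
    # then the smoothing-radius update of Eq. (4).
    L_min = 0.005 * L_max if L_min is None else L_min
    g = dgs_gradient(F, x, Xi, sigma)
    gnorm = np.linalg.norm(g)
    u = g / gnorm                                   # unit DGS direction
    rho = min(0.9, (L_min / L_max) ** (1.0 / (S - 1)))
    steps = L_max * rho ** np.arange(S)             # log-spaced trial distances
    trials = [x - s * u for s in steps]
    j = int(np.argmin([F(p) for p in trials]))
    lam = steps[j] / gnorm                          # learning rate of Eq. (3)
    return trials[j], 0.5 * (sigma + lam)           # x_{t+1}, sigma_{t+1} (Eq. (4))
```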
The initial radius σ0 is set to be on the same scale as the width of the search domain. At iteration t, we set σt to be the mean of the smoothing radius and the learning rate from iteration t − 1, i.e.,\nσt = (σt−1 + λt−1)/2, (4)\nbecause both quantities indicate the scale of the landscape of the loss function.\nThe number of Gauss-Hermite points. The AdaDGS method is not sensitive to the number of GH points. We do not observe significant benefit from using more than 5 GH quadrature points per direction. In some tests (Section 4.3), 3 GH quadrature points per direction are sufficient.\nRandom exploration. We incorporate the following strategies to support random exploration and help the AdaDGS algorithm escape undesirable scenarios. We use the condition |F(xt) − F(xt−1)|/|F(xt−1)| < γ to trigger the random exploration, where the default value for γ is 0.001. Users can optionally trigger these strategies when the method fails to make progress, e.g., insufficient decrease or a too-small step size.\n• Reset the smoothing radius. Since σ is updated following Eq. (4), σ becomes small with the learning rate. Thus, we occasionally reset σ to its initial value. We set a minimum interval of 10 iterations between two consecutive resets. In many of our tests, the function values reached by AdaDGS within the first 10 iterations (before the radius reset is triggered) are already lower than those its competitors reach at the end.\n• Random generation of Ξ. Keeping the directional smoothing along a fixed set of coordinates Ξ may eventually reduce the exploration capability. To alleviate this issue, we occasionally change the nonlocal exploration directions by randomly generating an orthogonal matrix Ξ. An important difference between our approach and the random perturbation strategy in (Zhang et al., 2020) is that the approach in (Zhang et al., 2020) only adds a small perturbation to the identity matrix, whereas we generate a fully random rotation matrix." }, { "heading": "4 EXPERIMENTS", "text": "We present the experimental results using three sets of problems. All experiments were implemented in Python 3.6 and conducted on a set of cloud servers with Intel Xeon E5 CPUs." }, { "heading": "4.1 TESTS ON HIGH-DIMENSIONAL BENCHMARK FUNCTIONS", "text": "We compare the AdaDGS method with the following baselines: (a) DGS: the baseline DGS with the polynomial-decay update schedule developed in (Zhang et al., 2020); (b) ES-Bpop: the standard OpenAI evolution strategy in (Salimans et al., 2017) with a big population (i.e., using the same number of samples as AdaDGS); (c) ASEBO: Adaptive ES-Active Subspaces for Blackbox Optimization (Choromanski et al., 2019) with a population of size 4 + 3 log(d); (d) IPop-CMA: the restart covariance matrix adaptation evolution strategy with increased population size (Auger & Hansen, 2005); (e) Nesterov: the random search method in (Nesterov & Spokoiny, 2017); (f) FD: the classical central difference scheme; and (g) TuRBO: trust region Bayesian optimization (Eriksson et al., 2019). The information on the codes used for the baselines is provided in the Appendix.\nWe test the performance of the AdaDGS method on 12 high-dimensional benchmark functions (El-Abd, 2010; Jamil & Yang, 2013), including F1(x): Ackley, F2(x): Alpine, F3(x): Ellipsoidal, F4(x): Quintic, F5(x): Rastrigin, F6(x): Rosenbrock, F7(x): Salomon, F8(x): Schaffer’s F7, F9(x): Sharp-Ridge, F10(x): Sphere, F11(x): Trigonometric, and F12(x): Wavy. 
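As one concrete example, F1 (Ackley; see Appendix A for its definition) together with the rotation/translation described in the next paragraph could be set up as follows; this is our sketch, not the authors' code:

```python
import numpy as np
from scipy.stats import ortho_group

def make_ackley(d, a=20.0, b=0.2, c=2 * np.pi, seed=0):
    # Ackley (Appendix A) composed with z = R(x + x_opt - x_loc): a random
    # rotation plus a random shift of the optimum into 0.8 * the domain.
    rng = np.random.default_rng(seed)
    R = ortho_group.rvs(d, random_state=seed)
    x_opt = np.zeros(d)
    x_loc = rng.uniform(-0.8 * 32.768, 0.8 * 32.768, d)
    def F(x):
        z = R @ (x + x_opt - x_loc)
        return (-a * np.exp(-b * np.sqrt(np.mean(z ** 2)))
                - np.exp(np.mean(np.cos(c * z))) + a + np.e)
    return F  # minimum value 0 is attained at x = x_loc
```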
To make the test functions more general, we applied the following linear transformation to x,\nz = R(x + xopt − xloc),\nwhich first moves the optimal state xopt to a new random location xloc and then applies a random rotation R to make the function non-separable. We substitute z into the standard definitions of the benchmark functions to formulate our test problems. Details about those functions are provided in the Appendix.\nThe hyper-parameters of the AdaDGS method are fixed across the test functions. Specifically, Lmax is the length of the diagonal of the domain, S = 200 (= 0.05Md), σ0 ∼ 5 ∗ width, and M = 5. Since S is large, the minimum exploration distance is already small, and we do not need to be concerned with Lmin. We choose the contraction factor to be 0.9. We turned off the random perturbation by setting γ = 0. For each test function, we performed 20 trials, each of which has a random initial state, a random rotation matrix R and a random location xloc.\nThe comparison between AdaDGS and the baselines in the 1000D case is shown in Figure 1. Additional results are shown in Appendix C, where the loss decay is plotted in log scale. AdaDGS has the best performance overall. In particular, the improvement of AdaDGS over the baseline DGS is significant, demonstrating the effectiveness of our adaptive mechanism. AdaDGS shows substantially superior performance in optimizing the highly multimodal functions F1, F2, F4, F5, F7, F8, F11, which is significant in global optimization. For the ill-conditioned functions F3, F6 and F9, AdaDGS can at least match the performance of the best baseline method, e.g., IPop-CMA. The test with the sphere function F10 shows that AdaDGS converges within 2 steps, confirming the quality of the DGS search direction. For F12, all the methods fail to find the global minimum because it is highly multi-modal and there is no global structure to exploit, which makes it extremely challenging for all global optimization methods. We also tested AdaDGS in 2000D, 4000D and 6000D to illustrate its scalability with the dimension. The hyper-parameters are set the same as in the 1000D cases. The results are shown in Figure 2. The AdaDGS method still achieves promising performance, even though the number of total function evaluations increases with the dimension." }, { "heading": "4.2 TESTS ON AIRFOIL SHAPE OPTIMIZATION", "text": "We applied the AdaDGS method to design a 2D airfoil. We used a computational fluid dynamics (CFD) code, XFoil v.6.91 (Drela, 1989), and its Python interface v.1.1.1 (available at https://github.com/daniel-de-vries/xfoil-python). XFoil can conduct CFD simulations of an airfoil given a 2D contour design. The first step is to choose an appropriate parameterization for the upper and lower parts of the airfoil. In this work, we used the state-of-the-art Class/Shape function Transformation (CST) (Kulfan, 2008). Specifically, the upper/lower airfoil geometry is represented as\nz(x) = √x (1 − x) ∑_{i=0}^{N} [Ai C(N, i) x^i (1 − x)^{N−i}] + x ∆zte,\nwhere C(N, i) denotes the binomial coefficient, x ∈ [0, 1], and N is the polynomial order. The polynomial coefficients Ai and the position of the airfoil tail ∆zte are the parameters to be optimized. We used two different CST polynomials to parameterize the upper and lower parts of the airfoil, where the polynomial degree for each polynomial is set to 6, following the suggestion in (Ceze et al., 2009). Then, the dimension of the optimization problem is d = 15. The initial search domain is set to [−1, 1]d. We simulated all models with Reynolds number 12 × 10^6, Mach 0.4, and angles of attack from 5 to 8 degrees. The initial condition is the standard NACA 0012 airfoil. The hyper-parameters of the AdaDGS method are: Lmax is the length of the diagonal of the domain, Lmin = 0.005Lmax, S = 12, σ0 = the search domain width, M = 5, and γ = 0.001. The gain function is set to Lift − Drag and the goal is to maximize the gain. The results are shown in Table 1. With 1500 simulations, all the methods reach a shape with Lift > Drag, which means the airfoils can fly under the experimental scenario. Our AdaDGS method produced the best design, i.e., the biggest
We simulated all models with Reynolds number 12e6, speed 0.4 mach and the angles of attack from 5 to 8 degrees. The initial condition is the standard NACA 0012 AIR-\nFOIL. The hyper-parameters of the AdaDGS method are Lmax is the length of the diagonal of the domain, Lmin = 0.005Lmax, S = 12, σ0 = search domain width, M = 5 and γ = 0.001. The gain function is set to Lift-Drag and the goal is to maximize the gain. The results are shown in Table 1. With 1500 simulations, all the methods reach a shape with Lift>Drag, which means the airfoils can fly under the experimental scenario. Our AdaDGS method produced the best design, i.e., biggest\nLift-Drag. The other baselines achieved lower Drag than the AdaDGS but did not achieve very high Lift force." }, { "heading": "4.3 TESTS ON GAME CONTENT GENERATION FOR SUPER MARIO BROS", "text": "We apply the AdaDGS method to generate a variety of Mario game levels with desired attributes. These levels are produced by generative adversarial networks (GAN) (Goodfellow et al., 2014), which map from latent parameters to high-quality images. To generate a game level with desired characteristic, one needs to search in the latent space of the GAN for parameters that optimize a prescribed stylistic or performance metric.\nIn this paper, we evaluate the performance of AdaDGS in generating game levels for two different types of objectives: (i) levels that have the maximum number of certain tiles. We consider sky tiles (i.e., game objects that lie in the above half of the image) (MaxSkyTiles) and enemy tiles (MaxEnemies); (ii) playable levels that require the AI agent perform maximum certain action. We consider jumping (MaxJumps) and killing an enemy (MaxKills). These characteristics are often considered for evaluating latent space search and optimization methods (Volz et al., 2018; 2019; Fontaine et al., 2020). Specifically for type (ii) objective, we use the AI agent developed by Robin Baumgarten2 to evaluate the playability of the level and the objective functions. We set unplayable penalty to be 100 and add that to the objective function when the generated level is unplayable. The game levels are generated from a pre-trained DCGAN by (Fontaine et al., 2020), whose inputs are vectors in [−1, 1]32. Details of the architecture can also be found in (Volz et al., 2018). The hyper-parameters of the AdaDGS method are set at default values for the four tests. Specifically, Lmax is the length of the diagonal of the domain, Lmin = 0.029 (= 0.005Lmax), S = 12, σ0 = search domain width, M = 3 and γ = 0.001. We start with Ξ being a random orthonormal matrix generated by scipy.stats.ortho group.rvs. As demonstrated in (Volz et al., 2018), the IPop-CMA is by far the mostly used and superior method for this optimization task, so we only compared the performance of our method with IPop-CMA. We used the pycma v.3.0.3 with the population size be 17 and the radius be 0.5, as described in (Fontaine et al., 2020). We apply tanh function to the latent variables before sending it to the generator model, because this model was trained on [−1, 1]32. 50 trials with random initialization are run for each test. The comparison between AdaDGS and IPop-CMA are shown in Figure 3. AdaDGS outperforms IPop-CMA in three out of four test functions and is close in the other. We find that Ipop-CMA can also find the best optima in many trials, but it is easier to get stuck at undesirable modes, e.g, local minima. Taking the MaxSkyTiles case as an example. 
There are 4 types of patterns, shown in Figure 4, generated by AdaDGS and IPop-CMA in maximizing MaxSkyTiles. The top-left pattern in Figure 4 is the targeted one, and the other three represent different types of local minima. The probability of generating the targeted pattern is 90% for AdaDGS, and 74% for IPop-CMA." }, { "heading": "5 CONCLUSION", "text": "We developed an adaptive optimization algorithm with the DGS gradient, which successfully removed the need for hyper-parameter fine tuning of the original DGS method in (Zhang et al., 2020). Experimental results demonstrated the superior performance of the AdaDGS method compared to several state-of-the-art black-box optimization methods. On the other hand, the AdaDGS method has some drawbacks that need to be addressed. The most important one is sampling complexity. The GH quadrature requires M × d samples per iteration, which is much more than the number of samples required by MC estimators. The reasons why AdaDGS outperforms several ES-type methods are the good quality of the DGS gradient direction and the line search, which significantly reduces the number of function evaluations and iterations. However, when the computing budget is very limited (e.g., only allowing d function evaluations for a d-dimensional problem), our method becomes inapplicable. One way to alleviate this challenge is to adopt dimensionality reduction (DR) techniques (Choromanski et al., 2019), such as active subspaces and sliced linear regression, and apply AdaDGS in a subspace to reduce the sampling complexity. Incorporating DR into the AdaDGS method will be considered in our future research." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A DEFINITION OF BENCHMARK FUNCTIONS", "text": "Here we provide the definitions of the 12 benchmark functions tested in Section 4.1. Let Ω denote the search domain. To make the test functions more general, we randomly translate the functions so that the optimal state xopt moves to a new location xloc, and then apply a rotation. This transformation can be written as z = R(x + xopt − xloc), where R is a rotation matrix making the functions non-separable and xloc is a random location in 0.8Ω. The optimizer will need to travel farther to reach the new optimal state. We substitute z into the standard definitions of the benchmark functions to formulate our test problems.\n• F1(x): Ackley function\nF1(x) = −a exp(−b √((1/d) ∑_{i=1}^{d} xi²)) − exp((1/d) ∑_{i=1}^{d} cos(c xi)) + a + exp(1),\nwhere d is the dimension and a = 20, b = 0.2, c = 2π are used in our experiments. The initial search domain is x ∈ [−32.768, 32.768]d. The global minimum is f(xopt) = 0 at xopt = (0, . . . , 0). The Ackley function represents non-convex landscapes with a nearly flat outer region. The function poses a risk for optimization algorithms, particularly hill-climbing algorithms, to be trapped in one of its many local minima.\n• F2(x): Alpine function\nF2(x) = ∑_{i=1}^{d} |xi sin(xi) + 0.1 xi|,\nwhere the initial search domain is x ∈ [−10, 10]d. The global minimum is f(xopt) = 0, with 8d solutions. We choose xopt = (0, . . . , 0). This function represents multimodal landscapes with a non-unique global optimum.\n• F3(x): Ellipsoidal function\nF3(x) = ∑_{i=1}^{d} 10^{6(i−1)/(d−1)} xi²,\nwhere d is the dimension and x ∈ [−2, 2]d is the input domain. The global minimum is f(xopt) = 0 at xopt = (0, . . . , 0). 
This represents convex and highly ill-conditioned landscapes.\n• F4(x): Quintic function\n$F_4(x) = \sum_{i=1}^{d} |x_i^5 - 3x_i^4 + 4x_i^3 + 2x_i^2 - 10x_i - 4|$,\nwhere x ∈ [−10, 10]^d is the initial search domain. The global minimum is f(x_opt) = 0, attained when each x_i is either −1 or 2. We choose x_opt = (−1, . . . , −1). This function represents multimodal landscapes with global structure.\n• F5(x): Rastrigin function\n$F_5(x) = 10d + \sum_{i=1}^{d} [x_i^2 - 10\cos(2\pi x_i)]$,\nwhere d is the dimension and x ∈ [−5.12, 5.12]^d is the initial search domain. The global minimum is f(x_opt) = 0 at x_opt = (0, . . . , 0). This function represents highly multimodal landscapes with global structure.\n• F6(x): Rosenbrock function\n$F_6(x) = \sum_{i=1}^{d-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$,\nwhere d is the dimension and x ∈ [−5, 10]^d is the initial search domain. The global minimum is f(x_opt) = 0 at x_opt = (1, . . . , 1). The function is unimodal, and the global minimum lies in a bending ridge, which needs to be followed to reach the solution.\n• F7(x): Salomon function\n$F_7(x) = 1 - \cos\left(2\pi\sqrt{\sum_{i=1}^{d} x_i^2}\right) + 0.1\sqrt{\sum_{i=1}^{d} x_i^2}$,\nwhere x ∈ [−100, 100]^d is the initial search domain. The global minimum is f(x_opt) = 0 at x_opt = (0, . . . , 0). This function represents multimodal landscapes with global structure.\n• F8(x): Schaffer function\n$F_8(x) = \left(\frac{1}{d-1}\sum_{i=1}^{d-1}\left(\sqrt{s_i} + \sqrt{s_i}\sin^2(50 s_i^{1/5})\right)\right)^2$ with $s_i = \sqrt{x_i^2 + x_{i+1}^2}$,\nwhere x ∈ [−100, 100]^d is the initial search domain. The global minimum is f(x_opt) = 0 at x_opt = (0, . . . , 0). This function represents highly multimodal landscapes.\n• F9(x): Sharp-Ridge function\n$F_9(x) = x_1^2 + 100\sqrt{\sum_{i=2}^{d} x_i^2}$,\nwhere d is the dimension and x ∈ [−10, 10]^d is the initial search domain. The global minimum is f(x_opt) = 0 at x_opt = (0, . . . , 0). This represents convex and anisotropic landscapes. There is a sharp ridge defined along $x_2^2 + \cdots + x_d^2 = 0$ that must be followed to reach the global minimum, which creates difficulties for optimization algorithms.\n• F10(x): Sphere function\n$F_{10}(x) = \sum_{i=1}^{d} x_i^2$,\nwhere x ∈ [−5.12, 5.12]^d is the initial search domain. The global minimum is f(x_opt) = 0 at x_opt = (0, . . . , 0). The Sphere function represents unimodal, isotropic landscapes, and can be used to test the quality of the search direction.\n• F11(x): Trigonometric function\n$F_{11}(x) = 1 + \sum_{i=1}^{d}\{8\sin^2[7(x_i - 0.9)^2] + 6\sin^2[14(x_i - 0.9)^2] + (x_i - 0.9)^2\}$,\nwhere x ∈ [−500, 500]^d is the initial search domain. The global minimum is f(x_opt) = 1 at x_opt = (0.9, . . . , 0.9). This function represents multimodal landscapes with global structure.\n• F12(x): Wavy function\n$F_{12}(x) = 1 - \frac{1}{d}\sum_{i=1}^{d}\cos(k x_i)\exp\left(-\frac{x_i^2}{2}\right)$,\nwhere k = 10 and x ∈ [−π, π]^d is the initial domain. The global minimum is f(x_opt) = 0 at x_opt = (0, . . . , 0). This function represents multimodal landscapes with no global structure.
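To make the construction above concrete (as referenced at the start of this appendix), here is a minimal SciPy/NumPy sketch of a rotated and translated test problem, using the Ackley function as the example; the function names are illustrative and not part of the released code:

import numpy as np
from scipy.stats import special_ortho_group

def ackley(z, a=20.0, b=0.2, c=2.0 * np.pi):
    # F1 from the list above, evaluated at the transformed point z
    return (-a * np.exp(-b * np.sqrt(np.mean(z ** 2)))
            - np.exp(np.mean(np.cos(c * z))) + a + np.e)

def make_test_problem(f, d, x_opt, half_width, rng):
    R = special_ortho_group.rvs(d, random_state=rng)   # random rotation: makes f non-separable
    x_loc = rng.uniform(-0.8 * half_width, 0.8 * half_width, size=d)  # random location in 0.8*Omega
    return lambda x: f(R @ (x + x_opt - x_loc))        # z = R(x + x_opt - x_loc)

# e.g., a 1000-dimensional translated and rotated Ackley problem:
rng = np.random.default_rng(0)
F1 = make_test_problem(ackley, 1000, np.zeros(1000), 32.768, rng)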
" }, { "heading": "B ADDITIONAL INFORMATION ABOUT THE BASELINE METHODS", "text": "" }, { "heading": "B.1 THE ES-BPOP METHOD", "text": "ES-Bpop refers to the standard OpenAI evolution strategy in (Salimans et al., 2017) with a big population, i.e., the same population size as the AdaDGS method. The purpose of using a big population is to compare the MC-based estimator for the standard GS gradient and the GH-based estimator for the DGS gradient given the same computational cost." }, { "heading": "B.2 THE ASEBO METHOD", "text": "ASEBO refers to Adaptive ES-Active Subspaces for Blackbox Optimization, proposed in (Choromanski et al., 2019). This is the state-of-the-art method in the family of ES. It has been shown that other recent developments in ES, e.g., (Akimoto & Hansen, 2016; Loshchilov et al., 2019), underperform ASEBO in optimizing the benchmark functions. We use the code published at https://github.com/jparkerholder/ASEBO by the authors of the ASEBO method, with the default hyper-parameters provided in the code." }, { "heading": "B.3 THE IPOP-CMA METHOD", "text": "IPop-CMA refers to the restart covariance matrix adaptation evolution strategy with increased population size proposed in (Auger & Hansen, 2005). We use the code pycma v3.0.3 available at https://github.com/CMA-ES/pycma. The main subroutine we use is cma.fmin, in which the hyper-parameters are:\n• restarts=9: the maximum number of restarts with increasing population size;\n• incpopsize=2: multiplier for increasing the population size before each restart;\n• σ0: the initial exploration radius, set to 1/4 of the search domain width." }, { "heading": "B.4 THE NESTEROV METHOD", "text": "Nesterov refers to the random search method proposed in (Nesterov & Spokoiny, 2017). We use the stochastic oracle $x_{t+1} = x_t - \lambda_t F'(x_t, u_t)$, where $u_t$ is a randomly selected direction and $F'(x_t, u_t)$ is the directional derivative along $u_t$. According to the analysis in (Nesterov & Spokoiny, 2017), this oracle is more powerful and can be used for non-convex, non-smooth functions. As suggested in (Nesterov & Spokoiny, 2017), we use a forward difference scheme to compute the directional derivative." }, { "heading": "B.5 THE FD METHOD", "text": "FD refers to the classical central difference scheme for local gradient estimation. We implemented our own FD code following the standard numerical recipe." }, { "heading": "B.6 THE TURBO METHOD", "text": "TuRBO refers to Trust-Region Bayesian Optimization proposed in (Eriksson et al., 2019), which is the state-of-the-art Bayesian optimization method. We used the code released by the authors at https://github.com/uber-research/TuRBO." }, { "heading": "C ADDITIONAL RESULTS ON HIGH-DIMENSIONAL BENCHMARK FUNCTIONS", "text": "AdaDGS converges to the global minimum on six of our test functions. The other baselines fail to converge on any of the test functions given the same number of function evaluations. In Figure 5, we plot the performance of AdaDGS and the other baselines on a log scale to show the convergence of our method. Figure 6 compares the convergence rate of AdaDGS as the dimension of the test functions changes. With the exception of the Rastrigin function, the convergence rate does not change when we increase the dimension from 1000 to 6000." } ]
2020
ADADGS: AN ADAPTIVE BLACK-BOX OPTIMIZATION METHOD WITH A NONLOCAL DIRECTIONAL GAUSSIAN SMOOTHING GRADIENT
SP:253566b5271d22d4d6492ef9def2e67fb99c5d57
[ "The paper is addressing an important and challenging problem of end-to-end training of deep nets in fixed-point, in this case, with 8-bit precision. A good solution to this problem can have a major impact on the deployability of deep nets on embedded hardware. The basic idea is to introduce an additional term (the log-barrier constraint) in the loss function to constrain the allowable range over which model parameters are allowed to take values. The authors use of mu-encoding to assign non-uniform quantization levels to minimize the quantization error. The main results are in Table 2 showing that the method eliminates overflow in practice and allows quantized networks to approach the accuracy of full-precision networks on the MNIST, CIFAR-10 and ImageNet." ]
Quantization of neural network parameters and activations has emerged as a successful approach to reducing model size and inference time on hardware that supports native low-precision arithmetic. Fully quantized training would facilitate further computational speed-ups as well as enable model training on embedded devices, a feature that would alleviate privacy concerns resulting from the transfer of sensitive data and models that is necessitated by off-device training. Existing approaches to quantization-aware training (QAT) perform “fake” quantization in the forward pass in order to learn model parameters that will perform well when quantized, but rely on higher precision variables to avoid overflow in large matrix multiplications, which is unsuitable for training on fully low-precision (e.g. 8-bit) hardware. To enable fully end-to-end quantized training, we propose Log Barrier Tail-bounded Quantization (LogBTQ). LogBTQ introduces a loss term, inspired by the log-barrier for constrained optimization, that enforces soft constraints on the range of values that model parameters can take on. By constraining and sparsifying model parameters, activations and inputs, our approach eliminates overflow in practice, allowing for fully quantized 8-bit training of deep neural network models. We show that models trained using our approach achieve results competitive with state-of-the-art full-precision networks on the MNIST, CIFAR-10 and ImageNet classification benchmarks.
[]
[ { "authors": [ "Pulkit Bhuwalka", "Alan Chiao", "Suharsh Sivakumar", "Raziel Alvarez", "Feng Liu", "Lawrence Chan", "Skirmantas Kligys", "Yunlu Li", "Khanh LeViet", "Billy Lambert", "Mark Daoust", "Tim Davis", "Sarah Sirajuddin", "François Chollet" ], "title": "Quantization aware training with tensorflow model optimization toolkit - performance", "venue": null, "year": 2020 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Dipankar Das", "Naveen Mellempudi", "Dheevatsa Mudigere", "Dhiraj Kalamkar", "Sasikanth Avancha", "Kunal Banerjee", "Srinivas Sridharan", "Karthik Vaidyanathan", "Bharat Kaul", "Evangelos Georganas" ], "title": "Mixed precision training of convolutional neural networks using integer operations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hongyang Deng", "Milos Doroslovacki" ], "title": "Proportionate adaptive algorithms for network echo cancellation", "venue": "IEEE Transactions on Signal Processing,", "year": 2006 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Steven K Esser", "Jeffrey L McKinstry", "Deepika Bablani", "Rathinakumar Appuswamy", "Dharmendra S Modha" ], "title": "Learned step size quantization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Elad Hoffer", "Ron Banner", "Itay Golan", "Daniel Soudry" ], "title": "Norm matters: efficient and accurate normalization schemes in deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Lu Hou", "Ruiliang Zhang", "James T Kwok" ], "title": "Analysis of quantized models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Benoit Jacob", "Skirmantas Kligys", "Bo Chen", "Menglong Zhu", "Matthew Tang", "Andrew Howard", "Hartwig Adam", "Dmitry" ], "title": "Kalenichenko. 
Quantization and training of neural networks for efficient integer-arithmetic-only inference", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sambhav Jain", "Albert Gural", "Michael Wu", "Chris Dick" ], "title": "Trained quantization thresholds for accurate and efficient fixed-point inference of deep neural networks", "venue": "Proceedings of Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Shigeki Karita", "Nanxin Chen", "Tomoki Hayashi", "Takaaki Hori", "Hirofumi Inaguma", "Ziyan Jiang", "Masao Someki", "Nelson Enrique Yalta Soplin", "Ryuichi Yamamoto", "Xiaofei Wang" ], "title": "A comparative study on transformer vs rnn in speech applications", "venue": "IEEE Automatic Speech Recognition and Understanding Workshop (ASRU),", "year": 2019 }, { "authors": [ "Hoel Kervadec", "Jose Dolz", "Jing Yuan", "Christian Desrosiers", "Eric Granger", "Ismail Ben Ayed" ], "title": "Constrained deep networks: Lagrangian optimization via log-barrier extensions, 2019", "venue": null, "year": 2019 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory Diamos", "Erich Elsen", "David Garcia", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh" ], "title": "Mixed precision training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Manuele Rusci", "Alessandro Capotondi", "Luca Benini" ], "title": "Memory-driven mixed low precision quantization for enabling deep network inference on microcontrollers", "venue": "Proceedings of Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Charbel Sakr", "Naresh R Shanbhag" ], "title": "Per-tensor fixed-point quantization of the back-propagation algorithm", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Charbel Sakr", "Naigang Wang", "Chia-Yu Chen", "Jungwook Choi", "Ankur Agrawal", "Naresh Shanbhag", "Kailash Gopalakrishnan" ], "title": "Accumulation bit-width scaling for ultra-low precision training of deep networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tao Sheng", "Chen Feng", "Shaojie Zhuo", "Xiaopeng Zhang", "Liang Shen", "Mickey Aleksic" ], "title": "A quantization-friendly separable convolution for mobilenets", "venue": "1st Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications", "year": 2018 }, { "authors": [ "Pierre Stock", "Armand Joulin", "Rémi Gribonval", "Benjamin Graham", "Hervé Jégou" ], "title": "And the bit goes down: Revisiting the quantization of neural networks", "venue": "In Eighth International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xiao Sun", "Jungwook Choi", "Chia-Yu Chen", "Naigang Wang", "Swagath Venkataramani", "Vijayalakshmi Viji Srinivasan", "Xiaodong Cui", "Wei Zhang", "Kailash Gopalakrishnan" ], "title": "Hybrid 8-bit floating point (hfp8) training and inference for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ryan Tibshirani" ], "title": "Mixed precision dnns: All you need is a good parametrization", "venue": "Log barrier method,", "year": 2019 }, { "authors": [ "Naigang Wang", "Jungwook Choi", "Daniel Brand", "Chia-Yu Chen", "Kailash Gopalakrishnan" ], "title": "Training deep neural networks with 8-bit floating point numbers", "venue": "In Advances in neural 
information processing systems,", "year": 2018 }, { "authors": [ "Shuang Wu", "Guoqi Li", "Feng Chen", "Luping Shi" ], "title": "Training and inference with integers in deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jingzhao Zhang", "Tianxing He", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Why gradient clipping accelerates training: A theoretical justification for adaptivity", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Xishan Zhang", "Shaoli Liu", "Rui Zhang", "Chang Liu", "Di Huang", "Shiyi Zhou", "Jiaming Guo", "Qi Guo", "Zidong Du", "Tian Zhi", "Yunji Chen" ], "title": "Fixed-point back-propagation training", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020b", "year": 2020 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "arXiv preprint arXiv:1606.06160,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "As state-of-the-art deep learning models for vision, language understanding and speech grow increasingly large and computationally burdensome (He et al., 2017; Devlin et al., 2018; Karita et al., 2019), there is increasing antithetical demand, motivated by latency, security and privacy concerns, to perform training and inference in these models on smaller devices at the edge rather than in server farms in the cloud. Model quantization has emerged as a promising approach to enable deployment of deep learning models on edge devices that reduce energy, latency and storage requirements by performing floating-point computation in low precision (less than 32 bits).\nThere are two primary strategies for quantization: Post-training approaches quantize the parameters of a model trained in full precision post-hoc, and tend to suffer a heavy penalty on accuracy since their inference graph differs substantially from training (Jacob et al., 2018). Quantization-aware training (QAT) (Bhuwalka et al., 2020) combats this discrepancy by simulating quantization during training, so that model parameters are learned that will work well when inference is performed in low precision. In this work, we focus on the latter setting, suitable for fully quantized training on low-precision (e.g. 8-bit) devices.\nThough QAT results in quantized models that perform largely on par with their non-quantized counterparts, current state-of-the-art QAT methods (Wu et al., 2018; Wang et al., 2018; Bhuwalka et al., 2020) are not suitable for training on fully low-precision hardware because they employ fake quantization, meaning each operation is executed using 32- or 16-bit floating point arithmetic, and its output is quantized to lower precision, e.g. int8. This results in two key incompatibilities with fully low-precision training, and consequently deployment on real low-precision hardware. First, existing QAT approaches assume perfect sums in inner product operations, which means that the accumulators used to compute matrix multiplies (the acc row in Table 1) must be higher precision than the values being multiplied (other bit-precision rows in Table 1). This is to avoid losing res-\nolution in low-precision additions, also known as swamping (Wang et al., 2018)1. Second, QAT commonly leverages dynamic quantization ranges per-layer, meaning the mapping between highand low-precision values varies by layer, carefully tuned as a function of the network architecture, optimization dynamics and data during training. While this practice results in higher quantized inference accuracy, it is also a challenge to low-precision training, since it is unclear how to tune those ranges when training on new data in the absence of high-precision arithmetic. These incompatibilities present a substantial hurdle to quantized training in practice. For example, an automotive electronics manufacturer may want to deploy a machine learning model on its 8-bit door lock or power window controller to adaptively fit the users’ habits. In this scenario, existing approaches for quantized training would fail (Sakr et al., 2019).\nIn response, we propose a new approach for fully quantized training of neural network models, inspired by the barrier method from convex optimization (Boyd & Vandenberghe, 2004). 
Log Barrier Tail-bounded Quantization (LogBTQ) utilizes a log-barrier extension loss (Kervadec et al., 2019) to constrain the output of the network, encouraging all model parameters and activations to stay within the same predefined range. The log-barrier function itself is a smooth approximation of the indicator function, which is ideal for selecting the weights that are within the range of quantization (see Figure 1, left). By fixing a single quantization range throughout the network at the beginning of training, our approach obviates the need for dynamic ranges, and the limits of the range are set so as to alleviate overflow2 in matrix-multiply accumulations. We combine the log-barrier extension loss with an L1 regularization term (Hoffer et al., 2018) to further reduce the total magnitude of parameters and activations in the model. To allow gradients, which tend to form a peaky distribution near extremely small values (Zhou et al., 2016; Jain et al., 2020), to be quantized using the same range as the rest of the network, we also adopt the nonlinear µ-law algorithm from audio applications (Deng & Doroslovacki, 2006) to construct a new MU8 codebook that better deals with “swamping” issues compared to the standard IEEE float formats. Experiments show that our approach achieves competitive results compared to state-of-the-art full-precision models on the MNIST, CIFAR-10 and ImageNet classification benchmarks, despite our models being trained end-to-end using only 8 bits of precision.\n1Swamping: Accumulation of floating-point numbers, where the small magnitude value is ignored (or truncated) when it is added to the large magnitude sum.\n2Overflowing: for fixed-point accumulation, where the accumulated value wraps around to a small value when it exceeds the largest value representable by the given accumulation precision." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 POST-TRAINING QUANTIZATION", "text": "There has been a recent surge of interest in quantization research. In 2020 alone, there were a number of important developments in post-training quantization. Rusci et al. (2020); Jain et al. (2020); Esser et al. (2020); Uhlich et al. (2020) proposed learning-based approaches for determining the quantization ranges of activations and weights at low precision. Stock et al. (2020) advocates preserving the quality of the reconstruction of the network outputs rather than its weights. They all show excellent performance compared to full-precision models after quantization. Sakr & Shanbhag (2019) presented a detailed analysis of reduced-precision training for a feedforward network that accounts for both the forward and backward passes, demonstrating that precision can be greatly reduced throughout the network computations while largely preserving training quality. Our work shares the same intuition of preferring a small predetermined dynamic range (PDR) and a small clipping rate3. However, Sakr & Shanbhag (2019)'s approach requires the network first be trained to convergence at full 32-bit precision, which is a significant limitation. In this paper, we focus on training rather than inference on low-precision hardware; therefore, we do not assume access to a full-precision high-performing model as a starting point." 
}, { "heading": "2.2 QUANTIZATION-AWARE TRAINING", "text": "Pioneering works in this domain (Zhou et al., 2016; Courbariaux et al., 2015) looked at quantizing model weights, activations, gradients to lower precision to accelerate neural network training. The terminology quantization aware training (QAT) was first introduced by Jacob et al. (2018). QAT incoporates quantization error as noise during training and as part of the overall loss, which the optimization algorithm tries to minimize. Hence, the model learns parameters that are more robust to quantization, but QAT is not meant to be performed entirely in low precision, it aims to learn parameters that will work well for low-precision inference. More recently, several works further pursued the goal of enabling fully low-precision training (Wu et al., 2018; Wang et al., 2018; Das et al., 2018; Sun et al., 2019). As shown in Table 1, most existing work employs fake quantization, resorting to higher precision values to compensate for the swamping issue, especially during gradient accumulation. Mixed-precision quantization (Das et al., 2018; Wang et al., 2018; Zhang et al., 2020a), which quantizes a neural network using multiple bit precisions across layers, still relies on higher-precision gradients to preserve model accuracy. This means it is difficult, if not impossible, to implement these approaches on low-bit (e.g. 8-bit) hardware.\nMost similar to our work, Sun et al. (2019) claim it is possible to do every step in low precision, but the quantization range for the layers in their work is very carefully chosen empirically, which presents great difficulty if we were to train models from scratch on low-precision hardware. Their method also requires a copy of the quantization error (residual) in FP16(1-6-9) (hence 8/16 in Ta-\n3see Appendix B of Sakr & Shanbhag (2019) explaining PDR; refer to their Criterion 2 about clipping rate.\nble 1). In addition to the 9-bit mantissa, the exponent bit in their floating point format would need to be manually modified to store the residual due to its small value.\nIn this paper, we propose a new quantization scheme: log-barrier tail-bounded quantization (LogBTQ) that can perform fully end-to-end low precision training, suitable for deployment on lowprecision hardware. Our major contributions are the following:\n1. We apply a log barrier extension loss to soft-threshold the values of network weights and activations to constrain all the values to be small. Our quantization scheme also enables global fixed-range quantization which together significantly alleviates the overflow issue caused by large numbers and dynamic range.\n2. We add an L1 loss term to encourage sparsity and further reduce overflow.\n3. We propose µ-law quantization (MU8) instead of INT8, FP8(1-4-3) or FP8(1-5-2) to construct a more accurate codebook that better compensates for the peaky concentration of network parameters around small values." }, { "heading": "3 LOG BARRIER TAIL-BOUNDED QUANTIZATION (LOGBTQ)", "text": "The overall diagram of our quantization scheme is shown in Figure 2 (left). Figure 2 (right) shows the backward pass, where we quantize everything at each layer and all operations including input x, weights w, activations a, errors e, and gradients g (including the gradient accumulation step). We denote all these values as the set Z = {x,w, a, e, g}. 
" }, { "heading": "3.1 CONSTRAINED FORMULATION", "text": "Let D = {I_1, . . . , I_N} denote the labeled set of N training images, let f denote the neural network model, and let θ denote all the parameters of the neural network, including the weights w. For a task such as image classification, we usually solve an optimization problem of the form $\min_\theta \mathcal{L}(f_\theta(I))$, where $\mathcal{L}$ is the loss function of our neural network training objective. In this work, we use the typical cross-entropy loss, and since we are interested in constraining the quantization threshold, we are effectively performing constrained optimization of the form\n$\min_\theta \mathcal{L}(f_\theta(I))$ subject to $|\theta_n| \le u$, $n = 1, . . . , N$, (1)\nwith u our desired barrier (perturbation). In practice, we set u = 0.1 to ensure we can represent as much information as possible within our quantization range (Figure 1, left). This setting is explained further in Section 3.3." }, { "heading": "3.2 LOG-BARRIER EXTENSION FUNCTION", "text": "In theory, problem (1) is best solved by the log-barrier method, an interior-point method that naturally handles inequality constraints (Tibshirani, 2019; Boyd & Vandenberghe, 2004). In phase I, we would perform Lagrangian-dual optimization to find the feasible points:\n$\max_\lambda \min_\theta \mathcal{L}(x, \lambda) = \mathcal{L}(f_\theta(I)) + \sum_{n=1}^{N} \lambda_n(|\theta_n| - u)$ subject to $\lambda_n \ge 0$, $n = 1, . . . , N$, (2)\nwhere $\lambda \in \mathbb{R}^{1 \times N}_{+}$ is the Lagrange multiplier (dual variable). After we find a feasible set of network parameters, we can use the barrier method to solve equation (1) as an unconstrained problem (phase II):\n$\min_\theta \mathcal{L}(f_\theta(I)) + \sum_{n=1}^{N} \psi_t(|\theta_n| - u)$, (3)\nwhere $\psi_t$ is the standard log-barrier function, $\psi_t(z) = -\frac{1}{t}\log(-z)$. As t approaches infinity, the approximation becomes closer to the indicator function. Also, for any value of t, if any of the constraints is violated, the value of the barrier approaches infinity. However, a severe limitation for practical problems such as ours is that the domain of Eq. (3) must be the set of feasible points. The canonical barrier method above is also prohibitively computationally expensive, given that there are millions of parameters in the network, and we would need to alternate training between the primal and the dual and perform projected gradient ascent on the dual variable.\nWe are not particularly concerned with the weak duality gap for lower-bounding the optimal solution in this work; instead, we are interested in the ability of the barrier method to handle inequality constraints. Therefore, inspired by Kervadec et al. (2019), we formulate quantization as an unconstrained loss that approximates the constrained optimization problem:\n$\min_\theta \mathcal{L}(f_\theta(I)) + \sum_{n=1}^{N} \tilde{\psi}_t(|\theta_n| - u)$, (4)\nwhere $\tilde{\psi}_t$ is the log-barrier extension, which is convex, continuous, and twice-differentiable:\n$\tilde{\psi}_t(z) = -\frac{1}{t}\log(-z)$ if $z \le -\frac{1}{t^2}$, and $\tilde{\psi}_t(z) = tz - \frac{1}{t}\log\left(\frac{1}{t^2}\right) + \frac{1}{t}$ otherwise. (5)\nIn our case, the input z to the log-barrier extension is the same z defined at the beginning of Section 3, and t is the scaling parameter. 
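To make Eqs. (4)-(5) concrete, the following is a minimal PyTorch sketch of the log-barrier extension penalty; the value t = 10 and the restriction to parameter tensors (activations are omitted for brevity) are our own illustrative choices, not settings taken from the paper:

import math
import torch

def log_barrier_ext(z, t=10.0):
    # Eq. (5): exact log-barrier -log(-z)/t for z <= -1/t^2, and a matching
    # linear continuation elsewhere, so the penalty is finite on infeasible points
    thresh = -1.0 / t ** 2
    zc = torch.clamp(z, max=thresh)          # keeps the log branch well-defined everywhere
    log_branch = -torch.log(-zc) / t
    lin_branch = t * z - math.log(1.0 / t ** 2) / t + 1.0 / t
    return torch.where(z <= thresh, log_branch, lin_branch)

def logbtq_penalty(model, u=0.1, t=10.0):
    # sum of psi_tilde(|theta_n| - u) over all parameters, as in Eq. (4)
    return sum(log_barrier_ext(p.abs() - u, t).sum() for p in model.parameters())

loss = cross_entropy + logbtq_penalty(model)   # total training loss of Eq. (4)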
The extension shares the key property of the standard log-barrier function: as t approaches +∞, the log-barrier extension approaches a hard indicator, H(z) = 0 if z ≤ 0 and +∞ otherwise. But its domain is not restricted to the feasible points only, which removes the demanding requirement of explicit Lagrangian optimization.\nWe are effectively performing approximate Lagrangian optimization with implicit dual variables. The strictly positive gradient of $\tilde{\psi}_t$ grows as z approaches violation and effectively pushes values back into the feasible set. Because our penalty does not act as a strict barrier around the feasible set, there remains a possibility of overflow. However, since our goal is the practical one of fully quantized training on low-bit hardware, as long as the majority of values stay within a high-confidence interval, the approach works in practice, as we show in the experimental results (§4). Recall the scenario in Figure 1 (left)." }, { "heading": "3.3 TAIL BOUND OF DISTRIBUTION", "text": "In this section, we demonstrate that the probability of overflowing the quantization range can be controlled for a standard ResNet model. The ResNet model has L layers, and each layer contains a CNN operation with ReLU as the activation. At layer l, for each output element O ∈ R, a CNN operation with kernel size k and channel size c can be viewed as a rectified dot product between the input feature $I \in \mathbb{R}^{ck^2}$ and $w \in \mathbb{R}^{ck^2}$:\n$O = \mathrm{ReLU}\left(\sum_{i=1}^{ck^2} w_i I_i\right)$. (6)\nSuppose $w_i \sim \mathcal{N}(0, \sigma^2_{(l)})$, where $\sigma^2_{(l)}$ depends on the layer l and the parameter u we choose. The second moment of any element $z_{(l+1)} \in \mathbb{R}$ from layer l + 1 can be connected with the second moment of an element $z_{(l)}$ from layer l as follows:\n$E[z^2_{(l+1)}] = \frac{ck^2}{2}\sigma^2_{(l)} E[z^2_{(l)}]$. (7)\nIn He initialization, one would set $\sigma_w = \sqrt{2/(ck^2)}$ to cancel the first two factors. In this work, we want to control $\sigma^2_{(l)}$ in order to reduce the chance of overflow. We choose $\sigma^2_{(l)}$ to depend on the u defined in optimization problem (1): $\sigma_{(l)} = \sqrt{2u/(ck^2)}$. Then we can simplify Eq. (7):\n$E[z^2_{(l+1)}] = u\,E[z^2_{(l)}]$. (8)\nSuppose the input feature to the first layer has second moment $E_{(1)} = E[z_1^2]$; then the second moment at layer l can be estimated as $E_{(l)} = E[z^2_{(l)}] = u^{l-1} E_{(1)}$. Next, for each element $z_{(l)}$ at layer l, its tail distribution outside the quantization region can be bounded by\n$P(|z_{(l)}| > 2) = P(z_{(l)} > 2) \le P(|z_{(l)} - E[z_{(l)}]| > |2 - E[z_{(l)}]|) \le \frac{\mathrm{Var}(z_{(l)})}{(2 - E[z_{(l)}])^2}$, (9)\nwhere the second inequality is established by Chebyshev's inequality; here $z_{(l)}$ is guaranteed to be nonnegative since it is the output of the ReLU layer. We can further estimate the worst-case upper bound for the right-hand side by considering the following optimization problem:\n$\sup_{E[z_{(l)}], \mathrm{Var}(z_{(l)})} \frac{\mathrm{Var}(z_{(l)})}{(2 - E[z_{(l)}])^2}$ subject to $(E[z_{(l)}])^2 + \mathrm{Var}(z_{(l)}) = E_{(l)}$.\nAs $z_{(l)}$ is a rectified value, $E[z_{(l)}] > 0$. One upper bound can be established as\n$\sup_{E[z_{(l)}], \mathrm{Var}(z_{(l)})} \frac{\mathrm{Var}(z_{(l)})}{(2 - E[z_{(l)}])^2} < \frac{E_{(l)}}{(2 - \sqrt{E_{(l)}})^2}$, (10)\nbecause $E[z_{(l)}] < \sqrt{E_{(l)}}$ and $\mathrm{Var}(z_{(l)}) < E_{(l)}$. Noticing that the last term can be approximated by $E_{(l)}/4$ when $E_{(l)}$ is small enough, we can establish our probability bound for overflowing the quantization range:\n$P(|z_{(l)}| > 2) < \frac{E_{(l)}}{(2 - \sqrt{E_{(l)}})^2} \simeq \frac{E_{(l)}}{4}$. (11)\nFinally, consider the average overflow probability of $z \in \mathbb{R}$ over all layers:\n$P(|z| > 2) = \frac{1}{L}\sum_{l=1}^{L} P(z_{(l)} > 2) \simeq \frac{1}{4L}\sum_{l=1}^{L} u^{l-1} E_{(1)} \simeq \frac{E_{(1)}}{4L(1-u)}$. (12)\nFormula (12) follows because in deep neural networks L is usually large, allowing us to effectively ignore $u^L$. Therefore, the overflow probability is determined by the input's second moment $E_{(1)}$, the number of layers L, and the barrier parameter u. In practice, we can adjust both u and L to control the overflowing tails. In this work, $E_{(1)}$ is set around 1.0 because the input features are normalized, and L is around 50 in ResNet-50. By choosing u = 0.1, the number of overflowing parameters can be controlled under 2.3%.
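As an illustrative sketch (not the authors' released code), the variance scaling $\sigma_{(l)} = \sqrt{2u/(ck^2)}$ can be implemented as a drop-in replacement for He initialization:

import math
import torch.nn as nn

def logbtq_init_(conv, u=0.1):
    # He-style initialization scaled by u: std = sqrt(2u / (c k^2)),
    # so that second moments shrink by a factor of u per layer (Eq. (8))
    c = conv.in_channels
    k = conv.kernel_size[0]
    std = math.sqrt(2.0 * u / (c * k * k))
    nn.init.normal_(conv.weight, mean=0.0, std=std)
    if conv.bias is not None:
        nn.init.zeros_(conv.bias)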
" }, { "heading": "3.4 SPARSITY", "text": "In order to achieve the goal of practical implementation on 8-bit hardware, we aggressively fix the range of quantization to z ∈ [−2, 2]. This leaves us with the task of mapping millions of parameters into this range. We therefore desire a sparse solution and, naturally, as pointed out by Hoffer et al. (2018), we use the L1 penalty $|\theta_n|$ to encourage sparsity. Our unconstrained loss then becomes\n$\min_\theta \mathcal{L}(f_\theta(I)) + \sum_{n=1}^{N} \tilde{\psi}_t(|\theta_n| - u) + \gamma \sum_{n=1}^{N} |\theta_n|$, (13)\nwhere γ > 0 is a tuning variable and $\tilde{\psi}_t$ is the log-barrier extension proposed above.\n3.5 µ-LAW QUANTIZATION CODEBOOK (MU8)\nEven after sparsifying the weights, we are still left with many parameters, which are non-linearly distributed. Luckily, they should all be small enough by now thanks to our log-barrier constraint. Uniform quantization (e.g. INT8) would cause a huge information loss in this case. Inspired by the application of µ-law to audio encoding (Deng & Doroslovacki, 2006), we construct a non-linear codebook accordingly, which can be implemented on 8-bit hardware directly. As shown in Figure 1 (right), we computed all possible values of the FP8(1-4-3) and FP8(1-5-2) formats4 used by Sun et al. (2019), and we can see that our MU8 encoding is better at handling very small values: e.g., FP8(1-4-3)'s smallest possible number is 1.9 × 10−3 and FP8(1-5-2)'s is 1.5 × 10−5, whereas our MU8 can handle 5 × 10−6, at the cost of becoming sparser at larger numbers, which we have almost eliminated using the log-barrier. Let us denote our quantization function as Qµ8(x), which takes x as input and outputs the quantized value x′. First, following the range of µ-law encoding, we set the input to Equation (14) as x = z/2, since z ∈ [−2, 2] in this paper:\n$F(x) = \mathrm{sgn}(x)\frac{\ln(1 + \mu|x|)}{\ln(1 + \mu)}$, $-1 \le x \le 1$, (14)\nwhere µ is a tuning variable (the larger µ is, the more non-linear the encoding becomes) and sgn(x) is the sign function. Then, having obtained y ∈ [−1, 1] as the output of F(x), we perform the Stochastic Rounding step introduced by Gupta et al. (2015): e.g., if $y \times (2^7 - 1) \in [15, 16]$ and $y = 15.5/(2^7 - 1)$, then $P(\hat{y} = 16/(2^7 - 1)) = P(\hat{y} = 15/(2^7 - 1)) = 0.5$; if $y = 15.1/(2^7 - 1)$, then $P(\hat{y} = 15/(2^7 - 1)) = 0.9$ and $P(\hat{y} = 16/(2^7 - 1)) = 0.1$, as shown in the following equation:\n$\hat{y} \leftarrow \mathrm{StochasticRounding}(y, LL, UL) = LL$ w.p. $1 - (y - LL)$, and $UL$ w.p. $(y - LL)$, (15)\nwhere UL indicates the upper limit and LL the lower limit. At last, we get the quantized value x′ via the inverse of the encoding function (14):\n$x' \leftarrow F^{-1}(y) = \mathrm{sgn}(y)\frac{1}{\mu}\left((1 + \mu)^{|y|} - 1\right)$. (16)\n4FP8(1-4-3)'s largest representable values are [-240, 240] and FP8(1-5-2)'s are [-57344, 57344]; these formats are better at handling long tails than MU8, but we are not interested in large numbers in the long tail.
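The full Qµ8 pipeline of Eqs. (14)-(16) can be sketched in NumPy as follows; µ = 255 (the classic audio value) and the function name q_mu8 are our own placeholders, since the paper treats µ as a tuning variable:

import numpy as np

def q_mu8(z, mu=255.0, levels=2 ** 7 - 1, rng=np.random.default_rng()):
    x = z / 2.0                                                # z in [-2, 2] -> x in [-1, 1]
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)   # Eq. (14), mu-law encoding
    s = y * levels                                             # scale onto the 7-bit magnitude grid
    lo = np.floor(s)
    y_hat = (lo + (rng.random(np.shape(s)) < (s - lo))) / levels   # Eq. (15), stochastic rounding
    x_prime = np.sign(y_hat) * ((1.0 + mu) ** np.abs(y_hat) - 1.0) / mu   # Eq. (16), decoding
    return 2.0 * x_prime                                       # map back to the [-2, 2] range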
" }, { "heading": "3.6 OTHER USEFUL TECHNIQUES", "text": "Straight-through Estimator (STE): We adopt the same STE as Zhou et al. (2016), which is critical to the convergence of our models. The gradients of the quantization steps in the quantized model are based on the straight-through estimator.\nChunked Updates: We also use the same gradient accumulation strategy as Sakr et al. (2019); Wang et al. (2018). During updates, since we have quantized the gradient and weight parameters using Qµ8(x), the gradient is lower-bounded by the smallest possible value of the MU8 encoding (10−6), which is small enough to preserve model accuracy within the 8-bit constraint. However, directly adding such a small gradient to a large number (a weight) in our MU8 quantization would cause the small gradient to be rounded off, and thus lose information, due to the growing intervals of the MU8 encoding at larger values. Performing chunked gradient updates, i.e., updating the weights only after k steps of gradient descent (in this work, we set k = 20), helps accumulate the gradients until they are large enough to be rounded up, aiding convergence.
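A minimal sketch of such a chunked update loop, reusing the q and logbtq_penalty sketches from above; the optimizer form, the learning rate lr and the variable names are illustrative assumptions rather than the authors' code:

import torch

grad_buf = {n: torch.zeros_like(p) for n, p in model.named_parameters()}

for step, (x, y) in enumerate(loader, start=1):
    loss = criterion(model(x), y) + logbtq_penalty(model)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for n, p in model.named_parameters():
            grad_buf[n] = q(grad_buf[n] + q(p.grad))   # accumulate quantized gradients
        if step % 20 == 0:                             # k = 20 chunked update
            for n, p in model.named_parameters():
                p.copy_(q(p - lr * grad_buf[n]))       # apply and re-quantize the weights
                grad_buf[n].zero_()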
(2018), though MobileNet benchmark is still not as good as ResNet-50 due to the agressive reduction of parameters.\nOverall, it still depends on the use case of the deep learning models to consider the trade-off between accuracy and precision to choose which quantization scheme to adopt. As far as we are aware, LogBTQ is the first full 8-bit quantization scheme with competitive performance on a large-scale dataset such as ImageNet. This opens up the prospect of training fully-quantized models on lowprecision hardware." }, { "heading": "6 CONCLUSION", "text": "Motivated by the limitations of fake quantization, we propose Log Barrier Tail-bounded Quantization (LogBTQ) which introduces a log-barrier extension loss term that enforces soft constraints on the range of values that model parameters can take on at every operation. Our approach eliminates overflow in practice, sparsifying the weights and using µ-law non-uniform quantization, allowing for fully quantized 8-bit training of deep neural network models. By constraining the neural network parameters driven by theoretical motivations, this work enables the possibility for the first time fully-quantized training on low-precision hardware." } ]
2020
null
SP:d9155553fae947cc53d87a221fdd1d57b44f5ec6
[ "I read this paper with great interest. The authors propose an easy-to-understand, easy-to-implement baseline method for detecting when inputs to a ML model is out of distribution. The method involves augmenting the training dataset with an out of distribution dataset and adding an additional class in the classification layer for out of distribution. The paper describes several experiments—some in computer vision, some in NLP—and then compares them to other OOD techniques. The results are comparable to other techniques, although the proposed technique is definitely simpler." ]
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems. While simple to state, this has been a particularly challenging problem in deep learning, where models often end up making overconfident predictions in such situations. In this work we present a simple, but highly effective approach to deal with out-of-distribution detection that uses the principle of abstention: when encountering a sample from an unseen class, the desired behavior is to abstain from predicting. Our approach uses a network with an extra abstention class and is trained on a dataset that is augmented with an uncurated set that consists of a large number of out-of-distribution (OoD) samples that are assigned the label of the abstention class; the model is then trained to learn an effective discriminator between in and out-of-distribution samples. We compare this relatively simple approach against a wide variety of more complex methods that have been proposed both for out-of-distribution detection as well as uncertainty modeling in deep learning, and empirically demonstrate its effectiveness on a wide variety of benchmarks and deep architectures for image recognition and text classification, often outperforming existing approaches by significant margins. Given the simplicity and effectiveness of this method, we propose that this approach be used as a new additional baseline for future work in this domain.
[]
[ { "authors": [ "Loïc Barrault", "Fethi Bougares", "Lucia Specia", "Chiraag Lala", "Desmond Elliott", "Stella Frank" ], "title": "Findings of the third shared task on multimodal machine translation", "venue": "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers,", "year": 2018 }, { "authors": [ "Abhijit Bendale", "Terrance E Boult" ], "title": "Towards open set deep networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "TE Boult", "S Cruz", "AR Dhamija", "M Gunther", "J Henrydoss", "WJ Scheirer" ], "title": "Learning and the unknown: Surveying steps toward open world recognition", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Samuel R. Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D. Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics,", "year": 2015 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Learning confidence for out-of-distribution detection in neural networks", "venue": "arXiv preprint arXiv:1802.04865,", "year": 2018 }, { "authors": [ "Akshay Raj Dhamija", "Manuel Günther", "Terrance Boult" ], "title": "Reducing network agnostophobia", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tom Fawcett" ], "title": "An introduction to roc analysis", "venue": "Pattern recognition letters,", "year": 2006 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "ZongYuan Ge", "Sergey Demyanov", "Zetao Chen", "Rahil Garnavi" ], "title": "Generative openmax for multiclass open set classification", "venue": "arXiv preprint arXiv:1707.07418,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "arXiv preprint arXiv:1706.04599,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "arXiv preprint arXiv:1610.02136,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "arXiv preprint arXiv:1812.04606,", "year": 2018 }, { "authors": [ "Yen-Chang Hsu", "Yilin Shen", "Hongxia Jin", "Zsolt Kira" ], "title": "Generalized odin: Detecting out-ofdistribution image without learning from out-of-distribution data", "venue": "arXiv preprint arXiv:2002.11297,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, 
Citeseer,", "year": 2009 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ken Lang" ], "title": "Newsweeder: Learning to filter netnews", "venue": "In Proceedings of the Twelfth International Conference on Machine Learning,", "year": 1995 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "arXiv preprint arXiv:1711.09325,", "year": 2017 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks. 2018", "venue": "URL http://arxiv.org/abs/1706.02690", "year": 2018 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies,", "year": 2011 }, { "authors": [ "Wesley J Maddox", "Pavel Izmailov", "Timur Garipov", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Stephan Mandt", "Matthew D Hoffman", "David M Blei" ], "title": "Stochastic gradient descent as approximate bayesian inference", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Lawrence Neal", "Matthew Olson", "Xiaoli Fern", "Weng-Keen Wong", "Fuxin Li" ], "title": "Open set learning with counterfactual images", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Kazuki Osawa", "Siddharth Swaroop", "Mohammad Emtiyaz E Khan", "Anirudh Jain", "Runa Eschenhagen", "Richard E Turner", "Rio Yokota" ], "title": "Practical deep learning with bayesian principles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks 
from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Antonio Torralba", "Rob Fergus", "William T Freeman" ], "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1958 }, { "authors": [ "Sachin Vernekar", "Ashish Gaurav", "Vahdat Abdelzad", "Taylor Denouden", "Rick Salay", "Krzysztof Czarnecki" ], "title": "Out-of-distribution detection in classifiers via generation", "venue": null, "year": 1910 }, { "authors": [ "Apoorv Vyas", "Nataraj Jammalamadaka", "Xia Zhu", "Dipankar Das", "Bharat Kaul", "Theodore L Willke" ], "title": "Out-of-distribution detection using an ensemble of self supervised leave-out classifiers", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level Convolutional Networks for Text Classification", "venue": "[cs],", "year": 2015 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION AND RELATED WORK", "text": "Most of supervised machine learning has been developed with the assumption that the distribution of classes seen at train and test time are the same. However, the real-world is unpredictable and open-ended, and making machine learning systems robust to the presence of unknown categories and out-of-distribution samples has become increasingly essential for their safe deployment. While refraining from predicting when uncertain should be intuitively obvious to humans, the peculiarities of DNNs makes them overconfident to unknown inputs Nguyen et al. (2015) and makes this a challenging problem to solve in deep learning.\nA very active sub-field of deep learning, known as out-of-distribution (OoD) detection, has emerged in recent years that attempts to impart to deep neural networks the quality of \"knowing when it doesn’t know\". The most straight-forward approach in this regard is based on using the DNNs output as a proxy for predictive confidence. For example, a simple baseline for detecting OoD samples using thresholded softmax scores was presented in Hendrycks & Gimpel (2016). where the authors provided empirical evidence that for DNN classifiers, in-distribution predictions do tend to have higher winning scores than OoD samples, thus empirically justifying the use of softmax thresholding as a useful baseline. However this approach is vulnerable to the pathologies discussed in Nguyen et al. (2015). Subsequently, increasingly sophisticated methods have been developed to attack the OoD problem. Liang et al. (2018) introduced a detection technique that involves perturbing the inputs in the direction of increasing the confidence of the network’s predictions on a given input, based on the observation that the magnitude of gradients on in-distribution data tend to be larger than for OoD data. The method proposed in Lee et al. (2018) also involves input perturbation, but confidence in this case was measured by the Mahalanobis distance score using the computed mean and covariance of the pre-softmax scores. A drawback of such methods, however, is that it introduces a number of\nhyperparameters that need to be tuned on the OoD dataset, which is infeasible in many real-world scenarios as one does not often know in advance the properties of unknown classes. A modified version of the perturbation approach was recently proposed in in Hsu et al. (2020) that circumvents some of these issues, though one still needs to ascertain an ideal perturbation magnitude, which might not generalize from one OoD set to the other.\nGiven that one might expect a classifier to be more uncertain when faced with OoD data, many methods developed for estimating uncertainty for DNN predictions have also been used for OoD detection. A useful baseline in this regard is the temperature scaling method of Guo et al. (2017) that was was proposed for calibrating DNN predictions on in-distribution data and has been observed to also serve as a useful OoD detector in some scenarios. Further, label smoothing techniques like mixup Zhang et al. (2017) have also been shown to be able to improve OoD detection performance in DNNs Thulasidasan et al. (2019). An ensemble-of-deep models approach, that is also augmented with adversarial examples during training, described in Lakshminarayanan et al. (2017) was also shown to improve predictive uncertainty and succesfully applied to OoD detection.\nIn the Bayesian realm, methods such as Maddox et al. (2019) and Osawa et al. 
(2019) have also been used for OoD detection, though at increased computational cost. However, it has been argued that for OoD detection, Bayesian priors on the data are not completely justified since one does not have access to the prior of the open set (Boult et al., 2019). Nevertheless, simple approaches like dropout, which have been shown to be equivalent to deep Gaussian processes (Gal & Ghahramani, 2016), have been used as baselines for OoD detection.\nTraining the model to recognize unknown classes by using data from categories that do not overlap with classes of interest has been shown to be quite effective for out-of-distribution detection, and a slew of methods that use additional data for discriminating between ID and OoD data have been proposed. DeVries & Taylor (2018) describe a method that uses a separate confidence branch and misclassified training data samples that serve as a proxy for OoD samples. In the outlier exposure technique described in Hendrycks et al. (2018), the predictions on natural outlier images used in training are regularized against the uniform distribution to encourage high-entropy posteriors on outlier samples. An approach that uses an extra class for outlier samples is described in Neal et al. (2018), where, instead of natural outliers, counterfactual images that lie just outside the class boundaries of known classes are generated using a GAN and assigned the extra class label. A similar approach using generative samples for the extra class, but using a conditional Variational Auto-Encoder (Kingma & Welling, 2013) for generation, is described in Vernekar et al. (2019). A method to force a DNN to produce high-entropy (i.e., low confidence) predictions and suppress the magnitude of feature activations for OoD samples was discussed in Dhamija et al. (2018); arguing that methods that use an extra background class for OoD samples force all such samples to lie in one region of the feature space, that work also enforces separation by suppressing the activation magnitudes of samples from unknown classes.\nThe above works have shown that the use of known OoD samples (or known unknowns) often generalizes well to unknown unknown samples. Indeed, even though the space of unknown classes is potentially infinite, and one can never know in advance the myriad of inputs that can occur during test time, empirically this approach has been shown to work. The abstention method that we describe in the next section borrows ideas from many of the above methods: as in Hendrycks et al. (2018), we use additional samples of real images and text from non-overlapping categories to train the model to abstain, but instead of entropy regularization over OoD samples, our method uses an extra abstention class. While it has sometimes been argued in the literature that using an additional abstention (or rejection) class is not an effective approach for OoD detection (Dhamija et al., 2018; Lee et al., 2017), the comprehensive experiments we conduct in this work demonstrate that this is not the case. Indeed, we find that such an approach is not only simple but also highly effective for OoD detection, often outperforming existing methods that are more complicated and involve tuning of multiple hyperparameters. The main contributions of this work are as follows:\n• To the best of our knowledge, this is the first work to comprehensively demonstrate the efficacy of using an extra abstention (or rejection) class in combination with outlier training data for effective OoD detection. 
• In addition to being effective, our method is also simple: we introduce no additional hyperparameters in the loss function, and train with regular cross-entropy. From a practical standpoint, this is especially useful for deep learning practitioners who might not wish to make modifications to the loss function while training deep models. In addition, since outlier data is simply an additional training class, no architectural modifications to existing networks are needed.

• Due to the simplicity and effectiveness of this method, we argue that this approach should be considered a strong baseline for comparing new methods in the field of OoD detection." }, { "heading": "2 OUT-OF-DISTRIBUTION DETECTION WITH AN ABSTAINING CLASSIFIER (DAC)", "text": "Our approach uses a DNN trained with an extra abstention class for detecting out-of-distribution and novel samples; from here on, we will refer to this as the deep abstaining classifier (DAC). We augment our training set of in-distribution samples (Din) with an auxiliary dataset of known out-of-distribution samples (D̃out) that is known to be mostly disjoint from the main training set (we will use Dout to denote unknown out-of-distribution samples that we use for testing). We assign the training label K + 1 to all the outlier samples in D̃out (where K is the number of known classes) and train with cross-entropy; the minimization problem then becomes:

min θ E(x,y)∼Din [− logPθ(y = ŷ|x)] + Ex∼D̃out [− logPθ(y = K + 1|x)] (1)

where θ are the weights of the neural network. This is somewhat similar to the approaches described in Hendrycks et al. (2018) as well as in Lee et al. (2017), with the main difference being that in those methods, an extra class is not used; instead, predictions on outliers are regularized against the uniform distribution. Further, the loss on the outlier samples is weighted by a hyperparameter λ which has to be tuned; in contrast, our approach does not introduce any additional hyperparameters.

In our experiments, we find that the presence of an abstention class that is used to capture the mass in D̃out significantly increases the ability to detect Dout during testing. For example, in Figure 1, we show the distribution of the winning logits (pre-softmax activations) in a regular DNN (left). For the same experimental setup, the abstention logit of the DAC produces near-perfect separation of the in- and out-of-distribution logits, indicating that using an abstention class for mapping outliers can be a very effective approach to OoD detection. Theoretically, it might be argued that the abstention class might only capture data that is aligned with the weight vector of that class, and thus this approach might fail to detect the myriad of OoD inputs that might span the entire input region. Comprehensive experiments over a wide variety of benchmarks described in the subsequent section, however, empirically demonstrate that while the detection is not perfect, it performs very well, and indeed, much better than more complicated approaches.

Once the model is trained, we use a simple thresholding mechanism for detection. Concretely, the detector g(x) : X → {0, 1} assigns label 1 (OoD) if the softmax score of the abstention class, i.e., pK+1(x), is above some threshold δ, and label 0 otherwise:

g(x) = { 1 if pK+1(x) ≥ δ; 0 otherwise (2)

Like in other methods, the threshold δ has to be determined based on acceptable risk that might be specific to the application. 
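To make equations 1 and 2 concrete, the following is a minimal PyTorch-style sketch of the abstention objective and the thresholded detector; this is our own illustration rather than the released implementation, and the model, tensors, and the helper names dac_loss and detect_ood are assumed for exposition:

import torch
import torch.nn.functional as F

K = 10  # number of known classes; the abstention class gets output index K

def dac_loss(model, x_in, y_in, x_out):
    # equation 1: ordinary cross-entropy over K+1 logits,
    # with every outlier sample assigned the abstention label K
    y_out = torch.full((x_out.shape[0],), K, dtype=torch.long, device=x_out.device)
    return F.cross_entropy(model(x_in), y_in) + F.cross_entropy(model(x_out), y_out)

def detect_ood(model, x, delta):
    # equation 2: g(x) = 1 (OoD) iff the abstention softmax score >= delta
    p_abstain = torch.softmax(model(x), dim=1)[:, K]
    return (p_abstain >= delta).long()

Since the abstention class is an ordinary output unit, training and inference are otherwise unchanged from a standard classifier.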
However, using performance metrics like area under the ROC curve (AUROC), we can determine threshold-independent performance of various methods, and we use this as one of our evaluation metrics in all our experiments." }, { "heading": "3 EXPERIMENTS", "text": "The experiments we describe here can be divided into two sets: in the first set, we compare against methods that are explicitly designed for OoD detection, while in the second category, we compare against methods that are known to improve predictive uncertainty in deep learning. In both cases, we report results over a variety of architectures to demonstrate the efficacy of our method." }, { "heading": "3.1 DATASETS", "text": "For all computer vision experiments, we use CIFAR-10 and CIFAR-100 Krizhevsky & Hinton (2009) as the in-distribution datasets, in addition to augmenting our training set with 100K unlabeled samples from the Tiny Images dataset Torralba et al. (2008). For the out-of-distribution datasets, we test on the following:

• SVHN Netzer et al. (2011), a large set of 32×32 color images of house numbers, comprising ten classes of digits 0−9. We use a subset of the 26K images in the test set.

• LSUN Yu et al. (2015), the Large-scale Scene Understanding dataset, comprising 10 different types of scenes.

• Places365 Zhou et al. (2017), a large collection of pictures of scenes that fall into one of 365 classes.

• Tiny ImageNet tin (2017) (not to be confused with Tiny Images), which consists of images belonging to 200 categories that are a subset of ImageNet categories. The images are 64×64 color, which we scale down to 32×32 when testing.

• Gaussian, a synthetically generated dataset consisting of 32×32 random Gaussian noise images, where each pixel is sampled from an i.i.d. Gaussian distribution.

For the NLP experiments, we use the 20 Newsgroups Lang (1995), TREC Sherman, and SST Socher et al. (2013) datasets as our in-distribution datasets, which are the same as those used by Hendrycks et al. (2018) to facilitate direct comparison. We use the 50-category version of TREC, and for SST, we use binarized labels where neutral samples are removed. For our OoD training data, we use unlabeled samples from Wikitext2 by assigning them to the abstention class. We test our model on the following OoD datasets:

• SNLI Bowman et al. (2015) is a dataset of predicates and hypotheses for natural language inference. We use the hypotheses for testing.

• IMDB Maas et al. (2011) is a sentiment classification dataset of movie reviews, with similar statistics to those of SST.

• Multi30K Barrault et al. (2018) is a dataset of English-German image descriptions, of which we use the English descriptions.

• WMT16 Bojar et al. (2016) is a dataset of English-German language pairs designed for the machine translation task. We use the English portion of the test set from WMT16.

• Yelp Zhang et al. (2015) is a dataset of restaurant reviews." }, { "heading": "3.2 COMPARISON AGAINST OOD METHODS", "text": "In this section, we compare against a slew of recent state-of-the-art methods that have been explicitly designed for OoD detection. For the image experiments, we compare against the following:

• Deep Outlier Exposure, as described in Hendrycks et al. (2018) (code: https://github.com/hendrycks/outlier-exposure) and discussed in Section 1.

• Ensemble of Leave-out Classifiers Vyas et al. (2018), where each classifier is trained by leaving out a random subset of training data (which is treated as OoD data), and the rest is treated as ID data.

• ODIN, as described in Liang et al. 
(2018) and discussed in Section 1. ODIN uses input perturbation and temperature scaling to differentiate between ID and OoD samples.

• Deep Mahalanobis Detector, proposed in Lee et al. (2018), which estimates the class-conditional distribution over hidden-layer features of a deep model using Gaussian discriminant analysis and a Mahalanobis-distance-based confidence score for thresholding, and further, similar to ODIN, uses input perturbation while testing.

• OpenMax, as described in Bendale & Boult (2016) for novel category detection. This method uses mean activation vectors of ID classes observed during training followed by Weibull fitting to determine if a given sample is novel or out-of-distribution.

For all of the above methods, we use published results when available, keeping the architecture and datasets the same as in the experiments described in the respective papers. For the NLP experiments, we only compare against the published results in Hendrycks et al. (2018). For OpenMax, we re-implement the authors’ published algorithm using the PyTorch framework Paszke et al. (2019)." }, { "heading": "3.2.1 METRICS", "text": "Following established practices in the literature, we use the following metrics to measure detection performance of our method:

• AUROC, or Area Under the Receiver Operating Characteristic curve, depicts the relationship between the True Positive Rate (TPR, also known as Recall) and the False Positive Rate (FPR) and can be interpreted as the probability that a positive example is assigned a higher detection score than a negative example Fawcett (2006). Unlike 0/1 accuracy, the AUROC has the desirable property that it is not affected by class imbalance. (An alternate area-under-the-curve metric, the Area under the Precision-Recall Curve, or AUPRC, is used when the size of the negative class is high compared to the positive class; we do not report AUPRC here, as we keep our in-distribution and out-of-distribution sets balanced in these experiments.)

• FPR at 95% TPR, which is the probability that a negative sample is misclassified as a positive sample when the TPR (or recall) on the positive samples is 95%.

In work that we compare against, the out-of-distribution samples are treated as the positive class, so we do the same here, and treat the in-distribution samples as the negative class." }, { "heading": "3.2.2 RESULTS", "text": "Detailed results against the various OoD methods are shown in Tables 1 through 3 for vision and language, respectively, where we have a clear trend: in almost all cases, the DAC outperforms the other methods, often by significant margins, especially when the in-distribution data is more complex, as is the case with CIFAR-100. While the Outlier Exposure method Hendrycks et al. (2018) (shown at the top in Table 1) is conceptually similar to ours, the presence of an extra abstention class in our model often bestows significant performance advantages. Further, we do not need to tune a separate hyperparameter that determines the weight of the outlier loss, as done in Hendrycks et al. (2018).

In fact, the simplicity of our method is one of its striking features: we do not introduce any additional hyperparameters in our approach, which makes it significantly easier to implement than methods such as ODIN and the Mahalanobis detector; these methods need to be tuned separately on each OoD dataset, which is usually not possible as one does not have access to the distribution of unseen classes in advance. Indeed, when performance of these methods is tested without tuning on the OoD test set, the DAC significantly outperforms methods such as the Mahalanobis detector (shown at the bottom of Table 1). 
We also show the performance against the OpenMax approach of Bendale & Boult (2016) in Table 2, and in every case, the DAC outperforms OpenMax by significant margins.

While the abstention approach uses an extra class and OoD samples during training, and thus does incur some training overhead, it is significantly less expensive during test time, as the forward pass is no different from that of a regular DNN. In contrast, methods like ODIN and the Mahalanobis detector require gradient calculation with respect to the input in order to apply the input perturbation; the DAC approach thus offers a computationally simpler alternative. Also, even though the DAC approach introduces additional network parameters in the final linear layers (due to the presence of an extra abstention class), and thus might be more prone to overfitting, we find this not to be the case, as evidenced by the generalization of OoD performance to different types of test datasets." }, { "heading": "3.3 COMPARISON AGAINST UNCERTAINTY-BASED METHODS", "text": "Next we perform experiments to compare the OoD detection performance of the DAC against various methods that have been proposed for improving predictive uncertainty in deep learning. In these cases, one expects that such methods will cause the DNN to predict with less confidence when presented with inputs from a different distribution or from novel categories; we compare against the following methods:

• Softmax Thresholding This is the simplest baseline, where OoD samples are detected by thresholding on the winning softmax score; scores falling below a threshold are rejected.

• Entropy Thresholding Another simple baseline, where OoD samples are rejected if the Shannon entropy calculated over the softmax posteriors is above a certain threshold.

• Monte Carlo Dropout A Bayesian-inspired approach proposed in Gal & Ghahramani (2016) for improving the predictive uncertainty of deep learning. We found a dropout probability of p = 0.5 to perform well, and use 100 forward passes per sample during the prediction.

• Temperature Scaling, which improves DNN calibration as described in Guo et al. (2017). The scaling temperature T is tuned on a held-out subset of the validation set of the in-distribution data.

• Mixup As shown in Thulasidasan et al. (2019), Mixup can be an effective OoD detector, so we also use this as one of our baselines.

• Deep Ensembles, which were introduced in Lakshminarayanan et al. (2017) for improving uncertainty estimates for both classification and regression. In this approach, multiple versions of the same model are trained using different random initializations, and while training, adversarial samples are generated to improve model robustness. We use an ensemble size of 5 as suggested in their paper.

• SWAG, as described in Maddox et al. (2019), which is a Bayesian approach to deep learning and exploits the fact that SGD itself can be viewed as approximate Bayesian inference Mandt et al. (2017). We use an ensemble size of 30 as proposed in the original paper." }, { "heading": "3.3.1 RESULTS", "text": "Detailed results are shown in Table 4, where the best-performing method for each metric is shown in bold. 
The DAC is the only method in this set of experiments that uses an augmented dataset, and as is clearly evident from the results, this confers a significant advantage over the other methods in most cases. Calibration methods like temperature scaling, while producing well-calibrated scores on in-distribution data, end up reducing the confidence on in-distribution data as well, thus losing discriminative power between the two types of data. We also note here that many of the methods listed in the table, like temperature scaling and deep ensembles, can be combined with the abstention approach. Indeed, the addition of an extra abstention class and training with OoD data are compatible with most uncertainty modeling techniques in deep learning; we leave the exploration of such combination approaches for future work." }, { "heading": "4 CONCLUSION", "text": "We presented a simple but highly effective method for open-set and out-of-distribution detection that clearly demonstrated the efficacy of using an extra abstention class and augmenting the training set with outliers. While previous work has shown the efficacy of outlier exposure Hendrycks et al. (2018), here we demonstrated an alternative approach for exploiting outlier data that further improves upon existing methods, while also being simpler to implement compared to many of the other methods. The ease of implementation, absence of additional hyperparameter tuning, and computational efficiency during testing make this a very viable approach for improving out-of-distribution and novel category detection in real-world deployments; we hope that this will also serve as an effective baseline for comparing future work in this domain." } ]
2020
null
SP:70b8c75426f18a3dc4a359c8a8cd7dd2076953a0
[ "The authors propose to address the robustness over outliers for optimal transport (OT). They propose a new formulation based on penalizing the contaminated probability measures by a signed measure (which shares a close relation with unbalanced OT). The authors further derive an equivalent formulation by adjusting the cost matrix for the corresponding standard OT. Empirically, the authors evaluate their proposed approach on a toy example of robust mean estimation and outlier detection for data collection. " ]
Optimal transport (OT) provides a way of measuring distances between distributions that depends on the geometry of the sample space. In light of recent advances in solving the OT problem, OT distances are widely used as loss functions in minimum distance estimation. Despite its prevalence and advantages, however, OT is extremely sensitive to outliers. A single adversarially-picked outlier can increase the OT distance arbitrarily. To address this issue, in this work we propose an outlier-robust OT formulation. Our formulation is convex but, at first glance, challenging to scale. We proceed by deriving an equivalent formulation based on cost truncation that is easy to incorporate into modern stochastic algorithms for regularized OT. We demonstrate our model applied to mean estimation under the Huber contamination model in simulation, as well as outlier detection on real data.
[]
[ { "authors": [ "David Alvarez-Melis", "Tommi S Jaakkola" ], "title": "Gromov-Wasserstein alignment of word embedding spaces", "venue": null, "year": 2018 }, { "authors": [ "Yogesh Balaji", "Rama Chellappa", "Soheil Feizi" ], "title": "Robust optimal transport with applications in generative modeling and domain adaptation", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Federico Bassetti", "Antonella Bodini", "Eugenio Regazzini" ], "title": "On minimum Kantorovich distance estimators", "venue": "Statistics & Probability Letters,", "year": 2006 }, { "authors": [ "Luis A Caffarelli", "Robert J McCann" ], "title": "Free boundaries in optimal transport and Monge-Ampere obstacle problems", "venue": "Annals of Mathematics,", "year": 2010 }, { "authors": [ "Gao Chao", "Yao Yuan", "Zhu Weizhi" ], "title": "Robust estimation via generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lenaic Chizat", "Gabriel Peyré", "Bernhard Schmitzer", "François-Xavier Vialard" ], "title": "Scaling algorithms for unbalanced optimal transport problems", "venue": "Mathematics of Computation,", "year": 2018 }, { "authors": [ "Lenaı̈c Chizat" ], "title": "Unbalanced optimal transport: Models, numerical methods, applications", "venue": "Numerical Analysis [math.NA]. Université Paris sciences et lettres,", "year": 2017 }, { "authors": [ "Nicolas Courty", "Rémi Flamary", "Devis Tuia" ], "title": "Domain adaptation with regularized optimal transport", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2014 }, { "authors": [ "Nicolas Courty", "Rémi Flamary", "Amaury Habrard", "Alain Rakotomamonjy" ], "title": "Joint distribution optimal transportation for domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn Distances: Lightspeed Computation of Optimal Transport", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Steven Diamond", "Stephen Boyd" ], "title": "CVXPY: A Python-embedded modeling language for convex optimization", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Alessio Figalli" ], "title": "The optimal partial transport problem", "venue": "Archive for Rational Mechanics and Analysis,", "year": 2010 }, { "authors": [ "Rémi Flamary", "Nicolas Courty" ], "title": "POT Python optimal transport library, 2017", "venue": "URL https: //pythonot.github.io/", "year": 2017 }, { "authors": [ "Aude Genevay", "Marco Cuturi", "Gabriel Peyré", "Francis Bach" ], "title": "Stochastic optimization for largescale optimal transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Nhat Ho", "XuanLong Nguyen", "Mikhail Yurochkin", "Hung Hai Bui", "Viet Huynh", "Dinh Phung" ], "title": "Multilevel clustering via Wasserstein means", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Gao Huang", "Chuan Guo", "Matt J Kusner", "Yu Sun", "Fei Sha", "Kilian Q Weinberger" ], "title": "Supervised word mover’s distance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Peter J. Huber", "Elvezio Ronchetti" ], "title": "Robust Statistics. 
"venue": "Wiley Series in Probability and Statistics, 2nd edition", "year": 2009 }, { "authors": [ "Leonid Vitalievich Kantorovich" ], "title": "On the translocation of masses", "venue": "In Dokl. Akad. Nauk. USSR (NS)", "year": 1942 }, { "authors": [ "Matt Kusner", "Yu Sun", "Nicholas Kolkin", "Kilian Weinberger" ], "title": "From Word Embeddings To Document Distances", "venue": "In International Conference on Machine Learning", "year": 2015 }, { "authors": [ "Matthias Liero", "Alexander Mielke", "Giuseppe Savaré" ], "title": "Optimal entropy-transport problems and a new Hellinger–Kantorovich distance between positive measures", "venue": "Inventiones Mathematicae", "year": 2018 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-GAN: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Ofir Pele", "Michael Werman" ], "title": "Fast and robust earth mover’s distances", "venue": "IEEE 12th International Conference on Computer Vision", "year": 2009 }, { "authors": [ "Gabriel Peyré", "Marco Cuturi" ], "title": "Computational Optimal Transport", "venue": "arXiv e-prints [stat]", "year": 2018 }, { "authors": [ "Benedetto Piccoli", "Francesco Rossi" ], "title": "Generalized Wasserstein distance and its application to transport equations with source", "venue": "Archive for Rational Mechanics and Analysis", "year": 2014 }, { "authors": [ "Yanyao Shen", "Sujay Sanghavi" ], "title": "Learning with bad training data via iterative trimmed loss minimization", "venue": "In International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Justin Solomon", "Fernando De Goes", "Gabriel Peyré", "Marco Cuturi", "Adrian Butscher", "Andy Nguyen", "Tao Du", "Leonidas Guibas" ], "title": "Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains", "venue": "ACM Transactions on Graphics (TOG)", "year": 2015 }, { "authors": [ "Sanvesh Srivastava", "Cheng Li", "David B. Dunson" ], "title": "Scalable Bayes via Barycenter in Wasserstein Space", "venue": "arXiv e-prints [stat]", "year": 2018 }, { "authors": [ "Guillaume Staerman", "Pierre Laforgue", "Pavlo Mozharovskyi", "Florence d’Alché Buc" ], "title": "When OT meets MOM: Robust estimation of Wasserstein distance", "venue": null, "year": 2020 }, { "authors": [ "Kaiwen Wu", "Gavin Weiguang Ding", "Ruitong Huang", "Yaoliang Yu" ], "title": "On Minimax Optimality of GANs for Robust Mean Estimation", "venue": "In International Conference on Artificial Intelligence and Statistics", "year": 2020 }, { "authors": [ "Mikhail Yurochkin", "Sebastian Claici", "Edward Chien", "Farzaneh Mirzazadeh", "Justin Solomon" ], "title": "Hierarchical Optimal Transport for Document Representation", "venue": null, "year": 2019 }, { "authors": [ "Cédric Villani" ], "title": "Optimal Transport: Old and New", "venue": "Springer", "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "Optimal transport is a fundamental problem in applied mathematics. In its original form (Monge, 1781), the problem entails finding the minimum cost way to transport mass from a prescribed probability distribution µ on X to another prescribed distribution ν on X . Kantorovich (1942) relaxed Monge’s formulation of the optimal transport problem to obtain the Kantorovich formulation:\nOT(µ, ν) , min Π∈F(µ,ν)\nE(X1,X2)∼Π [ c(X1, X2) ] , (1.1)\nwhere F(µ, ν) is the set of couplings between µ and ν (probability distributions on X × X whose marginals are µ and ν) and c is a cost function, where we typically assume c(x, y) ≥ 0 and c(x, x) = 0. Compared to other notions of distance between probability distributions, optimal transport uniquely depends on the geometry of the sample space.\nRecent advancements in optimization for optimal transport (Cuturi, 2013; Solomon et al., 2015; Genevay et al., 2016; Seguy et al., 2018) enabled its broad adaptation in machine learning applications where geometry of the data is important. See (Peyré & Cuturi, 2018) for a survey. Optimal transport has found applications in natural language processing (Kusner et al., 2015; Huang et al., 2016; Alvarez-Melis & Jaakkola, 2018; Yurochkin et al., 2019), generative modeling (Arjovsky et al., 2017), clustering (Ho et al., 2017), domain adaptation (Courty et al., 2014; 2017), large-scale Bayesian modeling (Srivastava et al., 2018), and many other domains.\nMany applications use OT as a loss in an optimization problem of the form:\nθ ∈ arg minθ∈Θ OT(µn, νθ), (1.2) where {νθ}θ∈Θ is a collection of parametric models, µn is the empirical distribution of the samples. Such estimators are called minimum Kantorovich estimators (MKE) (Bassetti et al., 2006). They are popular alternatives to likelihood-based estimators, especially in generative modeling. For example, when OT(·, ·) is the Wasserstein-1 distance and νθ is a generator parameterized by a neural network with weights θ, equation 1.2 corresponds to the Wasserstein GAN (Arjovsky et al., 2017).\nOne drawback of optimal transport is its sensitivity to outliers. Because all the mass in µ must be transported to ν, a small fraction of outliers can have an outsized impact on the optimal transport problem. For statistics and machine learning applications in which the data is corrupted or noisy, this is a major issue. For example, the poor performance of Wasserstein GANs in the presence of outliers was noted in the recent works on outlier-robust generative learning with f -divergence GANs (Chao et al., 2018; Wu et al., 2020). The problem of outlier-robustness in MKE has not been studied, with the exception of two concurrent works (Staerman et al., 2020; Balaji et al., 2020).\nIn this paper, we propose a modification of OT to address its sensitivity to outliers. Our formulation can be used as a loss in equation 1.2 so that it is robust to a small fraction of outliers in the data. To keep things simple, we consider the -contamination model (Huber & Ronchetti, 2009). Let νθ0 be a member of a parametric model {νθ : θ ∈ Θ} and let\nµ = (1− )νθ0 + ν̃,\nwhere µ is the data-generating distribution, > 0 is the fraction of outliers, and ν̃ is the distribution of the outliers. Although the fraction of outliers is capped at , the value of the outliers is arbitrary, so the outliers may have an arbitrarily large impact on the optimal transport problem. Our goal is to modify the optimal transport problem so that it is more robust to outliers. 
We have in mind the downstream application of learning θ0 from (samples from) µ in the ε-contamination model. Our main contributions are as follows:

1. We propose a robust OT formulation that is suitable for statistical estimation in the ε-contamination model using MKE.
2. We show that our formulation is equivalent to the original OT problem with a clipped transport cost. This connection enables us to leverage the voluminous literature on computational optimal transport to develop an efficient algorithm to perform MKE robust to outliers.
3. Our formulation enables a new application of optimal transport: outlier detection in data." }, { "heading": "2 PROBLEM FORMULATION", "text": "" }, { "heading": "2.1 ROBUST OT FOR MKE", "text": "To promote outlier-robustness in MKE, we need to allow the corresponding OT problem to ignore the outliers in the data distribution µ. The ε-contamination model imposes a cap on the fraction of outliers, so it is not hard to see that ‖µ − νθ0‖TV ≤ ε, where ‖ · ‖TV is the total-variation norm defined as ‖µ‖TV = (1/2) ∫ |µ(dx)|. This suggests we solve a TV-constrained/regularized version of equation 1.2. The constrained version

min θ∈Θ,µ̃ OT(µ̃, νθ) subject to ‖µ − µ̃‖TV ≤ ε

suffers from identification issues. In particular, it cannot distinguish between “clean” distributions within TV distance ε of νθ0. This makes it unsuitable as a loss function for statistical estimation, because it cannot lead to a consistent estimator. However, its regularized counterpart

min θ∈Θ,s OT(µ + s, νθ) + λ‖s‖TV, (2.1)

where λ > 0 is a regularization parameter, does not suffer from this issue. In the rest of this paper, we work with the TV-regularized formulation equation 2.1.

The main idea of our formulation is to allow for modifications of µ, while penalizing their magnitude and ensuring that the modified µ is still a probability measure. Below we formulate this intuition in an optimization problem titled ROBOT (ROBust Optimal Transport):

Formulation 1:

ROBOT(µ, ν) = min Π∈F+(Rd×Rd), s∈F(Rd) ∫ C(x, y) Π(dx, dy) + λ‖s‖TV
subject to ∫ B×Rd Π(dx, dy) = ∫ B (µ(dx) + s(dx)) ≥ 0 ∀ B ∈ B(Rd) (Borel σ-algebra),
∫ Rd×C Π(dx, dy) = ∫ C ν(dy) ∀ C ∈ B(Rd),
∫ s(dx) = 0. (2.2)

Here F(Rd) denotes the set of all signed measures with finite total variation on Rd, and F+(Rd × Rd) is the set of all measures with finite total variation on Rd × Rd. The first and the last constraints ensure that µ + s is a valid probability measure, while λ‖s‖TV penalizes the amount of modification in µ. It is worth noting that we can identify the exact locations of outliers in µ by inspecting µ + s, i.e., if µ(x) + s(x) = 0, then x has been eliminated and is an outlier.

ROBOT, unlike classical OT, guarantees that adversarially picked outliers cannot increase the distance arbitrarily. Let µ̃ = (1 − ε)µ + εµc, i.e., µ̃ is µ contaminated with outliers from µc, and let ν be an arbitrary measure (in MKE, µ̃ is the contaminated data and ν is the model we learn). An adversary can arbitrarily increase OT(µ̃, ν) by manipulating the outlier distribution µc. For ROBOT we have the following bound:

Theorem 2.1. Let µ̃ = (1 − ε)µ + εµc for some ε ∈ [0, 1), then

ROBOT(µ̃, ν) ≤ (OT(µ, ν) + λε‖µ − µc‖TV) ∧ λ‖µ̃ − ν‖TV ∧ OT(µ̃, ν). (2.3)

This bound has two key takeaways: since the TV distance between any two probability distributions is bounded by 1, an adversary cannot increase ROBOT(µ̃, ν) arbitrarily; and in the absence of outliers, ROBOT is bounded by classical OT. 
See Appendix C for the proof.

Related work We note a connection between equation 2.2 and unbalanced OT (UOT) (Chizat, 2017; Chizat et al., 2018). UOT is typically formulated by replacing the TV norm with KL(µ + s|µ) and adding an analogous term for ν. Chizat et al. (2018) studied entropy-regularized UOT with various divergences penalizing marginal violations. Optimization problems similar to equation 2.2 have also been considered outside of the ML literature (Piccoli & Rossi, 2014; Liero et al., 2018). We are unaware of prior applications of UOT to outlier-robustness, but it was studied in the concurrent work of Balaji et al. (2020). Another relevant variation of OT is partial OT (Figalli, 2010; Caffarelli & McCann, 2010). It may also be considered for outlier-robustness, but it has the drawback of forcing mass destruction rather than adjusting marginals to ignore outliers when they are present. A concurrent work by Staerman et al. (2020) took a different path: they replaced the expectation in the Wasserstein-1 dual with a median-of-means to promote robustness. It is unclear what the corresponding primal is, making it hard to interpret as an optimal transport problem.

A major challenge with the aforementioned methods, including our Formulation 1, is the difficulty of the optimization problem. This is especially the case for MKEs, where a transport problem has to be solved in every iteration to obtain the gradient of the model parameters. Chizat et al. (2018) proposed a Sinkhorn-like algorithm for entropy-regularized UOT, but it is not amenable to stochastic optimization. Balaji et al. (2020) proposed a stochastic optimization algorithm based on the UOT dual, but it requires two additional neural networks (a total of four including dual potentials) to parameterize the modified marginal distributions (i.e., µ + s and an analogous one for ν). Optimizing with a median-of-means in the objective function as in (Staerman et al., 2020) is also challenging. The key contribution of our work is a formulation equivalent to equation 2.2, which is easily compatible with the large body of classical OT optimization techniques (Cuturi, 2013; Solomon et al., 2015; Genevay et al., 2016; Seguy et al., 2018).

More efficient equivalent formulation At first glance, there are two issues with equation 2.2: it appears asymmetric and it is unclear if it can be optimized efficiently. Below we present an equivalent formulation that is free of these issues:

Formulation 2:

ROBOT(µ, ν) = min Π∈F+(Rd×Rd) ∫ Cλ(x, y) Π(dx, dy)
subject to ∫ B×Rd Π(dx, dy) = ∫ B µ(dx) ∀ B ∈ B(Rd),
∫ Rd×C Π(dx, dy) = ∫ C ν(dy) ∀ C ∈ B(Rd), (2.4)

where Cλ is the truncated cost function defined as Cλ(x, y) = C(x, y) ∧ 2λ. Looking at equation 2.4, it is not apparent that it adds robustness to MKE, but it is symmetric, easy to combine with entropic regularization by simply truncating the cost, and benefits from stochastic optimization algorithms (Genevay et al., 2016; Seguy et al., 2018). This formulation also has a distant relation to the idea of loss truncation for achieving robustness (Shen & Sanghavi, 2019). Pele & Werman (2009) considered the Earth Mover Distance (discrete OT) with truncated cost to achieve computational improvements; they also mentioned its potential to promote robustness against outlier noise but did not explore this direction.

In Section 3, we establish equivalence between the two ROBOT formulations, equation 2.2 and equation 2.4. This equivalence allows us to obtain an efficient algorithm based on equation 2.4 for robust MKE. 
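As a concrete illustration of this recipe (a minimal sketch of equation 2.4 on empirical samples, using the POT library (Flamary & Courty, 2017); the helper name robot_cost is ours), all that changes relative to standard OT is the truncation of the cost matrix:

import numpy as np
import ot  # Python Optimal Transport

def robot_cost(x, y, lam):
    # ROBOT between empirical measures supported on x (n x d) and y (m x d)
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    M = np.minimum(ot.dist(x, y), 2.0 * lam)  # truncated cost C ∧ 2λ
    plan = ot.emd(a, b, M)                    # any standard OT solver applies
    return float(np.sum(plan * M)), plan

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
x[:20] += 20.0                                # 10% far-away outliers
y = rng.normal(size=(200, 2))
print(robot_cost(x, y, lam=4.0)[0])           # bounded: outliers pay at most 2λ

Plan entries whose untruncated cost exceeds 2λ flag the outliers, which is exactly what the construction in Section 3 exploits.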
We also provide a simple procedure for computing the optimal s in equation 2.2 from the solution of equation 2.4, enabling a new OT application: outlier detection. We verify the effectiveness of robust MKE and outlier detection in our experiments in Section 4. Before presenting the equivalence proof, we formulate the discrete analogs of the two ROBOT formulations for their practical value." }, { "heading": "2.2 DISCRETE ROBOT FORMULATIONS", "text": "In practice we typically encounter samples from the distributions, rather than the distributions themselves. Sampling is also built into stochastic optimization. In this subsection, we present the discrete versions of the ROBOT formulations. The key detail is that, in equation 2.2, µ, ν and s are all supported on Rd, while in the discrete case the empirical measures µn ∈ ∆n−1 and νm ∈ ∆m−1 are supported on a set of points (∆r is the unit probability simplex in Rr). As a result, to formulate a discrete version of equation 2.2, we need to augment µn and νm with each other's supports. To be precise, let supp(µn) = {X1, . . . , Xn} and supp(νm) = {Y1, . . . , Ym}. Define C = {Z1, Z2, . . . , Zm+n} = {X1, . . . , Xn, Y1, . . . , Ym}. The discrete analog of equation 2.2 is then

Formulation 1 (discrete):

ROBOT(µn, νm) = min Π∈R(m+n)×(m+n), s∈Rm+n 〈Caug, Π〉 + λ [‖s1‖1 + ‖t1‖1]
subject to Π1m+n = [ µn + s1; t1 ], Π>1m+n = [ 0; νm ], Π ≥ 0, 1>m+n s = 0, (2.5)

where Caug ∈ R(m+n)×(m+n) is the augmented cost matrix Caug,i,j = c(Zi, Zj) (c is the ground cost, e.g., squared Euclidean distance), s = (s1, t1), and 1r is the vector of all ones in Rr. The TV norm is replaced by its discrete analog, the L1 norm. Similarly to its continuous counterpart, the optimization problem is harder than typical OT due to the additional optimization variable s, the extra constraints, and the increased cost matrix size.

The discrete analog of equation 2.4 is straightforward:

Formulation 2 (discrete):

ROBOT(µn, νm) = min Π∈Rn×m 〈Cλ, Π〉 subject to Π1m = µn, Π>1n = νm, Π ≥ 0, (2.6)

where Cλ,i,j = c(Xi, Yj) ∧ 2λ. As in the continuous case, it is easy to adapt modern (regularized) OT solvers without any computational overhead. Also as in the continuous case, formulations equation 2.5 and equation 2.6 are equivalent. It is also possible to recover the s of equation 2.5 from the solution of equation 2.6 to perform outlier detection.

Two-sided formulation So far we have assumed that one of the input distributions does not have outliers, which is the setting of MKE, where the clean distribution corresponds to the model we learn. In some applications, both distributions may be corrupted. To address this case, we provide an equivalent two-sided formulation, analogous to UOT with the TV norm:

Formulation 3 (two-sided):

ROBOT(µn, νm) = min Π∈R(m+n)×(m+n), s1∈Rm+n, s2∈Rm+n 〈Caug, Π〉 + λ [‖s1‖1 + ‖t1‖1 + ‖s2‖1 + ‖t2‖1]
subject to Π1m+n = [ µn + s1; t1 ], Π>1m+n = [ s2; νm + t2 ], Π ≥ 0, 1>m+n s1 = 0, 1>m+n s2 = 0, (2.7)

where s1 = (s>1 , t>1 )> and s2 = (s>2 , t>2 )>." }, { "heading": "3 EQUIVALENCE OF THE ROBOT FORMULATIONS", "text": "In this section we present our main theorem, which demonstrates the equivalence between the two formulations of robust optimal transport:

Theorem 3.1. For any two measures µ and ν, ROBOT(µ, ν) has the same value under both formulations, i.e., Formulation 1 is equivalent to Formulation 2 in both the continuous and the discrete case. Moreover, we can recover an optimal coupling of one formulation from the other.

Below we sketch the proof of this theorem and highlight some important techniques used in the proof. 
We focus on the discrete case as it is more intuitive and has concrete practical implications in our experiments. A complete proof can be found in Appendix A. Please also see Appendix A.2 for the proof of equivalence between Formulations 1, 2 and 3 in the discrete case." }, { "heading": "3.1 PROOF SKETCH", "text": "In the remainder of this section we consider the discrete case, i.e., equation 2.5 for Formulation 1 (F1) and equation 2.6 for Formulation 2 (F2). Suppose Π∗2 is an optimal solution of F2. Then we construct a feasible solution Π∗1, s∗1 = (s∗1, t∗1) of F1 based on Π∗2 with the same value of the objective function as F2 and claim that (Π∗1, s∗1) is an optimal solution. We prove the claim by contradiction: if (Π∗1, s∗1) is not optimal, then there exists another pair (Π̃1, s̃1) which is optimal for F1 with a strictly smaller objective value. We then construct another feasible solution Π∗2,new of Formulation 2 which has the same objective value as (Π̃1, s̃1) for F1. This implies Π∗2,new has a strictly smaller objective value for F2 than Π∗2, which is a contradiction.

The two main pillars of this proof are (1) to construct a feasible solution of F1 starting from a feasible solution of F2 and (2) to show that the solution constructed is indeed optimal for F1. Hence step (1) gives a recipe to construct an optimal solution of F1 starting from an optimal solution of F2. We elaborate on the first point in the next subsection, which has practical implications for outlier detection. The other point is more technical; interested readers may go through the proof in Appendix A.1.

Algorithm 1 Generating an optimal solution of F1 from F2

1: Start with Π∗2 ∈ Rn×m, an optimal solution of Formulation 2.
2: Create an augmented matrix Π ∈ R(m+n)×(m+n) with all entries 0. Divide Π into four blocks: Π11 (n×n) and Π12 (n×m) on top, Π21 (m×n) and Π22 (m×m) on the bottom.
3: Set Π12 ← Π∗2 and collect all the indices I = {(i, j) : Ci,j > 2λ}.
4: Set Π12(i, j) ← 0 for (i, j) ∈ I.
5: Set Π22(j, j) ← ∑n i=1 Π∗2(i, j) 1(i,j)∈I for all 1 ≤ j ≤ m and set Π∗1 ← Π.
6: Set s∗1(i) ← −∑m j=1 Π∗2(i, j) 1(i,j)∈I for all 1 ≤ i ≤ n.
7: Set t∗1(j) ← Π22(j, j) for all 1 ≤ j ≤ m.
8: return Π∗1, s∗1, t∗1." }, { "heading": "3.2 GOING FROM FORMULATION 2 TO FORMULATION 1", "text": "Let Π∗2 (respectively Π∗1) be an optimal solution of F2 (respectively F1). Recall that Π∗1 has dimension (m + n) × (m + n). From the column sum constraint in F1, we need to take the first n columns of Π∗1 to be exactly 0, whereas the last m columns must sum up to νm. For any matrix A, we denote by A[(a : b) × (c : d)] the submatrix consisting of rows from a to b and columns from c to d. Our main idea is to put a modified version of Π∗2 in Π∗1[(1 : n) × (n + 1 : m + n)] and make Π∗1[(n + 1 : m + n) × (n + 1 : m + n)] diagonal. First we describe how to modify Π∗2. Observe that, if Ci,j > 2λ for some (i, j), we expect Xi ∈ supp(µn) to be an outlier resulting in high transportation cost, which is why we truncate the cost in F2. Therefore, to get an optimal solution of F1, we make the corresponding value of the optimal plan 0 and dump the mass into the corresponding slack variable t∗1 in the diagonal of the bottom-right submatrix. This changes the row sum, which is taken care of by s∗1. But, as we are not moving this mass outside the corresponding column, the column sum of Π∗1[(1 : (m + n)) × ((n + 1) : (m + n))] remains the same as the column sum of Π∗2, which is νm. We summarize this procedure in Algorithm 1.

Example. In Figure 1, we provide an example to visualize the construction. 
On the left, we have Π∗2, an optimal solution of Formulation 2. The blue triangles denote the positions where the corresponding cost value is ≤ 2λ, and light-green squares denote the positions where the corresponding value of the cost matrix is > 2λ. To construct an optimal solution Π∗1 of Formulation 1 from this Π∗2, we first create an augmented matrix of size 6 × 6. We keep all the entries of the left 6 × 3 sub-matrix as 0 (in this picture blank elements indicate 0). On the right submatrix, we put Π∗2 into the top-right block, but remove the masses from the light-green squares, i.e., where the cost value is > 2λ, and put them in the diagonal entries of the bottom-right block as shown in Figure 1. This mass contributes to the slack variables s1 and t1, and this augmented matrix along with s1, t1 gives us an optimal solution of Formulation 1." }, { "heading": "3.3 OUTLIER DETECTION WITH ROBOT", "text": "Our construction algorithm has practical consequences for outlier detection. Suppose we have two datasets, a clean dataset νm (i.e., it has no outliers) and an outlier-contaminated dataset µn. We can detect the outliers in µn without directly solving the costly Formulation 1 by following Algorithm 2. In this algorithm, λ is a regularization parameter that can be chosen via cross-validation or heuristically (see Section 4.2 for an example). In Section 4.2, we use this algorithm to perform outlier detection on image data.

Algorithm 2 Outlier detection in contaminated data
1: Start with µn (contaminated data) and νm (clean data).
2: Solve Formulation 2 and obtain Π∗2 using a suitable value of λ.
3: Use Algorithm 1 to obtain Π∗1, s∗1, t∗1 from Π∗2.
4: Find I, the set of all the indices where µn + s∗1 = 0.
5: Return I as the indices of outliers in µn.

Table 1: Robust mean estimation with GANs using different distribution divergences. True mean is η0 = 05 (here 05 and 15 denote the all-zeros and all-ones vectors in R5); sample size n = 1000; contamination proportion ε = 0.2. We report results over 30 experiment restarts.
Contamination | JS Loss | SH Loss | RKL Loss | ROBOT | UOT
N(0.1 · 15, I5) | 0.09 ± 0.03 | 0.11 ± 0.03 | 0.115 ± 0.03 | 0.1 ± 0.03 | 0.1 ± 0.04
N(0.5 · 15, I5) | 0.23 ± 0.04 | 0.24 ± 0.05 | 0.24 ± 0.05 | 0.117 ± 0.03 | 0.2 ± 0.04
N(1 · 15, I5) | 0.43 ± 0.05 | 0.43 ± 0.06 | 0.43 ± 0.06 | 0.261 ± 0.06 | 0.25 ± 0.05
N(2 · 15, I5) | 0.67 ± 0.07 | 0.67 ± 0.08 | 0.67 ± 0.08 | 0.106 ± 0.03 | 0.1 ± 0.03

Figure 2: Empirical study of regularization hyperparameter λ sensitivity. (a) Varying proportion of contamination; (b) varying outlier distribution mean." }, { "heading": "4 EMPIRICAL STUDIES", "text": "To evaluate the effectiveness of ROBOT, we consider the task of robust mean estimation under the Huber contamination model. The data is generated from (1 − ε)N(η0, Id) + εN(η1, Id) and the goal is to estimate η0. Prior work has advocated for using f-divergence GANs (Chao et al., 2018; Wu et al., 2020) for this problem and pointed out inefficiencies of the Wasserstein GAN in the presence of outliers. We show that our robust OT formulation allows us to estimate the uncontaminated mean η0 comparably or better than a variety of f-divergence GANs. We also use this simulated setup to study sensitivity to the regularization hyperparameter λ.

In our second experiment, we present a new application of optimal transport enabled by ROBOT. Suppose we have collected a curated dataset νm (i.e., we know that it has no outliers); such data collection is expensive, and we want to benefit from it to automate subsequent data collection. Let µn be a second dataset collected “in the wild,” i.e., it may or may not have outliers. 
We demonstrate how ROBOT can be used to identify outliers in µn using the curated dataset νm." }, { "heading": "4.1 ROBUST MEAN ESTIMATION", "text": "Following Wu et al. (2020), we consider a simple generator of the form gθ(x) = x + θ, x ∼ N(0, Id), where d is the data dimension. The basic idea of robust mean estimation with GANs is to minimize various distributional divergences between samples from gθ and observed data simulated from (1 − ε)N(η0, Id) + εN(η1, Id). The goal is to estimate η0 with θ. To efficiently implement the ROBOT GAN, we use a standard min-max optimization approach: solve the inner maximization (the regularized dual of ROBOT) and use gradient descent for the outer minimization over θ. To solve ROBOT, it is straightforward to adopt any of the prior stochastic regularized OT solvers: the only modification is the truncation of the cost entries as in equation 2.6. We use the stochastic algorithm for semi-discrete regularized OT from (Genevay et al., 2016, Algorithm 2). We summarize the ROBOT GAN in Algorithm 3. Lines 5-10 perform the inner optimization, where we solve the entropy-regularized OT dual with truncated cost, and Lines 11-12 perform the gradient update of θ.

Algorithm 3 ROBOT GAN

1: Input: robustness regularization λ, entropic regularization α, data distribution µn ∈ ∆n−1, supp(µn) = X = [X1, . . . , Xn], step sizes τ and γ
2: Initialize: θ = θinit, set the number of iterations M and L, i = 0, v = ṽ = 0.
3: for j = 1, . . . , M do
4: Generate z̃ ∼ N(0, Id) and set z = z̃ + θ.
5: Set the cost vector c ∈ Rn as c(k) = c(Xk, z) ∧ 2λ for k = 1, . . . , n.
6: for i = 1, . . . , L do ▷ solve the entropy-regularized OT dual
7: Set h ← (ṽ − c)/α and apply the normalized exponential transformation u ← eh/〈1, eh〉.
8: Calculate the gradient ∇ṽ ← µn − u.
9: Update ṽ ← ṽ + γ∇ṽ and v ← (1/(j + i))ṽ + ((j + i − 1)/(j + i))v.
10: Apply the same transformation to v as in Step 7, i.e., set h ← (v − c)/α and Π ← eh/〈1, eh〉.
11: Set Π(k) = 0 for k such that C(Xk, z) > 2λ, for k = 1, . . . , n.
12: Calculate the gradient with respect to θ as ∇θ = 2[ z ∑k Π(k) − X>Π ].
13: Update θ ← θ − τ∇θ.
14: Output: θ

For the f-divergence GANs (Nowozin et al., 2016) we use the code of Wu et al. (2020) for GANs with Jensen-Shannon (JS) loss, squared Hellinger (SH) loss, and Reverse Kullback-Leibler (RKL) loss. For the exact expressions of these divergences see Table 1 of Wu et al. (2020). We report the estimation error, measured by the Euclidean distance between the true uncontaminated mean η0 and the estimated mean θ, for various contamination distributions in Table 1. The ROBOT GAN performs well across all considered contamination distributions. As the difference between the true mean η0 and the contamination mean η1 increases, the estimation error of all methods tends to increase. However, when it becomes easier to distinguish outliers from clean samples, i.e., η1 = 2 · 15, the performance of ROBOT noticeably improves.

We also compared to the Sinkhorn-based UOT algorithm (Chizat et al., 2018) available in the Python Optimal Transport (POT) library (Flamary & Courty, 2017); to obtain a UOT GAN, we modified steps 5-11 of Algorithm 3 for computing Π. Unsurprisingly, both ROBOT and UOT perform similarly: recall the equivalence to Formulation 3, which is similar to UOT with the TV norm. The key insight of our work is the equivalence to classical OT with truncated cost, which greatly simplifies optimization and allows the use of existing stochastic OT algorithms. 
In this experiment, the sample size n = 1000 is sufficiently small for the Sinkhorn-based UOT POT implementation to be effective, but it breaks in the experiment we present in Section 4.2. We also tried the code of Balaji et al. (2020) based on CVXPY (Diamond & Boyd, 2016), but it is too slow even for the n = 1000 sample size.

In the previous experiment, we set λ = 0.5. Now we demonstrate empirically that there is a broad range of λ values that perform well. In Figure 2a, we study the sensitivity to λ under various contamination proportions, holding η0 = 15 and η1 = 5 · 15 fixed. Horizontal lines correspond to λ = ∞, i.e., vanilla OT. The key observations are: there is a wide range of λ efficient at all contamination proportions, and ROBOT is always at least as good as vanilla OT (even when there is no contamination, ε = 0). In Figure 2b, we present a similar study varying the mean of the contamination distribution and holding ε = 0.2 fixed. We see that as the contamination distribution gets closer to the true distribution, it becomes harder to pick a good λ, but the performance is always at least as good as the vanilla OT (horizontal lines)." }, { "heading": "4.2 OUTLIER DETECTION FOR DATA COLLECTION", "text": "Our robust OT formulation equation 2.5 enables outlier identification. Let νm be a clean dataset and µn potentially contaminated with outliers. Recall that ROBOT allows modification of one of the input distributions to eliminate potential outliers. We can identify outliers in µn as follows: if µn(i) + s∗1(i) = 0, then Xi, the ith point in µn, is an outlier. Instead of directly solving equation 2.5, which may be inefficient, we use our equivalence results and solve an easier optimization problem, equation 2.6, followed by recovering s to find outliers via Algorithm 2.

Let νm be a clean dataset consisting of 10k MNIST digits and µn be a dataset collected “in the wild” consisting of (different) 8k MNIST digits and 2k Fashion MNIST images. We compute ROBOT(µn, νm) to identify outlier Fashion MNIST images in µn. For each point in µn we obtain a prediction, outlier or clean, which allows us to evaluate accuracy. ROBOT outlier detection is 90% accurate in this experiment. We also comment on λ selection: since we know that νm is clean, we can subsample two datasets from it, compute vanilla OT to obtain a transportation plan Π, and set λ to be half the maximum distance between matched elements, i.e., 2λ = maxi,j{Cij : Πij > 0}, where C is the cost matrix for the two subsampled datasets. This procedure essentially estimates the maximum distance between matched clean samples. We also present a random sample of outliers identified by our method in Figure 3. All of the sampled outliers are Fashion MNIST images, although 90% accuracy suggests that some of the outliers were not identified. Decreasing λ can help to find more outliers, but may result in some clean samples being mistaken for outliers. We conclude that ROBOT can be used to assist in data collection once an initial set of clean data has been acquired. As we mentioned previously, the Sinkhorn-based UOT POT implementation is too expensive for this experiment due to the larger sample size, yielding memory errors on a personal laptop with 16GB RAM.

For comparison, we also consider a heuristic distance-based approach for identifying outliers. We estimate the diameter τ of the clean dataset νm by taking the 99th percentile of the pairwise distance matrix of samples in νm. 
If outliers and clean data have disjoint support, we can adopt a simple heuristic: for each sample in the potentially contaminated µn, compute the average distance to the clean samples in νm and declare a sample an outlier if this average distance is greater than the diameter τ of the clean data. The accuracy of this procedure is 85.4%, inferior to the ROBOT accuracy of 90%. The disjoint-support assumption justifying the distance-based heuristic might be too strong in practice. ROBOT continues to be effective even when the supports of the clean and outlier distributions are not easily separable." }, { "heading": "5 SUMMARY AND DISCUSSION", "text": "We proposed and studied ROBOT, a robust formulation of optimal transport. We showed that although the problem is seemingly asymmetric and challenging to optimize, there is an equivalent formulation based on cost truncation that is symmetric and compatible with modern stochastic optimization methods for OT.

ROBOT closely resembles unbalanced optimal transport (UOT). In our formulation, we added a TV regularizer to the vanilla optimal transport problem. This is motivated by the ε-contamination model. In UOT, the TV regularizer is typically replaced with a KL divergence. Other choices of the regularizer may lead to new properties and applications. Studying equivalent, simpler formulations of UOT with different divergences may be a fruitful future work direction.

From the practical perspective, in our experiments we observed no degradation of the ROBOT GAN in comparison to the OT GAN, even when there were no outliers. It is possible that replacing OT with ROBOT may be beneficial for various machine learning applications of OT. Data encountered in practice may not be explicitly contaminated with outliers, but it often has errors and other deficiencies, suggesting that a “no-harm” robustness is desirable." }, { "heading": "A PROOF OF THEOREM 3.1", "text": "" }, { "heading": "A.1 PROOF OF DISCRETE VERSION", "text": "Proof. Define a matrix Π as: Π(i, j) = 0 if C(i, j) > 2λ, and Π(i, j) = Π∗2(i, j) otherwise.

Also define s∗1 ∈ Rn and t∗1 ∈ Rm as

s∗1(i) = −∑m j=1 Π∗2(i, j) 1C(i,j)>2λ

and similarly

t∗1(j) = ∑n i=1 Π∗2(i, j) 1C(i,j)>2λ.

These vectors correspond (up to sign) to the row sums and the column sums of the entries of the optimal transport plan of Formulation 2 at which the cost exceeds 2λ. Note that these coordinates of the optimal transport plan correspond to the coordinates of the cost matrix where the cost is greater than 2λ; they contribute to the objective value only through their sum, hence any rearrangement of these transition probabilities with the same sum gives the same objective value.

Now, based on this Π, we construct a feasible solution of Formulation 1 following Algorithm 1:

Π∗1 = [ 0 Π; 0 diag(t∗1) ]

The row sums of Π∗1 are

Π∗1 1 = [ µn + s∗1; t∗1 ]

and it is immediate from the construction that the column sums of Π∗1 are given by [ 0; νm ]. Also, as

−∑n i=1 s∗1(i) = ∑m j=1 t∗1(j) = ∑(i,j):Ci,j>2λ Π∗2(i, j)

and s∗1 ≤ 0, t∗1 ≥ 0 entrywise, we have

1>(µn + s∗1) + 1>t∗1 = 1>µn = 1.

Therefore, (Π∗1, s∗1, t∗1) is a feasible solution of Formulation 1. Now suppose this is not an optimal solution. Pick an optimal solution Π̃, s̃, t̃ of Formulation 1 so that

〈Caug, Π̃〉 + λ [ ‖s̃‖1 + ‖t̃‖1 ] < 〈Caug, Π∗1〉 + λ [ ‖s∗1‖1 + ‖t∗1‖1 ].

The following two lemmas provide some structural properties of any optimal solution of Formulation 1:

Lemma A.1. Suppose Π∗1, s∗1, t∗1 form an optimal solution of Formulation 1. 
Divide Π∗1 into four parts corresponding to the augmentation as in Algorithm 1:

Π∗1 = [ Π∗1,11 Π∗1,12; Π∗1,21 Π∗1,22 ]

Then we have Π∗1,11 = Π∗1,21 = 0 and Π∗1,22 is a diagonal matrix.

Lemma A.2. If Π∗1, s∗1, t∗1 is an optimal solution of Formulation 1 then:

1. If Ci,j > 2λ then Π∗1(i, j) = 0.
2. If Ci,j < 2λ for some i and for all 1 ≤ j ≤ m, then s∗1(i) = 0.
3. If Ci,j < 2λ for some j and for all 1 ≤ i ≤ n, then t∗1(j) = 0.
4. If Ci,j < 2λ then s∗1(i) t∗1(j) = 0.

We provide the proofs in the next subsection. By Lemma A.1 we can assume without loss of generality:

Π̃ = [ 0 Π̃12; 0 diag(t̃) ]

Now, based on (Π̃, s̃, t̃), we create a feasible solution, namely Π∗2,new, of Formulation 2 as follows. Define the sets of indices {i1, · · · , ik} and {j1, . . . , jl} by

s̃i1, s̃i2, . . . , s̃ik ≠ 0 and t̃j1, t̃j2, . . . , t̃jl ≠ 0.

Then by part (4) of Lemma A.2 we have Ciα,jβ > 2λ for α ∈ {1, . . . , k} and β ∈ {1, . . . , l}. Also, by part (1) of Lemma A.2, the value of the transport plan at these coordinates is 0. Now distribute the mass of the slack variables over these coordinates such that the marginals of the new transport plan become exactly µn and νm. This new transport plan is our Π∗2,new. Recall that ‖s̃‖1 = ‖t̃‖1. Hence, the regularizer value decreases by 2λ‖s̃‖1 and the cost value increases by exactly 2λ‖s̃‖1, as we are truncating the cost. Hence we have

〈Cλ, Π∗2,new〉 = 〈Caug, Π̃〉 + λ [ ‖s̃‖1 + ‖t̃‖1 ] < 〈Caug, Π∗1〉 + λ [ ‖s∗1‖1 + ‖t∗1‖1 ] = 〈Cλ, Π∗2〉,

which is a contradiction, as Π∗2 is the optimal solution of Formulation 2. This completes the proof for the discrete part." }, { "heading": "A.2 PROOF OF EQUIVALENCE FOR TWO-SIDED FORMULATION", "text": "Here we prove that our two-sided formulation, i.e., Formulation 3 (equation 2.7), is equivalent to Formulation 1 (equation 2.5) in the discrete case. Towards that end, we introduce another auxiliary formulation and show that both Formulation 1 and Formulation 3 are equivalent to the following auxiliary formulation of the problem.

Formulation 4:

W4(p, q) = min Π∈Rn×m, s1∈Rn, s2∈Rm 〈C, Π〉 + λ [ ‖s1‖1 + ‖s2‖1 ]
subject to Π1m = p + s1, Π>1n = q + s2, Π ≥ 0. (A.1)

First we show that Formulation 1 and Formulation 4 are equivalent in the sense that they have the same optimal objective value.

Theorem A.3. Suppose C is a cost function such that C(x, x) = 0. Then Formulation 1 and Formulation 4 have the same optimal objective value.

Proof. Towards that end, we show that given optimal variables of one formulation we can obtain optimal variables of the other formulation with the same objective value. Before going into details we need the following lemma, whose proof is provided in Appendix B:

Lemma A.4. Suppose Π∗4, s∗4,1, s∗4,2 are the optimal variables of Formulation 4. Then s∗4,1 ≤ 0 and s∗4,2 ≤ 0 entrywise.

Now we prove that the optimal values of Formulation 1 and Formulation 4 are the same. Let (Π∗1, s∗1,1, t∗1,1) be an optimal solution of Formulation 1. Then we claim that (Π∗1, s∗1,1, t∗1,1) is also an optimal solution of Formulation 4. Clearly it is a feasible solution of Formulation 4. Suppose it is not optimal, i.e., there exists another optimal solution (Π̃4, s̃4,1, s̃4,2) such that

〈C, Π̃4〉 + λ(‖s̃4,1‖1 + ‖s̃4,2‖1) < 〈C, Π∗1,12〉 + λ(‖s∗1,1‖1 + ‖t∗1,1‖1).

Now, based on (Π̃4, s̃4,1, s̃4,2), we construct a feasible solution of Formulation 1 as follows:

Π̃1 = [ 0 Π̃4; 0 −diag(s̃4,2) ]

Note that we proved in Lemma A.4 that s̃4,2 ≤ 0, hence we have Π̃1 ≥ 0. Now, as the column sums of Π̃4 are q + s̃4,2, we have that the column sums of Π̃1 are [ 0; q ] and the row sums are [ (p + s̃4,1)>, −s̃>4,2 ]>. 
Hence we take s̃1,1 = s̃4,1 and s̃1,2 = −s̃4,2. Then it follows:\n\n〈Caug, Π̃1〉 + λ [ ‖s̃1,1‖1 + ‖s̃1,2‖1 ] = 〈C, Π̃4〉 + λ [ ‖s̃4,1‖1 + ‖s̃4,2‖1 ] < 〈C, Π∗1,12〉 + λ [ ‖s∗1,1‖1 + ‖t∗1,1‖1 ] = 〈Caug, Π∗1〉 + λ [ ‖s∗1,1‖1 + ‖t∗1,1‖1 ]\n\nThis is a contradiction, as we assumed (Π∗1, s∗1,1, t∗1,1) is an optimal solution of Formulation 1. Therefore we conclude that (Π∗1, s∗1,1, t∗1,1) is also an optimal solution of Formulation 4, which further shows that Formulation 1 and Formulation 4 have the same optimal value. This completes the proof of the theorem.\nTheorem A.5. The optimal objective values of Formulation 3 and Formulation 4 are the same.\nProof. As in the proof of Theorem A.3, we first prove a couple of lemmas.\nLemma A.6. Any optimal transport plan Π∗3 of Formulation 3 has the following structure: if we write\n\nΠ∗3 = [ Π∗3,11  Π∗3,12;  Π∗3,21  Π∗3,22 ]\n\nthen Π∗3,11 and Π∗3,22 are diagonal matrices and Π∗3,21 = 0.\nLemma A.7. If s∗3,1, t∗3,1, s∗3,2, t∗3,2 are the four optimal slack variables in Formulation 3, then s∗3,1, t∗3,1 ⪯ 0 and s∗3,2, t∗3,2 ⪰ 0.\nProof. The line of argument is the same as in the proof of Lemma A.4.\nNext we establish the equivalence. Suppose (Π∗3, s∗3,1, t∗3,1, s∗3,2, t∗3,2) are optimal variables of Formulation 3. We claim that (Π∗3,12, s∗3,1 − s∗3,2, t∗3,1 − t∗3,2) forms an optimal solution of Formulation 4. The objective value is then also the same, since s∗3,1 ⪯ 0, s∗3,2 ⪰ 0 (Lemma A.7) implies ‖s∗3,1 − s∗3,2‖1 = ‖s∗3,1‖1 + ‖s∗3,2‖1, and similarly t∗3,1 ⪯ 0, t∗3,2 ⪰ 0 implies ‖t∗3,1 − t∗3,2‖1 = ‖t∗3,1‖1 + ‖t∗3,2‖1. Feasibility is immediate. For optimality, we again argue by contradiction. Suppose they are not optimal, and let (Π̃4, s̃4,1, s̃4,2) be an optimal triplet of Formulation 4. Now construct another feasible solution of Formulation 3 as follows: set s̃3,2 = t̃3,2 = 0, s̃3,1 = s̃4,1 and t̃3,1 = s̃4,2. Set the matrix as:\n\nΠ̃3 = [ 0  Π̃4;  0  −diag(s̃4,2) ]\n\nThen (Π̃3, s̃3,1, s̃3,2, t̃3,1, t̃3,2) is a feasible solution of Formulation 3. Finally we have:\n\n〈Caug, Π̃3〉 + λ [ ‖s̃3,1‖1 + ‖s̃3,2‖1 + ‖t̃3,1‖1 + ‖t̃3,2‖1 ] = 〈Caug, Π̃3〉 + λ [ ‖s̃4,1‖1 + ‖s̃4,2‖1 ] = 〈C, Π̃4〉 + λ [ ‖s̃4,1‖1 + ‖s̃4,2‖1 ] < 〈C, Π∗3,12〉 + λ [ ‖s∗3,1 − s∗3,2‖1 + ‖t∗3,1 − t∗3,2‖1 ] = 〈Caug, Π∗3〉 + λ [ ‖s∗3,1‖1 + ‖s∗3,2‖1 + ‖t∗3,1‖1 + ‖t∗3,2‖1 ]\n\nThis contradicts the optimality of (Π∗3, s∗3,1, s∗3,2, t∗3,1, t∗3,2), which completes the proof." }, { "heading": "A.3 PROOF OF CONTINUOUS VERSION", "text": "Proof. In this proof we denote by F1 the optimization problem of equation 2.2 and by F2 the optimization problem of equation 2.4. Let µn and νm denote the empirical measures relative to µ, ν. From Villani (2009), we know that µn, νn converge weakly to µ and ν respectively. Therefore ROBOT2(µn, µ) → 0, and similarly for νn and ν. Thus, by the triangle inequality,\n\nlim_{n→∞} |F2(µn, νn) − F2(µ, ν)| = 0.\n\nBut ROBOT2(µn, νn) = ROBOT1(µn, νn). Therefore, our proof is complete if we can show that\n\nlim_{n,m→∞} |F1(µn, νm) − F1(µ, ν)| = 0.\n\nLet S = {s signed measure : µ + s is a probability measure on R^d}. For s ∈ S, define\n\nW(µ+s, ν) = min_{Π ∈ F(R^d × R^d)} ∫ C(x, y) Π(dx, dy)  subject to  ∫_A Π(dx, dy) ≥ 0 ∀ A ∈ B(R^d × R^d),  ∫_B (µ + s)(dx) ≥ 0 ∀ B ∈ B(R^d),  ∫_{R^d × C} Π(dx, dy) = ∫_C ν(dy) ∀ C ∈ B(R^d)\n\nBy Lemma A.8, there exists s ∈ S such that ROBOT(µ, ν) = W(µ + s, ν) + λ‖s‖TV. Let s = s+ − s−, where s+ and s− are positive measures on R^d. Let ‖s‖TV = γ; then ‖s−‖TV = ‖s+‖TV = γ/2. Then consider X1, . . . , Xn ∼ (µ − s−)/(1 − γ), Y1, . . . , Yn ∼ s−/γ, Z1, . . . , Zn ∼ s+/γ. 
Then for any bounded continuous function f,\n\nlim_{n→∞} Σ_i f(Xi)/n = ∫ f(x) (µ − s−)/(1 − γ) (dx),  lim_{n→∞} Σ_i f(Zi)/n = ∫ f(x) s+/γ (dx)  (A.2)\n\nTherefore, the distribution given by (µ + s)_n = (1 − γ)/n Σ_i δ_{Xi} + γ/n Σ_i δ_{Zi} satisfies (µ + s)_n → µ + s in law, and therefore from (Villani, 2009), W_C((µ + s)_n, νn) → W_C(µ + s, ν). Here δx is the Dirac mass at x. Moreover, ‖sn‖TV = ‖s‖TV, where sn = (γ/n)(Σ_i δ_{Zi} − Σ_i δ_{Yi}).\nAlso, ROBOT(µn, νn) ≤ W((µ + s)_n, νn) + λ‖s‖TV, and therefore ROBOT2(µ, ν) = lim sup_{n→∞} ROBOT(µn, νn) ≤ ROBOT(µ, ν). Now, let s̃n satisfy W1(µn + s̃n, νn) + λ‖s̃n‖TV = ROBOT(µn, νn). Such an s̃n exists by the proof of the discrete part, because µn, νn are discrete measures.\nThen, similar to Step 1 in the proof of Lemma A.8, there exists a probability measure µ ⊕ s and a subsequence {n_k}_{k≥1} such that µ_{nk} + s_{nk} almost surely converges weakly to µ ⊕ s. Moreover, similar to Step 2 of Lemma A.8, W1(µ_{nk} + s_{nk}, µ ⊕ s) → 0 as well as ‖s_{nk}‖TV → ‖µ ⊕ s − µ‖TV. Thus, W1(µ_{nk} + s_{nk}, ν_{nk}) + λ‖s_{nk}‖TV → W1(µ ⊕ s, ν) + λ‖µ ⊕ s − µ‖TV. But by the proof of the discrete part, ROBOT(µ_{nk}, ν_{nk}) = ROBOT2(µ_{nk}, ν_{nk}) → ROBOT2(µ, ν). Therefore, with s = µ ⊕ s − µ, W1(µ + s, ν) + λ‖s‖TV = ROBOT2(µ, ν). Therefore, ROBOT2(µ, ν) = lim sup_{n→∞} ROBOT(µn, νn) ≥ ROBOT(µ, ν), and thus equality holds.\nLemma A.8. Assume that µ, ν are such that ∫‖x‖dµ, ∫‖x‖dν < ∞. Moreover, assume that C(x, y) in equation 2.2 is the l1 norm, i.e., C(x, y) = ‖x − y‖. Then there exists s, with µ + s a probability measure, such that\n\nW1(µ + s, ν) + λ‖s‖TV = ROBOT(µ, ν),  (A.3)\n\nwhere W1 is the Wasserstein-1 distance with the cost function C(·, ·) mentioned above.\nProof. Let µn, νm be the empirical measures relative to µ, ν respectively. Since µn, νm are discrete, there exists sn satisfying W1(µn + sn, νm) + λ‖sn‖TV = ROBOT(µn, νm). We proceed in the following steps.\nStep 1: Almost surely under µ × ν, there exists a subsequence {n_k}_{k≥1} such that {µn + sn}_n and {νn}_n are relatively compact.\nµ and ν are probability measures on R^d and are therefore tight. Let Kε be such that Pµ(X ∉ Kε), Pν(Y ∉ Kε) ≤ ε/4. Consider the empirical distributions νn = Σ_i δ_{Yi}/n, µn = Σ_i δ_{Xi}/n of ν, µ respectively. Here, Xi ∼ µ and Yi ∼ ν. Fix an ω. Then {X1, . . . , Xn, Y1, . . . , Yn} is fixed. Now, by the construction for the discrete case, sn has support in {X1, . . . , Xn, Y1, . . . , Yn}. Let Tn be the optimal transport map from µn to νn. Then, for every i ≤ n, there exists a unique j ≤ n such that Tn(Xi) = Yj. Define τn : {1, . . . , n} → {1, . . . , n} such that τn(i) = j if Tn(Xi) = Yj. Then µn + sn = Σ_i δ_{Zi}/n, where Zi = Xi or Y_{τn(i)}, and δx is the Dirac delta mass at x.\nThen, for Z ∼ µn + sn,\n\nPω(Z ∉ Kε | µn + sn) ≤ Σ_i 1(Xi ∉ Kε)/n + Σ_i 1(Yi ∉ Kε)/n  (A.4)\n\nTherefore, E(Pω(Z ∉ Kε | µn + sn)) ≤ ε/2. Moreover, Var(Pω(Z ∉ Kε | µn + sn)) = o(n^{−1}) → 0. Therefore, lim_{n→∞} P_{µn×νn}(Pω(Z ∉ Kε | µn + sn) ≤ ε) = 1. Therefore µn + sn is almost surely tight, and thus by Prokhorov's Theorem also relatively compact.\nStep 2: Therefore, for almost every ω, there exists a subsequence {n_k}_{k≥1} such that µ_{nk} + s_{nk} converges weakly to a limit (dependent on ω) µ ⊕ s, which is a probability measure. Moreover, ∫‖x‖ d(µ_{nk} + s_{nk}) < ∞ almost surely. By the Bolzano–Weierstrass Theorem, there exists a further subsequence {n_{kl}}_l such that ∫‖x‖ d(µ_{n_{kl}} + s_{n_{kl}}) → ∫‖x‖ d(µ ⊕ s) almost surely. 
For the sake of convenience, and without loss of generality, we will replace the sub-subsequence {n_{kl}}_l with {n_k}_{k≥1} henceforth.\nThus, by Theorem 6.9 of Villani (2009), W1(µ_{nk} + s_{nk}, µ ⊕ s) → 0 almost surely. Moreover, W1(µ_{nk}, µ) → 0 almost surely. Therefore ‖s_{nk}‖TV → ‖µ ⊕ s − µ‖TV almost surely.\nStep 3: Consider an arbitrary S = S+ − S−, such that S+ and S− are positive measures on R^d and µ + S is a probability measure. Let ‖S‖TV = γ. Then ‖S−‖TV = ‖S+‖TV = γ/2. Consider X1, . . . , Xn ∼ (µ − S−)/(1 − γ), Y1, . . . , Yn ∼ S−/γ, Z1, . . . , Zn ∼ S+/γ. Then for any bounded continuous function f,\n\nlim_{n→∞} Σ_i f(Xi)/n = ∫ f(x) (µ − S−)/(1 − γ) (dx),  lim_{n→∞} Σ_i f(Zi)/n = ∫ f(x) S+/γ (dx)  (A.5)\n\nTherefore, the distribution given by (µ + S)_n(A) = (1 − γ)/n Σ_i 1_{Xi ∈ A} + γ/n Σ_i 1_{Zi ∈ A} satisfies (µ + S)_n → µ + S in law, and therefore from (Villani, 2009), W1((µ + S)_n, νn) → W1(µ + S, ν). Moreover, ‖Sn‖TV = ‖S‖TV, where Sn satisfies Sn(A) = (γ/n)(Σ_i 1_{Zi ∈ A} − Σ_i 1_{Yi ∈ A}).\nBut W1(µ_{nk} + s_{nk}, ν_{nk}) + λ‖s_{nk}‖TV ≤ W1((µ + S)_{nk}, ν_{nk}) + λ‖S_{nk}‖TV. Therefore, taking limits, W1(µ ⊕ s, ν) + λ‖µ ⊕ s − µ‖TV ≤ W1(µ + S, ν) + λ‖S‖TV, and thus the claim holds with s = µ ⊕ s − µ." }, { "heading": "B PROOF OF ADDITIONAL LEMMAS", "text": "" }, { "heading": "B.1 PROOF OF LEMMA A.1", "text": "Proof. The fact that Π∗1,11 = Π∗1,21 = 0 follows from Π∗1 ⪰ 0 together with the marginal constraints of Formulation 1. To prove that Π∗1,22 is diagonal, we use the fact that every diagonal entry of the cost matrix is 0. Suppose Π∗1,22 is not diagonal. Then define a matrix Π̂ as follows: set Π̂11 = Π̂21 = 0, Π̂12 = Π∗1,12 and:\n\nΠ̂22(i, j) = { Σ_{k=1}^{m} Π∗1,22(k, i), if j = i;  0, if j ≠ i }\n\nAlso define ŝ = s∗1 and t̂ by t̂(i) = Π̂22(i, i). Then clearly (Π̂, ŝ, t̂) is a feasible solution of Formulation 1. Note that:\n\n‖t̂‖1 = 1⊤ Π̂22 1 = 1⊤ Π∗1,22 1 = ‖t∗1‖1\n\nand by our construction 〈Caug, Π̂〉 < 〈Caug, Π∗1〉. Hence (Π̂, ŝ, t̂) reduces the value of the objective function of Formulation 1, which is a contradiction. This completes the proof." }, { "heading": "B.2 PROOF OF LEMMA A.2", "text": "Proof. 1. Suppose Π∗1(i, j) > 0. Then moving this mass to the slack variables and setting Π∗1(i, j) = 0 decreases 〈Caug, Π∗1〉 by more than 2λΠ∗1(i, j), while the regularizer value increases by at most 2λΠ∗1(i, j), resulting in an overall reduction of the objective value, which is a contradiction. 2. Suppose every entry of the ith row of C is < 2λ. If s∗1(i) > 0, we can distribute this mass within the ith row, writing s∗1(i) = a1 + a2 + · · · + am with the condition that t∗1(j) ≥ aj. Now we reduce t∗1 as:\n\nt∗1(j) ← t∗1(j) − aj\n\nHence the value 〈Caug, Π∗1〉 increases by less than 2λ s∗1(i), while the regularizer value decreases by 2λ s∗1(i), resulting in an overall decrease of the objective value. 3. The same as the proof of part (2), interchanging rows and columns in the argument. 4. Suppose not. Then choose ε < s∗1(i) ∧ t∗1(j) and add ε to Π∗1(i, j). The cost value 〈Caug, Π∗1〉 increases by less than 2λε, while the regularizer value decreases by 2λε, resulting in an overall decrease of the objective value." }, { "heading": "B.3 PROOF OF LEMMA A.4", "text": "Proof. For notational simplicity, we now drop the subscript 4, as we will only deal with the solution of Formulation 4 and there is no ambiguity. We prove the lemma by contradiction. Suppose s∗1,i > 0. Then we show that one can construct another solution (Π̃, s̃1, s̃2) of Formulation 4 with a lower objective value. 
To construct this new solution, set:\n\ns̃1,j = { s∗1,j, if j ≠ i;  0, if j = i }\n\nTo change the transport plan, we modify only the ith row of Π∗: we subtract a1, a2, . . . , an ≥ 0 from the ith row of Π∗ in such a way that none of the elements become negative. Hence the column sums change; the value of s̃2 becomes:\n\ns̃2,j = s∗2,j − aj  ∀ 1 ≤ j ≤ n .\n\nClearly, from our construction:\n\n〈C, Π̃〉 ≤ 〈C, Π∗〉\n\nFor the regularization part, note that, as we only reduced the ith element of s∗1, we have ‖s̃1‖1 = ‖s∗1‖1 − s∗1,i. By the triangle inequality,\n\n‖s̃2‖1 ≤ ‖s∗2‖1 + ‖a‖1 = ‖s∗2‖1 + s∗1,i\n\nby the construction of the aj's, since aj ≥ 0 and Σ_j aj = s∗1,i. Hence we have:\n\n‖s̃1‖1 + ‖s̃2‖1 ≤ ‖s∗1‖1 − s∗1,i + ‖s∗2‖1 + s∗1,i = ‖s∗1‖1 + ‖s∗2‖1 .\n\nHence the value corresponding to the regularizer also decreases. This completes the proof." }, { "heading": "B.4 PROOF OF LEMMA A.6", "text": "Proof. We prove this lemma by contradiction. Suppose Π∗3 does not have the structure stated in the lemma. Construct another transport plan Π̃3 for Formulation 3 as follows: keep Π̃3,12 = Π∗3,12 and set Π̃3,21 = 0. Construct the other parts as:\n\nΠ̃3,11(i, j) = { Σ_{k=1}^{m} Π∗3,11(i, k) + Σ_{k=1}^{n} Π∗3,21(k, i), if i = j;  0, if i ≠ j }\n\nand\n\nΠ̃3,22(i, j) = { Σ_{k=1}^{n} Π∗3,22(k, i), if i = j;  0, if i ≠ j }\n\nIt is immediate from the construction that:\n\n〈Caug, Π̃3〉 ≤ 〈Caug, Π∗3〉\n\nAs for the regularization term: note that by our construction s̃4 is the same as s∗4, since the column sums of Π̃3,22 equal those of Π∗3,22. For the other three:\n\ns̃3(i) = Π̃3,11(i, i) = Σ_{k=1}^{m} Π∗3,11(i, k) + Σ_{k=1}^{n} Π∗3,21(k, i)\n\ns̃2(i) = Π̃3,22(i, i) = Σ_{k=1}^{n} Π∗3,22(k, i)\n\nand hence by construction:\n\n‖s̃2‖1 = 1⊤ Π∗3,22 1 = ‖s∗2‖1 − 1⊤ Π∗3,21 1 .\n\n‖s̃3‖1 = 1⊤ Π∗3,11 1 + 1⊤ Π∗3,21 1 = ‖s∗3‖1\n\nAlso by our construction, s̃1 = s∗1 + c, where c = (Π∗3,21)⊤ 1. As a consequence, ‖c‖1 = 1⊤ Π∗3,21 1. Then it follows:\n\nΣ_{i=1}^{4} ‖s̃i‖1 = ‖s∗1 + c‖1 + ‖s∗2‖1 − 1⊤ Π∗3,21 1 + ‖s∗3‖1 + ‖s∗4‖1 ≤ Σ_{i=1}^{4} ‖s∗i‖1 + ‖c‖1 − 1⊤ Π∗3,21 1 = Σ_{i=1}^{4} ‖s∗i‖1\n\nSo the objective value is reduced overall. This contradicts the optimality of Π∗3, which completes the proof." }, { "heading": "C PROOF OF THEOREM 2.1", "text": "Proof. The proof is immediate from Formulation 1. Recall that Formulation 1 can be restructured as:\n\nROBOT(µ̃, ν) = inf_P { OT(P, ν) + λ‖P − µ̃‖TV } ,\n\nwhere the infimum is taken over all measures dominated by some common measure σ (with respect to which µ, µc, and ν are all dominated). Hence,\n\nROBOT(µ̃, ν) ≤ OT(P, ν) + λ‖P − µ̃‖TV\n\nfor any particular choice of P. Taking P = µ we get\n\nROBOT(µ̃, ν) ≤ OT(µ, ν) + λ‖µ − µ̃‖TV = OT(µ, ν) + λ ε ‖µ − µc‖TV .\n\nTaking P = ν we get ROBOT(µ̃, ν) ≤ λ‖ν − µ̃‖TV, and finally taking P = µ̃ we get ROBOT(µ̃, ν) ≤ OT(µ̃, ν). This completes the proof.
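\nAs a computational footnote to the proofs above: the discrete equivalence of Theorem 3.1 means ROBOT(µn, νn) can be computed by running ordinary OT under the truncated cost Cλ = min(C, 2λ). The following is a minimal NumPy/SciPy sketch for two uniform empirical measures of equal size, where an optimal plan of Formulation 2 is a permutation and an off-the-shelf assignment solver suffices; the function name and the outlier-flagging step are our illustrative choices, not part of the formal statement.\n\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom scipy.spatial.distance import cdist\n\ndef robot_cost(x, y, lam):\n    # ROBOT between two uniform empirical measures of equal size n, via\n    # Formulation 2: plain OT under the truncated cost C_lambda = min(C, 2*lam).\n    # With uniform marginals of equal size, an optimal plan is a permutation,\n    # so a linear assignment solver recovers it exactly.\n    C = cdist(x, y)                        # pairwise cost matrix C(i, j)\n    C_lam = np.minimum(C, 2.0 * lam)       # truncate the cost at 2*lambda\n    rows, cols = linear_sum_assignment(C_lam)\n    cost = C_lam[rows, cols].mean()        # <C_lambda, Pi>; each atom has mass 1/n\n    # Pairs whose untruncated cost exceeds 2*lambda correspond to mass routed\n    # through the slack variables of Formulation 1, i.e. candidate outliers.\n    outliers = rows[C[rows, cols] > 2.0 * lam]\n    return cost, outliers\n\nFor unequal sample sizes or non-uniform weights, any standard discrete OT solver applied to the truncated cost Cλ plays the same role." } ]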
2020
OUTLIER-ROBUST OPTIMAL TRANSPORT
SP:198d7f650c930a1423f7f30688cd2f73d2719920
[ "This paper aims to extend the continuous optimization approach to causal discovery to handle interventional data as well as observational data. It describes a method for learning the causal structure over a set of categorical variables and reports strong empirical performance. However, no theoretical guarantee or analysis is provided, which is a significant weakness in my view. It also makes no comment on or comparison to a paper that has essentially the same goal, https://arxiv.org/pdf/2007.01754.pdf. The latter paper seems to me more principled and convincing. " ]
Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides much richer information about the underlying data-generating process. However, the extension and application of methods designed for observational data to include interventions is not straightforward and remains an open problem. In this paper we provide a general framework based on continuous optimization and neural networks to create models for the combination of observational and interventional data. The proposed method is applicable even in the challenging and realistic case that the identity of the intervened upon variable is unknown. We examine the proposed method in the setting of graph recovery both de novo and from a partially-known edge set. We establish strong benchmark results on several structure learning tasks, including structure recovery of both synthetic graphs as well as standard graphs from the Bayesian Network Repository.
[]
[ { "authors": [ "Bruce Abramson", "John Brown", "Ward Edwards", "Allan Murphy", "Robert L Winkler" ], "title": "Hailfinder: A bayesian system for forecasting severe weather", "venue": "International Journal of Forecasting,", "year": 1996 }, { "authors": [ "Ingo A Beinlich", "Henri Jacques Suermondt", "R Martin Chavez", "Gregory F Cooper" ], "title": "The alarm monitoring system: A case study with two probabilistic inference techniques for belief networks", "venue": "In AIME", "year": 1989 }, { "authors": [ "Yoshua Bengio", "Tristan Deleu", "Nasim Rahaman", "Rosemary Ke", "Sébastien Lachapelle", "Olexa Bilaniuk", "Anirudh Goyal", "Christopher Pal" ], "title": "A meta-transfer objective for learning to disentangle causal mechanisms", "venue": null, "year": 1901 }, { "authors": [ "David Maxwell Chickering" ], "title": "Optimal structure identification with greedy search", "venue": "Journal of machine learning research,", "year": 2002 }, { "authors": [ "Gregory F. Cooper", "Changwon Yoo" ], "title": "Causal Discovery from a Mixture of Experimental and Observational Data", "venue": "In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence,", "year": 1999 }, { "authors": [ "Laura Douglas", "Iliyan Zarov", "Konstantinos Gourgoulias", "Chris Lucas", "Chris Hart", "Adam Baker", "Maneesh Sahani", "Yura Perov", "Saurabh Johri" ], "title": "A universal marginalizer for amortized inference in generative models", "venue": "arXiv preprint arXiv:1711.00695,", "year": 2017 }, { "authors": [ "Daniel Eaton", "Kevin Murphy" ], "title": "Belief net structure learning from uncertain interventions", "venue": "J Mach Learn Res,", "year": 2007 }, { "authors": [ "Daniel Eaton", "Kevin Murphy" ], "title": "Exact bayesian structure learning from uncertain interventions", "venue": "In Artificial Intelligence and Statistics, pp", "year": 2007 }, { "authors": [ "Daniel Eaton", "Kevin Murphy" ], "title": "Bayesian structure learning using dynamic programming and MCMC", "venue": "In Uncertainty in Artificial Intelligence, pp", "year": 2007 }, { "authors": [ "Frederick Eberhardt", "Clark Glymour", "Richard Scheines" ], "title": "On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables", "venue": "arXiv preprint arXiv:1207.1389,", "year": 2012 }, { "authors": [ "AmirEmad Ghassami", "Saber Salehkaleybar", "Negar Kiyavash", "Kun Zhang" ], "title": "Learning causal structures using regression invariance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Olivier Goudet", "Diviyan Kalainathan", "Philippe Caillou", "Isabelle Guyon", "David Lopez-Paz", "Michèle Sebag" ], "title": "Causal generative neural networks", "venue": "arXiv preprint arXiv:1711.08936,", "year": 2017 }, { "authors": [ "Olivier Goudet", "Diviyan Kalainathan", "Philippe Caillou", "Isabelle Guyon", "David Lopez-Paz", "Michele Sebag" ], "title": "Learning functional causal models with generative neural networks", "venue": "In Explainable and Interpretable Models in Computer Vision and Machine Learning,", "year": 2018 }, { "authors": [ "Isabelle Guyon" ], "title": "Cause-effect pairs kaggle competition", "venue": "URL https://www. kaggle. 
com/c/causeeffect-pairs,", "year": 2013 }, { "authors": [ "Isabelle Guyon" ], "title": "Chalearn fast causation coefficient challenge", "venue": "URL https://www. codalab. org/competitions/1381,", "year": 2014 }, { "authors": [ "Alain Hauser", "Peter Bühlmann" ], "title": "Characterization and greedy learning of interventional markov equivalence classes of directed acyclic graphs", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "David Heckerman", "Dan Geiger", "David M Chickering" ], "title": "Learning bayesian networks: The combination of knowledge and statistical data", "venue": "Machine learning,", "year": 1995 }, { "authors": [ "Christina Heinze-Deml", "Marloes H Maathuis", "Nicolai Meinshausen" ], "title": "Causal structure learning", "venue": "Annual Review of Statistics and Its Application,", "year": 2018 }, { "authors": [ "Christina Heinze-Deml", "Jonas Peters", "Nicolai Meinshausen" ], "title": "Invariant causal prediction for nonlinear models", "venue": "Journal of Causal Inference,", "year": 2018 }, { "authors": [ "Steven M Hill", "Laura M Heiser", "Thomas Cokelaer", "Michael Unger", "Nicole K Nesser", "Daniel E Carlin", "Yang Zhang", "Artem Sokolov", "Evan O Paull", "Chris K Wong" ], "title": "Inferring causal molecular networks: empirical assessment through a community-based effort", "venue": "Nature methods,", "year": 2016 }, { "authors": [ "Biwei Huang", "Kun Zhang", "Jiji Zhang", "Joseph Ramsey", "Ruben Sanchez-Romero", "Clark Glymour", "Bernhard Schölkopf" ], "title": "Causal discovery from heterogeneous/nonstationary data", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Oleg Ivanov", "Michael Figurnov", "Dmitry Vetrov" ], "title": "Variational autoencoder with arbitrary conditioning", "venue": "arXiv preprint arXiv:1806.02382,", "year": 2018 }, { "authors": [ "Amin Jaber", "Murat Kocaoglu", "Karthikeyan Shanmugam", "Elias Bareinboim" ], "title": "Causal discovery from soft interventions with unknown targets: Characterization and learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Diviyan Kalainathan", "Olivier Goudet", "Isabelle Guyon", "David Lopez-Paz", "Michèle Sebag" ], "title": "Sam: Structural agnostic model, causal discovery and penalized adversarial learning", "venue": "arXiv preprint arXiv:1803.04929,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Murat Kocaoglu", "Amin Jaber", "Karthikeyan Shanmugam", "Elias Bareinboim" ], "title": "Characterization and learning of causal graphs with latent variables from soft interventions", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kevin B Korb", "Ann E Nicholson" ], "title": "Bayesian artificial intelligence", "venue": "CRC press,", "year": 2010 }, { "authors": [ "Kristian Kristensen", "Ilse A Rasmussen" ], "title": "The use of a bayesian network in the design of a decision support system for growing malting barley without use of pesticides", "venue": "Computers and Electronics in Agriculture,", "year": 2002 }, { "authors": [ "Sébastien Lachapelle", "Philippe Brouillard", "Tristan Deleu", "Simon Lacoste-Julien" ], "title": "Gradient-based neural dag learning", "venue": "arXiv preprint arXiv:1906.02226,", "year": 2019 }, { "authors": [ "Steffen L Lauritzen", "David J Spiegelhalter" ], "title": "Local 
computations with probabilities on graphical structures and their application to expert systems", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1988 }, { "authors": [ "Yang Li", "Shoaib Akbar", "Junier B Oliva" ], "title": "Flow models for arbitrary conditional likelihoods", "venue": "arXiv preprint arXiv:1909.06319,", "year": 2019 }, { "authors": [ "David Lopez-Paz", "Krikamol Muandet", "Bernhard Schölkopf", "Iliya Tolstikhin" ], "title": "Towards a learning theory of cause-effect inference", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Ricardo Pio Monti", "Kun Zhang", "Aapo Hyvarinen" ], "title": "Causal discovery with general non-linear relationships using non-linear ica", "venue": "arXiv preprint arXiv:1904.09096,", "year": 2019 }, { "authors": [ "Joris M Mooij", "Sara Magliacane", "Tom Claassen" ], "title": "Joint causal inference from multiple contexts", "venue": "arXiv preprint arXiv:1611.10351,", "year": 2016 }, { "authors": [ "Judea Pearl" ], "title": "Causal diagrams for empirical research", "venue": "Biometrika, 82(4):669–688,", "year": 1995 }, { "authors": [ "Judea Pearl", "Dana Mackenzie" ], "title": "The book of why: the new science of cause and effect", "venue": "Basic Books,", "year": 2018 }, { "authors": [ "Jonas Peters", "Peter Bühlmann", "Nicolai Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2016 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of causal inference: foundations and learning algorithms", "venue": "MIT press,", "year": 2017 }, { "authors": [ "Mateo Rojas-Carulla", "Bernhard Schölkopf", "Richard Turner", "Jonas Peters" ], "title": "Invariant models for causal transfer learning", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Dominik Rothenhäusler", "Christina Heinze", "Jonas Peters", "Nicolai Meinshausen" ], "title": "Backshift: Learning causal cyclic graphs from unknown shift interventions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Karen Sachs", "Omar Perez", "Dana Pe’er", "Douglas A Lauffenburger", "Garry P Nolan" ], "title": "Causal protein-signaling networks derived from multiparameter single-cell data", "venue": null, "year": 2005 }, { "authors": [ "Bernhard Schölkopf", "Dominik Janzing", "Jonas Peters", "Eleni Sgouritsa", "Kun Zhang", "Joris Mooij" ], "title": "On causal and anticausal learning", "venue": "Proceedings of the 29th International Conference on Machine Learning (ICML),", "year": 2012 }, { "authors": [ "Shohei Shimizu", "Patrik O Hoyer", "Aapo Hyvärinen", "Antti Kerminen" ], "title": "A linear non-gaussian acyclic model for causal discovery", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "Chandler Squires", "Yuhao Wang", "Caroline Uhler" ], "title": "Permutation-based causal structure learning with unknown intervention targets", "venue": "In Conference on Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Xiaohai Sun", "Dominik Janzing", "Bernhard Schölkopf", "Kenji Fukumizu" ], "title": "A kernel-based causal learning algorithm", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Ioannis Tsamardinos", "Laura E Brown", "Constantin F Aliferis" ], "title": "The max-min hill-climbing bayesian network structure learning algorithm", "venue": "Machine learning,", "year": 2006 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela Van Der Schaar" ], "title": "Gain: Missing data imputation using generative adversarial nets", "venue": "arXiv preprint arXiv:1806.02920,", "year": 2018 }, { "authors": [ "Yue Yu", "Jie Chen", "Tian Gao", "Mo Yu" ], "title": "Dag-gnn: Dag structure learning with graph neural networks", "venue": "arXiv preprint arXiv:1904.10098,", "year": 2019 }, { "authors": [ "Kun Zhang", "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Kernel-based conditional independence test and application in causal discovery", "venue": "arXiv preprint arXiv:1202.3775,", "year": 2012 }, { "authors": [ "Xun Zheng", "Bryon Aragam", "Pradeep K Ravikumar", "Eric P Xing" ], "title": "DAGs with NO TEARS: Continuous optimization for structure learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shengyu Zhu", "Zhitang Chen" ], "title": "Causal discovery with reinforcement learning", "venue": "arXiv preprint arXiv:1906.04477,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Structure learning concerns itself with the recovery of the graph structure of Bayesian networks (BNs) from data samples. A natural application of Bayesian networks is to describe cause-effect relationships between variables. In that context, one may speak of causal structure learning. Causal structure learning is challenging because purely observational data may be satisfactorily explained by multiple Bayesian networks (a Markov equivalence class), but only one is the most robust to distributional shifts: The one with the correct graph. A more powerful tool than BNs is thus needed to model causal relationships.\nStructural Causal Models (SCMs) are that tool. An SCM over a set of random variables is a collection of assignments to these variables and a directed acyclic graph of dependencies between them (Peters et al., 2017, §6.2). Each assignment is a function of only the direct causes of a variable, plus an independent noise source. An SCM entails precisely one (observational) data distribution. Interventions on an SCM’s assignments, such as setting a random variable to a fixed value (a hard intervention), entail new interventional data distributions (Peters et al., 2017, §6.3).\nSCMs can be used to answer higher-order questions of cause-and-effect, up the ladder of causation (Pearl & Mackenzie, 2018). Causal structure learning using SCMs has been attempted in several disciplines including biology (Sachs et al., 2005; Hill et al., 2016), weather forecasting (Abramson et al., 1996) and medicine (Lauritzen & Spiegelhalter, 1988; Korb & Nicholson, 2010).\nCausal structure is most frequently learned from data drawn from observational distributions. Structure learning methods generally cannot do more than identify the causal graph up to a Markov equivalence class (Spirtes et al., 2000). In order to fully identify the true causal graph, a method must either make restrictive assumptions about the underlying data-generating process, such as linear but non-Gaussian data (Shimizu et al., 2006), or must access enough data from outside the observational distribution (i.e., from interventions).\nUnder certain assumptions about the number, diversity, and nature of the interventions, the true underlying causal graph is always identifiable, given that the method knows the intervention performed (Heckerman et al., 1995). In much of the prior work on causal model induction it is assumed that\nthere is an experimenter and this experimenter performs interventions. However, in the real world, interventions can also be performed by other agents, which could lead to unknown interventions (interventions with unknown target variables). A few works have attempted to learn structures from unknown-intervention data (Eaton & Murphy, 2007a; Squires et al., 2020; Huang et al., 2020). A notable such work, (Mooij et al., 2016), has been extended in (Kocaoglu et al., 2019; Jaber et al., 2020). Although there is no theoretical guarantee that the true causal graph can be identified in that setting, evidence so far points to that still being the case.\nAnother common setting is when the graph structure is partially provided, but must be completed. An example is protein structure learning in biology, where we may have definitive knowledge of some causal edges in the protein-protein interactome, but the remaining causal edges must be discovered. We will call this setting “partial graph completion”. 
This is an easier task compared to learning the entire graph, since it limits the number of edges that have to be learned.\nRecently, a flurry of work on structure learning using continuous optimization methods has appeared (Zheng et al., 2018; Yu et al., 2019). These methods operate on observational data and are competitive with other methods. Because of the theoretical limitations on identification from purely observational data cited above, it would be interesting to extend these methods to interventional data. However, it is not straightforward to apply continuous optimization methods to structure learning from interventional data. Our key contributions are to answer the following questions experimentally:\n\n1. Can the proposed model recover true causal structure? Yes, see Figure 4. 2. How does the proposed model compare against state-of-the-art causal methods on real-world datasets? Favourably; see §5.4 and Table 1. 3. Does the proposed model generalize well to unseen interventions? Yes, see §5.5. 4. How does the proposed model perform on partial graph recovery? It scales to ∼50 variables while the other baselines cannot; see §5.7." }, { "heading": "2 PRELIMINARIES", "text": "Causal modeling. A Structural Causal Model (SCM) (Peters et al., 2017) over a finite number M of random variables Xi is a set of structural assignments\n\nXi := fi(X_{pa(i,C)}, Ni) , ∀i ∈ {0, . . . , M − 1}  (1)\n\nIdentifiability. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph structure, interventional data is needed (Eberhardt et al., 2012).\nInterventions. There are several types of common interventions which may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground truth model. Hard/perfect: the value of a single variable or several variables is fixed, and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. Here we make use of soft interventions because they include hard interventions as a limiting case and hence are more general.\nStructure discovery using continuous optimization. Structure discovery is a super-exponential search problem over all possible directed acyclic graphs (DAGs). Previous continuous-optimization structure learning works (Zheng et al., 2018; Yu et al., 2019; Lachapelle et al., 2019) mitigate the problem of searching in the super-exponential set of graph structures by considering the degree to which a hypothesis graph violates “DAG-ness” as an additional penalty to be optimized. If there are M such variables, the strategy of considering all possible structural graphs as separate hypotheses is not feasible because it would require maintaining O(2^{M^2}) models of the data." }, { "heading": "3 RELATED WORK", "text": "The recovery of the underlying structural causal graph from observational and interventional data is a fundamental problem (Pearl, 1995; 2009; Spirtes et al., 2000). Different approaches have been studied: score-based, constraint-based, asymmetry-based and continuous optimization methods. 
Score-based methods search through the space of all possible directed acyclic graphs (DAGs) representing the causal structure based on some form of scoring function for network structures (Heckerman et al., 1995; Chickering, 2002; Tsamardinos et al., 2006; Hauser & Bühlmann, 2012; Goudet et al., 2017; Cooper & Yoo, 1999; Zhu & Chen, 2019). Constraint-based methods (Spirtes et al., 2000; Sun et al., 2007; Zhang et al., 2012; Monti et al., 2019; Zhu & Chen, 2019) infer the DAG by analyzing conditional independences in the data. Eaton & Murphy (2007c) use dynamic programming techniques to accelerate Markov Chain Monte Carlo (MCMC) sampling in a Bayesian approach to structure learning for discrete-variable DAGs. Peters et al. (2016); Ghassami et al. (2017); Rojas-Carulla et al. (2018) exploit invariance across environments to infer causal structure, which faces difficulty scaling due to the iteration over the super-exponential set of possible graphs. Recently, Zheng et al. (2018); Yu et al. (2019); Lachapelle et al. (2019) framed the structure search as a continuous optimization problem; however, these methods use only observational data and are non-trivial to extend to interventional data. In our paper, we present a continuous-optimization method that works on both observational and interventional data.\nFor interventional data, it is often assumed that the model has access to full intervention information, which is rare in the real world. Rothenhäusler et al. (2015) have investigated the case of additive shift interventions, while Eaton & Murphy (2007b) have examined the situation where the targets of experimental interventions are imperfect or uncertain. This is different from our setting, where the intervention is unknown to start with and is assumed to arise from other agents and the environment.\nLearning-based methods have been proposed (Guyon, 2013; 2014; Lopez-Paz et al., 2015), and there also exist recent approaches using the generalization ability of neural networks to learn causal signals from purely observational data (Kalainathan et al., 2018; Goudet et al., 2018). Neural network methods equipped with learned masks, such as (Ivanov et al., 2018; Li et al., 2019; Yoon et al., 2018; Douglas et al., 2017), exist in the literature, but only a few (Kalainathan et al., 2018) have been adapted to causal inference. This last work is, however, tailored for causal inference on continuous variables and from observations only. Adapting it to a discrete-variable setting is made difficult by its use of a Generative Adversarial Network (GAN) (Goodfellow et al., 2014) framework." }, { "heading": "4 STRUCTURE DISCOVERY FROM INTERVENTIONS METHOD", "text": "Scope of Applicability and Objective. The proposed method, like any structure learning algorithm, assumes the availability of a data-generating process based on ancestral sampling of a ground-truth SCM of M variables, which can be queried for samples. The SCM supports applying and retracting known or unknown interventions. The method can support infinite- or finite-data as well as infinite- or finite-intervention regimes.\nThe objective is, then, to learn the SCM’s structure from the insights that each intervention gives about cause-effect relationships between variables in the SCM." }, { "heading": "4.1 PROBLEM SETTING AND ASSUMPTIONS", "text": "In this paper, we restrict the problem setting to specific, but still broad, classes of SCMs and interventions. In particular, we assume that:\nData is discrete-valued. 
The SCM’s random variables are all categorical.\nCausal sufficiency. For every data sample, the values of all variables are available; there are no latent confounders.\nInterventions are localized. They affect only a single variable (but which one may not be known).\nInterventions are soft. An intervention does not necessarily pin its target random variable to a fixed value (though it may, as a special case). It changes the relationship of a variable with its parents.\nInterventions do not stack. Before a new intervention is made, the previous one is fully retracted. This stops the SCM from wandering away from its initial, observational configuration after a long series of interventions.\nNo control over interventions. The structure learning algorithm controls neither the target nor the nature of the next intervention on the SCM.\nFor a detailed description of the interventions, refer to §A.2." }, { "heading": "4.2 VARIATIONS AND PRIOR KNOWLEDGE", "text": "In the problem setting above, the ground-truth SCM is completely opaque to us. However, we consider two interesting relaxations of this formulation:\nComplete or partial graph recovery. We may already know the existence of certain cause-effect edges and non-edges within the ground-truth SCM. If such prior information is available, it turns a complete graph recovery problem into one of partial graph recovery. Larger SCMs can be tackled if only parts of the graph need to be recovered.\nKnown or unknown interventions. The interventions can either be known or unknown to the learned model.\nWe demonstrate that the proposed method can naturally incorporate this prior information to improve its performance.\n4.3 METHOD OVERVIEW\nThe proposed method is a score-based, iterative, continuous-optimization method consisting of three phases that flow into one another (see Figure 2). During the three-phase procedure, a structural representation of a DAG and a functional representation of a set of independent causal mechanisms are trained jointly until convergence. Because the structural and functional parameters are not independent and do influence each other, we train them in alternating phases, a form of block coordinate descent optimization." }, { "heading": "4.3.1 PARAMETRIZATION", "text": "We distinguish two sets of parameters: the structural parameters γ and the functional parameters θ. Given a graph of M variables, we parametrize the structure as a matrix γ ∈ R^{M×M} such that σ(γ_ij) is our belief in random variable Xj being a direct cause of Xi, where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. The matrix σ(γ) is thus a soft adjacency matrix. The set of functional parameters θi parametrizes the conditional probability distribution of Xi given its parent set X_{pa(i,C)}, with C ∼ Ber(σ(γ)) a hypothesized configuration of the SCM’s DAG." }, { "heading": "4.3.2 PHASE 1: GRAPH FITTING ON OBSERVATIONAL DATA", "text": "During Phase 1, the functional parameters θ are trained to maximize the likelihood of randomly drawn observational data under graphs randomly drawn from our current beliefs about the edge structure. We draw graph configurations C_ij ∼ Ber(σ(γ_ij)) and batches of observational data from the unintervened ground-truth SCM, then maximize the log-likelihood of the batch under that configuration using SGD. 
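To make Phase 1 concrete, the following is a minimal NumPy sketch of the configuration sampling and masked scoring; the per-variable MLP interface (mlps[i] returning categorical log-probabilities) is our illustrative assumption rather than the exact implementation:\n\nimport numpy as np\n\ndef sigmoid(g):\n    return 1.0 / (1.0 + np.exp(-g))\n\ndef sample_configuration(gamma, rng):\n    # Draw a hypothesis adjacency matrix with C_ij ~ Ber(sigma(gamma_ij));\n    # the diagonal is zeroed so that no variable is its own parent.\n    C = (rng.random(gamma.shape) < sigmoid(gamma)).astype(float)\n    np.fill_diagonal(C, 0.0)\n    return C\n\ndef masked_log_likelihood(X_onehot, C, mlps):\n    # Per-variable log-likelihood of a batch under configuration C.\n    # X_onehot: (batch, M, N) one-hot samples. mlps[i](flat_inputs) is assumed\n    # (for illustration) to return (batch, N) categorical log-probabilities\n    # for variable i; its inputs are masked by row i of C, as in Figure 3.\n    B, M, N = X_onehot.shape\n    ll = np.zeros((B, M))\n    for i in range(M):\n        masked = X_onehot * C[i][None, :, None]   # zero out non-parent inputs\n        logp = mlps[i](masked.reshape(B, M * N))  # (batch, N) log-probs\n        ll[:, i] = (logp * X_onehot[:, i]).sum(axis=1)\n    return ll\n\nPhase 1 then ascends the sum of these log-likelihoods with respect to θ (Adam in Algorithm 1), with γ held fixed.\n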
Sampling graph configurations from Bernoulli distributions in this way is analogous to dropout on the inputs of the functional models (in our implementation, MLPs), giving us an ensemble of neural networks that can model the observational data." }, { "heading": "4.3.3 PHASE 2: GRAPH SCORING ON INTERVENTIONAL DATA", "text": "During Phase 2, a number of graph configurations are sampled from the current edge beliefs parametrized by γ and scored on data samples drawn from the intervened SCM.\nIntervention applied: At the beginning of Phase 2, an intervention is applied to the ground-truth SCM. This intervention is not under the control of the method. In our implementation, and unbeknownst to the model, the target variable is chosen uniformly at random from all M variables throughout the optimization process.\nIntervention predicted: If the target of the intervention is not known, it is predicted using a simple heuristic. A small number of interventional data samples are drawn from the SCM and more graphs are sampled from our current edge beliefs. The average log-likelihood of each individual variable Xi across the samples is then computed using the functional model parameters θ fine-tuned on observational data in Phase 1. The variable Xi showing the greatest deterioration in log-likelihood is assumed to be the target, because the observational distribution most poorly predicts that variable.\nIf the target of the intervention is known, then this is taken as ground-truth knowledge for the purpose of subsequent steps, and no prediction needs to be done.\nGraphs sampled and scored: A new set of interventional data samples and graph configurations are now drawn from the intervened SCM and the edge beliefs respectively. The log-likelihood of the data batches under the hypothesized configurations is computed, with one modification: the contribution to the total log-likelihood of a sample X coming from the target (or predicted-target) intervention variable Xi is masked. Because Xi was intervened upon (in the manner of a Pearl do-operation, soft or hard), the values one gets for that variable should be taken as givens, not as contributors to the total log-likelihood of the sample. As well, no gradient should be allowed to propagate into the variable’s learned functional parametrization θi, because it was not actually responsible for the outcome.\nIntervention retracted: After Phase 2, the intervention is retracted, per our modelling assumptions." }, { "heading": "4.3.4 PHASE 3: CREDIT ASSIGNMENT TO STRUCTURAL PARAMETERS", "text": "During Phase 3, the scores of the interventional data batches over the various graph configurations are aggregated into a gradient for the structural parameters γ. Because a discrete Bernoulli random sampling process was used to sample the graph configurations under which the log-likelihoods were computed, we require a gradient estimator to propagate gradients through to the structural parameters γ. Several alternatives exist, but we adopt for this purpose the REINFORCE-like gradient estimator g_ij proposed by Bengio et al. (2019):\n\ng_ij = [ Σ_k (σ(γ_ij) − c^{(k)}_{ij}) L^{(k)}_{C,i}(X) ] / [ Σ_k L^{(k)}_{C,i}(X) ] ,  ∀i, j ∈ {0, . . . , M−1}  (2)\n\nwhere the (k) superscript indicates the values obtained for the k-th draw of C under the current edge beliefs parametrized by γ. Therefore, L^{(k)}_{C,i}(X) can be read as the log-likelihood of variable Xi in the data sample X under the k-th configuration, C^{(k)}, drawn from our edge beliefs. 
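A minimal NumPy sketch of this estimator, vectorized over the K sampled configurations (the array interface is our choice; the per-variable scores are the negative log-likelihoods accumulated in Algorithm 1):\n\nimport numpy as np\n\ndef gamma_gradient(gamma, configs, scores):\n    # REINFORCE-like estimator of Equation 2.\n    #   configs: (K, M, M) sampled adjacency matrices C^(k)\n    #   scores:  (K, M) per-variable values L_{C^(k), i}(X)\n    # Returns g with g_ij = sum_k (sigma(g_ij) - c_ij^(k)) * scores[k, i]\n    #                       / sum_k scores[k, i].\n    sig = 1.0 / (1.0 + np.exp(-gamma))                   # (M, M)\n    num = np.einsum("kij,ki->ij", sig[None] - configs, scores)\n    denom = scores.sum(axis=0)[:, None]                  # (M, 1), sum over k\n    return num / denom\n\nIn Phase 2, the intervened (or predicted-target) variable's scores are excluded, so its row of γ receives no update.\n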
Using the estimated gradient, we then update γ with SGD and return to Phase 1 of the continuous optimization process.\nThe gradient estimator g_ij minimizes an implicit empirical risk objective with respect to γ_ij. When the functional and structural parameters θ and γ are “sufficiently close” to their minima, the estimator g_ij empirically converges quickly towards that minimum γ∗, as shown in Figure 16 of Appendix A.13.\nAcyclic constraint: We include a regularization term J_DAG(γ) that penalizes length-2 cycles in the learned adjacency matrix σ(γ), with a tunable strength λ_DAG. The regularization term is J_DAG(γ) = Σ_{i≠j} cosh(σ(γ_ij)σ(γ_ji)) and is derived from Zheng et al. (2018). The details of the derivation are in the Appendix. We explore several different values of λ_DAG and their effects in our experimental setup. Suppression of longer cycles was not found to be worth the increased computational expense." }, { "heading": "5 EXPERIMENTAL SETUP AND RESULTS", "text": "We first evaluate the proposed method on a synthetic dataset where we have control over the number of variables and causal edges in the ground-truth SCM. This allows us to analyze the performance of the proposed method under various conditions. We then evaluate the proposed method on real-world datasets from the BnLearn dataset repository. We also consider the two variations of §4.2: recovering only part of the graph (when the rest is known), and exploiting knowledge of the intervention target.\nThe summary of our findings is: 1) We show strong results for graph recovery for all synthetic graphs in comparison with other baselines, measured by Hamming distance. 2) The proposed method achieves high accuracy on partial graph recovery for large, real-world graphs. 3) The proposed method’s intervention target prediction heuristic closes the gap between the known- and unknown-target intervention scenarios. 4) The proposed method generalizes well to unseen interventions. 5) The proposed method’s time-to-solution scaling appears to be driven by the number of edges in the ground-truth graph more so than by the number of variables.\n5.1 MODEL DESCRIPTION\nLearner model. Without loss of generality, we let θi = {W0i, B0i, W1i, B1i} define a stack of M one-hidden-layer MLPs, one for each random variable Xi. A more appropriate model, such as a CNN, can be chosen using domain-specific knowledge; the primary advantage of using MLPs is that the hypothesized DAG configurations c_ij can be readily used to mask the inputs of MLP i, as shown in Figure 3.\nTo force the structural equation fi corresponding to Xi to rely exclusively on its direct ancestor set pa(i, C) under hypothesis adjacency matrix C (see Eqn. 1), the one-hot input vector Xj for variable Xi’s MLP is masked by the Boolean element c_ij. An example of the multi-MLP architecture with M=4 categorical variables of N=3 categories is shown in Figure 3. For more details, refer to Appendix A.4.\nGround-truth model. Ground-truth SCM models are parametrized either as CPTs with parameters from BnLearn (in the case of real-world graphs), or as a second stack of MLPs similar to the learner model, with randomly-initialized functional parameters θGT and the desired adjacency matrix γGT.\nInterventions. In all experiments, at most one (soft) intervention is concurrently performed. 
To simulate a soft intervention on variable Xi, we reinitialize its ground-truth conditional distribution’s MLP parameters or CPT table randomly, while leaving the other variables untouched. For more details about the interventions, please refer to Appendix A.2.\n5.2 SYNTHETIC DATASETS EXPERIMENTS\nWe first evaluate the model’s performance on several randomly-initialized SCMs with specific, representative graph structures. Since the number of possible DAGs grows super-exponentially with the number of variables, for M=4 up to 13 a selection of representative and edge-case DAGs is chosen. chainM and fullM (M=3-13) are the minimally- and maximally-connected M-variable DAGs, while treeM and jungleM are tree-like intermediate graphs. colliderM is the (M−1) → 1 collider graph. The details of the setup are in Appendix A.6.\nResults. The model can recover most synthetic DAGs with high accuracy, as measured by the Structural Hamming Distance (SHD) between learned and ground-truth DAGs. Table 1 shows that our proposed method outperforms all other baseline methods and learns all graphs perfectly for 3 to 13 variables (excepting full). For DAGs ranging from 3 to 8 variables, the AUROCs all eventually reach 1.0 (indicating perfect classification into edge/non-edge; refer to Figure 4). For both large (M > 10) and dense DAGs (e.g. full13), the model begins encountering difficulties, as shown in Table 1 and Appendix §A.6.1.\nSmall graphs (M < 10) are less sensitive than larger ones to our hyperparameters, notably the sparsity and acyclicity regularization (§4.3.4) terms. In §A.5, we perform an analysis of these hyperparameters." }, { "heading": "5.3 REAL-WORLD DATASETS: BNLEARN", "text": "The Bayesian Network Repository is a collection of commonly-used causal Bayesian networks from the literature, suitable for Bayesian and causal learning benchmarks. We evaluate the proposed method on the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010), Asia (Lauritzen & Spiegelhalter, 1988) and Sachs (Sachs et al., 2005) datasets (M = 5, 5, 8 and 11 variables respectively, maximum in-degree 3) in the BnLearn dataset repository.\nResults. As shown in Table 1, the proposed method perfectly recovers the DAG of Asia, while making a small number of errors (SHD=6) for Sachs (11 variables). It thus significantly outperforms all other baseline models. Figures 8 & 9 visualize what the model has learned at several stages of learning. Results for Cancer and Asia can be found in the appendices, Figures 17 and 18." }, { "heading": "5.4 COMPARISONS WITH OTHER METHODS", "text": "As shown in Table 1, we compared the proposed SDI method to ICP (Peters et al., 2016), non-linear ICP (Heinze-Deml et al., 2018b), and (Eaton & Murphy, 2007b; Zheng et al., 2018; Yu et al., 2019) on Asia (Lauritzen & Spiegelhalter, 1988), Sachs (Sachs et al., 2005) and representative synthetic graphs. Eaton & Murphy (2007b) handles uncertain interventions, while Peters et al. (2016) and Heinze-Deml et al. (2018b) handle unknown interventions. However, none of them attempts to predict the intervention. As shown in Table 1, we significantly outperform ICP, non-linear ICP, and the methods in (Yu et al., 2019) and (Zheng et al., 2018). Furthermore, Eaton & Murphy (2007b) runs out of memory for graphs larger than M = 10 because modelling of uncertain interventions is done using “shadow” random variables (as suggested by the authors), and thus recovering the DAG internally requires solving a d = 2M-variable problem. 
Their method’s extremely poor time- and space-scaling of O(d 2^d) makes it unusable beyond d > 20.\nFor SDIs, we threshold our edge beliefs at σ(γ) = 0.5 to derive a graph, but the continued decrease of the cross-entropy loss (Figure 4) hints at SDI’s convergence onto the correct causal model. Please refer to Appendix §A.8 for full details and results." }, { "heading": "5.5 GENERALIZATION TO PREVIOUSLY UNSEEN INTERVENTIONS", "text": "It is often argued that machine learning approaches based purely on capturing joint distributions do not necessarily yield models that generalize to unseen experiments, since they do not explicitly model changes through interventions. By way of contrast, causal models use the concept of interventions to explicitly model changing environments and thus hold the promise of robustness under distributional shifts (Pearl, 2009; Schölkopf et al., 2012; Peters et al., 2017). To test the robustness of causal modelling to previously unseen interventions (new values for an intervened variable), we evaluate a well-trained causal model against a variant, non-causal model trained with c_ij = 1, i ≠ j. An intervention is performed on the ground-truth SCM, fresh interventional data is drawn from it, and the models, with knowledge of the intervention target, are asked to predict the other variables given their parents. The average log-likelihoods of the data under both models are computed and contrasted. The intervention variable’s contribution to the log-likelihood is masked. For all 3-variable graphs (chain3, fork3, collider3, confounder3), the causal model attributes higher log-likelihood to the intervention distribution’s samples than the non-causal variant, thereby demonstrating causal models’ superior generalization ability in transfer tasks. Table 2 collects these results." }, { "heading": "5.6 VARIANT: PREDICTING INTERVENTIONS", "text": "In Phase 2 (§4.3.3), we use a simple heuristic to predict the intervention target variable. Experiments show that this heuristic functions well in practice, yielding correct predictions far more often than chance alone (Table 3). Guessing the intervention variable randomly, or not guessing it at all, leads to a significant drop in model performance, even for 3-variable graphs (Fig. 11 Left). Training SDI with intervention prediction closely tracks training with leaked knowledge of the ground-truth intervention on larger, 7-variable graphs (Fig. 11 Right)." }, { "heading": "5.7 VARIANT: PARTIAL GRAPH RECOVERY", "text": "Instead of learning causal structures de novo, we may have partial information about the ground-truth SCM and may only need to fill in missing information (§4.2). An example is protein structure discovery in biology, where some causal relationships have been definitively established and others remain open hypotheses. This is an easier task compared to full graph recovery, since the model only has to search for missing edges.\nTable 4: Partial graph recovery on Alarm (Beinlich et al., 1989) and Barley (Kristensen & Rasmussen, 2002). The model is asked to predict 50 edges in Barley and 40 edges in Alarm. The accuracy is measured in Structural Hamming Distance (SHD). SDI achieved over 90% accuracy on both graphs.\n\nGraph                 Alarm   Barley\nNumber of variables   37      48\nTotal edges           46      84\nEdges to recover      40      50\nRecovered edges       37      45\nErrors (in SHD)       3       5\n\nWe evaluate the proposed method on Barley (Kristensen & Rasmussen, 2002) (M = 48) and Alarm (Beinlich et al., 1989) (M = 37) from the BnLearn repository. 
The model is asked to predict 50 edges from Barley and 40 edges from Alarm. The model reached ≥ 90% accuracy on both datasets, as shown in Table 4." }, { "heading": "5.8 ABLATION AND ANALYSIS", "text": "As shown in Figure 12, larger graphs (such as M > 6) and denser graphs (such as full8) are progressively more difficult to learn. For denser graphs, the learned models have higher sample complexity, higher variance and slightly worse results. Refer to Appendix §A.9 for complete results on all graphs. Hyperparameters. Hyperparameters for all experiments were kept identical unless otherwise stated. We study the effect of the DAG and sparsity penalties in the following paragraph. For more details, please refer to Appendix §A.5.\nImportance of regularization. Valid configurations C for a causal model are expected to be a) sparse and b) acyclic. To promote such solutions, we use DAG and sparsity regularization with tunable hyperparameters. We set the DAG penalty to 0.5 and the sparsity penalty to 0.1. We run ablation studies on different values of the regularizers and study their effect. We find that smaller graphs are less sensitive to different regularizer values than larger graphs. For details, refer to Appendix §A.12.\nImportance of dropout. Training the functional parameters for an observational distribution requires sampling adjacency matrices. In our experiments, we “drop out” each edge (with a probability of σ(γ)) during functional-parameter training of the conditional distributions of the SCM. Please refer to Appendix §A.14 for a more detailed analysis." }, { "heading": "6 CONCLUSION", "text": "In this work, we introduced an experimentally successful method (SDI) for causal structure discovery using continuous optimization, combining information from both observational and interventional data. We show in experiments that it can recover true causal structure, that it generalizes well to unseen interventions, that it compares very well against state-of-the-art causal discovery methods on real world datasets, and that it scales even better on problems where only part of the graph is known." }, { "heading": "Appendix", "text": "" }, { "heading": "Table of Contents", "text": "" }, { "heading": "A Annexes", "text": "A.1 Training Algorithm\nA.2 Preliminaries\nA.3 Experimental setup\nA.4 Model setup\nA.5 Hyperparameters\nA.6 Synthetic data\nA.7 BnLearn data repository\nA.8 Comparisons to other methods\nA.9 Sparsity of Ground-Truth Graph\nA.10 Predicting interventions\nA.11 Sample complexity\nA.12 Effect of regularization\nA.13 Near-Optimum Performance of Gradient Estimator\nA.14 Importance of dropout
21" }, { "heading": "A ANNEXES", "text": "" }, { "heading": "A.1 TRAINING ALGORITHM", "text": "Algorithm 1 shows the pseudocode of the method described in §4. Typical values for the loop trip counts are found in §A.11." }, { "heading": "A.2 PRELIMINARIES", "text": "Interventions. In a purely-observational setting, it is known that causal graphs can be distinguished only up to a Markov equivalence class. In order to identify the true causal graph intervention data is needed (Eberhardt et al., 2012). Several types of common interventions may be available (Eaton & Murphy, 2007b). These are: No intervention: only observational data is obtained from the ground truth causal model. Hard/perfect: the value of a single or several variables is fixed and then ancestral sampling is performed on the other variables. Soft/imperfect: the conditional distribution of the variable on which the intervention is performed is changed. Uncertain: the learner is not sure of which variable exactly the intervention affected directly. Here we make use of soft interventions for several reasons: First, they include hard interventions as a limiting case and hence are more general. Second, in many real-world scenarios, it is more difficult to perform a hard intervention compared to a soft one. We also deal with a special case of uncertain interventions, where the variable selected for intervention is random and unknown. We call these unidentified or unknown interventions.\nIntervention setup. For our experiments, the groundtruth models of the synthetic datasets are modeled by neural networks as described in section A.6. Each neural network models the relationship of the causal parents and a variable. We perform our intervention by first randomly selecting which variable to intervene on, then soft-intervening on it. The selected variable is sampled from a uniform distribution. The soft intervention is a reinitialization of its neural network’s parameters.\nCausal sufficiency. The inability to distinguish which causal graph, within a Markov equivalence class, is the correct one in the purely-observational setting is called the identifiability problem. In our setting, all variables are observed (there are no latent confounders) and all interventions are random and independent. Hence, within our setting, if the interventions are known, then the true causal\nAlgorithm 1 Training Algorithm 1: procedure TRAINING(SCM Ground-Truth Entailed Distribution D, with M nodes and N categories) 2: Let i an index from 0 to M − 1\n3: for I iterations, or until convergence, do 4: if I % reinitialization_period == 0 then 5: D ← reinitialize(D)\n6: for F functional parameter training steps do . Phase 1 7: X ∼ D 8: C ∼ Ber(σ(γ)) 9: L = − logP (X|C ; θ)\n10: θt+1 ← Adam(θt,∇θL)\n11: for Q interventions do . Phase 2 12: I_N← randint(0, M − 1) . Uniform selection of target 13: Dint :=D with intervention on node I_N . Apply intervention\n14: if predicting intervention then . Phase 2 Prediction 15: Li ← 0 ∀i 16: for NP prediction steps do 17: X ∼ Dint 18: for CP configurations do 19: C ∼ Ber(σ(γ)) 20: Li ← Li − logPi(X|Ci; θslow) ∀i 21: I_N← argmax(Li)\n22: gammagrads, logregrets = [], [] . Phase 2 Scoring 23: for NS scoring steps do 24: X ∼ Dint 25: gammagrad, logregret = 0, 0 26: for CS configurations do 27: C ∼ Ber(σ(γ)) 28: Li = − logPi(X|Ci; θslow) ∀i 29: gammagrad += σ(γ)− C . Collect σ(γ)− C for Equation 2 30: logregret +=\n∑ i6=I_N Li . Collect LC(k),i (X) for Equation 2\n31: gammagrads.append(gammagrad) 32: logregrets.append(logregret)\n. 
graph is always identifiable in principle (Eberhardt et al., 2012; Heinze-Deml et al., 2018a). We also consider situations where a single variable is randomly selected and intervened upon with a soft or imprecise intervention; its identity is unknown and must be inferred. In this case, there is no theoretical guarantee that the causal graph is identifiable. However, existing work (Peters et al., 2016) handles this scenario, and the proposed method is also shown empirically to work.
Faithfulness. It is possible for causally-related variables to be probabilistically independent purely by happenstance, such as when causal effects along multiple paths cancel out. This is called unfaithfulness. We assume that faithfulness holds, since the γ gradient estimate is extracted from shifts in probability distributions. However, because of the “soft” nature of our interventions and their infinite variety, it would be exceedingly unlikely for cancellation-related unfaithfulness to persist throughout the causal-learning procedure." }, { "heading": "A.3 EXPERIMENTAL SETUP", "text": "For all datasets, the weight parameters of the learned model are initialized randomly. In order not to bias the structural parameters, all σ(γ) are initialized to 0.5 at the beginning of training. Details of the hyperparameters of the learner model are described in Section A.5. The experimental setup for the ground-truth model for the synthetic data can be found in Section A.6 and the details for the real-world data are described in Section A.7." }, { "heading": "A.4 MODEL SETUP", "text": "As discussed in section 4, we model the M variables in the graph using M independent MLPs, each of which possesses an input layer of M × N neurons (for M one-hot vectors of length N each), a single hidden layer chosen arbitrarily to have max(4M, 4N) neurons with a LeakyReLU activation of slope 0.1, and a linear output layer of N neurons representing the unnormalized log-probabilities of each category (a softmax then recovers the conditional probabilities from these logits). To force f_i to rely exclusively on the direct ancestor set pa(i, C) under adjacency matrix C (see Eqn. 2), the one-hot input vector X_j for variable X_i’s MLP is masked by the Boolean element c_ij. The functional parameters of the MLP are the set θ = {W0_ihjn, B0_ih, W1_inh, B1_in}. An example of the multi-MLP architecture with M = 3 categorical variables of N = 2 categories is shown in Figure 3." }, { "heading": "A.5 HYPERPARAMETERS", "text": "Learner model. All experiments on the synthetic graphs of size 3-8 use the same hyperparameters. Both the functional and structural parameters are optimized using the Adam optimizer (Kingma & Ba, 2014). We use a learning rate of 5e−2 with alpha of 0.9 for the functional parameters, and we use a learning rate of 5e−3 with alpha of 0.1 for the structural parameters. We perform 5 runs of each experiment with random seeds 1−5, and error bars are plotted for various graphs from size 3 to 8 in Figure 4. We use a batch size of 256. The L1 norm regularizer is set to 0.1 and the DAG regularizer is set to 0.5 for all experiments. For each γ update step, we sample 25 structural configurations from the current γ. In all experiments, we use 100 batches from the interventional distribution to predict the intervened node."
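To make the Phase-3 update of Algorithm 1 concrete, the following is a minimal NumPy sketch of the γ gradient estimator (Equation 2) and one update step; it is a sketch under stated assumptions, not the actual implementation, and all names and shapes (gamma_gradient, configs, losses, M, K) are hypothetical placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gamma_gradient(gamma, configs, losses, eps=1e-12):
    """Estimator of Equation 2:
    g_ij = sum_k (sigmoid(gamma_ij) - c_ij^(k)) * L_{C^(k),i} / sum_k L_{C^(k),i}.

    gamma:   (M, M) structural logits; row i indexes the parents of variable X_i.
    configs: (K, M, M) sampled adjacency matrices c^(k) ~ Ber(sigmoid(gamma)).
    losses:  (K, M) per-node negative log-likelihoods L_{C^(k),i}(X), with the
             intervened node's entry already masked to zero (cf. lines 28-30).
    """
    probs = sigmoid(gamma)                                    # (M, M)
    num = np.einsum('kij,ki->ij', probs[None] - configs, losses)
    den = losses.sum(axis=0)[:, None] + eps                   # (M, 1), broadcasts over j
    return num / den

# One plain-SGD update over K sampled configurations (the paper uses Adam):
rng = np.random.default_rng(0)
M, K = 5, 25                                                  # 25 configs per step, as in A.5
gamma = np.zeros((M, M))                                      # sigma(gamma) = 0.5 at init (A.3)
configs = (rng.random((K, M, M)) < sigmoid(gamma)).astype(float)
losses = rng.random((K, M))                                   # stand-in for scored log-regrets
gamma -= 5e-3 * gamma_gradient(gamma, configs, losses)        # structural lr from A.5
```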
}, { "heading": "A.6 SYNTHETIC DATA", "text": "Synthetic datasets. The synthetic datasets in the paper are modeled by neural networks. All neural networks are 2 layered feed forward neural networks (MLPs) with Leaky ReLU activations between layers. The parameters of the neural network are initialized orthogonally within the range of (−2.5, 2.5). This range was selected such that they output a non-trivial distribution. The biases are initialized uniformly between (−1.1, 1.1). SCM with n variables are modeled by n feedforward neural networks (MLPs) as described in §5.1. We assume an acyclic causal graph so that we may easily sample from them. Hence, given any pair of random variables A and B, either A −→ B, B −→ A or A and B are independent. The MLP representing the ground-truth SCM has its weights θ initialized use orthogonal initialization with gain 2.5 and the biases are initialized using a uniform initialization between−1.1 and 1.1, which was empirically found to yield \"interesting\" yet learnable random SCMs.\nWe study a variety of SCMs with different ground-truth edge structures γ. Our selection of synthetic graphs explores various extremes in the space of DAGs, stress-testing SDI. The chain graphs are the sparsest connected graphs possible, and are relatively easy to learn. The bidiag graphs are extensions of chain where there are 2-hops as well as single hops between nodes, doubling the number of edges and creating a meshed chain of forks and colliders. The jungle graphs are binary-tree-like graphs, but with each node connected directly to its grandparent in the tree as well. Half the nodes in a jungle graph are leaves, and the out-degree is up to 6. The collider graphs deliberately collide independent M − 1 ancestors into the last node; They stress maximum in-degree. Lastly, the full graphs are the maximally dense DAGs. All nodes are direct parents of all nodes below them in the topological order. The maximum in- and out-degree are both M − 1. These graphs are depicted in Figure 6." }, { "heading": "A.6.1 SYNTHETIC DATA RESULTS", "text": "The model can recover correctly all synthetic graphs with 10 variables or less, as shown in Figure 10 and Table 1. For graphs larger than 10 variables, the model found it more challenging to recover the denser graphs (e.g. fullM), as shown in Table 1. Plots of the training curves showing average cross entropy (CE) and Area-Under-Curve(AUC/AUCROC) for edge probabilities of the learned graph against the ground-truth graph for synthetic SCMs with 3-13 variables are available in Figure 10." }, { "heading": "A.7 BNLEARN DATA REPOSITORY", "text": "The repo contains many datasets with various sizes and structures modeling different variables. We evaluate the proposed method on 3 of the datasets in the repo, namely the Earthquake (Korb & Nicholson, 2010), Cancer (Korb & Nicholson, 2010) and Asia (Lauritzen & Spiegelhalter, 1988) datasets. The ground-truth model structure for the Cancer (Korb & Nicholson, 2010) and Earthquake (Korb & Nicholson, 2010) datasets are shown in Figure 7. Note that even though the structure for the two datasets seems to be the same, the conditional probability tables (CPTs) for these datasets are very different and hence results in different structured causal models (SCMs) for each." }, { "heading": "A.8 COMPARISONS TO OTHER METHODS", "text": "As described in section 5.4, we compare to 5 other methods. 
The full comparison between SDIs and other methods on various graphs can be found in Table 1.\nOne of these methods, DAG-GNN (Yu et al., 2019), outputs 3 graphs based on different criteria: best mean square error (MSE), best negative log-likelihood (NLL) and best evidence lower bound (ELBO). We report the performance of all outputs of DAG-GNN (Yu et al., 2019) in Table 6, and the best one is selected for Table 1." }, { "heading": "A.9 SPARSITY OF GROUND-TRUTH GRAPH", "text": "We evaluated the performance of SDI on graphs of various sizes and sparsities to better understand the behavior of the model. We evaluated the proposed model on 4 representative types of graphs in increasing order of density: the chain, jungle, bidiag and full graphs. As shown in the results in figure 12, for graphs of size 5 or smaller, there is almost no difference in the final results in terms of variance and sample complexity. However, as the graphs get larger (more than 6 variables), the denser graphs (full graphs) get progressively more difficult to learn compared to the sparser graphs (chain, jungle and bidiag). The models learned for denser graphs have higher sample complexity, higher variance and slightly worse results." }, { "heading": "A.10 PREDICTING INTERVENTIONS", "text": "In Phase 2, we score graph configurations based on how well they fit the interventional data. We find that it is necessary to avoid disturbing the learned parameters of the intervened variable, and to ignore its contribution to the total negative log-likelihood of the sample. Intuitively, this is because, having been intervened upon, that variable should be taken as a given. It should especially not be interpreted as a poorly-learned variable requiring a tuning of its functional parameters, because those functional parameters were not responsible for the value of that variable; the extrinsic intervention was.\nSince an intervened variable is likely to be unusually poorly predicted, we heuristically determine that the most poorly predicted variable is the intervention variable. We then zero out its contribution to the log-likelihood of the sample and block gradients into its functional parameters.\nFigure 11 illustrates the necessity of this process. When using the prediction heuristic, the training curve closely tracks training with ground-truth knowledge of the identity of the intervention. If no prediction is made, or a random prediction is made, training proceeds much more slowly, or fails entirely." }, { "heading": "A.11 SAMPLE COMPLEXITY", "text": "Our method is heavily reliant on sampling of configurations and data in Phases 1 and 2. We present here the breakdown of the sample complexity. Let
• I be the number of iterations of the method, (typical: 500-2000)
• B the number of samples per batch, (typical: 256)
• F the number of functional parameter training iterations in Phase 1, (typical: 10000)
• Q the number of interventions performed in Phase 2, (typical: 100)
• NP the number of data batches for prediction, (typical: 100)
• CP the number of graph configurations drawn per prediction data batch, (typical: 10)
• NS the number of data batches for scoring, (typical: 10)
• CS the number of graph configurations drawn per scoring data batch. 
(typical: 20-30)\nThen the total number of interventions performed, and configurations and samples drawn, over an entire run are:
Interventions = I · Q (= number of γ updates) (3)
Samples = I · (F + Q(NP + NS)) · B, where F counts Phase 1 and Q(NP + NS) counts Phase 2 (4)
Configurations = I · (F + Q(CP·NP + CS·NS)), with the same Phase 1 / Phase 2 split (5)
Because of the multiplicative effect of these factors, the number of data samples required can quickly spiral out of control. For typical values, as many as 500 × 10000 × 256 = 1.28e9 observational and 500 × 100 × (100 + 10) × 256 = 1.408e9 interventional samples are required. To alleviate this problem slightly, we limit the number of samples generated for each intervention; this limit is usually 500-2000." }, { "heading": "A.12 EFFECT OF REGULARIZATION", "text": "Importance of sparsity regularizer. We use an L1 regularizer on the structure parameters γ to encourage a sparse representation of edges in the causal graph. In order to better understand its effect, we conducted ablation studies on the L1 regularizer. The regularizer has a small effect on the rate of convergence: the model converges faster with the regularizer, as shown in Figure 13. However, it does not seem to affect the final value the model converges to, as shown in Table 7.\nImportance of DAG regularizer. We use an acyclic regularizer to discourage length-2 cycles in the learned model. We found that for small models (≤ 5 variables), the acyclic regularizer helps with faster convergence, without significantly improving the final cross-entropy. This is illustrated for the 3-variable graphs in Figure 14. However, for graphs larger than 5 variables, the acyclic regularizer starts playing an important role in encouraging the model to learn the correct structure. This is shown in the ablation study in Table 7." }, { "heading": "A.13 NEAR-OPTIMUM PERFORMANCE OF GRADIENT ESTIMATOR", "text": "The gradient estimator g_ij that we use to minimize the empirical risk w.r.t. the structural parameters γ, defined in Eq. 2, is adapted from Bengio et al. (2019). We verify that the estimator samples the correct gradient by an experiment that tests convergence near the optimum.\nTo do this, we pre-initialize the structural and functional parameters near the global minimum, and verify that γ converges. Specifically, the ground-truth functional parameters θ are copied and perturbed by small Gaussian noise, while the ground-truth structural parameters γ are copied, but the confidences in an edge or non-edge are set to 88% and 12% rather than 100% and 0%. The experiment is then expected to quickly converge to the global minimum.\nAs shown in Figure 16, the gradient estimator correctly enables stochastic gradient descent towards the minimum, for the chain and jungle graphs of size 15, 20 and 25. The average cross-entropy rapidly approaches its floor of 0.01, a consequence of our clamping of all γ_ij to the range ±5 (equivalently, clamping σ(γ_ij) to the range [0.0067, 0.9933]).
A.14 IMPORTANCE OF DROPOUT
To train the functional parameters on an observational distribution, one needs to sample adjacency matrices. One may be tempted to make these the complete directed graph (all-ones except for a zero diagonal), to give the MLP maximum freedom to learn any potential causal relations itself. 
We\ndemonstrate that functional parameter training cannot be carried out this way, and that it is necessary to “drop out” each edge (with probability of the current γ value in our experiments) during pretraining of the conditional distributions of the SCM. We attempt to recover the previously-recoverable graphs chain3, fork3 and confounder3 without dropout, but fail to do so, as shown in Figure 15.\nFigure 17: Cross-entropy for edge probability between learned and ground-truth SCM for Cancer at varying temperatures.\nFigure 18: Cross-entropy for edge probability between learned and ground-truth SCM. Left: The Earthquake dataset with 6 variables. Right: The Asia dataset with 8 variables" } ]
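As a complement to A.4 and A.14, here is a minimal NumPy sketch of the edge "dropout" used when training the conditional distributions, i.e. the C ∼ Ber(σ(γ)) sampling of Algorithm 1 together with the per-variable input masking of A.4; the helper names (sample_adjacency, masked_input) are hypothetical, not from the original code.

```python
import numpy as np

def sample_adjacency(gamma, rng):
    """Draw C ~ Ber(sigmoid(gamma)): each candidate edge j -> i is kept with
    probability sigmoid(gamma_ij) and 'dropped out' otherwise."""
    probs = 1.0 / (1.0 + np.exp(-gamma))
    C = (rng.random(gamma.shape) < probs).astype(np.float32)
    np.fill_diagonal(C, 0.0)   # a variable is never its own parent
    return C

def masked_input(X_onehot, C, i):
    """Mask the one-hot inputs to variable X_i's MLP by row i of C, so that
    f_i only sees its sampled parents pa(i, C) (cf. A.4)."""
    # X_onehot: (M, N) one-hot encodings of all M variables; C: (M, M)
    return (X_onehot * C[i][:, None]).reshape(-1)   # flattened M*N input layer

rng = np.random.default_rng(0)
M, N = 3, 2
gamma = np.zeros((M, M))                         # sigma(gamma) = 0.5 at initialization
X_onehot = np.eye(N)[rng.integers(N, size=M)]    # one random categorical sample
C = sample_adjacency(gamma, rng)
x_in = masked_input(X_onehot, C, i=0)            # input vector for X_0's conditional MLP
```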
2020
null
SP:d8c4980cf2187b549f2f2a4fbb2fba4101337459
[ ".** Autoregressive models have demonstrate their potential utility for modeling images and other types of complex data with high flexibility (particularly in density estimation). However, its sampling ability is not that good as explained in the paper. Authors show that one of the main weaknesses of autoregressive models comes from the propagation of mistakes due to the mismatch of conditionals. Inspired in the promising results of randomized smoothing in adversarial models (Cohen et al. 2019), authors propose a similar strategy. The addition of Gaussian noise and posterior modeling of the smoother data makes easier to the autoregressive density to capture the true data distribution. The benefits of this strategy are empirically proved and shown in the experiments." ]
While autoregressive models excel at image compression, their sample quality is often lacking. Although not realistic, generated images often have high likelihood according to the model, resembling the case of adversarial examples. Inspired by a successful adversarial defense method, we incorporate randomized smoothing into autoregressive generative modeling. We first model a smoothed version of the data distribution, and then reverse the smoothing process to recover the original data distribution. This procedure drastically improves the sample quality of existing autoregressive models on several synthetic and real-world image datasets while obtaining competitive likelihoods on synthetic datasets.
[ { "affiliations": [], "name": "Chenlin Meng" }, { "affiliations": [], "name": "Jiaming Song" }, { "affiliations": [], "name": "Yang Song" }, { "affiliations": [], "name": "Shengjia Zhao" }, { "affiliations": [], "name": "Stefano Ermon" } ]
[ { "authors": [ "Guillaume Alain", "Yoshua Bengio" ], "title": "What regularized auto-encoders learn from the data-generating distribution", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky TQ Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Laplacian eigenmaps for dimensionality reduction and data representation", "venue": "Neural computation,", "year": 2003 }, { "authors": [ "Yoshua Bengio", "Eric Laufer", "Guillaume Alain", "Jason Yosinski" ], "title": "Deep generative stochastic networks trainable by backprop", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2013 }, { "authors": [ "Mikołaj Bińkowski", "Dougal J Sutherland", "Michael Arbel", "Arthur Gretton" ], "title": "Demystifying mmd gans", "venue": "arXiv preprint arXiv:1801.01401,", "year": 2018 }, { "authors": [ "Chris M Bishop" ], "title": "Training with noise is equivalent to tikhonov regularization", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Rob Cornish", "Anthony L Caterini", "George Deligiannidis", "Arnaud Doucet" ], "title": "Relaxing bijectivity constraints with continuously indexed normalising flows", "venue": null, "year": 1909 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "arXiv preprint arXiv:1903.08689,", "year": 2019 }, { "authors": [ "Damien Garreau", "Wittawat Jitkrittum", "Motonobu Kanagawa" ], "title": "Large sample analysis of the median heuristic", "venue": "arXiv preprint arXiv:1707.07269,", "year": 2017 }, { "authors": [ "Mathieu Germain", "Karol Gregor", "Iain Murray", "Hugo Larochelle" ], "title": "Made: Masked autoencoder for distribution estimation", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "arXiv preprint arXiv:1706.08500,", "year": 2017 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": null, "year": 1902 }, { "authors": [ "Jonathan Ho", "Ajay Jain", "Pieter Abbeel" ], "title": "Denoising diffusion probabilistic models", "venue": "arXiv preprint arXiv:2006.11239,", "year": 2020 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": 
"Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Optimal approximation of signal priors", "venue": "Neural Computation,", "year": 2008 }, { "authors": [ "Matti Kääriäinen" ], "title": "Lower bounds for reductions", "venue": "In Atomic Learning Workshop,", "year": 2006 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex M Lamb", "Anirudh Goyal Alias Parth Goyal", "Ying Zhang", "Saizheng Zhang", "Aaron C Courville", "Yoshua Bengio" ], "title": "Professor forcing: A new algorithm for training recurrent networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Hugo Larochelle", "Iain Murray" ], "title": "The neural autoregressive distribution estimator", "venue": "In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Jacob Menick", "Nal Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "arXiv preprint arXiv:1812.01608,", "year": 2018 }, { "authors": [ "David Minnen", "Johannes Ballé", "George D Toderici" ], "title": "Joint autoregressive and hierarchical priors for learned image compression", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hariharan Narayanan", "Sanjoy Mitter" ], "title": "Sample complexity of testing the manifold hypothesis", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Georg Ostrovski", "Will Dabney", "Rémi Munos" ], "title": "Autoregressive quantile networks for generative modeling", "venue": "arXiv preprint arXiv:1806.05575,", "year": 2018 }, { "authors": [ "Martin Raphan", "Eero P Simoncelli" ], "title": "Least squares estimation without priors or supervision", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Sam T Roweis", "Lawrence K Saul" ], "title": "Nonlinear dimensionality reduction by locally linear embedding", "venue": null, "year": 2000 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "arXiv preprint arXiv:1606.03498,", "year": 2016 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi 
Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Saeed Saremi", "Aapo Hyvarinen" ], "title": "Neural empirical bayes", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Jocelyn Sietsma", "Robert JF Dow" ], "title": "Creating artificial neural networks that generalize", "venue": "Neural networks,", "year": 1991 }, { "authors": [ "Jascha Sohl-Dickstein", "Eric A Weiss", "Niru Maheswaranathan", "Surya Ganguli" ], "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "venue": "arXiv preprint arXiv:1503.03585,", "year": 2015 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Improved techniques for training score-based generative models", "venue": "arXiv preprint arXiv:2006.09011,", "year": 2020 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "arXiv preprint arXiv:1511.01844,", "year": 2015 }, { "authors": [ "Benigno Uria", "Iain Murray", "Hugo Larochelle" ], "title": "Rnade: The real-valued neural autoregressive density-estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Pascal Vincent" ], "title": "A connection between score matching and denoising autoencoders", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Autoregressive models have exhibited promising results in a variety of downstream tasks. For instance, they have shown success in compressing images (Minnen et al., 2018), synthesizing speech (Oord et al., 2016a) and modeling complex decision rules in games (Vinyals et al., 2019). However, the sample quality of autoregressive models on real-world image datasets is still lacking.\nPoor sample quality might be explained by the manifold hypothesis: many real world data distributions (e.g. natural images) lie in the vicinity of a low-dimensional manifold (Belkin & Niyogi, 2003), leading to complicated densities with sharp transitions (i.e. high Lipschitz constants), which are known to be difficult to model for density models such as normalizing flows (Cornish et al., 2019). Since each conditional of an autoregressive model is a 1-dimensional normalizing flow (given a fixed context of previous pixels), a high Lipschitz constant will likely hinder learning of autoregressive models.\nAnother reason for poor sample quality is the “compounding error” issue in autoregressive modeling. To see this, we note that an autoregressive model relies on the previously generated context to make a prediction; once a mistake is made, the model is likely to make another mistake which compounds (Kääriäinen, 2006), eventually resulting in questionable and unrealistic samples. Intuitively, one would expect the model to assign low-likelihoods to such unrealistic images, however, this is not always the case. In fact, the generated samples, although appearing unrealistic, often are assigned high-likelihoods by the autoregressive model, resembling an “adversarial example” (Szegedy et al., 2013; Biggio et al., 2013), an input that causes the model to output an incorrect answer with high confidence.\nInspired by the recent success of randomized smoothing techniques in adversarial defense (Cohen et al., 2019), we propose to apply randomized smoothing to autoregressive generative modeling. More specifically, we propose to address a density estimation problem via a two-stage process. Unlike Cohen et al. (2019) which applies smoothing to the model to make it more robust, we apply smoothing to the data distribution. Specifically, we convolve a symmetric and stationary noise distribution with the data distribution to obtain a new “smoother” distribution. In the first stage, we model the smoothed version of the data distribution using an autoregressive model. In the second stage, we reverse the smoothing process—a procedure which can also be understood as “denoising”—by either applying a gradient-based denoising approach (Alain & Bengio, 2014) or introducing another conditional autoregressive model to recover the original data distribution from the smoothed one. By choosing an appropriate smoothing distribution, we aim to make each step easier than the original learning problem: smoothing facilitates learning in the first stage by making the input distribution\nfully supported without sharp transitions in the density function; generating a sample given a noisy one is easier than generating a sample from scratch.\nWe show with extensive experimental results that our approach is able to drastically improve the sample quality of current autoregressive models on several synthetic datasets and real-world image datasets, while obtaining competitive likelihoods on synthetic datasets. 
We empirically demonstrate that our method can also be applied to density estimation, image inpainting, and image denoising." }, { "heading": "2 BACKGROUND", "text": "We consider a density estimation problem. Given D-dimensional i.i.d samples {x1,x2, ...,xN} from a continuous data distribution pdata(x), the goal is to approximate pdata(x) with a model pθ(x) parameterized by θ. A commonly used approach for density estimation is maximum likelihood estimation (MLE), where the objective is to maximize L(θ) , 1N ∑N i=1 log pθ(xi)." }, { "heading": "2.1 AUTOREGRESSIVE MODELS", "text": "An autoregressive model (Larochelle & Murray, 2011; Salimans et al., 2017) decomposes a joint distribution pθ(x) into the product of univariate conditionals:\npθ(x) = D∏ i=1 pθ(xi|x<i), (1)\nwhere xi stands for the i-th component of x, and x<i refers to the components with indices smaller than i. In general, an autoregressive model parameterizes each conditional pθ(xi|x<i) using a prespecified density function (e.g. mixture of logistics). This bounds the capacity of the model by limiting the number of modes for each conditional.\nAlthough autoregressive models have achieved top likelihoods amongst all types of density based models, their sample quality is still lacking compared to energy-based models (Du & Mordatch, 2019) and score-based models (Song & Ermon, 2019). We believe this can be caused by the following two reasons." }, { "heading": "2.2 MANIFOLD HYPOTHESIS", "text": "Several existing methods (Roweis & Saul, 2000; Tenenbaum et al., 2000) rely on the manifold hypothesis, i.e. that real-world high-dimensional data tends to lie on a low-dimensional manifold (Narayanan & Mitter, 2010). If the manifold hypothesis is true, then the density of the data distribution is not well defined in the ambient space; if the manifold hypothesis holds only approximately and the data lies in the vicinity of a manifold, then only points that are very close to the manifold would have high density, while all other points would have close to zero density. Thus we may expect the data density around the manifold to have large first-order derivatives, i.e. the density function has a high Lipschitz constant (if not infinity).\nTo see this, let us consider a 2-d example where the data distribution is a thin ring distribution (almost a unit circle) formed by rotating the 1-d Gaussian distribution N (1, 0.012) around the origin. The density function of the ring has a high Lipschitz constant near the “boundary”. Let us focus on a data point travelling along the diagonal as shown in the leftmost panel in figure 2. We plot the first-order\ndirectional derivatives of the density for the point as it approaches the boundary from the inside, then lands on the ring, and finally moves outside the ring (see figure 2). As we can see, when the point is far from the boundary, the derivative has a small magnitude. When the point moves closer to the boundary, the magnitude increases and changes significantly near the boundary even with small displacements in the trajectory. However, once the point has landed on the ring, the magnitude starts to decrease. As it gradually moves off the ring, the magnitude first increases and then decreases just like when the point approached the boundary from the inside. It has been observed that certain likelihood models, such as normalizing flows, exhibit pathological behaviors on data distributions whose densities have high Lipschitz constants (Cornish et al., 2019). 
Since each conditional of an autoregressive model is a 1-d normalizing flow given a fixed context, a high Lipschitz constant on data density could also hinder learning of autoregressive models." }, { "heading": "2.3 COMPOUNDING ERRORS IN AUTOREGRESSIVE MODELING", "text": "Autoregressive models can also be susceptible to compounding errors from the conditional distributions (Lamb et al., 2016) during sampling time. We notice that an autoregressive model pθ(x) learns the joint density pdata(x) by matching each of the conditional pθ(xi|x<i) with pdata(xi|x<i). In practice, we typically have access to a limited amount of training data, which makes it hard for an autoregressive model to capture all the conditional distributions correctly due to the curse of dimensionality. During sampling, since a prediction is made based on the previously generated context, once a mistake is made at a previous step, the model is likely to make more mistakes in the later steps, eventually generating a sample x̂ that is far from being an actual image, but is mistakenly assigned a high-likelihood by the model.\nThe generated image x̂, being unrealistic but assigned a high-likelihood, resembles an adversarial example, i.e., an input that causes the model to make mistakes. Recent works (Cohen et al., 2019) in adversarial defense have shown that random noise can be used to improve the model’s robustness to adversarial perturbations — a process during which adversarial examples that are close to actual data are generated to fool the model. We hypothesize that such approach can also be applied to improve an autoregressive modeling process by making the model less vulnerable to compounding errors occurred during density estimation. Inspired by the success of randomized smoothing in adversarial defense (Cohen et al., 2019), we propose to apply smoothing to autoregressive modeling to address the problems mentioned above." }, { "heading": "3 GENERATIVE MODELS WITH DISTRIBUTION SMOOTHING", "text": "In the following, we propose to decompose a density estimation task into a smoothed data modeling problem followed by an inverse smoothing problem where we recover the true data density from the smoothed one." }, { "heading": "3.1 RANDOMIZED SMOOTHING PROCESS", "text": "Unlike Cohen et al. (2019) where randomized smoothing is applied to a model, we apply smoothing directly to the data distribution pdata(x). To do this, we introduce a smoothing distribution q(x̃|x) — a distribution that is symmetric and stationary (e.g. a Gaussian or Laplacian kernel) — and convolve it with pdata(x) to obtain a new distribution q(x̃) , ∫ q(x̃|x)pdata(x)dx. When q(x̃|x) is a normal distribution, this convolution process is equivalent to perturbing the data distribution with Gaussian\nnoise, which, intuitively, will make the data distribution smoother. In the following, we formally prove that convolving a 1-d distribution pdata(x) with a suitable noise can indeed “smooth” pdata(x). Theorem 1. Given a continuous and bounded 1-d distribution pdata(x) that is supported on R, for any 1-d distribution q(x̃|x) that is symmetric (i.e. q(x̃|x) = q(x|x̃)), stationary (i.e. translation invariant) and satisfies limx→∞ pdata(x)q(x|x̃) = 0 for any given x̃, we have Lip(q(x̃)) ≤ Lip(pdata(x)), where q(x̃) , ∫ q(x̃|x)pdata(x)dx and Lip(·) denotes the Lipschitz constant of the given 1-d function.\nTheorem 1 shows that convolving a 1-d data distribution pdata(x) with a suitable noise distribution q(x̃|x) (e.g.N (x̃|x, σ2)) can reduce the Lipschitzness (i.e. 
increase the smoothness) of pdata(x). We provide the proof of Theorem 1 in Appendix A.\nGiven pdata(x) with a high Lipschitz constant, we empirically verify that density estimation becomes an easier task on the smoothed distribution q(x̃) than directly on pdata(x). To see this, we visualize a 1-d example in figure 3a, where we want to model a ten-mode data distribution with a mixture of logistics model. If our model has three logistic components, there is almost no way for the model, which only has three modes, to perfectly fit this data distribution, which has ten separate modes with sharp transitions. The model, after training (see figure 3a), mistakenly assigns a much higher density to the low density regions between nearby modes. If we convolve the data distribution with q(x̃|x) = N (x̃|x, 0.52), the new distribution becomes smoother (see figure 3b) and can be captured reasonably well by the same mixture of logistics model with only three modes (see figure 3b). Comparing the same model’s performance on the two density estimation tasks, we can see that the model is doing a better job at modeling the smoothed version of the data distribution than the original data distribution, which has a high Lipschitz constant.\nThis smoothing process can also be understood as a regularization term for the original maximum likelihood objective (on the un-smoothed data distribution), encouraging the learned model to be smooth, as formalized by the following statement: Proposition 1 (Informal). Assume that the symmetric and stationary smoothing distribution q(x̃|x) has small variance and negligible higher order moments, then\nEpdata(x)Eq(x̃|x)[log pθ(x̃)] ≈ Epdata(x) [ log pθ(x) + η\n2 ∑ i ∂2 log pθ ∂x2i\n] ,\nfor some constant η.\nProposition 1 shows that our smoothing process provides a regularization effect on the original objective Epdata(x)[log pθ(x)] when no noise is added, where the regularization aims to maximize η 2 ∑ i ∂2 log pθ ∂x2i\n. Since samples from pdata should be close to a local maximum of the model, this encourages the second order gradients computed at a data point x to become closer to zero (if it were positive then x will not be a local maximum), creating a smoothing effect. This extra term is also the trace of the score function (up to a multiplicative constant) that can be found in the score matching objective (Hyvärinen, 2005), which is closely related to many denoising methods (Vincent, 2011; Hyvärinen, 2008). This regularization effect can, intuitively, increase the generalization capability of the model. In fact, it has been demonstrated empirically that training with noise can lead to improvements in network generalization (Sietsma & Dow, 1991; Bishop, 1995). Our argument is also\nsimilar to that used in (Bishop, 1995) except that we consider a more general generative modeling case as opposed to supervised learning with squared error. We provide the formal statement and proof of Proposition 1 in Appendix A." }, { "heading": "3.2 AUTOREGRESSIVE DISTRIBUTION SMOOTHING MODELS", "text": "Motivated by the previous 1-d example, instead of directly modeling pdata(x), which can have a high Lipschitz constant, we propose to first train an autoregressive model on the smoothed version of the data distribution q(x̃). Although the smoothing process makes the distribution easier to learn, it also introduces bias. 
Thus, we need an extra step to debias the learned distribution by reverting the smoothing process.\nIf our goal is to generate approximate samples for pdata(x), when q(x̃|x) = N (x̃|x, σ2I) and σ is small, we can use the gradient of pθ(x̃) for denoising (Alain & Bengio, 2014). More specifically, given smoothed samples x̃ from pθ(x̃), we can “denoise” samples via: x̄ = x̃ + σ2∇x̃ log pθ(x̃), (2) which only requires the knowledge of pθ(x̃) and the ability to sample from it. However, this approach does not provide a likelihood estimate and Eq. (2) only works when q(x̃|x) is Gaussian (though alternative denoising updates for other smoothing processes could be derived under the Empirical Bayes framework (Raphan & Simoncelli, 2011)). Although Eq. (2) could provide reasonable denoising results when the smoothing distribution has a small variance, x̄ obtained in this way is only a point estimation of x̄ = E[x|x̃] and does not capture the uncertainty of p(x|x̃). To invert more general smoothing distributions (beyond Gaussians) and to obtain likelihood estimations, we introduce a second autoregressive model pθ(x|x̃). The parameterized joint density pθ(x, x̃) can then be computed as pθ(x, x̃) = pθ(x|x̃)pθ(x̃). To obtain our approximation of pdata(x), we need to integrate over x̃ on the joint distribution pθ(x, x̃) to obtain pθ(x) =∫ pθ(x, x̃)dx̃, which is in general intractable. However, we can easily obtain an evidence lower bound (ELBO): log pθ(x) ≥ Eq(x̃|x)[log pθ(x̃)]− Eq(x̃|x)[log q(x̃|x)] + Eq(x̃|x)[log pθ(x|x̃)]. (3)\nNote that when q(x̃|x) is fixed, the entropy term Eq(x̃|x)[log q(x̃|x)] is a constant with respect to the optimization parameters. Maximizing ELBO on pdata(x) is then equivalent to maximizing:\nJ(θ) = Epdata(x) [ Eq(x̃|x)[log pθ(x̃)] + Eq(x̃|x)[log pθ(x|x̃)] ] . (4)\nFrom equation 4, we can see that optimizing the two models pθ(x̃) and pθ(x|x̃) separately via maximum likelihood estimation is equivalent to optimizing J(θ)." }, { "heading": "3.3 TRADEOFF IN MODELING", "text": "In general, there is a trade-off between the difficulty of modeling pθ(x̃) and pθ(x|x̃). To see this, let us consider two extreme cases for the variance of q(x̃|x) — when q(x̃|x) has a zero variance and an infinite variance. When q(x̃|x) has a zero variance, q(x̃|x) is a distribution with all its probability mass at x, meaning that no noise is added to the data distribution. In this case, modeling the smoothed distribution would be equivalent to modeling pdata(x), which can be hard as discussed above. The reverse smoothing process, however, would be easy since pθ(x|x̃) can simply be an identity map to perfectly invert the smoothing process. In the second case when q(x̃|x) has an infinite variance, modeling p(x̃) would be easy because all the information about the original data is lost, and p(x̃) would be close to the smoothing distribution. Modeling p(x|x̃), on the other hand, is equivalent to directly modeling pdata(x), which can be challenging.\nThus, the key here is to appropriately choose a smoothing level so that both q(x̃) and p(x|x̃) can be approximated relatively well by existing autoregressive models. In general, the optimal variance might be hard to find. Although one can train q(x̃|x) by jointly optimizing ELBO, in practice, we find this approach often assigns a very large variance to q(x̃|x), which can trade-off sample quality for better likelihoods on high dimensional image datasets. 
We find empirically that a pre-specified q(x̃|x) chosen by heuristics (Saremi & Hyvarinen, 2019; Garreau et al., 2017) is able to generate much better samples than training q(x̃|x) via ELBO. In this paper, we will focus on the sample quality and leave the training of q(x̃|x) for future work." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we demonstrate empirically that by appropriately choosing the smoothness level of randomized smoothing, our approach is able to drastically improve the sample quality of existing autoregressive models on several synthetic and real-world datasets while retaining competitive likelihoods on synthetic datasets. We also present results on image inpainting in Appendix C.2." }, { "heading": "4.1 CHOOSING THE SMOOTHING DISTRIBUTION", "text": "To help us build insights into the selection of the smoothing distribution q(x̃|x), we first focus on a 1-d multi-modal distribution (see figure 4 leftmost panel). We use model-based methods to invert the smoothed distribution and provide analysis on “single-step denoising” in Appendix B.1. We start with the exploration of three different types of smoothing distributions – Gaussian distribution, Laplace distribution, and uniform distribution. For each type of distribution, we perform a grid search to find the optimal variance. Since our approach requires the modeling of both pθ(x̃) and pθ(x|x̃), we stack x̃ and x together, and use a MADE model (Germain et al., 2015) with a mixture of two logistic components to parameterize pθ(x̃) and pθ(x|x̃) at the same time. For the baseline model, we train a mixture of logistics model directly on pdata(x). We compare the results in the middle two panels in figure 4.\nWe find that although the baseline with eight logistic components has the capacity to perfectly model the multi-modal data distribution, which has six modes, the baseline model still fails to do so. We believe this can be caused by optimization or initialization issues for modeling a distribution with a high Lipschitz constant. Our method, on the other hand, demonstrates more robustness by successfully modeling the different modes in the data distribution even when using only two mixture components for both pθ(x̃) and pθ(x|x̃). For all the three types of smoothing distributions, we observe a reverse U-shape correlation between the variance of q(x̃|x) and ELBO values — with ELBO first increasing as the variance increases and then decreasing as the variance grows beyond a certain point. The results match our discussion on the trade-off between modeling pθ(x̃) and pθ(x|x̃) in Section 3.3. We notice from the empirical results that Gaussian smoothing is able to obtain better ELBO than the other two distributions. Thus, we will use q(x̃|x) = N (x̃|x, σ2I) for the later experiments." }, { "heading": "4.2 2-D SYNTHETIC DATASETS", "text": "In this section, we consider two challenging 2-d multi-modal synthetic datasets (see figure 5). We focus on model-based denoising methods and present discussion on “single-step denoising” in Appendix B.2. We use a MADE model with comparable number of total parameters for both the baseline and our approach. For the baseline, we train the MADE model directly on the data. For\nour randomized smoothing model, we choose q(x̃|x) = N (x̃|x, 0.32I) to be the smoothing distribution. We observe that with this randomized smoothing approach, our model is able to generate better samples than the baseline (according to a human observer) even when using less logistic components (see figure 5). 
We provide more analysis of the model’s performance in Appendix B.2. We also provide the negative log-likelihoods in Tab. 1." }, { "heading": "4.3 IMAGE EXPERIMENTS", "text": "In this section, we focus on three common image datasets, namely MNIST, CIFAR-10 (Krizhevsky et al., 2009) and CelebA (Liu et al., 2015). We select q(x̃|x) = N(x̃|x, σ²I) to be the smoothing distribution. We use PixelCNN++ (Salimans et al., 2017) as the model architecture for both pθ(x̃) and pθ(x|x̃). We provide more details about the settings in Appendix C.\nImage generation. For image datasets, we select the σ of q(x̃|x) = N(x̃|x, σ²I) according to the analysis in (Saremi & Hyvarinen, 2019) (see Appendix C for more details). Since q(x̃|x) is a Gaussian distribution, we can apply “single-step denoising” to reverse the smoothing process for samples drawn from pθ(x̃). In this case, the model pθ(x|x̃) is not required for sampling, since the gradient of pθ(x̃) can be used to denoise samples (also from pθ(x̃)) (see equation 2). We present smoothed samples from pθ(x̃), along with samples whose smoothing is reversed by “single-step denoising” and by pθ(x|x̃), in figure 6. For comparison, we also present samples from a PixelCNN++ with parameters comparable to the sum of the total parameters of pθ(x̃) and pθ(x|x̃). We find that by using this randomized smoothing approach, we are able to drastically improve the sample quality of PixelCNN++ (see the rightmost panel in figure 6). We note that with only pθ(x̃), a PixelCNN++ optimized on the smoothed data, we already obtain more realistic samples compared to the original PixelCNN++ method. However, pθ(x|x̃) is needed to compute the likelihood lower bounds. We report the sample quality evaluated by Fréchet Inception Distance (FID, Heusel et al., 2017), Kernel Inception Distance (KID, Bińkowski et al., 2018), and Inception scores (Salimans et al., 2016) in Tab. 2. Although our method obtains better samples compared to the original PixelCNN++, our model has worse likelihoods as measured in bits per dimension (BPD). We believe this is because likelihood and sample quality are not always directly correlated, as discussed in Theis et al. (2015). We also tried training the variance of q(x̃|x) by jointly optimizing the ELBO. Although training the variance can produce better likelihoods, it does not generate samples of quality comparable to our method (i.e. choosing the variance by heuristics). Thus, it is hard to conclusively determine the best way of choosing q(x̃|x). We provide more image samples in Appendix C.4 and a nearest neighbors analysis in Appendix C.5." }, { "heading": "5 ADDITIONAL EXPERIMENTS ON NORMALIZING FLOWS", "text": "In this section, we demonstrate empirically on 2-d synthetic datasets that randomized smoothing techniques can also be applied to improve the sample quality of normalizing flow models (Rezende & Mohamed, 2015). We focus on RealNVP (Dinh et al., 2016). We compare the RealNVP model trained with randomized smoothing, where we use pθ(x|x̃) (also a RealNVP) to revert the smoothing process, with a RealNVP trained with the original method but with a comparable number of parameters. We observe that smoothing is able to improve sample quality on the datasets we consider (see figure 7) while also obtaining competitive likelihoods. On the checkerboard dataset, our method has negative log-likelihood 3.64 while the original RealNVP has 3.72; on the Olympics dataset, our method has negative log-likelihood 1.32 while the original RealNVP has 1.80. 
This example demonstrates that randomized smoothing techniques can also be applied to normalizing flow models." }, { "heading": "6 RELATED WORK", "text": "Our approach shares some similarities with denoising autoencoders (DAE, Vincent et al. (2008)) which recovers a clean observation from a corrupted one. However, unlike DAE which has a train-\nable encoder and a fixed prior distribution, our approach fixes the encoder and models the prior using an autoregressive model. Generative stochastic networks (GSN, Bengio et al. (2014)) use DAEs to train a Markov chain whose equilibrium distribution matches the data distribution. However, GSN needs to start the chain from a sample that is very close to the training distribution. Denoising diffusion model (Sohl-Dickstein et al., 2015; Ho et al., 2020) and NCSN (Song & Ermon (2019; 2020)) address the issue of GSNs by considering a sequence of distributions corresponding to data corrupted with various noise levels. By setting multiple noise levels that are close to each other, the sample from the previous level can serve as a proper initialization for the sampling process at the next level. This way, the model can start from a distribution that is easy to model and gradually move to the desired distribution. However, due to the large number of noise levels, such approaches require many steps for the chain to converge to the right data distribution.\nIn this paper, we instead propose to use only one level of smoothing by modeling each step with a powerful autoregressive model instead of deterministic autoencoders. Motivated by the success of “randomized smoothing” techniques in adversarial defense (Cohen et al., 2019), we perform randomized smoothing directly on the data distribution. Unlike denoising score matching (Vincent, 2011), a technique closely related to denoising diffusion models and NCSN, which requires the perturbed noise to be a Gaussian distribution, we are able to work with different noise distributions.\nOur smoothing method is also relevant to “dequantization” approaches that are common in normalizing flow models, where the discrete data distribution is converted to a continuous one by adding continuous noise (Uria et al., 2013; Ho et al., 2019). However the added noise for “dequantization” in flows is often indistinguishable to human eyes, and the reverse “dequantization” process is often ignored. In contrast, we consider noise scales that are significantly larger and thus a denoising process is required.\nOur method is also related to “quantization” approaches which reduce the number of “significant” bits that are modeled by a generative model (Kingma & Dhariwal, 2018; Menick & Kalchbrenner, 2018). For instance, Glow (Kingma & Dhariwal, 2018) only models the 5 most significant bits of an image, which improves the visual quality of samples but decreases color fidelity. SPN (Menick & Kalchbrenner, 2018) introduces another network to predict the remaining bits conditioned on the 3 most significant bits already modeled. Modeling the most significant bits can be understood as capturing a data distribution perturbed by bit-wise correlated noise, similar to modeling smoothed data in our method. Modeling the remaining bits conditioned on the most significant ones in SPN is then similar to denoising. However, unlike these quantization approaches which process an image at the “significant” bits level, we apply continuous data independent Gaussian noise to the entire image with a different motivation to smooth the data density function." 
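Before moving to the discussion, the two-stage recipe of §3.2, maximizing J(θ) in Eq. (4) under Gaussian smoothing, can be summarized in a short PyTorch sketch; the model interfaces model_smooth(x̃) and model_denoise(x, x̃) (each returning per-example log-likelihoods) are hypothetical stand-ins for the two autoregressive networks, not the authors' actual code.

```python
import torch

def training_step(x, model_smooth, model_denoise, optimizer, sigma=0.3):
    """One gradient step on J(theta) = E[log p(x~)] + E[log p(x|x~)] (Eq. 4),
    with q(x~|x) = N(x~|x, sigma^2 I). The two terms decouple, so the two
    autoregressive models could equivalently be trained separately."""
    x_tilde = x + sigma * torch.randn_like(x)        # sample from q(x~|x)
    loss = -(model_smooth(x_tilde).mean()            # log p_theta(x~) term
             + model_denoise(x, x_tilde).mean())     # log p_theta(x|x~) term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```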
}, { "heading": "7 DISCUSSION", "text": "In this paper, we propose to incorporate randomized smoothing techniques into autoregressive modeling. By choosing the smoothness level appropriately, this seemingly simple approach is able to drastically improve the sample quality of existing autoregressive models on several synthetic and real-world datasets while retaining reasonable likelihoods. Our work provides insights into how recent adversarial defense techniques can be leveraged to building more robust generative models. Since we apply randomized smoothing technique directly to the target data distribution other than the model, we believe our approach is also applicable to other generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs)." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors would like to thank Kristy Choi for reviewing the draft of the paper. This research was supported by NSF (#1651565, #1522054, #1733686), ONR (N00014-19-1-2145), AFOSR (FA9550-19-1-0024), ARO, and Amazon AWS." }, { "heading": "A PROOFS", "text": "Theorem 1. Given a continuous and bounded 1-d distribution pdata(x) that is supported on R, for any 1-d distribution q(x̃|x) that is symmetric (i.e. q(x̃|x) = q(x|x̃)), stationary (i.e. translation invariant) and satisfies limx→∞ pdata(x)q(x|x̃) = 0 for any given x̃, we have Lip(q(x̃)) ≤ Lip(pdata(x)), where q(x̃) , ∫ q(x̃|x)pdata(x)dx and Lip(·) denotes the Lipschitz constant of the given 1-d function.\nProof. First, we have that:\n|∇xp(x)| = |p(x)∇x log p(x)| ≤ Lip(p) (5) and if we assume symmetry, i.e. q(x|x̃) = q(x̃|x) then by integration by parts we have:\n∇x̃q(x̃) = Ep(x)[∇x̃q(x̃|x)] = Ep(x)[∇xq(x|x̃)] = −Ep(x)[q(x|x̃)∇x log p(x)] (6) Therefore,\nLip(q) = max x̃ |−Ep(x)[q(x|x̃)∇x log p(x)]| = max x̃ | ∑ x q(x|x̃)p(x)∇x log p(x)| (7)\n≤ max x̃ | ∑ x q(x|x̃)Lip(p)| = Lip(p) max x̃ | ∑ x q(x|x̃)| = Lip(p) (8)\nwhich proves the result.\nProposition 1 (Formal). Given a D-dimensional data distribution pdata(x) and model distribution pθ(x), assume that the smoothing distribution q(x̃|x) satisfies:\n• log pθ is infinitely differentiable on the support of pθ(x)\n• q(x̃|x) is symmetric (i.e. q(x̃|x) = q(x|x̃))\n• q(x̃|x) is stationary (i.e. translation invariant)\n• q(x̃|x) is bounded and fully supported on RD\n• q(x̃|x) is element-wise independent\n• Eq(x̃|x)[(x̃− x)2] is bounded, and Eq(x̃|x)[(x̃i − xi)2] = η at each dimension i.\nDenote = x̃− x, then\nEpdata(x)Eq(x̃|x)[log pθ(x̃)] = Epdata(x) [ log pθ(x) + η\n2 ∑ i ∂2 log pθ ∂x2i\n] + ∫ ∫ o( 2)pdata(x)p( )dxd ,\nwhere o( 2) : RD → R is a function of such that lim →0 o( 2) 2 = 0. Thus when∫ ∫\no( 2)pdata(x)p( )dxd → 0, we have\nEpdata(x)Eq(x̃|x)[log pθ(x̃)]→ Epdata(x) [ log pθ(x) + η\n2 ∑ i ∂2 log pθ ∂x2i\n] . (2)\nProof. To see this, we first note that the new training objective for the smoothed data distribution is Epdata(x)Eq(x̃|x)[log pθ(x̃)]. Let = x̃ − x, because of the assumptions we have, the PDF function q(x̃|x) can be reparameterized as p( ) which satisfies: p is bounded and fully supported on RD; p is element-wise independent and Ep( )[ 2i ] = Ep( i)[ 2i ] = η at each dimension i (i = 1, ..., D). 
Then we have
E_{pdata(x)} E_{q(x̃|x)}[log pθ(x̃)] = ∫∫ log pθ(x + ε) pdata(x) p(ε) dx dε, (9)
Using Taylor expansion, we have:
log pθ(x + ε) = log pθ(x) + Σ_i εi ∂ log pθ / ∂xi + (1/2) Σ_{i,j} εi εj ∂² log pθ / ∂xi∂xj + o(ε²).
Since ε is independent of x and
∫ εi p(εi) dεi = 0,  ∫∫ εi εj p(εi) p(εj) dεi dεj = δ_{i,j} η,
where δ_{i,j} is the Kronecker delta function, the right hand side of Equation 9 becomes
E_{pdata(x)}[log pθ(x)] + ∫∫ ( (1/2) Σ_i εi² ∂² log pθ / ∂xi² + o(ε²) ) pdata(x) p(ε) dx dε (10)
= E_{pdata(x)}[ log pθ(x) + (η/2) Σ_i ∂² log pθ / ∂xi² ] + ∫∫ o(ε²) pdata(x) p(ε) dx dε. (11)
When
∫∫ o(ε²) pdata(x) p(ε) dx dε → 0, (12)
we have
E_{pdata(x)} E_{q(x̃|x)}[log pθ(x̃)] → E_{pdata(x)}[ log pθ(x) + (η/2) Σ_i ∂² log pθ / ∂xi² ], (13)
where the second term on the right hand side serves as a regularization for the original objective E_{pdata(x)}[log pθ(x)]." }, { "heading": "B DENOISING EXPERIMENTS", "text": "B.1 ANALYSIS ON 1-D DENOISING
To provide more insights into denoising, we first study “single-step denoising” (see equation 2) on a 1-d dataset. We choose the data distribution to be a mixture of two Gaussians, 0.5N(−0.3, 0.1²) + 0.5N(0.3, 0.1²), and the smoothing distribution to be q(x̃|x) = N(x̃|x, 0.3²) (see figure 8a). Since the convolution of two Gaussian distributions is also a Gaussian distribution, the smoothed data distribution is the Gaussian mixture 0.5N(−0.3, 0.1² + 0.3²) + 0.5N(0.3, 0.1² + 0.3²). The ground truth of ∇x̃ log p(x̃) can then be calculated in closed form. Thus, given the smoothed data x̃, we can calculate the ground truth ∇x̃ log p(x̃) in equation 2 and obtain x̄ using “single-step denoising”. We visualize the denoising results in figure 8b. We find that the low density region between the two modes of pdata(x) is not modeled properly in figure 8b. However, this is expected, since “single-step denoising” uses x̄ = E[x|x̃] as the substitute for the denoised result. When the smoothing distribution has a large variance (as in figure 8a, where the smoothed data has merged into a single-mode distribution), datapoints like x̃0 in the middle low density region of pdata(x) can have high density in the smoothed distribution. Since x̃0, as well as other points in the middle low density region of pdata(x), can come from both modes of pdata(x) with high probability before the smoothing process (see figure 11a), the denoised x̄ = E[x|x̃ = x̃0] can still be located in the middle low density region (see figure 8b). Since a large proportion of the smoothed data is located in the middle low density region of pdata(x), we would expect a certain proportion of the density to remain in the low density region after “single-step denoising”, just as shown in figure 8b. However, when the smoothing distribution has a smaller variance, “single-step denoising” can achieve much better denoising results (see figure 9, where we use q(x̃|x) = N(x̃|x, 0.1²)). Although denoising can be easier when the smoothing distribution has a smaller variance, modeling the smoothed distribution could be harder, as we discussed before.
In general, the right denoising results should be samples coming from p(x|x̃), which is the reason why samples from pθ(x|x̃) (i.e., introducing the model pθ(x|x̃)) are preferable to using Eθ[x|x̃] as a denoising substitute (i.e., “single-step denoising”). In general, the capacity of the denoising model pθ(x|x̃) also matters in terms of denoising results. Let us again consider the datapoint x̃0 shown in figure 8a.
If the reverse smoothing model pθ(x|x̃) is a single-mode logistic distribution, then, due to the mode covering property of maximum likelihood estimation, given the smoothed observation x̃0, the best the model can do is to center its only mode at x̃0 to approximate p(x|x̃ = x̃0) (see figure 10c). Thus, like x̃0, the smoothed datapoints in the low density region between the two modes of pdata(x) are still likely to remain between the two modes after denoising (see figure 10b). To solve this issue, we can increase the capacity of pθ(x|x̃) by making it a mixture of two logistics. In this case, the distribution pθ(x|x̃ = x̃0) can be captured more faithfully (see figure 11c and figure 11a). After the reverse smoothing process, like x̃0, most smoothed datapoints in the low density region can be mapped to one of the two high density modes (see figure 11b), resulting in much better denoising effects.
B.2 ANALYSIS ON 2-D DENOISING
On the 2-d Olympics dataset in section 4.2, we find that the intersections between rings can be poorly modeled with the proposed smoothing approach when only a mixture of two logistics is used (see figure 12e). We believe this occurs when the denoising model is not flexible enough to capture the distribution p(x|x̃). More specifically, we note that the ground truth distribution p(x|x̃) at the intersections of the rings is highly complicated and can be hard to capture using our model, which only has two logistic mixture components for each dimension. If we increase the flexibility of pθ(x|x̃) by using three or four logistic mixture components (note that we still use fewer mixture components than the MADE baseline and a comparable number of parameters), the intersections of the rings can be modeled noticeably better (see figure 12).
We also provide “single-step denoising” results for the experiments in Section 4.2 (see figure 13), where we use the same smoothing distribution and the MADE model with three mixture components as used in section 4.2. We note that the “single-step denoising” results are not very good, which is also expected. As discussed in section B.1, when the smoothing distribution has a relatively large variance, Eθ[x|x̃] is not a good approximation of the denoised result; we want the denoised sample to come from the distribution pθ(x|x̃), in which case introducing a denoising model pθ(x|x̃) is a better option. Although we could select q(x̃|x) to have a smaller variance so that “single-step denoising” could work reasonably well, modeling p(x̃) in that case could be more challenging.
C IMAGE EXPERIMENTS
C.1 SETTINGS
For the image experiments, we first rescale images to [−1, 1] and then perturb the images with q(x̃|x) = N(x̃|x, σ²I). We use σ = 0.5 for MNIST and σ = 0.3 for both CIFAR-10 and CelebA. The selection of σ is mainly based on the analysis in Saremi & Hyvarinen (2019). More specifically, we consider the median value of the Euclidean distance between two data points in a dataset, and then divide it by 2√D, where D is the dimension of the data. This provides us with a way of selecting the variance of q(x̃|x) when q(x̃|x) is a Gaussian distribution. We find that this selection of variance generates reasonably good samples in practice. We train all the models with the Adam optimizer with a learning rate of 0.0002. To model pθ(x|x̃), we stack x̃ and x together along the second dimension to obtain x̂ = [x̃, x], which ensures that x̃ comes before x in the pixel ordering.
For instance, this stacking would produce an image x̂ of size 1 × (2 × 28) × 28 for an MNIST image, and an image of size 3 × (2 × 32) × 32 for a CIFAR-10 image. Since PixelCNN++ consists of convolutional layers, we can directly feed x̂ into the default architecture without modifying the model architecture. As later pixels of the input depend only on earlier pixels in an autoregressive model, and x̃ comes before x, we can parameterize pθ(x|x̃) by computing the likelihoods only on x using the outputs of the autoregressive model.
C.2 IMAGE INPAINTING
Since both pθ(x̃) and pθ(x|x̃) are parameterized by an autoregressive model, we can also perform image inpainting using our method. We present the inpainting results on CIFAR-10 in figure 14a and CelebA in figure 14b, where the bottom half of the input image is inpainted.
C.3 IMAGE DENOISING
We notice that the reverse smoothing process can also be understood as a denoising process. Besides the “single-step denoising” approach shown above, we can also apply pθ(x|x̃) to denoise images. To visualize the denoising performance, we sample xtest from the test set and perturb xtest with q(x̃|x) to obtain a noisy sample x̃test. We feed x̃test into pθ(x|x̃ = x̃test) and draw samples from the model. We visualize the results in figure 15. As we can see, the model exhibits reasonable denoising results, which shows that the autoregressive model is capable of learning the data distribution when conditioned on the smoothed data.
C.4 MORE SAMPLES
C.5 NEAREST NEIGHBORS
C.6 ABLATION STUDIES
In this section, we show that gradient-based “single-step denoising” does not improve sample quality without randomized smoothing. To see this, we draw samples from a PixelCNN++ pθ(x) trained directly on pdata(x) (i.e., without smoothing). We perform the “single-step denoising” update defined as x = x + σ²∇x log pθ(x). (14) We explore various values of σ and report the results in figure 26. This shows that “single-step denoising” alone (without randomized smoothing) does not improve the sample quality of PixelCNN++." } ]
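The 1-d study in Appendix B.1 can be reproduced exactly, since the smoothed mixture's score is available in closed form. The following self-contained NumPy sketch (constants taken from the B.1 setup; the code itself is illustrative, not the authors') computes that score and applies the single-step denoising update of equation 14.

```python
# Closed-form "single-step denoising" for the Appendix B.1 setup: data 0.5*N(-0.3, 0.1^2) +
# 0.5*N(0.3, 0.1^2) smoothed by N(0, 0.3^2) gives the mixture 0.5*N(+-0.3, 0.1^2 + 0.3^2),
# whose score grad log p(x_tilde) is analytic, so x_bar = x_tilde + 0.3^2 * score(x_tilde).
import numpy as np

MU, STD, SIGMA = 0.3, 0.1, 0.3          # mixture means +-MU, component std, smoothing std
VAR = STD ** 2 + SIGMA ** 2             # variance of each smoothed mixture component

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def smoothed_density(x):
    return 0.5 * gaussian_pdf(x, -MU, VAR) + 0.5 * gaussian_pdf(x, MU, VAR)

def smoothed_score(x):
    # grad log p = p' / p; each component contributes weight * pdf * (-(x - mean) / var).
    num = (0.5 * gaussian_pdf(x, -MU, VAR) * (-(x + MU) / VAR)
           + 0.5 * gaussian_pdf(x, MU, VAR) * (-(x - MU) / VAR))
    return num / smoothed_density(x)

rng = np.random.default_rng(0)
x = rng.choice([-MU, MU], size=10_000) + STD * rng.standard_normal(10_000)   # clean samples
x_tilde = x + SIGMA * rng.standard_normal(10_000)                            # smoothed samples
x_bar = x_tilde + SIGMA ** 2 * smoothed_score(x_tilde)                       # one-step denoise
```

As B.1 predicts, a histogram of x_bar still places noticeable mass between the two modes, since x̄ approximates E[x|x̃] rather than a sample from p(x|x̃).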
2021
IMPROVED AUTOREGRESSIVE MODELING WITH DISTRIBUTION SMOOTHING
SP:5918a2c105a901f8de4bba248dc283a476d9beac
[ "This work considers an important problem of generating adversarial examples to attack a black-box model. The paper proposes a new approach to consider an adversarial example as a result of a sequence of pixel changes from a benign instance. Therefore, the adversarial generation problem can be considered as a bandit problem, and thus we can leverage Bayesian optimization to search for an instance that maximize the changes on the loss function through a sequence of pixel changes. The evaluation is comprehensive, and demonstrates that fewer number of black-box queries are needed to achieve a higher attack success rate." ]
We present a new method for score-based adversarial attack, where the attacker queries the loss-oracle of the target model. Our method employs a parameterized search space with a structure that captures the relationship of the gradient of the loss function. We show that searching over the structured space can be approximated by a time-varying contextual bandits problem, where the attacker uses the features of the associated arm to make modifications to the input, and receives an immediate reward as the reduction of the loss function. The time-varying contextual bandits problem can then be solved by a Bayesian optimization procedure, which can take advantage of the features of the structured action space. The experiments on ImageNet and the Google Cloud Vision API demonstrate that the proposed method achieves state-of-the-art success rates and query efficiencies for both undefended and defended models.
[]
[ { "authors": [ "Abdullah Al-Dujaili", "Una-May O’Reilly" ], "title": "Sign bits are all you need for black-box attacks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Abdullah Al-Dujaili", "Una-May O’Reilly" ], "title": "Sign bits are all you need for black-box attacks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Maksym Andriushchenko", "Francesco Croce", "Nicolas Flammarion", "Matthias Hein" ], "title": "Square attack: a query-efficient black-box adversarial attack via random search", "venue": null, "year": 2020 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Jianbo Chen", "Michael I Jordan", "Martin J Wainwright" ], "title": "Hopskipjumpattack: A query-efficient decision-based attack", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2020 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Kun Dong", "David Eriksson", "Hannes Nickisch", "David Bindel", "Andrew G Wilson" ], "title": "Scalable log determinants for gaussian process kernel learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jiawei Du", "Hu Zhang", "Joey Tianyi Zhou", "Yi Yang", "Jiashi Feng" ], "title": "Query-efficient meta attack to deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jacob Gardner", "Geoff Pleiss", "Kilian Q Weinberger", "David Bindel", "Andrew G Wilson" ], "title": "Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nikolaus Hansen" ], "title": "The cma evolution strategy: A tutorial", "venue": "arXiv preprint arXiv:1604.00772,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Zhichao Huang", "Tong Zhang" ], "title": "Black-box adversarial attack with transferable model-based embedding", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish 
Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Aleksander Madry" ], "title": "Prior convictions: Black-box adversarial attacks with bandits and priors", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Huichen Li", "Xiaojun Xu", "Xiaolu Zhang", "Shuang Yang", "Bo Li" ], "title": "Qeba: Query-efficient boundarybased blackbox attack", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yandong Li", "Lijun Li", "Liqiang Wang", "Tong Zhang", "Boqing Gong" ], "title": "Nattack: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks", "venue": null, "year": 1905 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "arXiv preprint arXiv:1611.02770,", "year": 2016 }, { "authors": [ "Laurent Meunier", "Jamal Atif", "Olivier Teytaud" ], "title": "Yet another but more efficient black-box adversarial attack: tiling and evolution strategies", "venue": null, "year": 1910 }, { "authors": [ "Jonas Mockus", "Vytautas Tiesis", "Antanas Zilinskas" ], "title": "The application of bayesian methods for seeking the extremum", "venue": "Towards global optimization,", "year": 1978 }, { "authors": [ "Seungyong Moon", "Gaon An", "Hyun Oh Song" ], "title": "Parsimonious black-box adversarial attacks via efficient combinatorial optimization", "venue": "arXiv preprint arXiv:1905.06635,", "year": 2019 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "arXiv preprint arXiv:1605.07277,", "year": 2016 }, { "authors": [ "Carl Edward Rasmussen" ], "title": "Gaussian processes in machine learning", "venue": "In Summer School on Machine Learning,", "year": 2003 }, { "authors": [ "Binxin Ru", "Adam Cobb", "Arno Blaas", "Yarin Gal" ], "title": "Bayesopt adversarial attack", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Satya Narayan Shukla", "Anit Kumar Sahu", "Devin Willmott", "J Zico Kolter" ], "title": "Black-box adversarial attacks with bayesian optimization", "venue": null, "year": 1909 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations,", "year": 2014 
}, { "authors": [ "Chun-Chen Tu", "Paishun Ting", "Pin-Yu Chen", "Sijia Liu", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh", "Shin-Ming Cheng" ], "title": "Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks", "venue": "arXiv preprint arXiv:1805.11770,", "year": 2018 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence),", "year": 2008 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "arXiv preprint arXiv:1812.03411,", "year": 2018 } ]
[ { "heading": null, "text": "We present a new method for score-based adversarial attack, where the attacker queries the loss-oracle of the target model. Our method employs a parameterized search space with a structure that captures the relationship of the gradient of the loss function. We show that searching over the structured space can be approximated by a time-varying contextual bandits problem, where the attacker takes feature of the associated arm to make modifications of the input, and receives an immediate reward as the reduction of the loss function. The time-varying contextual bandits problem can then be solved by a Bayesian optimization procedure, which can take advantage of the features of the structured action space. The experiments on ImageNet and the Google Cloud Vision API demonstrate that the proposed method achieves the state of the art success rates and query efficiencies for both undefended and defended models." }, { "heading": "1 INTRODUCTION", "text": "Although deep learning has many applications, it is known that neural networks are vulnerable to adversarial examples, which are small perturbations of inputs that can fool neural networks into making wrong predictions (Szegedy et al., 2014). While adversarial noise can easily be found when the neural models are known (referred to as white-box attack) (Kurakin et al., 2016). However, in real world scenarios models are often unknown, this situation is referred to as black-box attack.\nSome methods (Liu et al., 2016; Papernot et al., 2016) use the transfer-based attack, which generates adversarial examples on a substitute model and transfer the adversarial noise to the target model. However, the transferability is limited and its effectiveness relies highly on the similarity between the networks (Huang & Zhang, 2020). If two networks are very different, transfer-based methods will have low success rates.\nIn practice, most computer vision API such as the Google Cloud Vision API allow users to access the scores or probabilities of the classification results. Therefore, the attacker may query the black-box model and perform zeroth order optimization to find an adversarial example without the knowledge of the target model. Due to the availability of scores, this scenario is called score-based attack.\nThere have been a line of studies on black-box attack which directly estimate the gradient direction of the underlying model, and apply (stochastic) gradient descent to the input image (Ilyas et al., 2018; 2019; Chen et al., 2017; Huang & Zhang, 2020; Tu et al., 2018; Li et al., 2019). In this paper, we take another approach and formulate score-based attack as a time-varying contextual bandits problem. At each state, the attacker may change the adversarial perturbation and get the reward as the reduction of the loss. And the attacker would receive some features about the arms before making the decision. By limiting the action space to image blocks, the associated bandits problem exhibits local correlation structures and the slow varying property suitable for learning. Therefore, we may use the location and other features of the blocks to estimate the reward for the future selection of the actions.\nUsing the above insights, we propose a new method called CorrAttack, which utilizes the local correlation structure and the slow varying property of the underlying bandits problem. CorrAttack uses Bayesian optimization with Gaussian process regression (Rasmussen, 2003) to model the correlation and select optimal actions. 
A forgetting strategy is added to the algorithm so that the Gaussian process regression can handle the time-varying changes. CorrAttack can effectively find blocks with the largest rewards. The resulting method achieves much lower average numbers of queries and higher success rates than prior methods with a similar action space (Moon et al., 2019).
It is worth noting that BayesOpt (Ru et al., 2020) and Bayes-Attack (Shukla et al., 2019) also employ Bayesian optimization for score-based attack. However, their Gaussian process regression directly models the loss as a function of the image, whose dimension can be more than one thousand. Therefore, their speed is slow, especially for BayesOpt, which uses a slow additive kernel. CorrAttack, on the other hand, searches over a much more limited action space and models the reward as a function of low-dimensional features. Therefore, the optimization of CorrAttack is more efficient, and the method is significantly faster than BayesOpt.
We summarize the contributions of this work as follows:
1. We formulate the score-based adversarial attack as a time-varying contextual bandits problem, and show that the reward function has the slow varying property. In our new formulation, the attacker can take advantage of the features to model the reward of the arms with learning techniques. Compared to the traditional approach, the use of learning in the proposed framework greatly improves the efficiency of searching for optimal actions.
2. We propose a new method, CorrAttack, which uses Bayesian optimization with Gaussian process regression to learn the reward of each action, using the features of the arms.
3. The experiments show that CorrAttack achieves state-of-the-art performance on ImageNet and the Google Cloud Vision API for both defended and undefended models." }, { "heading": "2 RELATED WORK", "text": "There has been a line of works focusing on black-box adversarial attack. Here, we give a brief review of various existing methods.
Transfer-Based Attack Transfer-based attack assumes the transferability of adversarial examples across different neural networks. It starts with a substitute model that is in the same domain as the target model. The adversaries can be easily generated on the white-box substitute model and transferred to attack the target model (Papernot et al., 2016). The approach, however, depends highly on the similarity of the networks. If two networks are distinct, the success rate of the transferred attack rapidly decreases (Huang & Zhang, 2020). Besides, we may not be able to access the data for training the substitute model in practice.
Score-based Attack Many approaches estimate the gradient with the output scores of the target network. However, the high dimensionality of input images makes naive coordinate-wise search impossible, as it requires millions of queries. ZOO (Chen et al., 2017) is an early work on gradient estimation, which estimates the gradient of an image block and performs block-wise gradient descent. NES (Wierstra et al., 2008) and CMA-ES (Hansen, 2016) are two evolution strategies that can perform query-efficient score-based attacks (Ilyas et al., 2018; Meunier et al., 2019). Instead of the gradient itself, SignHunter (Al-Dujaili & O’Reilly, 2020a) estimates only the sign of the gradient to reduce the complexity. AutoZOOM (Tu et al., 2018) uses a bilinear transformation or an autoencoder to reduce the sampling space and accelerate the optimization process.
In the same spirit, data priors can be used to improve query efficiency (Ilyas et al., 2019). Besides, MetaAttack (Du et al., 2020) takes a meta-learning approach to learn gradient patterns from prior information, which reduces the queries needed for attacking the target model.
Many zeroth-order optimization methods for black-box attacks rely on gradient estimation. However, there are some research works using gradient-free methods to perform black-box attack. BayesOpt and Bayes-Attack (Ru et al., 2020; Shukla et al., 2019) employ Bayesian optimization to find the adversarial examples. They use Gaussian process regression on an embedding and apply a bilinear transformation to resize the embedding to the size of the image. Although the bilinear transformation can alleviate the high dimensionality of images, the dimension of their embeddings is still in the thousands, which makes Bayesian optimization very ineffective and computationally expensive. A different method, PARSI, poses the attack under the ℓ∞ norm as a discrete optimization problem over {−ε, ε}^d (Moon et al., 2019). It uses a Lazy-Greedy algorithm to search over the space {−ε, ε}^d to find an adversarial example. SimBA (Guo et al., 2018) also employs a discrete search space, targeted at the ℓ2 norm.
Decision-based Attack Decision-based attack assumes the attacker can only get the output label of the model. Boundary Attack and its variants (Brendel et al., 2017; Chen et al., 2020; Li et al., 2020) are designed for this setting. However, the information received by the attacker is much less than in score-based attack, and it would take many more queries than score-based attack to successfully attack an image." }, { "heading": "3 PRELIMINARIES", "text": "A Gaussian process (Rasmussen, 2003) is a prior distribution defined on some bounded set Z, and is determined by a mean function µ : Z → R and a covariance kernel κ : Z × Z → R. Given n observations Dn = {(z_i, f(z_i))}_{i=1}^n, the prior distribution on f(z_{1:n}) is
f(z_{1:n}) ∼ Normal(µ_0(z_{1:n}), κ_0(z_{1:n}, z_{1:n})), (1)
where we use compact notation for functions applied to collections of input points: z_{1:n} indicates the sequence z_1, · · · , z_n, f(z_{1:n}) = [f(z_1), · · · , f(z_n)], µ_0(z_{1:n}) = [µ_0(z_1), · · · , µ_0(z_n)], and κ_0(z_{1:n}, z_{1:n}) = [κ_0(z_1, z_1), · · · , κ_0(z_1, z_n); · · · ; κ_0(z_n, z_1), · · · , κ_0(z_n, z_n)]. If we now wish to infer the value of f(z) at some new point z, the posterior process f(z)|Dn is also a Gaussian process (GP) with mean µ_n and variance σ_n²:
f(z)|Dn ∼ Normal(µ_n(z), σ_n²(z)), (2)
µ_n(z) = κ_0(z, z_{1:n}) κ_0(z_{1:n}, z_{1:n})^{-1} (f(z_{1:n}) − µ_0(z_{1:n})) + µ_0(z),
σ_n²(z) = κ_0(z, z) − κ_0(z, z_{1:n}) κ_0(z_{1:n}, z_{1:n})^{-1} κ_0(z_{1:n}, z).
As an optimization method to maximize a function f, Bayesian optimization models the function to make decisions about where to evaluate the next point z. Assuming we have already obtained observations D_{t−1} = {(z_i, f(z_i))}_{i=1}^{t−1}, to determine the next point z_t for evaluation, we first use the posterior GP to define an acquisition function ϕ_t : Z → R, which models the utility of evaluating f(z) for any z ∈ Z. We then evaluate f(z_t) with
z_t = arg max_Z ϕ_t(z). (3)
In this work, we use the expected improvement (EI) acquisition function (Mockus et al., 1978)
ϕ_t(z) = √(σ_n²(z)) (γ(z)Φ(γ(z)) + φ(γ(z))) with γ(z) = (µ_n(z) − f(z_best)) / √(σ_n²(z)), (4)
which measures the expected improvement over the current best value z_best = arg max_{z_i} f(z_i) according to the posterior GP. Here Φ(·) and φ(·) are the cdf and pdf of N(0, I), respectively.
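To make the preliminaries concrete, here is a minimal NumPy sketch of the GP posterior of equation (2) and the EI acquisition of equation (4). It uses an RBF kernel as an illustrative stand-in for the Matérn-5/2 kernel the paper fits with GPyTorch (see Appendix B.1); the hyperparameter values are placeholders.

```python
# Exact GP posterior (eq. 2, zero prior mean) and expected improvement (eq. 4).
# z_train: (n, d) evaluated features; f_train: (n,) observed values; z_query: (m, d) candidates.
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, lengthscale=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(z_train, f_train, z_query, lengthscale=1.0, noise=1e-4):
    k_tt = rbf_kernel(z_train, z_train, lengthscale) + noise * np.eye(len(z_train))
    k_qt = rbf_kernel(z_query, z_train, lengthscale)
    k_inv = np.linalg.inv(k_tt)
    mu = k_qt @ k_inv @ f_train                                # posterior mean mu_n(z)
    var = 1.0 - np.einsum('ij,jk,ik->i', k_qt, k_inv, k_qt)    # posterior variance sigma_n^2(z)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, f_best):
    gamma = (mu - f_best) / np.sqrt(var)                       # eq. (4), maximization form
    return np.sqrt(var) * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
```

Maximizing `expected_improvement` over the candidate features then yields the next point to evaluate, as in equation (3).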
}, { "heading": "4 SCORE-BASED BLACK-BOX ATTACK", "text": "Suppose a classifier F (x) has input x and label y. An un-targeted adversarial example xadv satisfies:\narg max j∈{1,···C}\nF (xadv)j 6= y and ‖xadv − x‖p ≤ ε, (5)\nwhere C is the number of classes. While an adversarial example for targeted attack means the maximum position of F (x) should be the targeted class q: arg maxj∈{1,···C} F (xadv)j = q. In order to find xadv , we may optimize a surrogate loss function `(x, y) (e.g hinge loss).\nIn this work, we consider adversarial attack as a time-varying contextual bandits problem. At each time t, we observe a state xt which is a modification of the original input x0. Before taking arm at ∈ A ⊂ Rd, we could observe the feature z of arms. And at would modify state xt to xt+1 according to\nxt+1 = arg min s∈{xt+at,xt} `(ΠBp(x,ε) (s) , y) (6)\nwith reward function r(xt, at) = `(xt+1, y)− `(xt, y) and the checking step tries to remove negative reward. In this frame, we would like to estimate the reward r(xt, at) with feature zt using learning, and then pick at to maximize the reward. Observe that\nr(xt, at) ≈ ∇x`(xt, y)>(xt+1 − xt), (7)\nwhere the gradient ∇xt`(xt, y) is unknown. It follows from the formulation that we may rewrite r(xt, at) as a function r(xt, at) ≈ f(xt, xt+1 − xt). Since in general, we make small steps from one iteration to the next iteration, δt(at) = xt+1 − xt is small. We may approximate the reward with fixed gradient locally with\nf(xt, δt) = f̃t(at),\nWe may consider the learning of reward as a time-varying contextual bandits problem with reward function f̃t(at) for arm at at time t. Since xt+1 − xt is small, this time-varying bandits has slowvarying property: the function f̃t changes slowly from time t to time t+ 1.\nIn the proposed framework, our goal is to learn the time-varying bandits reward f̃t(at) with feature zt. We use Gaussian process regression to model the reward function using recent historic data since the reward function is slow-varying, and describe the details in the subsequent sections.\nWe note that the most general action space contains all at ∈ Rd, where d is the number of image pixels. However, it is impossible to explore the arms in such a large space. In this work, we choose a specific class of actions A = {ai}ni=1, n is the image blocks of different sizes. It covers the space of the adversarial perturbations while maintaining good complexity. We also find the location and the PCA of the blocks a good component of the feature z associated with the arm. Besides, modifying a block only affects the state locally. Therefore the reward function remains similar after state changes." }, { "heading": "4.1 STRUCTURED SEARCH WITH GAUSSIAN PROCESS REGRESSION AND BAYESIAN OPTIMIZATION", "text": "Define the block size as b, we divide the image into several blocks E = {e000, e001, · · · , ehwc}, where the block is b× b square of pixels and (h,w, c) = (height/b,width/b, channel). Each block eijk is associated with the feature zeijk such as the location of the block.\nSuppose we have time-varying bandits with state xt and unknown reward function f̃t at time t. By taking the action aeijk , we change the individual block eijk of xt and get xt+1 with reward f̃t(aeijk). We consider two ways of taking action aeijk on block eijk : CorrAttackDiff and CorrAttackFlip.\nFinite Difference CorrAttackDiff: For action aeijk , the attacker will query `(xt + ηeijk, y) and `(xt − ηeijk, y), and choose\naeijk = arg min s∈{ηeijk,−ηeijk} `(xt + s, y). 
The action space is A = {aeijk | eijk ∈ E}. In our framework, the bandits problem can also be regarded as learning the conditional gradient over actions. That is, when η is small, we try to choose the action at with
at = arg min_{eijk∈E} eijk⊤∇xtℓ(xt, y) (9)
which is the conditional gradient over the set of blocks.
Discrete Approximation CorrAttackFlip: In general, adversarial attack with an ℓ∞ budget can be formulated as constrained optimization with ‖xadv − x‖∞ ≤ ε. However, PARSI (Moon et al., 2019) limits the space to {−ε, +ε}^d, which leads to better performance for black-box attack (Moon et al., 2019). The continuous optimization problem then becomes a discrete optimization problem as follows:
maximize ℓ(xadv, y) subject to ‖xadv − x‖∞ ≤ ε  =⇒  maximize ℓ(xadv, y) subject to xadv − x ∈ {ε, −ε}^d. (10)
Following PARSI, we consider two stages to perform structured search. When flipping ε to −ε, aeijk changes the block to −ε and A = {−2εeijk | eijk ∈ E}. When changing −ε to ε, A = {2εeijk | eijk ∈ E} instead.
Gaussian Process (GP) Regression: We model the difference function
gt(at) = ℓ(ΠBp(x,ε)(xt + at), y) − ℓ(xt, y) (11)
instead of the reward function f̃t(at) ≥ 0, as the difference function can be negative, providing more information about the negative arms in A. We collect historic actions with their features and differences, {(zk, gk(ak))} for k = 1, . . . , t, and learn the difference to make choices at a later stage. At each time t, we use Gaussian process regression to model the correlation between the features zeijk and use Bayesian optimization to select the next action. More specifically, as in eq. (2), we let
gt(aeijk)|Dt ∼ Normal(µt(zeijk), σt²(zeijk)), (12)
where Dt = {(zk, gk(ak))} for k = t − τ, . . . , t contains the differences of the evaluated blocks with their features, and τ is a parameter to forget old samples. Then we use the EI acquisition function to pick the next action at+1 in A. More specifically, as in eq. (4), we let
at+1 = arg max_A √(σt²(zeijk)) (γ(zeijk)Φ(γ(zeijk)) + φ(γ(zeijk))) (13)
As the difference function gt is varying, we take two strategies in Algorithm 2 to update the previous samples to make sure the GP regression learns the current difference function well. The first strategy is to remove old samples from Dt. Even though the bandits problem is slowly varying, the difference function will change substantially after many rounds. Therefore, we need to forget samples before t − τ. The second strategy is to remove samples near the last block eitjtkt from Dt. As we discuss later, the difference function may change significantly in a local region near the last selected block. Therefore previous samples in this local region will be inaccurate.
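Before stating the full procedure, a hedged sketch of these two ingredients may help: the block action of equation (8) and the difference function of equation (11). Here `loss_fn` stands in for the loss oracle ℓ(·, ·); the helper names and the (C, H, W) NumPy layout are illustrative assumptions, not the paper's code.

```python
# Block actions for CorrAttackDiff and the difference g_t of eq. (11). Each b x b block of each
# channel is one arm; evaluating an arm costs two oracle queries (eq. 8).
import numpy as np

def block_masks(shape, b):
    """Yield ((i, j, k), mask) pairs, one indicator e_ijk per block and channel."""
    c, h, w = shape
    for k in range(c):
        for i in range(h // b):
            for j in range(w // b):
                mask = np.zeros(shape)
                mask[k, i * b:(i + 1) * b, j * b:(j + 1) * b] = 1.0
                yield (i, j, k), mask

def project_linf(x_adv, x, eps):
    return np.clip(x_adv, x - eps, x + eps)     # projection onto the l_inf ball around x

def diff_action(loss_fn, x_t, y, mask, eta):
    """Equation (8): keep the signed step on this block with the smaller loss."""
    losses = {s: loss_fn(x_t + s * eta * mask, y) for s in (+1.0, -1.0)}
    sign = min(losses, key=losses.get)
    return sign * eta * mask

def difference(loss_fn, x_t, y, x0, action, eps):
    """Equation (11): loss change of the projected candidate relative to x_t."""
    return loss_fn(project_linf(x_t + action, x0, eps), y) - loss_fn(x_t, y)
```

For CorrAttackFlip, the same scaffolding applies with actions ±2ε·mask restricted to the blocks whose current sign should be flipped.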
The resulting algorithm for CorrAttack is shown in Algorithm 1, which mainly follows the standard procedure of Bayesian optimization.
Algorithm 1 CorrAttack
Require: Loss function ℓ(·, ·), Input x0 and its label y, Action space A = {aeijk | eijk ∈ E}, Parameters c, τ, α
1: Build the set D0 = {(zeipjpkp, g0(aeipjpkp))} for p = 1, . . . , m using Latin hypercube sampling from A
2: repeat
3: Fit the parameters of Normal(µt(zeijk), σt²(zeijk)) on Dt according to Equation (12)
4: Calculate the acquisition function ϕt(zeijk) according to Equation (13)
5: Select aeitjtkt = arg max_A ϕt(zeijk) according to Equation (13)
6: xt+1 = arg min_{s∈{xt+aeitjtkt, xt}} ℓ(ΠBp(x,ε)(s), y)
7: Update the sample set Dt with Algorithm 2: Dt+1 = UPDATESAMPLES(Dt, xt, xt+1, eitjtkt, gt+1(aeitjtkt), τ, α)
8: until max_A ϕt(zeijk) < c
9: return xT
Algorithm 2 Update Samples
Require: Sample set Dt, States xt, xt+1, Block eitjtkt, Difference gt+1(aeitjtkt), Parameters τ, α
1: if xt+1 ≠ xt then
2: Dt+1 = Dt \ {(zeijk, g) ∈ Dt : |i − it| + |j − jt| ≤ α}
3: else
4: Dt+1 = Dt ∪ {(zeitjtkt, gt+1(aeitjtkt))}
5: end if
6: Remove the earliest sample from D if the cardinality |D| > τ
7: return Dt+1" }, { "heading": "4.2 FEATURES AND SLOW VARYING PROPERTY", "text": "Features of Contextual Bandits: We use a four-dimensional vector as the feature zeijk:
zeijk = (i, j, k, pca) (14)
where (i, j, k) is the location of the block, and pca is the first component of the PCA decomposition of [x0(e000), x0(e001), · · · , x0(ehwc)]. Here x0(eijk) means the block of the natural image at the given position.
The reward function depends on the gradient in Equation (7). It has been shown that the gradient ∇xℓ(x, y) has local dependencies (Ilyas et al., 2019). Suppose two coordinates eijk and elpq are close; then ∇xℓ(x, y)ijk ≈ ∇xℓ(x, y)lpq. We consider the finite difference of the block eijk:
∆t(eijk) = ℓ(xt + ηeijk, y) − ℓ(xt − ηeijk, y) ≈ 2η eijk⊤∇xtℓ(xt, y) (15)
where η is a small step size. When η is small, the reward can be approximated by the average of the gradients in a small region, which also has local dependencies. In fact, the local structure of the reward is also preserved when the block size and η are large. Figure 1 shows one example of the finite difference ∆t(eijk) obtained on the ImageNet dataset with ResNet50. It shows that blocks with closer locations are more likely to have similar rewards. Therefore, we add the location of the block to the feature so that the regression can use historic data to find the arm with the largest reward.
In addition to the location, we may add other features. The block of the image itself forms a strong feature for the regression, but its dimension is too high for GP regression. Therefore, we use PCA to lower the dimension and add the first component to the feature vector.
Slow Varying Property In addition to the local dependencies of the finite difference, the difference is also slow varying if we only change a small region of xt. Let xt+1 = xt − ηeitjtkt. Figure 2 shows the difference between ∆t(eijk) and ∆t+1(eijk), which is concentrated in a small region near eitjtkt. The reward function is based on the finite difference, and therefore also has the slow varying property. This can be explained by the locality of convolutions. When η is small, the finite difference can be approximated with the gradient and the local Hessian:
∆t+1(eijk) − ∆t(eijk) ≈ η² eijk⊤∇²xtℓ(xt, y) eitjtkt (16)
The difference is much smaller than ∆t(eijk). Today’s neural networks are built with stacks of convolutions and non-linear operations.
Since these operations are localized in a small region, the Hessian of a neural network is also localized, and the reward function only changes near eitjtkt." }, { "heading": "4.3 HIERARCHICAL BAYESIAN OPTIMIZATION SEARCH", "text": "Recent black-box approaches (Chen et al., 2017; Moon et al., 2019) exploit the hierarchical image structure for query efficiency. Following these approaches, we take a hierarchical approach and perform the accelerated local search in Algorithm 1 from a coarse grid (large blocks) to a fine grid (smaller blocks). The algorithm for the hierarchical attack iteratively performs Algorithm 1 at one block size, and then divides the blocks into smaller sizes. At each block size, we build a Gaussian process to model the difference function and perform structured search with the blocks until max_A ϕt(zeijk) < c. When dividing the blocks into smaller sizes, we build a new block set E with actions aeijk and new features zeijk, but keep the xt from the last block size as the x0 for the new block size. Define the stages as S = {0, 1, · · · , s} and the initial block size as b. A block at stage s is a (b/2^s) × (b/2^s) square of pixels, and (h, w, c) = (2^s · height/b, 2^s · width/b, channel). The overall hierarchical accelerated local search algorithm is shown in Appendix A. It is important to note that most of the attacks terminate in the early stages and rarely need to run on fine scales." }, { "heading": "5 EXPERIMENTS", "text": "We evaluated the number of queries versus the success rates of CorrAttack on both undefended and defended networks on ImageNet (Russakovsky et al., 2015). Moreover, we attacked the Google Cloud Vision API to show that CorrAttack can generalize to a true black-box model.
We used the common hinge loss proposed in the CW attack (Carlini & Wagner, 2017). We compared the two versions of CorrAttack, CorrAttackDiff and CorrAttackFlip, with ZOO (Chen et al., 2017), NES (Ilyas et al., 2018), NAttack (Li et al., 2019), Bandits (Ilyas et al., 2019), PARSI (Moon et al., 2019), Square Attack (Andriushchenko et al., 2020), SignHunter (Al-Dujaili & O’Reilly, 2020b), BayesOpt (Ru et al., 2020) and Bayes-Attack (Shukla et al., 2019). We only test adversarial attacks under the ℓ∞ norm. The details of the Gaussian process regression and the hyperparameters of CorrAttack are given in Appendix B. We shall mention that CorrAttack is not sensitive to the hyperparameters. The hyperparameters of the other methods follow those suggested by the original papers." }, { "heading": "5.1 UNDEFENDED NETWORK", "text": "We randomly select 1000 images from the validation set of ImageNet and only attack correctly classified images. The query efficiency of CorrAttack is tested on VGG16 (Simonyan & Zisserman, 2014), Resnet50 (He et al., 2016) and Densenet121 (Huang et al., 2017), which are the most commonly used network structures. We set ε = 0.05 and the query limit to 10000, except for BayesOpt and Bayes-Attack. For targeted attacks, we randomly choose the target class for each image, and the target classes are kept the same across the evaluation of the different algorithms. The results are shown in Tables 1 and 2. CorrAttackFlip outperforms the other methods by a large margin.
As BayesOpt and Bayes-Attack take tens of thousands of hours to attack 1000 images, we compare them with CorrAttackFlip only on 20 images and un-targeted attacks. The query limit is also reduced to 1000, as the time for BayesOpt and Bayes-Attack quickly increases as more samples are added to the Gaussian process.
The time comparison between the three models is shown in Appendix C.6.
Optimality of Bayesian optimization Appendix C.1 shows the rank of the actions found by CorrAttack. The attacker can find actions with large rewards quickly.
Varying ε We also test the algorithms at different budgets of adversarial perturbation, ε = 0.04 and ε = 0.06, on Resnet50. As shown in Appendix C.2, CorrAttack shows consistently better performance at different ε.
Ablation study on random choices Appendix C.3 shows the ablation study of the random versions of CorrAttackDiff and CorrAttackFlip. In both cases, Bayesian optimization helps to gain better query efficiency.
Ablation study on hierarchical attack We perform an ablation study on the hierarchical attack, and the result is shown in Appendix C.4. The hierarchical structure accelerates CorrAttack and eliminates the sensitivity to the choice of the initial block size.
Ablation study on features Appendix C.5 demonstrates how the features of the contextual bandits affect the performance of the attack. PCA helps to improve the efficiency of the attack." }, { "heading": "5.2 DEFENDED NETWORK", "text": "To evaluate the effectiveness of CorrAttack on adversarially defended networks, we tested our method on one of the SOTA robust models (Xie et al., 2018) on ImageNet. The weights are downloaded from GitHub1. "ResneXt DenoiseAll" is chosen as the target model as it achieves the best performance. We set ε = 0.05, and the maximum number of queries is 10000. As BayesOpt runs very slowly, its attack is performed on 10 images and the query limit is 1000. The result is shown in Table 3. CorrAttackFlip still outperforms the other methods." }, { "heading": "5.3 GOOGLE CLOUD VISION API", "text": "We also attacked the Google Cloud Vision API, a real-world black-box model for classification. The target is to remove the top-1 label from the classification output. We choose 10 images from the ImageNet dataset and set the query limit to 500 due to the high cost of using the API. We compare CorrAttackFlip with NAttack, BayesOpt and PARSI. The result is shown in Table 4. We also show one example of the classification output in Appendix C.9.
1https://github.com/facebookresearch/ImageNet-Adversarial-Training" }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We formulate the score-based adversarial attack as a time-varying contextual bandits problem and propose a new method, CorrAttack. By performing structured search on the blocks of the image, the bandits problem has the slow varying property. CorrAttack takes advantage of the features of the arms, and uses Bayesian optimization with Gaussian process regression to learn the reward function. The experiments show that CorrAttack can quickly find actions with large rewards, and CorrAttack achieves superior query efficiency and success rates on ImageNet and the Google Cloud Vision API.
We only include basic features for learning the bandits. Other features, like embeddings from the transfer-based attack (Huang & Zhang, 2020), may be taken into account in future work. While our work only focuses on adversarial attack under the ℓ∞ norm, the same contextual bandits formulation could be generalized to other ℓp norms to improve query efficiency. Besides, defense against CorrAttack may be achieved with adversarial training on CorrAttack, but it may not be able to defend against other attacks at the same time."
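The coarse-to-fine schedule of Section 4.3 amounts to repeatedly quartering each block, which Appendix A below states as Algorithm 3. A small illustrative sketch of that schedule (block bookkeeping only; the tuple convention is an assumption for illustration):

```python
# Stage s uses (b / 2**s) x (b / 2**s) blocks; splitting a block yields its four quadrants.
def split_blocks(blocks):
    """Mirror of Algorithm 3: replace every (row, col, channel, size) block by 4 children."""
    children = []
    for (i, j, k, size) in blocks:
        half = size // 2
        for di in (0, half):
            for dj in (0, half):
                children.append((i + di, j + dj, k, half))
    return children

def stage_schedule(height, width, channels, b=32, min_size=2):
    """Enumerate the block sets for stages s = 0, 1, ... down to min_size-sized blocks."""
    blocks = [(i, j, k, b) for k in range(channels)
              for i in range(0, height, b)
              for j in range(0, width, b)]
    stages = [blocks]
    while stages[-1][0][3] > min_size:
        stages.append(split_blocks(stages[-1]))
    return stages
```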
}, { "heading": "A ALGORITHM", "text": "Algorithm 3 Split Block Require: Set of blocks E, Block size b, E′ = ∅ 1: for each block e ∈ E do 2: Split the block e into 4 blocks {e1, e2, e3, e4} with size b/2 3: E′ ← E′ ∪ {e1, e2, e3, e4} 4: end for 5: return E′;\nAlgorithm 4 Hierarchical CorrAttackDiff Require: Loss function `(·, ·), Input image x and its label y, Initial Block size b, Set of blocks E containing all\nblocks of the image, Threshold c, τ , α, Step size η, Adversarial budget ε 1: x0 = x 2: repeat 3: Choose A = {aeijk |eijk ∈ E} with Equation (8) 4: Run CorrAttack on current block size x = CORRATTACK (`(·, ·), x, y, A, c, τ, α) 5: if b > 1 then 6: Split the blocks into finer blocks using Algorithm 3 E = SPLITBLOCK(E, b) 7: b← b/2 8: end if 9: until ` converges\n10: return xK ;\nAlgorithm 5 Hierarchical CorrAttackFlip Require: Loss function `(·, ·), Input image x and its label y, Block size b, Set of blocks E containing all blocks\nof the image, Threshold c, τ , α, Adversarial budget ε 1: x0 = x 2: for eijk ∈ E do 3: Randomly draw v from {−ε, ε} 4: x0[eijk] = v + x0[eijk] 5: end for 6: repeat 7: An = {2εeijk ∈ E|e>ijk(xk − x) < 0} 8: Run CorrAttack flipping −ε to ε x̃k = CORRATTACK (`(·, ·), xk, y, An, c, τ, α) 9: Ap = {−2εeijk ∈ E|e>ijk(x̃k − x) > 0} 10: Run CorrAttack flipping ε to −ε xk+1 = CORRATTACK (`(·, ·), x̃k, y, Ap, c, τ, α) 11: if b > 1 then 12: Split the blocks into finer blocks using Algorithm 3 E = SPLITBLOCK(E, b) 13: b← b/2 14: end if 15: until ` converges 16: return xK ;" }, { "heading": "B DETAILS OF EXPERIMENT SETTING", "text": "We use the hinge loss for all the experiments. For un-targeted attacks,\n`untarget(x, y) = max { F (x)y −max\nj 6=y F (x)j ,−ω\n} (17)\nand for targeted attacks,\n`target(x, y) = max { max j F (x)j − F (x)t,−ω } . (18)\nHere F represents the logits of the network outputs, t is the target class, and ω denotes the margin. The image will be projected into the ε-ball. Besides, the value of the image will be clipped to range [0, 1]." }, { "heading": "B.1 GAUSSIAN PROCESS REGRESSION AND BYAESIAN OPTIMIZATION", "text": "We further provide details on both the computational scaling and modeling setup for the GP regression.\nTo address computational issues, we use GPyTorch (Gardner et al., 2018) for scalable GP regression. GPyTorch follows (Dong et al., 2017) to solve linear systems using the conjugate gradient (CG) method and approximates the log-determinant via the Lanczos process. Without GPyTorch, running BO with a GP regression for more than a few thousand evaluations would be infeasible as classical approaches to GP regression scale cubically in the number of data points.\nOn the modeling side, the GP is parameterized using a Matérn-5/2 kernel with ARD and a constant mean function for all experiments. The GP hyperparameters are fitted before proposing a new batch by optimizing the log-marginal likelihood. The domain is rescaled to [0, 1]d and the function values are standardized before fitting the GP regression. We use a Matérn-5/2 kernel with ARD for CorrAttack and use the following bounds for the hyperparameters: (length scale) λi ∈ [0.005, 2.0 ], (output scale) λ′i ∈ [0.05, 20.0], (noise variance) σ2 ∈ [0.0005, 0.1]." }, { "heading": "B.2 HYPERPARAMETERS", "text": "For CorrAttack in Algorithm 4 and Algorithm 5, we set the initial block size b to be 32 and the step size η for CorrAttackDiff is 0.03. 
In Algorithm 1, we use an initial sampling ratio of m = 0.03n at the starting point of the Gaussian process regression, and the threshold c = 10^−4 to decide when to stop the search at the current block size. In Algorithm 2, the threshold is different for different block sizes. For CorrAttackFlip, α = 1, 1, 2, 2, 3 for block sizes 32, 16, 8, 4, 2, and for CorrAttackDiff, α = 0, 0, 1, 1, 2 for block sizes 32, 16, 8, 4, 2. We set τ = 3m = 0.09n to remove the earliest samples from D once |D| > τ. The Adam optimizer is used to optimize the mean µ and covariance κ of the Gaussian process, where the number of iterations is 1 and the learning rate is 0.1.
For PARSI, the block size is set to 32 as in CorrAttack; the other hyperparameters are the same as in the original paper.
For Bandits, Bayes-Attack and BayesOpt, the hyperparameters are the same as in the original papers.
We optimize the hyperparameters for ZOO and NES. For un-targeted attacks with NES, we set the sample size to 50 and the learning rate to 0.1. For targeted attacks with NES, the sample size is also 50 and the learning rate is 0.05. The learning rate is decayed by 50% if the loss does not decrease for 20 iterations.
For NAttack, we set the hyperparameters the same as for NES and add momentum and learning rate decay, which are not mentioned in the original paper.
For ZOO, we set the learning rate to 1.0 and the sample size to 50. Other settings follow the original paper." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "C.1 OPTIMALITY OF BAYESIAN OPTIMIZATION", "text": "Figure 3 shows the rewards of the actions that Bayesian optimization finds in the action set. CorrAttack can find actions with high rewards within just a few queries. This shows that the Gaussian process regression can model the correlation of the reward function, and Bayesian optimization can use it to optimize the time-varying contextual bandits problem.
C.2 VARYING THE ADVERSARIAL BUDGET
We test CorrAttack with different adversarial budgets on ImageNet for both un-targeted and targeted attacks. Table 5 and Table 6 show the success rates and average queries for ε = 0.04, 0.05, 0.06. CorrAttackFlip achieves the best performance among all methods." }, { "heading": "C.3 ABLATION STUDY ON RANDOM CHOICES", "text": "Table 7 and Table 8 show the ablation study on the strategy for choosing the action xt+1 in line 6 of Algorithm 1. The Bayesian optimization procedure helps to accelerate the optimization. As targeted attack is more complicated and requires a larger number of queries, CorrAttack has a larger advantage in this scenario." }, { "heading": "C.4 ABLATION STUDY ON HIERARCHICAL ATTACK", "text": "We perform un-targeted attacks on Resnet50 as shown in Table 10. The hierarchical attack lowers the average queries and improves the query efficiency. Besides, the hierarchical attack avoids the problem of choosing a block size. As shown in Table 10, the block size for the non-hierarchical attack is essential to its performance." }, { "heading": "C.5 ABLATION STUDY ON FEATURES", "text": "Table 11 shows the success rates and average queries for CorrAttack with different features. We perform an ablation study on the features of the contextual bandits. One feature set contains just the location of the block, and the other contains both the location and the PCA feature. PCA helps the learning process of the reward and achieves a higher success rate and a lower number of queries. The PCA feature achieves a significant improvement for CorrAttackFlip. We may find more useful features in the future."
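To illustrate the feature vector that C.5 ablates, the sketch below builds zeijk = (i, j, k, pca) of equation (14) with a plain SVD-based PCA over the flattened block contents of the natural image x0. The (C, H, W) layout and function name are assumptions for illustration, not the paper's implementation.

```python
# Build the 4-d features of eq. (14): block location (i, j, k) plus the first principal
# component score of the block's pixels in the natural image x0.
import numpy as np

def block_features(x0, b):
    c, h, w = x0.shape
    patches, coords = [], []
    for k in range(c):
        for i in range(h // b):
            for j in range(w // b):
                patches.append(x0[k, i * b:(i + 1) * b, j * b:(j + 1) * b].ravel())
                coords.append((i, j, k))
    patches = np.stack(patches)                        # (num_blocks, b * b)
    centered = patches - patches.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pca1 = centered @ vt[0]                            # first principal component scores
    return np.array([(i, j, k, p) for (i, j, k), p in zip(coords, pca1)])
```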
}, { "heading": "C.6 COMPARISON BETWEEN CORRATTACKFLIP , BAYESOPT AND BAYES-ATTACK", "text": "The main difference between BayesOpt and Bayes-Attack is using different types of GP regression (Standard GP for Bayes-Attack and Additive GP for BayesOpt), so we will consider these two models as a group when comparing with our model CorrAttack.\nDifference between CorrAttack, BayesOpt and Bayes-Attack: For l∞ attacks, assume there are no hierarchical structure, we have blocksE = {e000, e001, · · · , ehwc}, where the block is b×b square of pixels and (h,w, c) = (height/b,width/b, channel). CorrAttack, BayesOpt (Ru et al., 2020) and Bayes-Attack (Shukla et al., 2019) all try to search the adversarial noise on E with perturbation δ ∈ [− , ]d where d = h× w × c, the perturbation of block eijk at time t is δteijk .\nBayesOpt and Bayes-Attack use a GP regression directly on δ ∈ [− , ]d (all blocks),\nf(δ)|Dn ∼ Normal(µn(δ), σ2n(δ)). (19)\nCorrAttack define an action spaceA and use a standard GP regression on features zeijk = (i, j, k, pca) (single block),\ngt(aeijk)|Dt ∼ Normal(µt(zeijk), σ2t (zeijk)). (20)\nAt each iteration, in BayesOpt and Bayes-Attack, the changes of overall perturbation is\nδt − δt−1 = {δte000 ∪ δ t e001 ∪ · · · ∪ δ t ehwc } − {δt−1e000 ∪ δ t−1 e001 ∪ · · · ∪ δ t−1 ehwc }. (21)\nHowever, in CorrAttack,\nδt − δt−1 = δteijk − δ t−1 eijk . (22)\nIn conclusion, BayesOpt and Bayes-Attack view each block as a dimension, try to search the overall perturbation directly. CorrAttack defines a low dimension feature space, keep an overall perturbation and try to search an action on single block.\nTime complexity and running time: The time complexity of fitting GP regression is O(dn2) where d is the dimension of input and n is the number of samples. And the dimension for CorrAttack (d = 4 for zeijk = (i, j, k, pca)) is much smaller than BayesOpt and Bayes-Attack (d = 6912 if h = w = 48, c = 3). Moreover, we can convert the continuous search space of BayesOpt and Bayes-Attack from [− , ]6912 to discrete search space E = {e000, e001, · · · , ehwc}, whose number is only 6912, smaller search space could save the computation time of acquisition function.\nWe compare the running time for CorrAttackFlip with BayesOpt and Bayes-Attack on 20 images from ImageNet. Table 12 shows the running time for the un-targeted attack. We use PyTorch2 to develop these two models. All experiments were conducted on a personal workstation with 28 Intel(R) Xeon(R) Gold 5120 2.20GHz CPUs, an NVIDIA GeForce RTX2080Ti 11GB GPU and 252G memory.\nBayesOpt models the loss function with a very high dimensional Gaussian process. The decomposition of additive kernel also needs to be restarted several times. Even though we try to optimize the speed of BayesOpt with GPU acceleration, it is still very slow and takes hundreds of times more computational resources than CorrAttack .\nBayes-Attack could be regarded as a simpler version of BayesOpt, which does not add additive kernel. We do not evaluate it on targeted task (when query>1000) since GP inference time grows fast as evaluated query increases, e.g. For Bayes-Attack, when 150 <query< 200, Time=1.6s/query; 800 <query< 1000, Time = 10.5s/query. CorrAttack solves this problem with Time=0.1s/query even when query reaches 10000. Since we forget the previous samples before t − τ , our input sample n will be smaller than τ . 
The forgetting technique cannot be applied to Bayes-Attack and BayesOpt: since they search the perturbation of all blocks, every sample needs to be remembered." }, { "heading": "C.7 GROWING CURVE OF SUCCESS RATE", "text": "The number of average queries is sometimes misleading due to the heavy-tailed distribution of queries. Therefore, in Figure 4 we plot the success rates at different query levels to show the detailed behavior of the different attacks. CorrAttack is much more efficient than the other methods at all query levels.\n2https://pytorch.org/\nC.8 VISUALIZATION OF LOCAL PROPERTY AND SLOW VARYING PROPERTY\nFigure 5 shows more examples of finite differences for different network architectures and datasets; they all exhibit the local correlation structure shown in Figure 1. Figure 6 shows more examples like Figure 2: the slow-varying property holds across architectures and datasets." }, { "heading": "C.9 GOOGLE CLOUD VISION API", "text": "Figure 7 shows an example of attacking the Google Cloud Vision API. CorrAttackFlip and PARSI successfully change the classification result, whereas BayesOpt cannot remove the top-1 class from the output.\nC.10 VISUALIZATION OF ADVERSARIAL EXAMPLES" } ]
2020
CORRATTACK: BLACK-BOX ADVERSARIAL ATTACK
SP:9403fa2679f18af78aed2e81b75eb39abeb722eb
[ "The paper develops a density diffusion theory to reveal how minima selection quantitatively depends on the minima sharpness and the hyperparameters. It shows theoretically and empirically that SGD favors flat minima exponentially more than sharp minima. In particular, the paper analyzed the dependence of mean escape time from the valley with the Hessians on local minima and saddle points for both SGD and SGLD, and revealed the exponential dependence of the mean escape time with the sharpness. Experiments on real-world data have verified the theoretical results on the mean escape time. " ]
Stochastic Gradient Descent (SGD) and its variants are mainstream methods for training deep networks in practice. SGD is known to find a flat minimum that often generalizes well. However, it is mathematically unclear how deep learning can select a flat minimum among so many minima. To answer the question quantitatively, we develop a density diffusion theory to reveal how minima selection quantitatively depends on the minima sharpness and the hyperparameters. To the best of our knowledge, we are the first to theoretically and empirically prove that, benefiting from the Hessian-dependent covariance of stochastic gradient noise, SGD favors flat minima exponentially more than sharp minima, while Gradient Descent (GD) with injected white noise favors flat minima only polynomially more than sharp minima. We also reveal that either a small learning rate or large-batch training requires exponentially many iterations to escape from minima, in terms of the ratio of the batch size and the learning rate. Thus, large-batch training cannot search for flat minima efficiently in a realistic computational time.
[ { "affiliations": [], "name": "DESCENT EXPONEN" }, { "affiliations": [], "name": "TIALLY FAVORS" }, { "affiliations": [], "name": "FLAT MINIMA" }, { "affiliations": [], "name": "Zeke Xie" }, { "affiliations": [], "name": "Issei Sato" }, { "affiliations": [], "name": "Masashi Sugiyama" } ]
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Where is the information in a deep neural network", "venue": "arXiv preprint arXiv:1905.12213,", "year": 2019 }, { "authors": [ "George B Arfken", "Hans J Weber" ], "title": "Mathematical methods for physicists", "venue": null, "year": 1999 }, { "authors": [ "Devansh Arpit", "Stanisław Jastrzębski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron Courville", "Yoshua Bengio" ], "title": "A closer look at memorization in deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nils Berglund" ], "title": "Kramers’ law: Validity, derivations and generalisations", "venue": "Markov Processes and Related Fields,", "year": 2013 }, { "authors": [ "Pratik Chaudhari", "Stefano Soatto" ], "title": "Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks", "venue": "Information Theory and Applications Workshop (ITA),", "year": 2018 }, { "authors": [ "Pratik Chaudhari", "Anna Choromanska", "Stefano Soatto", "Yann LeCun", "Carlo Baldassi", "Christian Borgs", "Jennifer Chayes", "Levent Sagun", "Riccardo Zecchina" ], "title": "Entropy-sgd: Biasing gradient descent into wide valleys", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "William Coffey", "Yu P Kalmykov" ], "title": "The Langevin equation: with applications to stochastic problems in physics, chemistry and electrical engineering, volume 27", "venue": "World Scientific,", "year": 2012 }, { "authors": [ "Claudio De Stefano", "Marilena Maniaci", "Francesco Fontanella", "A Scotto di Freca" ], "title": "Reliable writer identification in medieval manuscripts through page layout features: The “avila", "venue": "bible case. Engineering Applications of Artificial Intelligence,", "year": 2018 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp minima can generalize for deep nets", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Felix Draxler", "Kambis Veschgini", "Manfred Salmhofer", "Fred Hamprecht" ], "title": "Essentially no barriers in neural network energy landscape", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": "arXiv preprint arXiv:1703.11008,", "year": 2017 }, { "authors": [ "Henry Eyring" ], "title": "The activated complex in chemical reactions", "venue": "The Journal of Chemical Physics,", "year": 1935 }, { "authors": [ "BV Gnedenko", "AN Kolmogorov" ], "title": "Limit distributions for sums of independent", "venue": "Am. J. 
Math,", "year": 1954 }, { "authors": [ "Guy Gur-Ari", "Daniel A Roberts", "Ethan Dyer" ], "title": "Gradient descent happens in a tiny subspace", "venue": "arXiv preprint arXiv:1812.04754,", "year": 2018 }, { "authors": [ "Peter Hanggi" ], "title": "Escape from a metastable state", "venue": "Journal of Statistical Physics,", "year": 1986 }, { "authors": [ "Peter Hänggi", "Peter Talkner", "Michal Borkovec" ], "title": "Reaction-rate theory: fifty years after kramers", "venue": "Reviews of modern physics,", "year": 1990 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Fengxiang He", "Tongliang Liu", "Dacheng Tao" ], "title": "Control batch size and learning rate to generalize well: Theoretical and empirical evidence", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Haowei He", "Gao Huang", "Yang Yuan" ], "title": "Asymmetric valleys: Beyond sharp and flat local minima", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Simplifying neural nets by discovering flat minima", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Wenqing Hu", "Chris Junchi Li", "Lei Li", "Jian-Guo Liu" ], "title": "On the diffusion approximation of nonconvex stochastic gradient descent", "venue": "Annals of Mathematical Sciences and Applications,", "year": 2019 }, { "authors": [ "Stanisław Jastrzębski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos Storkey" ], "title": "Three factors influencing minima in sgd", "venue": "arXiv preprint arXiv:1711.04623,", "year": 2017 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Robert Kleinberg", "Yuanzhi Li", "Yang Yuan" ], "title": "An alternative view: When does sgd escape local minima", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hendrik Anthony Kramers" ], "title": "Brownian motion in a field of force and the diffusion model of chemical reactions", "venue": null, "year": 1940 }, { "authors": [ "Alex Krizhevsky" ], "title": "One weird trick for parallelizing convolutional neural networks", "venue": "arXiv preprint arXiv:1404.5997,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. 
com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Qianxiao Li", "Cheng Tai" ], "title": "Stochastic modified equations and adaptive stochastic gradient algorithms", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Seymour Lipschutz", "Murray R Spiegel", "Dennis Spellman" ], "title": "Vector analysis and an introduction to tensor analysis", "venue": null, "year": 2009 }, { "authors": [ "Stephan Mandt", "Matthew D Hoffman", "David M Blei" ], "title": "Stochastic gradient descent as approximate bayesian inference", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Thanh Huy Nguyen", "Umut Simsekli", "Mert Gurbuzbalaban", "Gaël Richard" ], "title": "First exit time analysis of stochastic gradient descent under heavy-tailed gradient noise", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Abhishek Panigrahi", "Raghav Somani", "Navin Goyal", "Praneeth Netrapalli" ], "title": "Non-gaussianity of stochastic gradient noise", "venue": "arXiv preprint arXiv:1910.09626,", "year": 2019 }, { "authors": [ "Yudi Pawitan" ], "title": "In all likelihood: statistical modelling and inference using likelihood", "venue": null, "year": 2001 }, { "authors": [ "Maxim Raginsky", "Alexander Rakhlin", "Matus Telgarsky" ], "title": "Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis", "venue": "In Conference on Learning Theory,", "year": 2017 }, { "authors": [ "Levent Sagun", "Utku Evci", "V Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the hessian of over-parametrized neural networks", "venue": "arXiv preprint arXiv:1706.04454,", "year": 2017 }, { "authors": [ "Issei Sato", "Hiroshi Nakagawa" ], "title": "Approximation analysis of stochastic gradient langevin dynamics by using fokker-planck equation and ito process", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Umut Simsekli", "Levent Sagun", "Mert Gurbuzbalaban" ], "title": "A tail-index analysis of stochastic gradient noise in deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Samuel L Smith", "Quoc V Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Samuel L Smith", "Pieter-Jan Kindermans", "Quoc V Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Normalized flat minima: Exploring scale invariant definition of flat minima for neural networks using pac-bayesian analysis", "venue": null, "year": 1901 }, { "authors": [ "Nicolaas Godfried Van Kampen" ], "title": "Stochastic processes in physics and chemistry, volume 1", "venue": null, "year": 1992 }, { 
"authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Lei Wu", "Zhanxing Zhu" ], "title": "Towards understanding generalization of deep learning: Perspective of loss landscapes", "venue": "arXiv preprint arXiv:1706.10239,", "year": 2017 }, { "authors": [ "Lei Wu", "Chao Ma", "E Weinan" ], "title": "How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zeke Xie", "Fengxiang He", "Shaopeng Fu", "Issei Sato", "Dacheng Tao", "Masashi Sugiyama" ], "title": "Artificial neural variability for deep learning: On overfitting, noise memorization, and catastrophic forgetting", "venue": "arXiv preprint arXiv:2011.06220,", "year": 2020 }, { "authors": [ "Zeke Xie", "Issei Sato", "Masashi Sugiyama" ], "title": "Stable weight decay regularization", "venue": "arXiv preprint arXiv:2011.11152,", "year": 2020 }, { "authors": [ "Zeke Xie", "Xinrui Wang", "Huishuai Zhang", "Issei Sato", "Masashi Sugiyama" ], "title": "Adai: Separating the effects of adaptive learning rate and momentum inertia", "venue": "arXiv preprint arXiv:2006.15815,", "year": 2020 }, { "authors": [ "Pan Xu", "Jinghui Chen", "Difan Zou", "Quanquan Gu" ], "title": "Global convergence of langevin dynamics based algorithms for nonconvex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhewei Yao", "Amir Gholami", "Qi Lei", "Kurt Keutzer", "Michael W Mahoney" ], "title": "Hessian-based analysis of large batch training and robustness to adversaries", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Guodong Zhang", "Lala Li", "Zachary Nado", "James Martens", "Sushant Sachdeva", "George Dahl", "Chris Shallue", "Roger B Grosse" ], "title": "Which algorithmic choices matter at which batch sizes? insights from a noisy quadratic model", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yuchen Zhang", "Percy Liang", "Moses Charikar" ], "title": "A hitting time analysis of stochastic gradient langevin dynamics", "venue": "In Conference on Learning Theory, pp. 1980–2022,", "year": 2017 }, { "authors": [ "Huan-Xiang Zhou" ], "title": "Rate theories for biologists", "venue": "Quarterly reviews of biophysics,", "year": 2010 }, { "authors": [ "Zhanxing Zhu", "Jingfeng Wu", "Bing Yu", "Lei Wu", "Jinwen Ma" ], "title": "The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects", "venue": "In ICML,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, deep learning (LeCun et al., 2015) has achieved great empirical success in various application areas. Due to the over-parametrization and the highly complex loss landscape of deep networks, optimizing deep networks is a difficult task. Stochastic Gradient Descent (SGD) and its variants are mainstream methods for training deep networks. Empirically, SGD can usually find flat minima among a large number of sharp minima and local minima (Hochreiter & Schmidhuber, 1995; 1997). More papers reported that learning flat minima closely relate to generalization (Hardt et al., 2016; Zhang et al., 2017a; Arpit et al., 2017; Hoffer et al., 2017; Dinh et al., 2017; Neyshabur et al., 2017; Wu et al., 2017; Dziugaite & Roy, 2017; Kleinberg et al., 2018). Some researchers specifically study flatness itself. They try to measure flatness (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017; Sagun et al., 2017; Yao et al., 2018), rescale flatness (Tsuzuku et al., 2019; Xie et al., 2020b), and find flatter minima (Hoffer et al., 2017; Chaudhari et al., 2017; He et al., 2019b; Xie et al., 2020a). However, we still lack a quantitative theory that answers why deep learning dynamics selects a flat minimum.\nThe diffusion theory is an important theoretical tool to understand how deep learning dynamics works. It helps us model the diffusion process of probability densities of parameters instead of model parameters themselves. The density diffusion process of Stochastic Gradient Langevin Dynamics (SGLD) under injected isotropic noise has been discussed by (Sato & Nakagawa, 2014; Raginsky et al., 2017; Zhang et al., 2017b; Xu et al., 2018). Zhu et al. (2019) revealed that anisotropic diffusion of SGD often leads to flatter minima than isotropic diffusion. A few papers has quantitatively studied the diffusion process of SGD under the isotropic gradient noise assumption. Jastrzębski et al. (2017) first studied the minima selection probability of SGD. Smith & Le (2018) presented a Beyesian perspective on generalization of SGD. Wu et al. (2018) studied the escape problems of\nSGD from a dynamical perspective, and obtained the qualitative conclusion on the effects of batch size, learning rate, and sharpness. Hu et al. (2019) quantitatively showed that the mean escape time of SGD exponentially depends on the inverse learning rate. Achille & Soatto (2019) also obtained a related proposition that describes the mean escape time in terms of a free energy that depends on the Fisher Information. Li et al. (2017) analyzed Stochastic Differential Equation (SDE) of adaptive gradient methods. Nguyen et al. (2019) mainly contributed to closing the theoretical gap between continuous-time dynamics and discrete-time dynamics under isotropic heavy-tailed noise.\nHowever, the related papers mainly analyzed the diffusion process under parameter-independent and isotropic gradient noise, while stochastic gradient noise (SGN) is highly parameter-dependent and anisotropic in deep learning dynamics. Thus, they failed to quantitatively formulate how SGD selects flat minima, which closely depends on the Hessian-dependent structure of SGN. We try to bridge the gap between the qualitative knowledge and the quantitative theory for SGD in the presence of parameter-dependent and anisotropic SGN. Mainly based on Theorem 3.2 , we have four contributions:\n• The proposed theory formulates the fundamental roles of gradient noise, batch size, the learning rate, and the Hessian in minima selection. 
• The SGN covariance is approximately proportional to the Hessian and inversely proportional to the batch size.\n• Either a small learning rate or large-batch training requires exponentially many iterations to escape minima, in terms of the ratio of the batch size and the learning rate.\n• To the best of our knowledge, we are the first to theoretically and empirically reveal that SGD favors flat minima exponentially more than sharp minima." }, { "heading": "2 STOCHASTIC GRADIENT NOISE AND SGD DYNAMICS", "text": "We introduce the necessary foundation for the proposed diffusion theory in this section. We denote the data samples by {x_j}_{j=1}^m, the model parameters by θ, and the loss function over data samples x by L(θ, x). For simplicity, we denote the training loss by L(θ). Following Mandt et al. (2017), we may write the SGD dynamics as\nθ_{t+1} = θ_t − η ∂L̂(θ_t)/∂θ_t = θ_t − η ∂L(θ_t)/∂θ_t + η C(θ_t)^{1/2} ζ_t, (1)\nwhere L̂(θ) is the loss of one minibatch, ζ_t ∼ N(0, I), and C(θ) represents the gradient noise covariance matrix. The classic approach is to model SGN by Gaussian noise N(0, C(θ)) (Mandt et al., 2017; Smith & Le, 2018; Chaudhari & Soatto, 2018).\nStochastic Gradient Noise Analysis. We first note that the SGN we study is introduced by minibatch training, C(θ_t)^{1/2} ζ_t = ∂L(θ_t)/∂θ_t − ∂L̂(θ_t)/∂θ_t, which is the difference between gradient descent and stochastic gradient descent. According to the Generalized Central Limit Theorem (Gnedenko et al., 1954), the mean of many infinite-variance random variables converges to a stable distribution, while the mean of many finite-variance random variables converges to a Gaussian distribution. As SGN has finite variance in practice, we believe the Gaussian approximation of SGN is reasonable.\nSimsekli et al. (2019) argued that SGN is Lévy noise (stable variables) rather than Gaussian noise. They presented empirical evidence showing that SGN seems heavy-tailed, and that the heavy-tailed distribution looks closer to a stable distribution than to a Gaussian distribution. However, this research line (Simsekli et al., 2019; Nguyen et al., 2019) relies on a hidden strict assumption: SGN must be isotropic and obey the same distribution across dimensions. Simsekli et al. (2019) computed “SGN” across the n model parameters and regarded “SGN” as n samples drawn from a univariate distribution; this is why a single tail-index for all parameters was studied in Simsekli et al. (2019). Their arguments do not necessarily hold for parameter-dependent and anisotropic Gaussian noise. In our paper, SGN computed over different minibatches obeys an n-variate Gaussian distribution, which can be parameter-dependent and anisotropic.\nIn Figure 1, we empirically verify that SGN is highly similar to Gaussian noise rather than heavy-tailed Lévy noise. We replicate the experiment of Simsekli et al. (2019) to show that gradient noise appears to be Lévy noise only if it is computed across parameters. Figure 1 thus suggests that the contradictory observations stem from the different formulations of gradient noise: Simsekli et al. (2019) studied the distribution of SGN as a univariate distribution, while we relax it to an n-variate distribution. Our empirical analysis in Figure 1 holds well at least when the batch size B is larger than 16, which is common in practice. Similar empirical evidence can be observed for training ResNet18 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009); see Appendix C.
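As a complement to Figure 1, here is a minimal sketch (our toy illustration, not the paper's experiment) of the two ways of collecting “SGN” contrasted above: one coordinate across many minibatches (the n-variate view) versus all coordinates within one minibatch (the univariate view); the heterogeneous per-coordinate scales are an assumption mimicking the different gradient scales of network layers.
import numpy as np

rng = np.random.default_rng(0)
m, n, B = 20000, 200, 64
X = rng.standard_normal((m, n)) * rng.lognormal(0.0, 1.5, n)  # mixed scales
y = X @ rng.standard_normal(n) + rng.standard_normal(m)
theta = np.linalg.lstsq(X, y, rcond=None)[0]

def grad(idx):
    r = X[idx] @ theta - y[idx]          # per-sample residuals
    return X[idx].T @ r / len(idx)       # minibatch gradient

full_g = grad(np.arange(m))
noise_t = np.array([(full_g - grad(rng.choice(m, B, False)))[0]
                    for _ in range(2000)])        # view 1: across minibatches
noise_p = full_g - grad(rng.choice(m, B, False))  # view 2: across coordinates

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return (z**4).mean() - 3.0           # ~0 for a Gaussian

print(excess_kurtosis(noise_t))  # near 0: per-coordinate SGN looks Gaussian
print(excess_kurtosis(noise_p))  # positive: mixing scales looks heavy-tailed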
Panigrahi et al. (2019) also observed that, for batch sizes 256 and above, the distribution of SGN is best described as Gaussian, at least in the early phases of training. Comparing our results with Panigrahi et al. (2019), we note that the Gaussianity of SGN may depend on further unknown factors: first, SGN on randomly initialized models is more Gaussian than on well-trained models; second, the layer/network matters, since SGN on some layers/networks is more Gaussian than on others.\nThe isotropic gradient noise assumption is too crude to capture the Hessian-dependent covariance structure of SGN, which we study in Figure 2 below. Our theory, which focuses on parameter-dependent and anisotropic SGN, brings a large improvement over existing parameter-independent and isotropic noise models, just as Simsekli et al. (2019) brought an improvement over the more conventional parameter-independent and isotropic Gaussian noise. A more sophisticated theory under parameter-independent anisotropic heavy-tailed noise would be interesting when the batch size is too small (B ∼ 1) to apply the Central Limit Theorem; we leave it as future work.\nSGD Dynamics. Let us replace η by dt as the unit time. Then the continuous-time dynamics of SGD (Coffey & Kalmykov, 2012) is written as\ndθ = −(∂L(θ)/∂θ) dt + [2D(θ)]^{1/2} dW_t, (2)\nwhere dW_t ∼ N(0, I dt) and D(θ) = (η/2) C(θ). We note that the dynamical time t in the continuous-time dynamics equals the product of the number of iterations T and the learning rate η: t = ηT. The associated Fokker-Planck equation is\n∂P(θ, t)/∂t = ∇ · [P(θ, t)∇L(θ)] + ∇ · ∇[D(θ)P(θ, t)] (3)\n= Σ_i ∂/∂θ_i [P(θ, t) ∂L(θ)/∂θ_i] + Σ_i Σ_j ∂²/∂θ_i∂θ_j [D_ij(θ)P(θ, t)], (4)\nwhere ∇ is the nabla operator and D_ij is the element in the i-th row and j-th column of D. In standard SGLD, the injected gradient noise is fixed and isotropic Gaussian, D = I.\nThe next question is how to formulate the SGN covariance C(θ) for SGD. Based on Smith & Le (2018), we can express the SGN covariance as\nC(θ) = (1/B)[(1/m) Σ_{j=1}^m ∇L(θ, x_j)∇L(θ, x_j)^⊤ − ∇L(θ)∇L(θ)^⊤] ≈ (1/(Bm)) Σ_{j=1}^m ∇L(θ, x_j)∇L(θ, x_j)^⊤. (5)\nThe approximation holds near critical points, due to the fact that the gradient noise variance dominates the gradient mean there. The observed Fisher information matrix satisfies FIM(θ) ≈ H(θ) near minima; see Chapter 8 of Pawitan (2001). Following Jastrzębski et al. (2017); Zhu et al. (2019), we obtain\nC(θ) ≈ (1/(Bm)) Σ_{j=1}^m ∇L(θ, x_j)∇L(θ, x_j)^⊤ = (1/B) FIM(θ) ≈ (1/B) H(θ), (6)\nwhich approximately gives\nD(θ) = (η/2) C(θ) = (η/(2B)) H(θ) (7)\nnear minima. This indicates that the SGN covariance C(θ) is approximately proportional to the Hessian H(θ) and inversely proportional to the batch size B. Obviously, we can generalize Equation 7 to D(θ) = (η/2)C(θ) = (η/(2B))[H(θ)]^+ near critical points, where H has negative eigenvalues along some directions. We use [·]^+ to denote the positive semidefinite transformation of a symmetric matrix: if H has the eigendecomposition H = U diag(H_1, · · · , H_{n−1}, H_n) U^⊤, then [H]^+ = U diag(|H_1|, · · · , |H_{n−1}|, |H_n|) U^⊤. We empirically verify this relation in Figure 2 for pretrained fully-connected networks, and a follow-up paper (Xie et al., 2020c) first verified this relation for randomly initialized fully-connected networks on real-world datasets. The Pearson correlation is up to 0.999 for pretrained networks. We note that the relation still approximately holds even for a randomly initialized network, which is far from critical points. The correlation is especially high along the flat directions with small-magnitude eigenvalues of the Hessian (Xie et al., 2020c). We emphasize that previous papers with the isotropic Lévy or Gaussian noise approximation all failed to capture this core relation in deep learning dynamics.
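The following is a minimal sketch checking Equation 6 at the minimum of a least-squares problem, where per-sample gradients and the Hessian are available in closed form; this is our illustration (not the paper's code), and the unit-variance label noise is an assumption that makes the proportionality constant exactly 1/B.
import numpy as np

rng = np.random.default_rng(1)
m, n, B = 100000, 10, 128
X = rng.standard_normal((m, n)) * rng.lognormal(0.0, 0.5, n)  # anisotropic inputs
y = X @ rng.standard_normal(n) + rng.standard_normal(m)       # unit label noise

theta = np.linalg.lstsq(X, y, rcond=None)[0]    # empirical risk minimizer
r = X @ theta - y                               # per-sample residuals

# Per-sample gradient of L_j = 0.5*(x_j^T theta - y_j)^2 is g_j = r_j * x_j.
G = X * r[:, None]
C = (G.T @ G) / m / B            # SGN covariance at the minimum, Eqs. (5)-(6)
Hess = (X.T @ X) / m             # Hessian of the training loss

print(np.linalg.eigvalsh(C) * B / np.linalg.eigvalsh(Hess))  # ~= 1 entrywise
The per-eigenvalue ratios should cluster around 1, mirroring the up-to-0.999 correlations reported in Figure 2.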
" }, { "heading": "3 SGD DIFFUSION THEORY", "text": "We start the theoretical analysis from the classical Kramers escape problem (Kramers, 1940). We assume there are two valleys, Sharp Valley a1 and Flat Valley a2, as shown in Figure 3, with Col b the boundary between the two valleys. What is the mean escape time for a particle governed by Equation 2 to escape from Sharp Valley a1 to Flat Valley a2? The mean escape time is a standard quantity in statistical physics and the theory of stochastic processes (Van Kampen, 1992; Nguyen et al., 2019).\nGauss's divergence theorem (Arfken & Weber, 1999; Lipschutz et al., 2009) states that the surface integral of a vector field over a closed surface, called the flux through the surface, equals the volume integral of the divergence over the region inside the surface. We denote the mean escape time by τ, the escape rate by γ, and the probability current by J. Applying Gauss's divergence theorem to the Fokker-Planck equation gives\n∇ · [P(θ, t)∇L(θ)] + ∇ · ∇[D(θ)P(θ, t)] = ∂P(θ, t)/∂t = −∇ · J(θ, t). (8)\nThe mean escape time is expressed (Van Kampen, 1992) as\nτ = 1/γ = P(θ ∈ V_a) / ∫_{S_a} J · dS, (9)\nwhere P(θ ∈ V_a) = ∫_{V_a} P(θ) dV is the probability currently inside Valley a, J is the probability current produced by the probability source P(θ ∈ V_a), j = ∫_{S_a} J · dS is the probability flux (the surface integral of the probability current), S_a is the surface (boundary) surrounding Valley a, and V_a is the volume enclosed by S_a. We have j = J in the case of one-dimensional escape.\nClassical Assumptions. We first state three classical assumptions of the density diffusion theory. Assumption 1 is the common second-order Taylor approximation, also used by Mandt et al. (2017) and Zhang et al. (2019). Assumptions 2 and 3 are widely used in Kramers escape problems across many fields, including statistical physics (Kramers, 1940; Hanggi, 1986), chemistry (Eyring, 1935; Hänggi et al., 1990), biology (Zhou, 2010), electrical engineering (Coffey & Kalmykov, 2012), and stochastic processes (Van Kampen, 1992; Berglund, 2013). Related machine learning papers (Jastrzębski et al., 2017) usually adopt Assumptions 2 and 3 as the background of Kramers escape problems.\nAssumption 1 (The Second-Order Taylor Approximation). The loss function around a critical point θ* can be approximately written as\nL(θ) = L(θ*) + g(θ*)^⊤(θ − θ*) + (1/2)(θ − θ*)^⊤ H(θ*)(θ − θ*).\nAssumption 2 (Quasi-Equilibrium Approximation). The system is in quasi-equilibrium near minima.\nAssumption 3 (Low Temperature Approximation). The gradient noise is small (low temperature).\nWe examine Assumptions 2 and 3 more deeply than previous papers on SGD dynamics. Both assumptions mean that our diffusion theory better describes escape processes that cost more iterations. As this class of “slow” escape processes takes up the main computational time compared with “fast” escape processes, it is also the more interesting class for the training of deep neural networks. Our empirical analysis in Section 4 shows that escape processes over a wide range of iterations (50 to 100,000) are modeled by our theory very well. Thus, Assumptions 2 and 3 are reasonable in practice.
More discussion can be found in Appendix B.\nEscape paths. We generalize the concept of critical points to critical paths: paths along which (1) the gradient perpendicular to the path direction is zero, and (2) the second-order directional derivatives perpendicular to the path direction are nonnegative. The Most Possible Paths (MPPs) for escaping must be critical paths, and the most possible escape direction at a point must be the direction of one eigenvector of the Hessian at that point. Under Assumption 3, the probability density far from critical points and MPPs is very small, so the density diffusion concentrates around MPPs. Draxler et al. (2018) reported that minima in the loss landscape of deep networks are connected by Minimum Energy Paths (MEPs) that are essentially flat and by Local MEPs that have high-loss saddle points. MPPs in our paper correspond to Local MEPs; the density diffusion along strictly flat MEPs is negligible according to the following analysis.\nThe boundary between Sharp Valley a1 and Flat Valley a2 is the saddle point b. The Hessian at b, H_b, has exactly one negative eigenvalue, and the corresponding eigenvector is the escape direction. Without loss of generality, we first assume that only one most possible path through Col b exists between Sharp Valley a1 and Flat Valley a2.\nSGLD diffusion. We first analyze a simple case: how does SGLD escape sharp minima? Researchers are interested in SGLD when the injected noise dominates SGN as η → 0 in the final epochs, because SGLD may work as a Bayesian inference method in this limit (Welling & Teh, 2011). SGLD is usually simplified as Gradient Descent with injected white noise, whose behavior is identical to the Kramers escape problem with thermal noise in statistical physics. We present Theorem 3.1; the proof is in Appendix A.1. We also note that a more precise SGLD diffusion analysis should study a mixture of injected white noise and SGN.\nTheorem 3.1 (SGLD Escapes Minima). The loss function L(θ) is of class C² and n-dimensional. Only one most possible path exists between Valley a and the outside of Valley a. If Assumptions 1, 2, and 3 hold, and the dynamics is governed by SGLD, then the mean escape time from Valley a to the outside of Valley a is\nτ = 1/γ = 2π √(−det(H_b)/det(H_a)) (1/|H_be|) exp(∆L/D).\nHere H_a and H_b are the Hessians of the loss function at the minimum a and the saddle point b, ∆L = L(b) − L(a) is the loss barrier height, e indicates the escape direction, and H_be is the eigenvalue of the Hessian H_b corresponding to the escape direction. The diffusion coefficient D is usually set to 1 in SGLD.
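A minimal numerical check of the one-dimensional specialization of this formula, τ = (2π/√(H_a|H_b|)) exp(∆L/D), is sketched here (our illustration, not the paper's code); the double-well loss, step size, and temperature are assumed values, and at this moderate ∆L/D agreement within tens of percent is what one should expect.
import numpy as np

rng = np.random.default_rng(2)
# Double well L(x) = (x^2 - 1)^2 / 4: minima at +-1, saddle at 0.
dL = lambda x: x**3 - x
Ha, Hb_abs, dLoss = 2.0, 1.0, 0.25      # L''(+-1) = 2, |L''(0)| = 1, barrier

eta, D = 1e-3, 0.08                     # step size (unit time) and temperature
tau_theory = 2 * np.pi / np.sqrt(Ha * Hb_abs) * np.exp(dLoss / D)

times = []
for _ in range(100):                    # repeated escapes from the left valley
    x, t = -1.0, 0.0
    while x < 0.3:                      # absorb a little past the saddle
        x += -eta * dL(x) + np.sqrt(2 * D * eta) * rng.standard_normal()
        t += eta                        # dynamical time t = eta * iterations
    times.append(t)
print(tau_theory, np.mean(times))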
SGD diffusion. However, SGD diffusion is essentially different from SGLD diffusion in several aspects: (1) the noise is anisotropic, (2) the noise is parameter-dependent, and (3) the stationary distribution of SGD is far from the Gibbs-Boltzmann distribution P(θ) = (1/Z) exp(−L(θ)/D). These characteristics make SGD diffusion behave differently from known physical dynamical systems, and they have been much less studied than SGLD diffusion. We formulate Theorem 3.2 for SGD; the proof is in Appendix A.2. The theoretical analysis of SGD can be easily generalized to dynamics with a mixture of SGN and injected white noise, as long as the eigenvectors of D(θ) are closely aligned with the eigenvectors of H(θ).\nTheorem 3.2 (SGD Escapes Minima). The loss function L(θ) is of class C² and n-dimensional. Only one most possible path exists between Valley a and the outside of Valley a. If Assumptions 1, 2, and 3 hold, and the dynamics is governed by SGD, then the mean escape time from Valley a to the outside of Valley a is\nτ = 2π (1/|H_be|) exp[(2B∆L/η)(s/H_ae + (1 − s)/|H_be|)],\nwhere s ∈ (0, 1) is a path-dependent parameter, and H_ae and H_be are, respectively, the eigenvalues of the Hessians at the minimum a and the saddle point b corresponding to the escape direction e.\nMultiple-path escape. Each escape path contributes to the total escape rate, and multiple paths combined have a total escape rate. If there are multiple parallel paths from the start valley to the end valley, we can compute the total escape rate by the following rule, which rests on the fact that probability flux integrals are additive. We can thus generalize the mean escape time analysis to cases with multiple parallel escape paths indexed by p. As for multiple-valley escape problems, we can always reduce a multiple-valley escape problem to multiple two-valley escape problems. We also note that, while Theorem 3.2 does not depend on the dimensionality directly, higher dimensionality may increase the number of escape paths and loss valleys, and change the spectrum of the Hessians.\nRule 1. If there are multiple MPPs between the start valley and the end valley, then γ_total = Σ_p γ_p.\nThus, we only need to find the saddle points that connect two valleys, as analyzed above, and compute the corresponding escape rates.\nMinima selection. We may now formulate the probability of minima selection as Proposition 1; the proof is in Appendix A.3. In deep learning, one loss valley represents one mode, and the landscape contains many good modes and bad modes. SGD transits from one mode to another during training. The mean escape time of a mode corresponds to the number of iterations that SGD spends in this mode during training, which is naturally proportional to the probability of selecting this mode after training.\nProposition 1. Suppose there are two valleys connected by an escape path. If all assumptions of Theorem 3.2 hold, then the stationary distribution over these valleys is given by\nP(θ ∈ V_a) = τ_a / Σ_v τ_v,\nwhere v is the index of the valleys and τ_v is the mean escape time from Valley v to the outside of Valley v." }, { "heading": "4 EMPIRICAL ANALYSIS", "text": "In this section, we directly validate the escape formulas on real-world datasets. Each escape process, from the inside of a loss valley to the outside, is repeatedly simulated 100 times under various gradient noise scales, batch sizes, learning rates, and sharpness values.\nHow can we compare escape rates under the same settings but with various minima sharpness? Our method is to multiply each parameter by a rescaling factor √k, so that the Hessian is proportionally rescaled by a factor k: if we let L(θ) = f(θ) → L(θ) = f(√k θ), then H(θ) = ∇²f(θ) → H(θ) = k∇²f(θ). Thus, we can use k to indicate the minima sharpness. The theoretical relations of SGD we validate can be formulated as: (1) −log(γ) = O(1/k), (2) −log(γ) = O(B), and (3) −log(γ) = O(1/η).\nThe mean escape time analysis of SGD. The Styblinski-Tang function, which has multiple minima and saddle points, is a common test function for nonconvex optimization. We conduct an illustrative 10-dimensional experiment, where the simulations start from a given minimum and terminate upon reaching the boundary of the loss valley. The number of iterations is recorded for calculating the escape rate; a sketch of this style of simulation with Hessian-dependent SGD noise follows below.
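Before turning to the real-world experiments, here is a minimal one-dimensional sketch (our illustration, not the paper's experimental code) of such a simulation: the per-step SGD noise standard deviation is set to η√([H(θ)]^+/B) following D(θ) = η[H(θ)]^+/(2B), and a rescaled double well stands in for the Styblinski-Tang valleys; η, B, and the boundary are assumed values chosen so escapes occur in feasible time.
import numpy as np

rng = np.random.default_rng(3)
# Rescaled double well L(x) = f(sqrt(k)*x) with f(u) = (u^2 - 1)^2 / 4.
dL = lambda x, k: np.sqrt(k) * ((np.sqrt(k) * x)**3 - np.sqrt(k) * x)
H  = lambda x, k: k * (3.0 * (np.sqrt(k) * x)**2 - 1.0)

def mean_escape_iters(k, eta=0.2, B=2, runs=100):
    iters = []
    for _ in range(runs):
        x, T = -1.0 / np.sqrt(k), 0            # start at the rescaled minimum
        while x < 0.3 / np.sqrt(k):            # absorb just past the saddle
            std = eta * np.sqrt(max(abs(H(x, k)), 1e-3) / B)  # SGN ~ [H]^+/B
            x += -eta * dL(x, k) + std * rng.standard_normal()
            T += 1
        iters.append(T)
    return float(np.mean(iters))

for k in (1.0, 2.0, 4.0):                      # sharper valley, faster escape
    print(k, np.log(mean_escape_iters(k)))     # -log(gamma) should scale ~ 1/k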
We also train fully connected networks on four real-world datasets, including a) Avila, b) Banknote Authentication, c) Cardiotocography, d) Dataset for Sensorless Drive Diagnosis (De Stefano et al., 2018; Dua & Graff, 2017). Figure 4 and Figure 5 clearly verifies that the escape rate exponentially depends on the minima sharpness (reflected by k), the batch size, and the learning rate on both test functions and real-world training, which fully supports our theoretical results.\nModel architecture and details: We used fully-connected networks with the depth 2 and the width 10 in Figure 5. The experiments using Logistic Regression and Fully-connected networks with the depth 3 are presented in Appendix E. We leave more experimental details and results in Appendix D.1 and Appendix E.\nThe mean escape time analysis of SGLD. We try to validate γ = O(k) and − log(γ) = O( 1D ) for SGLD (dominated by injected Gaussian noise). Figure 6 shows that SGLD only favors flat minima polynomially more than sharp minima as Theorem 3.1 indicates. Figure 6 also verifies that the injected gradient noise scale exponentially affects flat minima selection." }, { "heading": "5 DISCUSSION", "text": "SGD favors flat minima exponentially more than sharp minima. We can discover a few interesting insights about SGD by Theorem 3.2. Most importantly, the mean escape time exponentially depends on the eigenvalue of the Hessian at minima along the escape direction, Hae. Thus, SGD favors flat minima exponentially more than sharp minima. We claim one main advantage of SGD comes from the exponential relation of the mean escape time and the minima sharpness. The measure of “sharpness” has reformed in contexts of SGLD and SGD. In the context of SGLD, the “sharpness” is quantified by the determinant of the Hessian. In the context of SGD, the “sharpness” is quantified by the top eigenvalues of the Hessian along the escape direction. Based on the proposed diffusion theory, recent work (Xie et al., 2020c) successfully proved that SGD favors flat minima significantly more than Adam.\nThe ratio of the batch size and the learning rate exponentially matters. Theorem 3.2 explains why large-batch training can easily get trapped near sharp minima, and increasing the learning rate proportionally is helpful for large-batch training (Krizhevsky, 2014; Keskar et al., 2017; Sagun et al., 2017; Smith et al., 2018; Yao et al., 2018; He et al., 2019a). We argue that the main cause is large-batch training expects exponentially longer time to escape minima. Note that, as the mean escape time in the theorems is equivalent to the product of the learning rate and the number of iterations, both the number of iterations and dynamical time exponentially depend on the ratio of the batch size and the learning rate. The practical computational time in large-batch training is usually too short to search many enough flat minima. We conjecture that exponentially increasing training iterations may be helpful for large batch training, while this is often too expensive in practice.\nLow dimensional diffusion. Most eigenvalues of the Hessian at the loss landscape of overparametrized deep networks are close to zero, while only a small number of eigenvalues are large (Sagun et al., 2017; Li et al., 2018). Zero eigenvalues indicate zero diffusion along the corresponding directions. Thus, we may theoretically ignore these zero-eigenvalue directions. This also indicates that the density diffusion is ignorable along an essentially flat MEP in Draxler et al. 
Low-dimensional diffusion. Most eigenvalues of the Hessian on the loss landscape of over-parametrized deep networks are close to zero, while only a small number of eigenvalues are large (Sagun et al., 2017; Li et al., 2018). Zero eigenvalues indicate zero diffusion along the corresponding directions, so we may theoretically ignore these zero-eigenvalue directions. This also indicates that the density diffusion along an essentially flat MEP of Draxler et al. (2018) is negligible. As the escape rate depends exponentially on the corresponding eigenvalues, a small number of large eigenvalues means that minima selection mainly happens in the relatively low-dimensional subspace corresponding to the top eigenvalues of the Hessian. Gur-Ari et al. (2018) reported a similar finding. Although the parameter space is very high-dimensional, SGD dynamics hardly depends on the “meaningless” dimensions with small second-order directional derivatives. This novel characteristic of SGD significantly reduces the explorable parameter space around a minimum to a much lower-dimensional one.\nHigh-order effects. As we apply the second-order Taylor approximation near critical points, our SGD diffusion theory excludes third- and higher-order effects. The asymmetric valley of He et al. (2019b), which appears only in higher-order analysis, is beyond the scope of this paper. However, we also argue that the third-order effect is much smaller than the second-order effect under the low-temperature assumption in Kramers escape problems. We leave a more refined high-order theory as future work." }, { "heading": "6 CONCLUSION", "text": "In this paper, we demonstrate that one essential advantage of SGD is selecting flat minima with an exponentially higher probability than sharp minima. To the best of our knowledge, we are the first to formulate the exponential dependence of minima selection on the minima sharpness, the batch size, and the learning rate. Our work bridges the gap between the qualitative knowledge and the quantitative theory of the minima selection mechanism of SGD. We believe the proposed theory not only helps us understand how SGD selects flat minima, but also provides researchers with a powerful theoretical tool to analyze more learning behaviors and design better optimizers in the future." }, { "heading": "ACKNOWLEDGEMENT", "text": "We thank Dr. Yuanqian Tang for helpful discussion. MS was supported by the International Research Center for Neurointelligence (WPI-IRCN) at The University of Tokyo Institutes for Advanced Study." }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 3.1", "text": "Proof. This proposition is a well-known conclusion in statistical physics under Assumptions 1, 2, and 3. We still provide an intuitive proof here, as the following proof for SGD diffusion closely parallels it. We decompose the proof into two steps: (1) compute the probability of locating in Valley a, P(θ ∈ V_a), and (2) compute the probability flux j = ∫_{S_a} J · dS.\nWithout loss of generality, we first prove the one-dimensional case.\nStep 1: Under Assumption 1, the stationary distribution around the minimum a is P(θ) = P(a) exp[−(L(θ) − L(a))/T], where T = D. Under Assumption 3, we may only consider the second-order Taylor approximation of the density function around critical points. We use the notation T for the temperature parameter in the stationary distribution and D for the diffusion coefficient in the dynamics, to reflect their different roles:\nP(θ ∈ V_a) (10)\n= ∫_{θ∈V_a} P(θ) dV (11)\n= ∫_{θ∈V_a} P(a) exp[−(L(θ) − L(a))/T] dθ (12)\n= P(a) ∫_{θ∈V_a} exp[−((1/2)(θ − a)^⊤ H_a(θ − a) + O(∆θ³))/T] dθ (13)\n= P(a) (2πT)^{1/2} / H_a^{1/2}. (14)
Step 2:\nJ = P(θ)∇L(θ) + P(θ)∇D + D∇P(θ) (15)\nJ = P(θ)(∇L(θ) + ∇D − (D/T)∇L(θ)) (16)\n∇D = ((D/T) − 1)∇L. (17)\nApplying this result to the Fokker-Planck Equation 4, we have\n∇ · ∇[D(θ)P(θ, t)] (18)\n= ∇ · D∇P(θ, t) + ∇ · [((D/T) − 1)∇L(θ)] P(θ, t), (19)\nand thus we obtain the Smoluchowski equation and a new form of J:\n∂P(θ, t)/∂t = ∇ · [D((1/T)∇L(θ) + ∇) P(θ, t)] = −∇ · J(θ, t), (20)\nJ(θ) = D exp(−L(θ)/T) ∇[exp(L(θ)/T) P(θ)]. (21)\nWe note that the probability density outside Valley a must be zero, P(c) = 0. As we want to compute the probability flux escaping from Valley a, the probability flux escaping from other valleys into Valley a should be ignored. Under Assumption 2, we integrate the equation from Valley a to the outside of Valley a along the most possible escape path:\n∫_a^c ∂/∂θ [exp(L(θ)/T) P(θ)] dθ = ∫_a^c −(J/D) exp(L(θ)/T) dθ (22)\nexp(L(θ)/T) P(θ) |_a^c = −(J/D) ∫_a^c exp(L(θ)/T) dθ (23)\n0 − exp(L(a)/T) P(a) = −(J/D) ∫_a^c exp(L(θ)/T) dθ (24)\nJ = D exp(L(a)/T) P(a) / ∫_a^c exp(L(θ)/T) dθ. (25)\nWe move J outside the integral based on Gauss's divergence theorem, because J is fixed along the escape path from one minimum to another: as there is no field source on the escape path, ∫_V ∇ · J(θ) dV = 0 and hence ∇ · J(θ) = 0. Obviously, only minima are probability sources in deep learning. Under Assumption 3 and the second-order Taylor approximation, we have\n∫_a^c exp(L(θ)/T) dθ (26)\n= ∫_a^c exp[(L(b) + (1/2)(θ − b)^⊤ H_b(θ − b) + O(∆θ³))/T] dθ (27)\n≈ exp(L(b)/T) ∫_{−∞}^{+∞} exp[((1/2)(θ − b)^⊤ H_b(θ − b))/T] dθ (28)\n= exp(L(b)/T) √(2πT/|H_b|). (29)\nBased on the results of Steps 1 and 2, we obtain\nγ = ∫_{S_a} J · dS / P(θ ∈ V_a) = J / P(θ ∈ V_a) (30)\n= [D P(a) exp(L(a)/T)] / [exp(L(b)/T) √(2πT/|H_b|)] · [1 / (P(a) √(2πT/H_a))] (31)\n= (D √(H_a|H_b|) / (2πT)) exp(−∆L_ab/T) (32)\n= (√(H_a|H_b|) / (2π)) exp(−∆L_ab/D). (33)\nWe generalize the proof of one-dimensional diffusion to high-dimensional diffusion.\nStep 1:\nP(θ ∈ V_a) (34)\n= ∫_{θ∈V_a} P(θ) dV (35)\n= ∫_{θ∈V_a} P(a) exp[−(L(θ) − L(a))/T] dV (36)\n= P(a) ∫_{θ∈V_a} exp[−((1/2)(θ − a)^⊤ H_a(θ − a) + O(∆θ³))/T] dV (37)\n= P(a) (2πT)^{n/2} / det(H_a)^{1/2}. (38)\nStep 2: Based on the formula for the one-dimensional probability current and flux, we obtain\n∫_{S_b} J · dS (39)\n= J_b ∫_{S_b} exp[−((1/2)(θ − b)^⊤ [H_b]^+ (θ − b))/T] dS (40)\n= J_b (2πT)^{(n−1)/2} / (∏_{i=1}^{n−1} H_bi)^{1/2}. (41)\nSo we have\nτ = 2π √(∏_{i=1}^{n−1} H_bi / (det(H_a)|H_be|)) exp(∆L/T) (42)\n= 2π √(−det(H_b)/det(H_a)) (1/|H_be|) exp(∆L/D). (43)" }, { "heading": "A.2 PROOF OF THEOREM 3.2", "text": "Proof. We decompose the proof into two steps and analyze the one-dimensional case as before. The following proof is similar to that for SGLD, except that T_a is the temperature near the minimum a and T_b is the temperature near the saddle point b.\nOne-dimensional SGD diffusion:\nStep 1: Under Assumption 3, we may only consider the second-order Taylor approximation of the density function around critical points:\nP(θ ∈ V_a) (44)\n= ∫_{θ∈V_a} P(θ) dV (45)\n= ∫_{θ∈V_a} P(a) exp[−(L(θ) − L(a))/T_a] dV (46)\n= P(a) ∫_{θ∈V_a} exp[−((1/2)(θ − a)^⊤ H_a(θ − a) + O(∆θ³))/T_a] dθ (47)\n= P(a) (2πT_a)^{1/2} / H_a^{1/2}. (48)\nStep 2:\nJ = P(θ)∇L(θ) + P(θ)∇D + D∇P(θ) (49)\nJ = P(θ)[∇L(θ) + ∇D − (D/T)∇L(θ) − D L(θ)∇(1/T)] (50)\nAccording to Equation 7, ∇(1/T) is negligible near the minimum a and the col b; thus\n∇D = ((D/T) − 1)∇L. (51)
Applying this result to the Fokker-Planck Equation 4, we have\n∇ · ∇[D(θ)P(θ, t)] (52)\n= ∇ · D∇P(θ, t) + ∇ · [((D/T) − 1)∇L(θ)] P(θ, t), (53)\nand thus we obtain the Smoluchowski equation and a new form of J:\n∂P(θ, t)/∂t = ∇ · [D((1/T)∇L(θ) + ∇) P(θ, t)] = −∇ · J, (54)\nJ = D exp(−L(θ)/T) ∇[exp(L(θ)/T) P(θ)]. (55)\nWe note that the Smoluchowski equation is true only near critical points. We let the point s be the midpoint on the most possible path between a and b, where L(s) = (1 − s)L(a) + sL(b). The temperature T_a dominates the path a → s, while the temperature T_b dominates the path s → b. So we have\n∇[exp((L(θ) − L(s))/T) P(θ)] = J D^{−1} exp((L(θ) − L(s))/T). (56)\nUnder Assumption 2, we integrate the equation from Valley a to the outside of Valley a along the most possible escape path:\nLeft = ∫_a^c ∂/∂θ [exp((L(θ) − L(s))/T) P(θ)] dθ (57)\n= ∫_a^s ∂/∂θ [exp((L(θ) − L(s))/T_a) P(θ)] dθ (58)\n+ ∫_s^c ∂/∂θ [exp((L(θ) − L(s))/T_b) P(θ)] dθ (59)\n= [P(s) − exp((L(a) − L(s))/T_a) P(a)] + [0 − P(s)] (60)\n= −exp((L(a) − L(s))/T_a) P(a) (61)\nRight = −J ∫_a^c D^{−1} exp((L(θ) − L(s))/T) dθ. (62)\nWe move J outside the integral based on Gauss's divergence theorem, because J is fixed along the escape path from one minimum to another: as there is no field source on the escape path, ∫_V ∇ · J(θ) dV = 0 and ∇ · J(θ) = 0. Obviously, only minima are probability sources in deep learning. So we obtain\nJ = exp((L(a) − L(s))/T_a) P(a) / ∫_a^c D^{−1} exp((L(θ) − L(s))/T) dθ. (63)\nUnder Assumption 3, we have\n∫_a^c D^{−1} exp((L(θ) − L(s))/T) dθ (64)\n≈ ∫_a^c D^{−1} exp[(L(b) − L(s) + (1/2)(θ − b)^⊤ H_b(θ − b))/T_b] dθ (65)\n≈ D_b^{−1} ∫_{−∞}^{+∞} exp[(L(b) − L(s) + (1/2)(θ − b)^⊤ H_b(θ − b))/T_b] dθ (66)\n= D_b^{−1} exp((L(b) − L(s))/T_b) √(2πT_b/|H_b|). (67)\nBased on the results of Steps 1 and 2, we have\nγ = ∫_{S_a} J · dS / P(θ ∈ V_a) = J / P(θ ∈ V_a) (68)\n= [P(a) exp((L(a) − L(s))/T_a)] / [D_b^{−1} exp((L(b) − L(s))/T_b) √(2πT_b/|H_b|)] · [1 / (P(a) √(2πT_a/H_a))] (69)\n= (√(T_b H_a|H_b|) / (2π√T_a)) exp(−(L(s) − L(a))/T_a − (L(b) − L(s))/T_b) (70)\n= (√(T_b H_a|H_b|) / (2π√T_a)) exp(−s∆L/T_a − (1 − s)∆L/T_b). (71)\nSo we have\nτ = 1/γ = 2π √(T_a/(T_b H_a|H_b|)) exp(s∆L/T_a + (1 − s)∆L/T_b). (72)\nIn the case of pure SGN, T_a = (η/(2B)) H_a and T_b = −(η/(2B)) H_b give\nτ = 1/γ = 2π (1/|H_b|) exp[(2B∆L/η)(s/H_a + (1 − s)/|H_b|)]. (73)\nWe generalize the proof above to the high-dimensional SGD diffusion.\nStep 1:\nP(θ ∈ V_a) (74)\n= ∫_{θ∈V_a} P(θ) dV (75)\n= P(a) ∫_{θ∈V_a} exp[−(1/2)(θ − a)^⊤ (D_a^{−1/2} H_a D_a^{−1/2})(θ − a)] dV (76)\n= P(a) (2π)^{n/2} / det(D_a^{−1} H_a)^{1/2}. (77)\nStep 2: Based on the formula for the one-dimensional probability current and flux, we obtain the high-dimensional flux escaping through Col b:\n∫_{S_b} J · dS (78)\n= J_{1d} ∫_{S_b} exp[−(1/2)(θ − b)^⊤ [D_b^{−1/2} H_b D_b^{−1/2}]^{⊥e} (θ − b)] dS (79)\n= J_{1d} (2π)^{(n−1)/2} / (∏_{i≠e} (D_bi^{−1} H_bi))^{1/2}, (80)\nwhere [·]^{⊥e} indicates the directions perpendicular to the escape direction e. So we have\nγ = (1/(2π)) √(det(H_a D_a^{−1}) / (−det(H_b D_b^{−1}))) |H_be| exp(−s∆L/T_a − (1 − s)∆L/T_b). (81)\nT_a and T_b are the eigenvalues of H_a^{−1} D_a and H_b^{−1} D_b corresponding to the escape direction. We know D_a = (η/(2B)) H_a and D_b = (η/(2B)) [H_b]^+. As D must be positive semidefinite, we replace H_b = U_b^⊤ diag(H_b1, · · · , H_b(n−1), H_be) U_b by its positive semidefinite analog [H_b]^+ = U_b^⊤ diag(H_b1, · · · , H_b(n−1), |H_be|) U_b. Thus, we have\nτ = 1/γ = 2π (1/|H_be|) exp[(2B∆L/η)(s/H_ae + (1 − s)/|H_be|)]. (82)" }, { "heading": "A.3 PROOF OF PROPOSITION 1", "text": "Proof. A stationary distribution must have a balanced probability flux between valleys.
So the probability fluxes between the valleys must be equal,\nP(θ ∈ V_1) γ_{12} = P(θ ∈ V_2) γ_{21}. (83)\nAs τ = γ^{−1}, this leads to P(θ ∈ V_v) ∝ τ_v. We normalize the total probability to 1 and obtain the result." }, { "heading": "B ASSUMPTIONS", "text": "Assumption 2 indicates that the dynamical system is in equilibrium near minima but not necessarily near saddle points: ∂P(θ, t)/∂t = −∇ · J(θ, t) ≈ 0 holds near the minima a1 and a2, but not necessarily near the saddle point b. The quasi-equilibrium assumption is actually weaker but more useful than the conventional stationary assumption for deep learning (Welling & Teh, 2011; Mandt et al., 2017). Under Assumption 2, the probability density P can behave like a stationary distribution only inside valleys, while density transportation through saddle points can be busy. Quasi-equilibrium is more like stable lakes (loss valleys) connected by rapid rivers (escape paths); in contrast, the stationary assumption requires strictly zero flux between lakes (loss valleys), and little knowledge about density motion can be obtained under it.\nThe low-temperature assumption is common (Van Kampen, 1992; Zhou, 2010; Berglund, 2013; Jastrzębski et al., 2017), and is always justified when η/B is small. Under Assumption 3, the probability density concentrates around minima and MPPs. Numerically, the 6-sigma rule often provides a good approximation for a Gaussian distribution. Assumption 3 also makes the second-order Taylor approximation, Assumption 1, even more reasonable for SGD diffusion.\nHere, we provide a more intuitive account of the low-temperature assumption in the domain of deep learning. Without loss of generality, we discuss one-dimensional dynamics, where the temperature can be interpreted as a real number D. In SGD, we have the temperature D = (η/(2B)) H. In statistical physics, the regime where ∆L/D is large is called the low-temperature approximation; note that ∆L/D appears inside an exponential function in the theoretical analysis. Numerically, ∆L/D > 6 is usually believed to give a good approximation, for a similar reason as the 6-sigma rule in statistics. In the final training phase of deep networks, a common setting is η = 0.01 and B = 128; since ∆L/D = 2B∆L/(ηH), the condition ∆L/D > 6 is equivalent to ∆L/H > 6η/(2B) ≈ 2.3 × 10^{-4}. Thus, we may safely apply Assumption 3 to loss valleys satisfying this very mild condition, which empirically holds well in SGD dynamics. It also suggests that we can adjust the learning rate to let SGD search among loss valleys with certain barrier heights." }, { "heading": "C THE STOCHASTIC GRADIENT NOISE ANALYSIS", "text": "Figure 7 demonstrates that SGN is also approximately Gaussian for a randomly initialized ResNet with B = 50 on CIFAR-10, although SGN on ResNet appears less Gaussian than SGN on fully-connected networks with the same batch size. Panigrahi et al. (2019) presented more results on the Gaussianity of SGN under various conditions.\nIn Figure 8, we validate C = H/B in the original coordinates on MNIST. In Figure 9, we also validate C = H/B on another dataset, Avila, in the space spanned by the eigenvectors of the Hessian. The relation C = H/B can still be observed in both cases.\nData preprocessing: We perform the usual per-pixel zero-mean and unit-variance normalization on MNIST; the preprocessing of Avila is described in Appendix D. Model: fully-connected networks."
}, { "heading": "D MAIN EXPERIMENTS", "text": "Figure 10, 11, and 12 respectively validate that the exponential relation of the escape rate with the Hessian, the batch size and the learning rate." }, { "heading": "D.1 EXPERIMENTAL SETTINGS", "text": "Datasets: a) Avila, b) Banknote Authentication, c) Cardiotocography, d) Dataset for Sensorless Drive Diagnosis.\nData Precessing: We perform per-pixel zero-mean and unit-variance normalization on input data. For simplicity, we also transform multi-class problems into binary-class problems by grouping labels, although this is unnecessary.\nModel: Two-layer fully-connected networks with one hidden layer and 10 neurons per hidden layer.\nInitializations: To ensure the initialized models are near minima, we first pretrain models with 200-1000 epochs to fit each data set as well as possible. We set the pretrained models’ parameters as the initialized θt=0.\nValleys’ Boundary: In principle, any small neighborhood around θt=0 can be regarded as the inside of the start valleys. In our experiments, we set each dimension’s distance from θt=0 should be less than 0.05, namely |∆θi| ≤ 0.05 for each dimension i. If we rescale the landscape by a factor k, the neighborhood will also be rescaled by k. Although we don’t know which loss valleys exist inside the neighborhood, we know the landscape of the neighborhood is invariant in each simulation.\nHyperparameters: In Figure 10: (a) η = 0.001, B = 1, (b) η = 0.015, B = 1, (c) η = 0.005, B = 1, (d) η = 0.0005, B = 1. In Figure 11: (a) η = 0.02, (b) η = 0.6, (c) η = 0.18, (d) η = 0.01. In Figure 12: (a) B = 1, (b) B = 1, (c) B = 1, (d) B = 1. In Figure 13: (a) η = 0.0002, B = 100, (b) η = 0.001, B = 100, (c) η = 0.0002, B = 100, (d) η = 0.0001, B = 100. In Figure 14: (a) η = 0.0002, B = 100, D = 0.0002, (b) η = 0.001, B = 100, D = 0.0001, (c) η = 0.0002, B = 100, D = 0.0005, (d) η = 0.0001, B = 100, D = 0.0003. We note that the hyperparameters need be tuned for each initialized pretrained models, due to the stochastic property of deep learning.\nAccording to our experience, we can always find the hyperparameters to discover the quantitative relations as long as the pretrained model fits the data set well enough. The fined-tuned requirement can be avoided in Section E, because the models in Section E are artificially initialized.\nObservation: we observe the number of iterations from the initialized position to the terminated position. We repeat experiments 100 times to estimate the escape rate γ and the mean escape time τ . As the escape time is a random variable obeying an exponential distribution, t ∼ Exponential(γ), the estimated escape rate can be written as\nγ̂ = 100− 2∑100 i=1 ti . (84)\nThe 95% confidence interval of this estimator is\nγ̂(1− 1.96√ 100 ) ≤ γ̂ ≤ γ̂(1 + 1.96√ 100 ). (85)" }, { "heading": "D.2 EXPERIMENTS ON SGLD", "text": "Experimental Results: Figure 13 shows a highly precise exponential relation of the escape rate and the diffusion coefficient in the figure. Figure 14 shows a proportional relation of the escape rate and the Hessian determinant in the figure. Overall, the empirical results support the density diffusion theory in the dynamics of white noise. In experiments on SGLD, we carefully adjust the injected gradient noise scale in experiment to ensure that D is significantly smaller than the loss barrier’ height and large enough to dominate SGN scale. If D is too large, learning dynamics will be reduced to Free Brownian Motion." 
}, { "heading": "E EXPERIMENTS ON MORE MODELS", "text": "We supply experiments of training three models on artificial Gaussian datasets. In these experiments, we can analytically know the locations of the minima, Hessians and loss barriers, as each input feature is Gaussian noise." }, { "heading": "E.1 EXPERIMENTS SETTINGS", "text": "Data Set: We generate 50000 Gaussian samples and random two-class labels as the training data set, {(x(i), y(i))|x(i) ∼ N (0, I), y(i) ∈ {0, 1}, i ∈ {1, 2, · · · , 50000}}. Hyperparameters: In Figure 15: (a) η = 0.0001, B = 100, (b) η = 0.001, B = 100, (c) η = 0.0003, B = 100. In Figure 16: (a) η = 0.0001, B = 50, D = 0.2, (b) η = 0.001, B = 50, D = 0.0005, (c) η = 0.0003, B = 1, D = 0.0003. In Figure 17: (a) η = 0.006, B = 50, (b) η = 0.05, B = 50, (c) η = 0.005, B = 1. In Figure 18: (a) η = 0.006, (b) η = 0.06, (c) η = 0.1. In Figure 19: (a) B = 1, (b) B = 1, (c) B = 1. We note that the hyperparameters are recommended and needn’t be fine tuned again. The artificially initialized parameters avoids the stochastic property of the initial states.\nExperiment Setting 1: Styblinski-Tang Function is a commonly used function in nonconvex optimization, written as\nf(θ) = 1\n2 n∑ i=1 (θ4i − 16θ2i + 5θi).\nWe use high-dimensional Styblinski-Tang Function as the test function, and Gaussian samples as training data.\nL(θ) = f(θ − x),\nwhere data samples x ∼ N (0, I). The one-dimensional Styblinski-Tang Function has one global minimum located at a = −2.903534, one local minimum located at d, and one saddle point b = 0.156731 as the boundary separating Valley a1 and Valley a2. For a n-dimensional Styblinski-Tang Function, we initialize parameters θt=0 = 1√k (−2.903534, · · · ,−2.903534), and set the valley’s boundary as θi < 1√k0.156731, where i is the dimension index. We record the number of iterations required to escape from the valley to the outside of valley. The setting 1 does not need labels.\nExperiment Setting 2: We study the learning dynamics of Logistic Regression. Parameters Initialization: θt=0 = (0, · · · , 0). Valley Boundary: −0.1 < θi < 0.1. Due to the randomness of training data and the symmetry of dimension, the origin must be a minimum and there are a lot unknown valleys neighboring the origin valley. And we can set an arbitrary boundary surrounding the origin valley group, and study the mean escape time from the group of valleys.\nExperiment Setting 3: We study the learning dynamics of MLP with ReLu activations, cross entropy losses, depth as 3, and hidden layers’ width as 10. Parameters Initialization: θt=0 = (0.1, · · · , 0.1) with a small Gaussian noise = (0, 0.01I). Valley Boundary: 0.05 < θi < 0.15. To prevent the gradient disappearance problem of deep learning, we move the starting point from the origin. For symmetry breaking of deep learning, we add a small Gaussian noise to each parameter’s initial value. Due to the complex loss landscape of deep networks, we can hardly know the exact information about valleys and cols. However, the escape formula can still approximately hold even if an arbitrary boundary surrounding an arbitrary group of valleys. We set the batch size as 1 in this setting. When the batch size is small, the gradient noise is more like a heavy-tailed noise. We can validate whether or not the propositions can hold with very-small-batch gradient noise in practice." }, { "heading": "E.2 EXPERIMENTS RESULTS", "text": "Figure 15 shows the relation of the escape rate and the isotropic diffusion coefficient D. 
Figure 16 shows the relation between the escape rate and the Hessian determinant in the dynamics of white noise. Figure 17 shows the relation between the escape rate and the second-order directional derivative in the dynamics of SGD. Figure 18 shows the relation between the escape rate and the batch size in the dynamics of SGD. Figure 19 shows the relation between the escape rate and the learning rate in the dynamics of SGD." } ]
2,021
A DIFFUSION THEORY FOR DEEP LEARNING DYNAMICS
SP:d92fe94e29672783f906710a2ecb7a02aa4bd67d
[ "The value of the optimal objective as a function of the cost vector $c$ can be written as $z^*(c) = c^T u^*(c)$ where the optimal solution $u^*$ also depends on $c$. The function $u^*(c)$ is piecewise constant -- there are finitely (resp. countably) many feasible solutions; candidates for $u^*$ -- and so the function $z^*(c)$ is a piecewise linear function of $c$, with gradient $u^*(c)$, wherever it exists (otherwise there is analogous subgradient). Obviously, all it takes for computing $u^*(c)$ is solving -- anyhow -- the combinatorial problem. This is all trivial and well-known, yet the authors do precisely that." ]
Combinatorial problems with a linear objective function play a central role in many computer science applications, and efficient algorithms for solving them are well known. However, the solutions to these problems are not differentiable with respect to the parameters specifying the problem instance – for example, the shortest distance between two nodes in a graph is not a differentiable function of graph edge weights. Recently, attempts to integrate combinatorial and, more broadly, convex optimization solvers into gradient-trained models resulted in several approaches for differentiating over the solution vector to the optimization problem. However, in many cases, the interest is in differentiating over only the objective value, not the solution vector, and using existing approaches introduces unnecessary overhead. Here, we show how to perform gradient descent directly over the objective value of the solution to combinatorial problems. We demonstrate the advantage of the approach in examples involving sequence-to-sequence modeling using a differentiable encoder-decoder architecture with softmax or Gumbel-softmax, and in weakly supervised learning involving a convolutional, residual feed-forward network for image classification.
[]
[ { "authors": [ "Akshay Agrawal", "Brandon Amos", "Shane Barratt", "Stephen Boyd", "Steven Diamond", "J Zico Kolter" ], "title": "Differentiable convex optimization layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Brandon Amos", "J Zico Kolter" ], "title": "Optnet: Differentiable optimization as a layer in neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yuri Boykov", "Olga Veksler", "Ramin Zabih" ], "title": "Fast approximate energy minimization via graph cuts", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2001 }, { "authors": [ "Chien-Yi Chang", "De-An Huang", "Yanan Sui", "Li Fei-Fei", "Juan Carlos Niebles" ], "title": "D3TW: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Liqun Chen", "Yizhe Zhang", "Ruiyi Zhang", "Chenyang Tao", "Zhe Gan", "Haichao Zhang", "Bai Li", "Dinghan Shen", "Changyou Chen", "Lawrence Carin" ], "title": "Improving sequence-to-sequence learning via optimal transport", "venue": "In International Conference on Learning Representations, pp", "year": 2019 }, { "authors": [ "Frank H Clarke" ], "title": "Generalized gradients and applications", "venue": "Transactions of the American Mathematical Society,", "year": 1975 }, { "authors": [ "Thomas H. Cormen", "Charles E. Leiserson", "Ronald L. Rivest", "Clifford Stein" ], "title": "Introduction to Algorithms, Third Edition", "venue": null, "year": 2009 }, { "authors": [ "Daniel De Wolf", "Yves Smeers" ], "title": "Generalized derivatives of the optimal value of a linear program with respect to matrix coefficients", "venue": "Technical report. Université Catholique de Louvain,", "year": 2000 }, { "authors": [ "Josip Djolonga", "Andreas Krause" ], "title": "Differentiable learning of submodular models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "LawrenceCraig Evans" ], "title": "Measure theory and fine properties of functions", "venue": null, "year": 1992 }, { "authors": [ "Aaron Ferber", "Bryan Wilder", "Bistra Dilina", "Milind Tambe" ], "title": "MIPaaL: Mixed integer program as a layer", "venue": "arXiv preprint arXiv:1907.05912,", "year": 2019 }, { "authors": [ "Robert M Freund" ], "title": "Postoptimal analysis of a linear program under simultaneous changes in matrix coefficients", "venue": "In Mathematical Programming Essays in Honor of George B. 
Dantzig Part I,", "year": 1985 }, { "authors": [ "Tomas Gal" ], "title": "Rim multiparametric linear programming", "venue": "Management Science,", "year": 1975 }, { "authors": [ "Zhiting Hu", "Haoran Shi", "Bowen Tan", "Wentao Wang", "Zichao Yang", "Tiancheng Zhao", "Junxian He", "Lianhui Qin", "Di Wang" ], "title": "Texar: A modularized, versatile, and extensible toolkit for text generation", "venue": "arXiv preprint arXiv:1809.00794,", "year": 2018 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with Gumbel-softmax", "venue": "In International Conference on Learning Representations ICLR’17", "year": 2016 }, { "authors": [ "Stefan Lendl", "Ante Ćustić", "Abraham P Punnen" ], "title": "Combinatorial optimization with interaction costs: Complexity and solvable cases", "venue": "Discrete Optimization,", "year": 2019 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "In Text summarization branches out,", "year": 2004 }, { "authors": [ "Di Lin", "Jifeng Dai", "Jiaya Jia", "Kaiming He", "Jian Sun" ], "title": "Scribblesup: Scribble-supervised convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Huidong Liu", "Xianfeng Gu", "Dimitris Samaras" ], "title": "A two-step computation of the exact GAN Wasserstein distance", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Dmitrii Marin", "Meng Tang", "Ismail Ben Ayed", "Yuri Boykov" ], "title": "Beyond gradient descent for regularized segmentation losses", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "R Kipp Martin" ], "title": "Using separation algorithms to generate mixed integer model reformulations", "venue": "Operations Research Letters,", "year": 1991 }, { "authors": [ "Arthur Mensch", "Mathieu Blondel" ], "title": "Differentiable dynamic programming for structured prediction and attention", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Saul B Needleman", "Christian D Wunsch" ], "title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins", "venue": "Journal of molecular biology,", "year": 1970 }, { "authors": [ "Nicholas J Redding", "Tom Downs" ], "title": "Learning in feedforward networks with nonsmooth functions", "venue": "In Advances in Neural Information Processing Systems,", "year": 1992 }, { "authors": [ "Michal Rolı́nek", "Paul Swoboda", "Dominik Zietlow", "Anselm Paulus", "Vı́t Musil", "Georg Martius" ], "title": "Deep graph matching via blackbox differentiation of combinatorial solvers", "venue": "arXiv preprint arXiv:2003.11657,", "year": 2020 }, { "authors": [ "Alexander Schrijver" ], "title": "Combinatorial optimization: polyhedra and efficiency, volume 24", "venue": "Springer Science & Business Media,", "year": 2003 }, { "authors": [ "Sebastian Tschiatschek", "Aytunc Sahin", "Andreas Krause" ], "title": "Differentiable submodular maximization", "venue": "In Proceedings of the 27th International Joint 
Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Marin Vlastelica Pogančić", "Anselm Paulus", "Vit Musil", "Georg Martius", "Michal Rolinek" ], "title": "Differentiation of blackbox combinatorial solvers", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Bryan Wilder", "Bistra Dilkina", "Milind Tambe" ], "title": "Melding the data-decisions pipeline: Decision-focused learning for combinatorial optimization", "venue": "In The Thirty-Third Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Laurence Wolsey" ], "title": "Strong formulations for mixed integer programming: a survey", "venue": "Mathematical Programming,", "year": 1989 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In International Conference on Learning Representations ICLR'17", "year": 2016 }, { "authors": [ "Shuai Zheng", "Sadeep Jayasumana", "Bernardino Romera-Paredes", "Vibhav Vineet", "Zhizhong Su", "Dalong Du", "Chang Huang", "Philip HS Torr" ], "title": "Conditional random fields as recurrent neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 1529–1537,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Combinatorial optimization problems, such as shortest path in a weighted directed graph, minimum spanning tree in a weighted undirected graph, or optimal assignment of tasks to workers, play a central role in many computer science applications. We have highly refined, efficient algorithms for solving these fundamental problems (Cormen et al., 2009; Schrijver, 2003). However, while we can easily find, for example, the minimal spanning tree in a graph, the total weight of the tree as function of graph edge weights is not differentiable. This problem hinders using solutions to combinatorial problems as criteria in training models that rely on differentiability of the objective function with respect to the model parameters.\nLosses that are defined by objective value of some feasible solution to a combinatorial problem, not the optimal one, have been recently proposed for image segmentation using deep models (Zheng et al., 2015; Lin et al., 2016). These focus on a problem where some pixels in the image have segmentation labels, and the goal is to train a convolutional network that predicts segmentation labels for all pixels. For pixels with labels, a classification loss can be used. For the remaining pixels, a criterion based on a combinatorial problem – for example the maximum flow / minimal cut problem in a regular, lattice graph connecting all pixels (Boykov et al., 2001) or derived, higher-level super-pixels (Lin et al., 2016) – is often used as a loss, in an iterative process of improving discrete segmentation labels (Zheng et al., 2015; Marin et al., 2019). In this approach, the instance of the combinatorial problem is either fixed, or depends only on the input to the network; for example, similarity of neighboring pixel colors defines edge weights. The output of the neural network gives rise to a feasible, but rarely optimal, solution to that fixed instance a combinatorial problem, and its quality is used as a loss. For example, pixel labeling proposed by the network is interpreted as a cut in a pre-defined graph connecting then pixels. Training the network should result in improved cuts, but no attempt to use a solver to find an optimal cut is made.\nHere, we are considering a different setup, in which each new output of the neural network gives rise to a new instance of a combinatorial problem. A combinatorial algorithm is then used to find the optimal solution to the problem defined by the output, and the value of the objective function of\nthe optimal solution is used as a loss. After each gradient update, the network will produce a new combinatorial problem instance, even for the same input sample. Iteratively, the network is expected to learn to produce combinatorial problem instances that have low optimal objective function value. For example, in sequence-to-sequence modeling, the network will output a new sentence that is supposed to closely match the desired sentence, leading to a new optimal sequence alignment problem to be solved. Initially, the optimal alignment will be poor, but as the network improves and the quality of the output sentences get higher, the optimal alignment scores will be lower.\nRecently, progress in integrating combinatorial problems into differentiable models have been made by modifying combinatorial algorithms to use only differentiable elements (Tschiatschek et al., 2018; Mensch & Blondel, 2018; Chang et al., 2019), for example smoothed max instead of max in dynamic programming. 
Another approach involves executing two runs of a non-differentiable, black-box combinatorial algorithm and uses the two solutions to define a differentiable interpolation (Vlastelica Pogančić et al., 2020; Rolı́nek et al., 2020). Finally, differentiable linear programming and quadratic programming layers, which can be used to model many combinatorial problems, have been proposed recently (Amos & Kolter, 2017; Agrawal et al., 2019; Wilder et al., 2019; Ferber et al., 2019).

The approaches above allow for differentiating through optimal solution vectors. In many cases, we are interested only in the optimal objective value, not the solution vector, and the approaches above introduce unnecessary overhead. We propose an approach for gradient-descent based training of a network F(x;β) for supervised learning problems involving samples (x, y) with the objective criterion involving a loss term of the form L(β) = h(OptSolutionObjectiveValue(Π(F(x;β), y))), where h : R → R is some differentiable function, and Π is a combinatorial solver for a problem instance defined by the output of the β-parameterized network F for feature vector x and by the true label y. We show that a broad class of combinatorial problems can be integrated into models trained using variants of gradient descent. Specifically, we show that for an efficiently solvable combinatorial problem that can be efficiently expressed as an integer linear program, generalized gradients of the problem's objective value with respect to real-valued parameters defining the problem exist and can be efficiently computed from a single run of a black-box combinatorial algorithm. Using the above result, we show how generalized gradients of combinatorial problems can provide sentence-level loss for text summarization using differentiable encoder-decoder models that involve softmax or Gumbel softmax (Jang et al., 2016), and a multi-element loss for training classification models when only weakly supervised, bagged training data is available." }, { "heading": "2 DIFFERENTIABLE COMBINATORIAL LOSSES", "text": "" }, { "heading": "2.1 BACKGROUND ON GENERALIZED GRADIENTS", "text": "A function f : X → R defined over a convex, bounded open set X ⊆ Rp is Lipschitz continuous on an open set B ⊆ X if there is a finite K ∈ R such that ∀x, y ∈ B, |f(x) − f(y)| ≤ K||x − y||. A function is locally Lipschitz-continuous if for every point x0 in its domain, there is a neighborhood B0, an open ball centered at x0, on which the function is Lipschitz-continuous. For such functions, a generalized gradient can be defined.\nDefinition 1. (Clarke, 1975) Let f : X → R be Lipschitz-continuous in the neighborhood of x ∈ X. Then, the Clarke subdifferential ∂f(x) of f at x is defined as\n∂f(x) = conv { lim_{x_k → x} ∇f(x_k) },\nwhere the limit is over all convergent sequences involving those x_k for which the gradient exists, and conv denotes convex hull, that is, the smallest convex set that contains all vectors from a given set. Each element of the set ∂f(x) is called a generalized gradient of f at x.\nThe Rademacher theorem (see e.g. (Evans, 1992)) states that for any locally Lipschitz-continuous function the gradient exists almost everywhere, so the convergent sequences in Definition 1 can be found.\nIn optimization algorithms, generalized gradients can be used in the same way as subgradients (Redding & Downs, 1992), that is, nondifferentiability may affect convergence in certain cases.
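To make Definition 1 concrete, consider the standard example f(x) = |x| on R: the gradient is −1 for x < 0 and +1 for x > 0, so the convergent gradient sequences at x = 0 have limits −1 and +1, giving ∂f(0) = conv{−1, 1} = [−1, 1], while ∂f(x) = {sign(x)} at any x ≠ 0. The optimal objective value of a linear program behaves analogously as a function of its cost vector: it is piecewise linear, and at the kinks the generalized gradient is the convex hull of the optimal solution vectors."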
}, { "heading": "2.2 GRADIENT DESCENT OVER COMBINATORIAL OPTIMIZATION", "text": "Many combinatorial problems have linear objective function and can be intuitively expressed as integer linear programs (ILP), that is, linear programs with additional constraint that the solution vector involves only integers. Any ILP can be reduced to a linear program. Consider an ILP\nz∗ = ILP (c, A′, b′) := minu c Tu s.t. A′u = b′, u ≥ 0, u ∈ Zp,\nwith an optimal solution vector u∗ and optimal objective value z∗. Then, there exists a corresponding linear program LP (c, A, b)\nz∗ = LP (c, A, b) := minu c Tu s.t. Au = b, u ≥ 0,\ncalled ideal formulation (Wolsey, 1989), for which u∗ is also an optimal solution vector, with the same objective value z∗. For a feasible, bounded p-dimensional integer program, we can view the pair (A′, b′) as a convex polyhedron A′, the set of all feasible solutions. Then, the pair (A, b) in the ideal formulation LP is defined as the set of constraints specifying the feasible set A = conv {A′ ∩ Zp}. Convex hull of a subset of a convex set A′ cannot extend beyond A′, thus, A is convex, contains all integer solutions from A′, and no other integer solutions. The number of linear constraints in the ideal formulation may be exponential in p, and/or in m, the number of the original constraints in A′. Thus, the existence of the ideal formulation LP for an ILP may not have practical utility for solving the ILP.\nFor a combinatorial problem and its corresponding ILP, we use the ideal formulation of the ILP as a conceptual tool to define generalized gradient of the objective value of the optimal solution to the combinatorial problem with respect to the parameters defining the combinatorial problem. Specifically, our approach first uses a single run of an efficient, black-box combinatorial algorithm to produce the optimal solution vector and the associated objective value. Then, the combinatorial problem is conceptually viewed as an instance of an ILP. A possibly exponentially large linear program (LP) equivalent to the ILP is then used, without actually being spelled out or solved, to derive generalized gradients based on the solution vector returned by the combinatorial algorithm.\nFirst, we introduce several notions of efficiency of transforming a combinatorial problem into a linear integer program that will be convenient in defining the generalized gradients of combinatorial problems. Definition 2. Let P (w) be a combinatorial problem that is parameterized by a continuous vector w ∈ W ⊆ Rn, whereW is simply connected and n is the problem size, and let k ∈ Z be a constant that may depend on the problem type but not on its size. 
Then, a combinatorial problem is

• primal-dual ∂-efficient if it can be phrased as an integer linear program involving n variables, with kn constraints in an LP formulation equivalent to the ILP, and the parameters (A, b, c) of the LP formulation depend on w through (sub)differentiable functions, c = c(w), A = A(w), b = b(w).

• primal ∂-efficient if it can be phrased as an integer linear program involving n variables, the parameters w of the problem influence the cost vector c through a (sub)differentiable function c = c(w), and do not influence the constraints A, b.

• dual ∂-efficient if it can be phrased as an integer linear program in which the number of constraints in the equivalent LP formulation is kn, the parameters w of the problem influence b through a (sub)differentiable function b = b(w), and do not influence the constraint matrix A nor the cost vector c.

The class of ∂-efficient problems includes polynomially solvable combinatorial problems with an objective function that is linear in the problem parameters. Typically, the functions c = c(w), b = b(w) and A = A(w) are either identity mappings or constants; for example, in the LP for maximum network flow, the cost vector c is composed directly of edge capacities, and A and b are constant for a given flow network topology, and do not depend on capacities.

For any polynomially solvable combinatorial problem, we can construct a poly(n)-sized Boolean circuit for the algorithm solving it. For each poly(n)-sized circuit, there is a linear program with poly(n) variables and constraints that gives the same solution (see (Dasgupta et al., 2008), Chap. 7). For example, for MST in a graph with V vertices and E edges, Martin's ILP formulation (Martin, 1991) has only poly(V + E) constraints, but it is an extended formulation that involves V E additional variables on top of the typical E variables used in the standard ILP formulations for MST. Thus, we cannot use it to construct an ILP formulation that would make MST primal-dual ∂-efficient. Alternatively, there is an ILP for MST with one binary variable per edge, and the weight of the edge only influences the cost vector c, but to prohibit cycles in the solution there is a constraint for each cycle in the graph, thus the number of constraints is not poly(n) for arbitrary graphs. These constraints are specified fully by the topology of the graph, not by the edge weights, so w does not influence A nor b, meeting the conditions for primal ∂-efficiency. The MST example shows that there are problems that are primal ∂-efficient and not primal-dual ∂-efficient.

Some polynomially solvable combinatorial problems are not ∂-efficient in any of the above senses. For example, fixed-rank combinatorial problems with interaction costs (Lendl et al., 2019) can be phrased succinctly as a bilinear program, but lead to prohibitively large linear programs both in terms of the number of variables and the number of constraints.

For ∂-efficient problems, we can efficiently obtain generalized gradients of the objective value. Theorem 1. Consider a combinatorial problem P(w) of size n, a parameter vector w from the interior of the parameter domain W, and an algorithm Π(w) for solving it in time poly(n). Let z∗ be the optimal objective value returned by Π.
Then,

• if P is primal ∂-efficient, then the generalized gradients ∂z∗(w) exist, and can be efficiently computed from U∗, the set of primal solutions of the ideal formulation of the integer program corresponding to P;

• if P is dual ∂-efficient, then the generalized gradients ∂z∗(w) exist, and can be efficiently computed from V∗, the set of all dual solutions to the ideal formulation of the integer program corresponding to P;

• if P is primal-dual ∂-efficient, then the generalized gradients ∂z∗(w) exist, and can be efficiently computed from U∗ and V∗, as defined above.

Proof. A series of results (Gal, 1975; Freund, 1985; De Wolf & Smeers, 2000) shows that if the optimal objective value z∗ = LP(c, A, b) for a linear program is finite at (c, A, b) and in some neighborhood of (c, A, b), then generalized gradients of z∗ with respect to c, b, and A exist and are

∂z∗(c) = U∗, ∂z∗(b) = V∗, ∂z∗(A) = { −vu^T : (u, v) ∈ U∗ × V∗ }.

We build on these results to obtain generalized gradients of the linear program corresponding to the combinatorial problem. For the first case in the theorem, Definition 2 states that in the linear program corresponding to P, only the cost vector c depends on w, through a (sub)differentiable function c = c(w). Since w is in the interior of the parameter domain W, the objective value is finite over some neighborhood of w. Then,

∂z∗(w) = ∂z∗(c) ∂c/∂w = (∂c/∂w) U∗,

where the generalized gradient ∂z∗(c) exists and is equal to U∗. For the second case, the ideal formulation LP exists. Then, from Definition 2 we have that

∂z∗(w) = ∂z∗(b) ∂b/∂w = (∂b/∂w) V∗.

The third case is a direct extension of the first two cases.

Theorem 1 indicates that black-box combinatorial algorithms can be used to expand the range of transformations that can be efficiently utilized in neural networks. One immediate area of application is using them to specify a loss function. Consider a network F(x;β) parameterized by a vector of tunable parameters β. The network transforms a batch of input samples x into a batch of outputs χ = F(x;β). Then, in the broadest primal-dual ∂-efficient case, χ is used, possibly with the true classes y, to formulate parameters (c, A, b) = g(χ, y) of a linear program corresponding to the combinatorial problem, through some (sub)differentiable function g. For
For clarity, in Algorithm 1, we did not consider functions h depending not just on z but also on x or y, but the extension is straightforward." }, { "heading": "3 EXAMPLE USE CASES AND EXPERIMENTAL VALIDATION", "text": "" }, { "heading": "3.1 DIFFERENTIATING OVER BIPARTITE MATCHING FOR WEAKLY-SUPERVISED LEARNING", "text": "To illustrate gradient descent over a combinatorial loss, we first focus on a simple image recognition problem. Consider a photo of a group of people with a caption listing each of the persons in the picture, but missing the ”from left to right” part. Given a collection of such labeled photos, can a model learn to recognize individual faces? Similarly, consider a shopping cart and a printout from the register. Given a collection of unordered shopping carts together with matching receipts, can a model learn to recognize individual shopping items? These are example of a weakly-supervised learning where the goal is to learn to classify previously unseen feature vectors, but a training sample is a bag of feature vectors accompanied by a bag of correct labels, instead of a feature-vector and a correct label. We are not told which class belongs to which sample, which prevents us from directly using the standard cross-entropy loss.\nMore formally, consider a d-class classification problem, and a model F (xj ;β) that for sample xj returns a d-dimensional vector of class probabilities, pj , with pcj denoting the predicted conditional probability of class c given feature vector xj . Let yj denote a d-dimensional, one-hot representation of the true class label of sample xj , with ycj = 1 if sample j is of class c, and zero otherwise. In weakly supervised learning involving bags of size b, we are given a tuple of b feature vectors, X = (xj) b j=1,\nand a tuple of permuted labels Y = ( yσ(i) )b i=1\nas one-hot-vectors, for some permutation σ; we will refer to the j-th element of the tuple Y as Yj . The permutation σ is unknown, thus using a loss `(pj , Yj) = `(pj , yσ(i)) to compare predicted distribution over classes for sample j with one-hot representation of j-th element in the randomly ordered set of true classes Yj makes no sense, since most likely i 6= j; Yj = yσ(i) is the class for some other sample i, not for sample j. While the permutation is unknown, with repeated presentation of bags of samples and bags of corresponding labels, we do have some information connecting the feature vector to classes. Intuitively, we can try to match model’s outputs for feature vectors in the bag to the class labels using the information in the probability distribution pj over classes provided by the model for each feature vector xj . That is, we can aim to find permutation σ̂ optimal in the average loss sense minσ̂ ∑b j=1 `(pj , σ̂(Y )j). If the class conditional probabilities pj resulting from the model perfectly match the one-hot vectors, the optimal σ̂ will be the inverse of the permutation σ, that is, σ̂(Y )j = yj .\nAlgorithm 2 Loss based on bipartite matching for weakly-supervised image classification\nInput: X = (xj) b j=1 – bag of b input images; Y = (Yk) b k=1 – a set of b sample classes to match, in\none-hot representation, in arbitrary order; β – ResNet18 network weights. 
Output: Loss (optimal matching cost) and its generalized gradient, L(β), ∂L(β)

1: procedure MATCHBAG(X, Y, β)
2: forward pass, class probabilities p_j = softmax(ResNet18(x_j;β)) for j = 1, ..., b
3: forward pass, cross-entropy for all image-label pairs C_jk = −〈log p_j, Y_k〉 for j, k = 1, ..., b
4: optimal matching cost and matching matrix: z∗, M∗ = OptMatching(C), i.e., M∗ = argmin_M 〈C, M〉_F, z∗ = 〈C, M∗〉_F
5: final loss: cost of optimal matching L(β) = z∗
6: backward pass through bipartite matching: ∂z∗(C) = M∗
7: backward pass through cross-entropy, softmax and ResNet18: ∂L(β) = M∗ ∂C/∂β
8: return L(β), ∂L(β)
9: end procedure

A b-element permutation can be represented by a b × b permutation matrix M. To find M, we define a b × b matrix C with C_jk = ℓ(p_j, Y_k), where ℓ represents the cross-entropy loss ℓ(p, y) = −〈log p, y〉, with the logarithm applied element-wise. The elements C_jk correspond to edge weights in a bipartite graph with the feature vectors x processed by the neural network on one side, and labels y on the other side. We use a combinatorial solver, for example the Hungarian method with computational complexity O(b^3), to find the permutation matrix M∗ = argmin_M 〈C, M〉_F minimizing the Frobenius inner product of C and M. The procedure is outlined in Algorithm 2.

To test the approach, we used the CIFAR100 benchmark image dataset. As a baseline, we trained 5 independent fully supervised models with ResNet18 architecture (Zagoruyko & Komodakis, 2016) (see Supplementary Material for details), that is, models where each image is a separate sample with its true class available for loss calculation. To evaluate the ability of our method to provide gradients of a combinatorial loss defined by weighted matching, during training we explored image bags of samples consisting of b=4, 8, 12, 16, 24, or 32 images, and including correct but shuffled image labels. We trained 5 independent models for each bag size with the loss and its gradient provided using Algorithm 2. To avoid situations where the combinatorial loss is superficially aided by bags with mostly one class, we ignored any bag that has fewer than 75% distinct classes; that is, for bags of size 8, we only consider bags that consist of at least 6 different classes. During testing, same as in the baseline model experiments, each image had the matching label available for test error calculations. For comparison, we trained a model with the same setup of image bags using cvxpylayers (Agrawal et al., 2019), a recently proposed method for differentiable layers defined by conic programs. In contrast to our approach, which uses a combinatorial algorithm and relies on the LP formulation of the weighted bipartite matching only conceptually, for the definition of gradients, cvxpylayers solves the linear program in order to obtain gradients. We also trained the same model using a recently proposed approach to approximate gradients of the optimal solution vector, not the optimal objective value, of a combinatorial problem (Vlastelica Pogančić et al., 2020); we used the same combinatorial solver as in the experiments with our method.

Test error for CIFAR100 of the training set reshuffled into bags after each epoch (Fig. 1, left) shows that for bag sizes up to twelve elements, weak supervision through weighted bipartite graph matching is almost as effective as supervised learning with the true label available for each individual image, that is, a bag of size one.
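A minimal PyTorch sketch of the loss in Algorithm 2, assuming SciPy's Hungarian solver (scipy.optimize.linear_sum_assignment); by Theorem 1 the optimal matching matrix M∗ is a generalized gradient of the matching cost with respect to C, so the backward pass only replays M∗. The backbone network is omitted and the shapes are illustrative assumptions:

import torch
from scipy.optimize import linear_sum_assignment

class MatchingLoss(torch.autograd.Function):
    # optimal bipartite matching cost z* = <C, M*>_F with dz*/dC = M*

    @staticmethod
    def forward(ctx, C):
        rows, cols = linear_sum_assignment(C.detach().cpu().numpy())
        M = torch.zeros_like(C)
        M[rows, cols] = 1.0                 # optimal permutation matrix M*
        ctx.save_for_backward(M)
        return (C * M).sum()

    @staticmethod
    def backward(ctx, grad_out):
        (M,) = ctx.saved_tensors
        return grad_out * M                 # generalized gradient w.r.t. C

def bag_loss(logits, one_hot_labels):
    # C[j, k] = cross-entropy of prediction j against label k (Algorithm 2)
    log_p = torch.log_softmax(logits, dim=1)
    C = -(log_p @ one_hot_labels.t())
    return MatchingLoss.apply(C)

# usage: logits for a bag of b images, labels in shuffled order
b, d = 8, 100
logits = torch.randn(b, d, requires_grad=True)
labels = torch.eye(d)[torch.randperm(d)[:b]]
loss = bag_loss(logits, labels)
loss.backward()                             # gradients flow back to the logits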
Training using the bipartite matching loss was implemented in three different ways: through the interpolated combinatorial gradients proposed in (Vlastelica Pogančić et al., 2020), through the differentiable LP approach (cvxpylayers), and through the proposed approach for obtaining gradients of the objective value. All three approaches lead to very similar error rates (Fig. 1, left), indicating that these three ways of obtaining gradients provide a similar training signal to the network. The two methods that use combinatorial solvers are much more efficient than the LP solver-based cvxpylayers (Fig. 1, right). The performance of the LP-based method decreases for very small bag sizes, where each epoch has a large number of individual problems to solve, as well as for large bag sizes, where each problem to be solved involves more computation. Among the two methods using the same combinatorial solver, our proposed method is twice as fast as the interpolation method of (Vlastelica Pogančić et al., 2020), which requires solving a combinatorial problem not only in the forward pass, but also in the backward pass in order to obtain gradients of the solution vector. These results show that the generalized gradient over combinatorial optimization is effective in providing a training signal to train a large neural network, and can do it much faster than the state-of-the-art alternative approaches." }, { "heading": "3.2 DIFFERENTIATING OVER GLOBAL SEQUENCE ALIGNMENT FOR SENTENCE-LEVEL LOSS IN SEQUENCE-TO-SEQUENCE MODELS", "text": "
Global sequence alignment (Needleman & Wunsch, 1970) is a combinatorial problem in which two sequences are aligned by choosing, at each position, to either match a token from one sequence to a token from the other, or to introduce a gap in one or the other sequence; each choice has a cost (see Fig. 2). In sequence-to-sequence modeling, the cost of matching the decoder’s output from position i to the target sequence token as position k will be given by 〈− log pi, yk〉. The cost of a gap, that is, of a horizontal or a vertical move in Fig. 2, is specified in a way that promotes closing of the gap; we use the cost of diagonal move from that position as the cost of the gap, multiplied by a scalar γ > 1 to prioritize closing the gaps over improving the matchings. In our experiments, we used γ = 1.5. The GSA problem can stated as a linear program\nwith p variables and m+ 1 constraints, with the costs of the moves forming the right-hand side of the constraints. Thus, by Theorem 1, the generalized gradient of the minimum global sequence alignment with respect to matching and gap costs is efficiently available.\nIn experiments involving global sequence alignment in sequence-to-sequence models, we used an encoder-decoder sequence-to-sequence architecture with bidirectional forward-backward RNN encoder and an attention-based RNN decoder (Luong et al., 2015), as implemented in PyTorch-Texar (Hu et al., 2018). While this architecture is no longer the top performer in terms of ROUGE metric – currently, large pre-trained self-attention models are the state-of-the-art – it is much more efficient in training, allowing for experimenting with different loss functions. During inference, we used beam search. During training, to have a differentiable decoder, we use two alternative approaches. First, we feed the probabilities resulting from the softmax layer applied to the outputs of the RNN directly as the recursive inputs to the RNN. Second, inputs to the RNN are provided by the straight-through Gumbel-softmax distribution (Jang et al., 2016) based on the outputs of the RNN, which is an approximation of the categorical distribution from which one-hot, single-token outputs are sampled. In both cases, as a baseline for comparisons with the GSA-based loss, we use word-level maximum likelihood, that is, cross-entropy between the probability vector on output of the softmax layer of the RNN and the desired target word at that position. In evaluating the combinatorial GSA loss, we used text summarization task involving the GIGAWORD dataset (Graff & Cieri, 2003) as an example of a sequence-to-sequence problem. We used test set ROUGE 1, 2, and L scores (Lin, 2004) as the measure of quality of the summarizations.\nThe results in Table 1 show that the GSA-based loss leads to improved text summarization results in all three ROUGE metrics compared to position-specific cross-entropy maximum likelihood training, both for the softmax and the Gumbel-softmax approach for providing the recursive input to the RNN in a differentiable way. The increase in accuracy comes at the cost of doubling the training time when our method is used to provide gradients of the optimal alignment score. A similar increased accuracy can be observed when the interpolation approach (Vlastelica Pogančić et al., 2020) for gradients of optimal alignment path is used instead, but the interpolation method further increases the training time, by a factor of two compared to our method. 
The proposed combinatorial approach is much more accurate and efficient than the recently proposed cvxpylayers method. The running time for the cvxpylayers approach is orders of magnitude slower. The cvxpylayers solver managed to reduce the training loss for several initial epochs, after which solver errors start to occur and the learning process diverges. In order to confirm this behavior, we performed 3 additional runs of the cvxpylayers-based training for the softmax model. In all cases, the loss dropped from the initial value in the 90-95 range to above 50, after which it increased to 500 or more. For comparison, the proposed combinatorial loss approach and the standard cross-entropy approach reach loss in the 30-32 range by epoch 10." }, { "heading": "4 RELATED WORK", "text": "Recently, (Tschiatschek et al., 2018) proposed an approximate solver for submodular function maximization that uses differentiable elements and allows for differentiating through the solver. Differentiable solvers are also considered in (Mensch & Blondel, 2018), where dynamic programming solver is re-implemented with the maximum operation replaced by smoothed max. Similar approach is used in differentiable dynamic time warping (Chang et al., 2019). Several authors used a differential approximation to linear program solutions instead of introducing differentiable operations into combinatorial algorithms. WGAN-TS (Liu et al., 2018) solves an LP to obtain the exact empirical Wasserstein distance. Then, to circumvent lack of differentiability of linear programs, WGAN-TS proceeds by training a neural network to approximate the LP solution in order to obtain gradients. In seq2seq-OT (Chen et al., 2019), an approximation is used to model optimal transport between word embeddings serving as a regularizer in training sequence-to-sequence models. These approximation approaches are limited to specific problems and preclude using off-the-shelf combinatorial solvers.\nRecently, an approach that relies on interpolation to obtain gradients of the optimal solution vector – not optimal objective value as in our method – produced by combinatorial solvers has been proposed (Vlastelica Pogančić et al., 2020; Rolı́nek et al., 2020). Similar to our approach, it allows for using off-the-shelf, black-box implementations of combinatorial algorithms. However, unlike our approach, it requires two executions of the solver, one in the forward phase, and a second execution for a slightly perturbed problem for the backward phase. As can be seen in our experiments, this results in doubling the performance overhead compared to our approach.\nAn alternative approach is to use mathematical programming solvers in gradient-trained neural networks. OptNet (Amos & Kolter, 2017) provides differentiable quadratic programming layers, and an efficient GPU-based batch solver, qpth. Cvxpylayers (Agrawal et al., 2019) generalizes this approach to a broad class of convex optimization problems expressed as cone programs, which include QP and LP as special cases, using conic solver based on ADMM, providing a general-purpose package based on the easy-to-use interface of cvxpy, with speed comparable to qpth for QP problems. Other authors (Wilder et al., 2019; Ferber et al., 2019) focus on LP problems, regularize them by adding the quadratic term, and use a QP solver as in OptNet to obtain the optimal solution vector and its gradient. Quadratic smoothing is also used in (Djolonga & Krause, 2017) in submodular set function minimization. 
While these methods can handle a broader class of problems than our method, the reliance on quadratic or linear programming solvers translates to increased solving time. In the approach proposed here, linear programming is used only as a theoretical tool that allows for defining a mapping from the solution to a combinatorial problem to the gradient of its objective value. The solution is obtained by a single run of a combinatorial algorithm, which, as our experiments confirm, is faster than using mathematical programming and not affected by numerical instability and convergence problems." } ]
2,020
null
SP:16d9ab54eb8e4f24314ceca6e0f86f4ca586d7f1
[ "This paper provides the interesting method that leverages GPU memory resources more efficiently for supernet (meta-graph) of differentiable NAS. For this, this paper proposes binary neural architecture search and consecutive model parallel (CMP). CMP parallelizes one supernet with multiple GPUs, which allows NAS model to use larger batch size and search space. Additionally, this paper improves neural architecture search speed and hardware utilization with waiting cycles reduction by dividing forward/backward phases into several sub-tasks and executing the same type of sub-tasks. The proposed method shows 1.2x faster search time compared with other model parallel methods and the highest performance among differentiable NAS methods in the experiment section." ]
Neural architecture search (NAS) automatically designs effective network architectures. Differentiable NAS with supernets that encompass all potential architectures in a large graph cuts down search overhead to a few GPU days or less. However, these algorithms consume massive GPU memory, which restrains NAS from large batch sizes and large search spaces (e.g., more candidate operations, diverse cell structures, and large depth of supernets). In this paper, we present binary neural architecture search (NASB) with consecutive model parallel (CMP) to tackle the problem of insufficient GPU memory. CMP aggregates memory from multiple GPUs for supernets. It divides forward/backward phases into several sub-tasks and executes the same type of sub-tasks together to reduce waiting cycles. This approach improves the hardware utilization of model parallelism, but it utilizes large GPU memory. NASB is proposed to reduce the memory footprint: it excludes inactive operations from computation graphs and computes those operations on the fly for inactive architectural gradients in backward phases. Experiments show that NASB-CMP runs 1.2× faster than other model parallel approaches and outperforms state-of-the-art differentiable NAS. NASB can also save twice as much GPU memory as PC-DARTS. Finally, we apply NASB-CMP to complicated supernet architectures. Although deep supernets with diverse cell structures do not improve NAS performance, NASB-CMP shows its potential to explore supernet architecture design in large search spaces 1.
[]
[ { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tianqi Chen", "Bing Xu", "Chiyuan Zhang", "Carlos Guestrin" ], "title": "Training deep nets with sublinear memory cost", "venue": "arXiv preprint arXiv:1604.06174,", "year": 2016 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Aaron Harlap", "Deepak Narayanan", "Amar Phanishayee", "Vivek Seshadri", "Nikhil Devanur", "Greg Ganger", "Phil Gibbons" ], "title": "Pipedream: Fast and efficient pipeline parallel dnn training", "venue": "arXiv preprint arXiv:1806.03377,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yanping Huang", "Youlong Cheng", "Ankur Bapna", "Orhan Firat", "Dehao Chen", "Mia Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V Le", "Yonghui Wu" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Chiheon Kim", "Heungsub Lee", "Myungryong Jeong", "Woonhyuk Baek", "Boogeon Yoon", "Ildoo Kim", "Sungbin Lim", "Sungwoong Kim" ], "title": "torchgpipe: On-the-fly pipeline parallelism for training giant models", "venue": null, "year": 2004 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Seunghak Lee", "Jin Kyu Kim", "Xun Zheng", "Qirong Ho", "Garth A Gibson", "Eric P Xing" ], "title": "On model parallelization and scheduling strategies for distributed machine learning", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Chenxi Liu", "Liang-Chieh Chen", "Florian Schroff", "Hartwig Adam", "Wei Hua", "Alan L Yuille", "Li FeiFei" ], "title": "Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Jieru Mei", "Yingwei Li", "Xiaochen Lian", "Xiaojie Jin", "Linjie Yang", "Alan Yuille", "Jianchao Yang" ], "title": "Atomnas: Fine-grained end-to-end neural architecture search", "venue": null, "year": 1912 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", 
"Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameters sharing", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Esteban Real", "Sherry Moore", "Andrew Selle", "Saurabh Saxena", "Yutaka Leon Suematsu", "Jie Tan", "Quoc Le", "Alex Kurakin" ], "title": "Large-scale evolution of image classifiers", "venue": "arXiv preprint arXiv:1703.01041,", "year": 2017 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the AAAI conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Alexander Sergeev", "Mike Del" ], "title": "Balso. Horovod: fast and easy distributed deep learning in tensorflow", "venue": "arXiv preprint arXiv:1802.05799,", "year": 2018 }, { "authors": [ "Linnan Wang", "Yiyang Zhao", "Yuu Jinnai", "Yuandong Tian", "Rodrigo Fonseca" ], "title": "Alphax: exploring neural architectures with deep neural networks and monte carlo tree search", "venue": null, "year": 1903 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "SNAS: stochastic neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yuhui Xu", "Lingxi Xie", "Xiaopeng Zhang", "Xin Chen", "Guo-Jun Qi", "Qi Tian", "Hongkai Xiong" ], "title": "PCDARTS: Partial channel connections for memory-efficient architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Quanming Yao", "Ju Xu", "Wei-Wei Tu", "Zhanxing Zhu" ], "title": "Efficient neural architecture search via proximal iterations", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural architecture search (NAS) has revolutionized architecture designs of deep learning from manually to automatically in various applications, such as image classification (Zoph & Le, 2016) and semantic segmentation (Liu et al., 2019a). Reinforcement learning (Zoph & Le, 2016; Zoph et al., 2018; Pham et al., 2018), evolutionary algorithms (Real et al., 2017; 2019), and differentiable algorithms (Liu et al., 2019b; Cai et al., 2019) have been applied to discover the optimal architecture from a large search space of candidate network structures. Supernets (Zoph et al., 2018; Pham et al., 2018) comprising all possible networks reduce search spaces from complete network architectures to cell structures. Recent acceleration techniques of differentiable NAS (Xie et al., 2019; Yao et al., 2020; Chen et al., 2019; Xu et al., 2020) further diminish search costs to affordable computation overheads (e.g., half GPU day). Prior work (Xu et al., 2020) randomly samples partial channels of intermediate feature maps in the mixed operations.\nHowever, supernets of differentiable NAS consume gigantic GPU memory, which constrains NAS from using large batch sizes and imposes restrictions on supernet architectures’ complexity. For example, NAS determines networks in shallow supernets (e.g., 8 layers) for deep compact networks (e.g., 20 layers). The cell structures are also required to remain identical for the same type of cells. Data parallelism can increase the search efficiency of NAS by using large batch sizes, such as SNAS (Xie et al., 2019), but it requires supernet complexity low enough to fit in a single GPU. In contrast, model parallelism can parallelize complex supernets, which distributes partial models to multiple devices. Nevertheless, model parallelism suffers from low hardware utilization. Only one device executes its model partition, while other devices stay idle. How to take advantage of multiple GPUs for large supernets efficiently is an open problem.\n1Search and evaluation code are released at link\nIn this paper, we propose a simple and efficient solution, binary neural architecture search (NASB) using consecutive model parallel (CMP), to tackle the above limitations. Specifically, supernets have two forward and two backward phases to learn architecture parameters and network weights. CMP distributes several sub-tasks split from the four phases in multiple GPUs and executes the sub-tasks of all forward/backward phases together. Figure 1 illustrates that sub-tasks of forward/backward phases will be overlapped to reduce waiting cycles. Nevertheless, CMP consumes large GPU memory due to two computation graphs existing at the same time. Thus, we introduce NASB to declines GPU memory occupation. NASB utilizes binary and sparse architecture parameters (1 or 0) for mixed operations. It excludes inactive operations in the computation graph and computes feature maps of inactive operations for architecture gradients during the back-propagation. In this way, NASB-CMP can increase hardware utilization of model parallelism with efficient GPU memory in differentiable NAS.\nIn our experiments on CIFAR-10, NASB-CMP runs 1.2× faster than using model parallel and pipeline parallel, TorchGPipe (Kim et al., 2020) in a server with 4 GPUs 2. It can achieve the test error of 2.53 ± 0.06% by searching for only 1.48 hours. Our contribution can be summarized as follows:\n• NASB-CMP is the first NAS algorithm that can parallelize large supernets with large batch sizes. 
We analyze the acceleration ratio between CMP and traditional model parallelism. Even though complex supernets (e.g., more layers and different cell structures) do not boost NAS performance, NASB-CMP paves the way to exploring supernet architecture design in the future.
• NASB utilizes binary architecture parameters and extra architecture-gradient computation to reduce GPU usage. It saves memory and accepts batch sizes twice as large as the other memory-saving algorithm, PC-DARTS (Xu et al., 2020).
• We fairly compare NASB-CMP with state-of-the-art differentiable NAS on the same hardware and search space. Extensive experiments show that NASB-CMP achieves competitive test error in a short search time.
2NVIDIA GTX 1080 Ti." }, { "heading": "2 METHODOLOGY", "text": "We first describe the fundamental concepts of one-shot neural architecture search (NAS) in Section 2.1. We then present consecutive model parallel, which enhances the search efficiency of NAS on multiple devices, in Section 2.2. Finally, we explain how we binarize the architecture weights and compute their gradients to cut down GPU memory consumption in Section 2.3." }, { "heading": "2.1 ONE-SHOT NEURAL ARCHITECTURE SEARCH", "text": "One-shot NAS (Zoph et al., 2018) is built on a supernet (a.k.a. meta graph) in which we stack normal cells and reduce cells sequentially, as in Figure 2 (a). Normal cells are analogous to convolutional layers that extract image features. Reduce cells are equivalent to pooling layers that reduce the spatial dimension of the feature maps. All normal cells share the same structure, but each cell still has its own network weights; the same holds for all reduce cells. One-shot approaches therefore only need to design two cell structures instead of complete neural networks. Figure 2 (b) illustrates one popular cell structure (Pham et al., 2018), an N-node directed acyclic graph (DAG) with E edges in total, not counting the “concat” node. In the h-th cell, the first two nodes are the (h − 2)-th and (h − 1)-th cells and have no inbound edges. Every other node accepts previous nodes whose index is lower than its own. The total number of edges E (red lines in Figure 2 (b)) is (N + 1)(N − 2)/2. We denote the h-th cell’s output as y_h = concat(n_j), where 2 ≤ j ≤ N − 1 and n_j is a DAG node defined in Eq. 1,
n_j = \begin{cases} y_{h-2} & \text{if } j = 0, \\ y_{h-1} & \text{if } j = 1, \\ \sum_{i<j} m_O(n_i) & \text{if } 2 \le j \le N - 1. \end{cases} \quad (1)
A mixed operation m_O is the edge between node i and node j in the DAG. Let O be the set of candidate operations (e.g., convolution, pooling, identity, zero) and A ∈ R^{E×|O|} be the matrix of architecture parameters. Eq. 2 formulates the mixed operation m_O from node i to node j as the weighted sum of all operations o_k (Liu et al., 2019b),
m_O^j(n_i) = \sum_{k=1}^{|O|} A_{e,k}\, o_k(n_i), \quad j \ge 2,\ i < j, \quad (2)
where e = (j + 1)(j − 2)/2 + i is the edge index. The mixed operations transform the cell structure search into the problem of learning two matrices, A_N and A_R, for the normal and reduce cell.
Let L_val and L_train denote the loss function L evaluated on the validation and training dataset, respectively, and let A comprise A_N and A_R. Mathematically, one-shot NAS can be formulated as the following optimization problem,
\min_{A} L_{val}(w^*, A) \quad \text{s.t.} \quad w^* = \arg\min_{w} L_{train}(w, A). \quad (3)
NAS leverages the validation performance to choose well-trained networks that outperform others. After training A, we derive the compact network by pruning unused operations from the supernet. A code sketch of the mixed operation in Eq. 2 is given below.
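To make Eq. 2 concrete, the following is a minimal PyTorch-style sketch of a mixed operation as a weighted sum over candidate operations. The module layout and the toy candidate set are illustrative assumptions, not the exact operation set of Appendix F:

```python
import torch
from torch import nn

class MixedOp(nn.Module):
    """Weighted sum of candidate operations on one DAG edge (Eq. 2)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)  # candidate operations o_k

    def forward(self, x, alpha):
        # alpha plays the role of the row A_{e,:} for this edge.
        return sum(a * op(x) for a, op in zip(alpha, self.ops))

# Toy usage with a hypothetical candidate set on 16-channel feature maps.
ops = [nn.Conv2d(16, 16, 3, padding=1), nn.MaxPool2d(3, 1, 1), nn.Identity()]
edge = MixedOp(ops)
alpha = torch.softmax(torch.randn(len(ops)), dim=0)
out = edge(torch.randn(1, 16, 8, 8), alpha)
```

During search, `alpha` would come from the architecture parameters A belonging to this edge.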
Since the whole paper follows the image classification setting (Liu et al., 2019b; Cai et al., 2019), we assume that each node is assigned two inputs and two operations, and we prune the node inputs of the supernet cells according to the two largest values of A associated with each node. For simplicity, we use A to refer to both A_N and A_R in the following discussion." }, { "heading": "2.2 CONSECUTIVE MODEL PARALLEL", "text": "Data parallelism can scale up supernets with large batch sizes, but it cannot handle large supernets (e.g., deep supernets with different cell structures). Model parallelism (MP) can amortize such large supernets across multiple GPUs, but its hardware utilization is low: MP generates unwanted waiting cycles across devices. Figure 1 shows that the first device stays idle until the second device finishes its forward and backward phases, and the parallelization gets worse as more GPUs become available.
Motivated by pipeline parallelism (Huang et al., 2019), we propose consecutive model parallel (CMP) to decrease GPU idle time. Let F_A and B_A denote the forward and backward phases that update A, and F_w and B_w the two phases that update w. CMP divides the four phases into several sub-tasks and performs the sub-tasks of F_A and F_w consecutively, followed by the sub-tasks of B_w and B_A. Figure 1 illustrates that this change of execution order overlaps sub-tasks without waiting for others to finish. Given the number of available GPUs M, Eq. 4 gives the theoretical ratio of execution time between CMP and MP,
\frac{\text{Time of CMP}}{\text{Time of MP}} = \frac{\frac{1}{M}\,[\,4M - 2(M - 1)\,]}{4} = 1 - \frac{M - 1}{2M}. \quad (4)
We assume that F_A, B_A, F_w, and B_w each take one time unit, so MP completes an iteration in 4 units. For CMP, there are 4M sub-tasks in total, of which 2(M − 1) can be overlapped. If each sub-task ideally takes 1/M units, CMP finishes an iteration in (1/M)(4M − 2(M − 1)) units. According to Eq. 4, CMP with two devices reduces the time of MP by (2 − 1)/(2 · 2) = 25%. In practice, Experiment 3.1 demonstrates that NASB-CMP runs 1.2× faster than model parallelism without sacrificing test error, whereas the theoretical value for 4 GPUs is 1.6× (a 37.5% time reduction). We believe communication overhead and uneven model balance cause this deviation. Communication overhead comes from transferring intermediate tensors from one GPU to another when the model is split across GPUs. Moreover, the main thread is responsible for data loading and backward propagation, so the GPU hosting the main thread always consumes the most GPU memory, which causes uneven model balance. A small sanity check of the speedup ratio in Eq. 4 is sketched below.
CMP is a general model parallel approach for any existing differentiable NAS algorithm. However, running B_A and B_w consecutively requires two computation graphs, which doubles GPU memory usage and deteriorates CMP efficiency. To address this problem of high GPU consumption, we pair CMP with a memory-efficient NAS, called binary neural architecture search (NASB)." },
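As a sanity check on Eq. 4, the following sketch computes the theoretical CMP-to-MP time ratio under the paper's idealized assumptions (equal-cost phases, perfectly even model partitions, no communication overhead); the function name is illustrative:

```python
def cmp_time_ratio(num_gpus: int) -> float:
    """Theoretical time of CMP relative to MP (Eq. 4).

    MP spends 4 units per iteration (F_A, B_A, F_w, B_w); CMP splits the
    four phases into 4M sub-tasks of 1/M units each and overlaps 2(M-1)
    of them across devices.
    """
    m = num_gpus
    return (4 * m - 2 * (m - 1)) / m / 4  # equals 1 - (m - 1) / (2 * m)

for m in (1, 2, 4):
    print(m, "GPUs ->", f"{cmp_time_ratio(m):.3f} of MP time")
# 2 GPUs -> 0.750 (25% reduction); 4 GPUs -> 0.625 (1.6x theoretical speedup)
```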
{ "heading": "2.3 BINARY NEURAL ARCHITECTURE SEARCH", "text": "Binary neural architecture search (NASB) harnesses binary mixed operations m_O^B (Yao et al., 2020) that convert the real-valued A into a sparse binary matrix G, as illustrated in Figure 2. Among the rows A_{e,:} associated with node j, m_O^B sets the two largest elements to 1 (active) and the remaining elements to 0 (inactive). The row indices of the active elements indicate the selected edges into node j, while the column indices indicate the chosen operations. Notice that NASB does not directly multiply G with the candidate operations in Eq. 5. Instead, NASB constructs a set of active operations O(active) based on the active elements of G, and only the active operations o_a ∈ O(active) are included in the forward phase. This technique keeps inactive operations out of the computation graph and decreases GPU memory roughly |O|-fold compared to multiplying by G,
m_O^B(n_i) = \sum_{k=1}^{|O|} G_{e,k}\, o_k(n_i) = o_a(n_i). \quad (5)
NASB computes the gradients of the network weights w using standard back-propagation in the supernet. For the gradients of A, NASB approximates ∂L/∂A by ∂L/∂G:
\frac{\partial L}{\partial A_{e,k}} = \frac{\partial L}{\partial m_O} \frac{\partial m_O}{\partial A_{e,k}} \approx \frac{\partial L}{\partial m_O^B} \frac{\partial m_O^B}{\partial G_{e,k}} = \frac{\partial L}{\partial m_O^B} \times o_k(n) = \frac{\partial L}{\partial G_{e,k}}. \quad (6)
Eq. 6 states that the gradients of the elements of A come from ∂L/∂m_O^B × o_k(n). However, inactive operations are not part of the computation graph, so NASB saves the inputs n of the inactive operations in the PyTorch context used for backward computation. During the backward phase, NASB computes the inactive operations o_{k′}(n) on the fly and multiplies the results with ∂L/∂m_O^B. A sketch of this forward/backward scheme is given at the end of this section.
Apart from saving unneeded GPU FLOPS and memory, m_O^B avoids a performance bias between supernets and compact networks. Supernets using m_O assume that the performance of the supernet represents that of the derived compact networks, but non-linear operations (e.g., ReLU-Conv-BN) break this correspondence and cause a performance bias (Xie et al., 2019). In contrast, the sparse matrix of m_O^B activates exactly one operation, so the performance of the supernet during the search reflects a single compact network. Thus, NASB mitigates the bias caused by non-linear operations.
Algorithm 1 describes how CMP works with NASB.
Algorithm 1: NASB - Consecutive Model Parallel
1: Initialize architecture weights A and network weights w
2: while not stopped do
3: Gt = binarize(At)
4: Create m_O^B using Gt and Eq. 5
5: Compute Lvalid(wt, Gt) and Ltrain(wt, Gt) consecutively // model parallel
6: Compute ∇wLtrain(wt, Gt) and ∇ALvalid(wt, Gt) consecutively // model parallel
7: Update wt+1 by descending ∇wLtrain(wt, Gt)
8: Update At+1 by descending ∇ALvalid(wt, Gt)
9: end while
Note that NASB-CMP does not update any parameter (including A and w) until F_A, B_A, F_w, and B_w complete. L_train uses the current binary architecture matrix Gt rather than the updated Gt+1, which is the major difference from the alternate algorithm (see Appendix A). Experiment 3.2 demonstrates that NASB saves substantially more GPU memory than PC-DARTS (Xu et al., 2020), which reduces GPU memory by using partial channels of the feature maps in the mixed operations.
Comparison with other methods. NASP (Yao et al., 2020) binarizes A based on A itself, while ProxylessNAS (Cai et al., 2019) binarizes A based on the softmax of A. The two binarization approaches are equivalent, but they handle the binary mixed operations (Eq. 5) differently. NASP multiplies G with all operations (i.e., it keeps both active and inactive operations in the computation graph). ProxylessNAS keeps two sampled operations (paths) in the computation graph according to a multinomial distribution. NASB uses the same binarization as NASP but keeps only the single active operation in the computation graph according to G." },
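The following is a minimal PyTorch-style sketch of the forward/backward scheme of Eqs. 5 and 6 for a single edge with one active operation. It is an illustration under simplifying assumptions (one edge, a one-hot row g of G), not the authors' implementation; recomputing the active operation in the backward pass is a shortcut kept for clarity:

```python
import torch

class BinaryMixedOp(torch.autograd.Function):
    """One edge of Eq. 5/6: run only the active op forward; recompute
    the inactive ops during backward to estimate dL/dG (Eq. 6)."""

    @staticmethod
    def forward(ctx, x, g, ops):
        ctx.save_for_backward(x, g)
        ctx.ops, ctx.active = ops, int(g.argmax())
        return ops[ctx.active](x)  # Eq. 5: only o_a enters the graph

    @staticmethod
    def backward(ctx, grad_out):
        x, g = ctx.saved_tensors
        # Eq. 6: dL/dG_{e,k} = <dL/dm_B, o_k(x)>; inactive ops are
        # computed on the fly instead of being kept in the graph.
        with torch.no_grad():
            grad_g = torch.stack([(grad_out * op(x)).sum() for op in ctx.ops])
        # The gradient w.r.t. x flows through the active op only.
        with torch.enable_grad():
            x_ = x.detach().requires_grad_(True)
            y = ctx.ops[ctx.active](x_)
            grad_x, = torch.autograd.grad(y, x_, grad_out)
        return grad_x, grad_g, None
```

In training, grad_g would update A via the straight-through estimate, and g would be re-binarized at every iteration (Algorithm 1, line 3).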
{ "heading": "3 EXPERIMENTS", "text": "We compare NASB-CMP with other parallelization approaches on CIFAR-10 in Section 3.1. We then inspect the quality of NASB and compare NASB-CMP with state-of-the-art NAS in Section 3.2. Finally, we investigate supernet architecture designs with more layers and different cell structures in Section 3.3, experiments that could not be conducted without reduced GPU consumption or model parallelism.
Dataset. CIFAR-10 (Krizhevsky & Hinton, 2009) is a color-image dataset for image classification, composed of 50,000 training images and 10,000 test images over 10 classes. The dataset preparation can be found in Appendix E.
Search Space. The DAG (see Section 2.1) has N = 6 intermediate nodes and E = 14 edges in total. The set of candidate operations follows NASP (Yao et al., 2020), with |O_N| = 8 normal operations and |O_R| = 5 reduce operations. Note that our baselines also use this operation set rather than their original one (|O_N| = |O_R| = 8). All operations are listed in Appendix F." }, { "heading": "3.1 PARALLELISM COMPARISON ON CIFAR-10", "text": "We compare the performance of NASB-CMP with other parallel approaches on CIFAR-10, including data parallelism, model parallelism, and GPipe (Huang et al., 2019), a state-of-the-art model parallel approach that pipelines chunks of data through several model partitions. The implementation of parallel NASB is described in Appendix B, and the search and evaluation settings are given in Appendices C and D.
Figure 3 compares the performance of the different parallelizations of NASB on varying numbers of GPUs. CMP runs 1.2× faster than model parallelism (MP) and GPipe, especially when running on 3 and 4 GPUs. According to Eq. 4, four GPUs should run 1.6× faster than MP (i.e., reduce search time by 37.5%); in practice, communication overhead and uneven model partitions reduce this ideal speedup ratio. Compared with all parallel approaches, CMP's change of execution order does not degrade the test error. Data parallelism has the lowest search cost, but it does not yield as low a test error as the model parallel approaches. The reason might be that model replicas in data parallelism use partial batches to compute the architecture gradients, while model parallelism can make use of whole batches. Therefore, CMP is an efficient model parallel approach that helps NAS utilize large batches.
Despite its competitive performance, the scalability of CMP is limited: CMP prevents batch sizes from scaling up linearly as more GPUs are involved. For example, 2 GPUs should allow a batch size of 448 if 1 GPU uses 224 (we used 416 instead), and while 1-GPU NASB can use a batch size of 448, NASB-CMP needs 4 GPUs to double it. The main reason is that CMP keeps two computation graphs (for B_A and B_w) alive at the same time in order to overlap computations, resulting in twice the GPU memory consumption. We believe that a mixed scheme combining CMP and data parallelism can mitigate this drawback by merging their two advantages, the accuracy of CMP and the scalability of data parallelism." }, { "heading": "3.2 STATE-OF-THE-ART NAS COMPARISON ON CIFAR-10", "text": "Following the experiment settings in Appendices C and D, we compare NASB and NASB-CMP with several NAS algorithms on CIFAR-10. DARTS (Liu et al., 2019b), SNAS (Xie et al., 2019), NASP (Yao et al., 2020), and PC-DARTS (Xu et al., 2020) are selected as our baselines. DARTS is the pioneer of differentiable NAS. SNAS points out the performance bias between a supernet and the derived networks in DARTS. Both NASP and PC-DARTS reduce GPU memory, which overlaps with the scope of this paper. We would also have selected ProxylessNAS (Cai et al., 2019) as a baseline, but its search code for CIFAR-10 is not released, and we prefer not to misrepresent its performance with an improper re-implementation.
Instead of directly using their reported results, we re-run the baselines from scratch to ensure that the hardware and search space are the same.
In this way, we can fairly compare them in terms of test error and search cost.
The test error and search cost on CIFAR-10 are reported in Table 1, where “c/o” indicates Cutout (DeVries & Taylor, 2017) applied in the evaluation phase. The first row (human-designed networks) and the second group of rows are taken from the original papers. The third group compares NASB with the differentiable NAS baselines, and the fourth group compares NAS algorithms using large batch sizes. Notably, ProxylessNAS attains an outstanding test error, but its supernet structure and search space differ from what we use, which might bias the comparison.
In the third group, NASB has by far the cheapest search cost, roughly 4 hours, while reaching a test error of 2.64%, comparable to SNAS (2.58%) and PC-DARTS (2.59%); its search cost is also smaller than in the second group. NASB and NASP use similar binary mixed operations, but NASB outperforms NASP in both search cost (3.92 versus 6.44 hours) and test error (2.64% versus 2.76%). The GPU memory utilization of NASB and NASP is 2,117 MB and 9,587 MB, respectively. These three comparisons indicate that the additional gradient computation for inactive operations is a useful technique. Note that DARTS, SNAS, and PC-DARTS originally use a different search space (see Appendix F), so their original test errors (second group) differ from what we report in the third group of Table 1. In particular, DARTS tends to overfit the validation set by selecting “skip connect” in our search space, and its results are not as good as in its original search space. Even though NASP uses the same search space, the different batch sizes and random seeds in the search and retrain settings still lead to different results.
The fourth group shows that NASB considerably reduces GPU memory: using batch sizes twice as large as PC-DARTS, it attains a test error of 2.49% within 1.64 hours. PC-DARTS, however, becomes worse when using large batch sizes. We suspect that the Hessian approximation in PC-DARTS fluctuates greatly with large batch sizes, which misleads PC-DARTS into frequently selecting “skip connect”. NASB-CMP with four GPUs doubles the batch size of NASB and finishes its search in 1.48 hours without severe degradation of the test error; its test error of 2.53% is also better than that of the other differentiable NAS methods. The empirical results in the third and fourth groups demonstrate the high efficiency of NASB with significant memory savings and the strong performance of NASB-CMP." }, { "heading": "3.3 LARGE SUPERNETS ON CIFAR-10", "text": "One-shot NAS suffers from two limitations: (1) searching 8-layer supernets for 20-layer compact networks, and (2) identical cell structures (Liu et al., 2019b; Pham et al., 2018). We hypothesize that 20-layer supernets with different cell structures can build better-suited compact networks. Thanks to NASB and NASB-CMP, which reduce GPU utilization and exploit multiple GPUs, we can examine how supernet architectures affect NAS. The search and retrain settings follow Appendices C and D. Table 2 shows the test errors on CIFAR-10 with various supernet architectures, where the 1st and 2nd rows indicate the cell diversity and the number of layers (cells). Since 8-layer supernets can have six distinct normal cells, we replicate each normal cell three times to construct the compact networks; a sketch of the cell derivation rule from Section 2.1 is given below.
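To make the derivation rule concrete (two inputs and two operations per node, chosen by the largest values of A, using the edge indexing e = (j + 1)(j − 2)/2 + i from Eq. 2), here is a minimal sketch; the function name and tensor layout are illustrative assumptions:

```python
import torch

def derive_cell(A: torch.Tensor, num_nodes: int):
    """Pick, for each node j >= 2, the two incoming edges with the
    largest architecture weights and the best operation on each."""
    cell = []
    for j in range(2, num_nodes):
        e0 = (j + 1) * (j - 2) // 2          # first edge index for node j
        edges = A[e0 : e0 + j]               # one row per input node i < j
        best_score, best_op = edges.max(dim=1)
        top2 = torch.topk(best_score, 2).indices
        cell.append([(int(i), int(best_op[i])) for i in top2])
    return cell  # per node: [(input node, op index), (input node, op index)]

# Toy usage: E = 14 edges, |O| = 8 candidate ops, N = 6 DAG nodes.
print(derive_cell(torch.randn(14, 8), 6))
```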
First, supernets with more layers do not help NAS discover higher-quality compact networks: the test errors of NASB (3rd row) and NASB-CMP (4th row) show that most 8-layer supernets yield lower test errors than 20-layer supernets. The reason is that 20-layer supernets have many more architecture parameters and network weights and therefore require more search epochs than 8-layer supernets; with an insufficient number of search epochs, deep supernets do not lead NAS to strong compact networks. Furthermore, supernets with different cell structures are not beneficial for NAS either. Comparing the results in the 2nd and 4th columns (or the 3rd and 5th columns), most supernets with identical cell structures yield similar or lower test errors than those with different cell structures. The reason is similar to the previous one: different cell structures demand extra search epochs to train the higher-dimensional architecture parameters compared to homogeneous cell structures, and without enough epochs they do not produce low test errors. Although these results contradict our hypothesis, NASB-CMP shows its potential to explore complicated supernet architectures, which paves the way for designing supernet architectures." }, { "heading": "4 RELATED WORK", "text": "Parallelism has been applied to NAS for acceleration (Zoph & Le, 2016; Xie et al., 2019; Cai et al., 2019; Mei et al., 2019). Parameter servers in NAS (Zoph & Le, 2016) train several child networks in parallel to speed up the learning process of the controller. ProxylessNAS (Cai et al., 2019) speeds up its retrain phase with a distributed framework, Horovod (Sergeev & Del Balso, 2018). SNAS (Xie et al., 2019) and AtomNAS (Mei et al., 2019) accelerate the search phase with data parallelism. Data parallelism runs data partitions simultaneously across multiple devices, but it cannot parallelize models that exceed the memory of a single device, especially complicated supernets with large batch sizes. In contrast, model parallelism (Lee et al., 2014; Harlap et al., 2018; Huang et al., 2019; Kim et al., 2020) excels at parallelizing large models. GPipe (Huang et al., 2019) splits mini-batches into micro-batches and executes the micro-batches in a pipeline of model partitions; this pipelining mitigates the low hardware utilization of model parallelism. Consecutive model parallel is motivated by pipeline parallelism to overlap the sub-tasks of the forward/backward phases. We found that the batch splitting and re-materialization (Chen et al., 2016) of GPipe increase NAS search time, because frequently updating A and w incurs extra computation. To the best of our knowledge, CMP is the most efficient model parallelism for NAS.
Reducing GPU utilization to enlarge search batch sizes is another acceleration technique (Xu et al., 2020; Chen et al., 2019; Xie et al., 2019; Yao et al., 2020; Cai et al., 2019). PC-DARTS (Xu et al., 2020) samples channels of the feature maps in the mixed operations. P-DARTS (Chen et al., 2019) reduces the search space as it progressively increases the number of supernet layers during the search phase. ProxylessNAS (Cai et al., 2019) and NASP (Yao et al., 2020) binarize A to avoid keeping all operations in GPU memory. NASB uses the same binarization as NASP but keeps only the active operation in the mixed operations. Thus, NASB reduces GPU consumption substantially and gives CMP more room to keep two computation graphs in GPU memory." }, { "heading": "5 CONCLUSION", "text": "We proposed a simple and efficient model parallel approach, NASB-CMP, which overlaps sub-tasks of the forward and backward phases to reduce idle time across GPUs and utilizes binary architecture parameters to reduce GPU memory for heavy supernets.
Experiments on CIFAR-10 show that NASB-CMP runs 1.2× faster with a large batch size of 896 than other model parallel approaches on 4 GPUs and takes only 1.48 hours to attain a test error of 2.53%, surpassing state-of-the-art differentiable NAS. Moreover, NASB-CMP is able to accommodate highly complicated supernets during search, which paves the way for supernet architecture design. In the future, we will combine data parallelism with NASB-CMP to overcome its limited scalability, investigate effective and complicated supernet architectures, and analyze the communication overhead of NASB-CMP in a multi-node GPU cluster." }, { "heading": "A ALTERNATE ALGORITHM OF BINARY NEURAL ARCHITECTURE SEARCH", "text": "Algorithm 2 displays the alternating fashion of updating A and w in NAS: A is updated with w fixed, and then w is updated with A fixed. Note that line 9 of Algorithm 2 computes the gradients ∇wLtrain with the updated Gt+1, which differs from the consecutive algorithm (Algorithm 1).
Algorithm 2: NASB
1: Initialize architecture weights A and network weights w
2: while not stopped do
3: Gt = binarize(At)
4: Create m_O^B using Gt and Eq. 5
5: Compute ∇ALvalid(wt, Gt) using Eq. 6 // handle the gradients of inactive elements
6: Update At+1 by descending ∇ALvalid(wt, Gt)
7: Gt+1 = binarize(At+1)
8: Create m_O^B using Gt+1 and Eq. 5
9: Compute ∇wLtrain(wt, Gt+1) // standard back-propagation
10: Update wt+1 by descending ∇wLtrain(wt, Gt+1)
11: end while" }, { "heading": "B IMPLEMENTATION OF PARALLEL NAS", "text": "The data parallel implementation leverages the PyTorch (Paszke et al., 2017) distributed module, which provides communication interfaces to update parameter tensors across multiple processes. Model parallelism and CMP are implemented with multi-threading: each GPU has a dedicated thread responsible for its model partition, and these threads enable different model partitions to run simultaneously. Without multi-threading, merely assigning model partitions to specific devices does not automatically overlap sub-tasks. For GPipe, we adopt the corresponding PyTorch package, torchgpipe (Kim et al., 2020), in place of GPipe, since GPipe is written in TensorFlow. The chunk setting that splits mini-batches into micro-batches is disabled in the experiments, because enabling it increases the search cost." }, { "heading": "C SEARCH DETAILS ON CIFAR-10", "text": "Our platform is a server with 4 NVIDIA GTX 1080 Ti GPUs, on which all search experiments are executed. Supernets consist of 8 cells, of which the 3rd and 6th are reduce cells and the others are normal cells, with 16 initial channels. The optimizer for the network weights w is SGD with momentum 0.9, L2 penalty 3e−4, and a cosine-annealed learning rate initialized at 0.025 with a minimum of 0.001. The optimizer for the architecture parameters A is Adam with learning rate 3e−4, L2 penalty 1e−3, and (β1, β2) = (0.5, 0.999). PC-DARTS with large batch sizes (Xu et al., 2020) has unique configurations: an initial learning rate of 0.1 with a minimum of 0.0 for the SGD optimizer, and a learning rate of 6e−4 for the Adam optimizer.
All NAS algorithms search networks for 50 epochs with varied batch sizes and random seeds. In Experiment 3.1, NASB-CMP uses search batch sizes of 224, 416, 512, and 896 for 1, 2, 3, and 4 GPUs, respectively; its random seed is 2. In Experiment 3.2, the batch size of 60 is determined by DARTS, because DARTS consumes the most GPU memory and we want all NAS algorithms to use the same batch size in order to compare them fairly.
Since PC-DARTS is designed to reduce GPU memory consumption, we also compare the performance of PC-DARTS with a large batch size of 224 against NASB and NASB-CMP. NASB uses its maximal feasible batch size of 448 on a single GPU, and NASB-CMP uses a batch size of 896 on 4 GPUs. All NAS baselines and NASB use 2, 3, 4, 5, and 6 as random seeds, while NASB-CMP uses 2, 3, 9, 11, and 18 instead. In Experiment 3.3, NASB and NASB-CMP use batch sizes of 160 and 256, respectively, for 50 epochs. We ran the search experiments twice with random seeds 2 and 3 and report the average test error of the two searches in Table 2." }, { "heading": "D EVALUATION DETAILS ON CIFAR-10", "text": "The compact networks used in the retrain (evaluation) phase have 20 cells (layers), where the cells at one-third and two-thirds of the depth are reduce cells and the others are normal cells. We retrain the compact networks from scratch for 600 epochs with a batch size of 96, path dropout with probability 0.2, and 36 initial channels. We also add an auxiliary head to the network with a loss weight of 0.4. During the evaluation phase, Cutout with length 16 is additionally applied for image transformation. The optimizer setting for the network weights w is the same as in the search setting. The retrain random seed is set to 0, which differs from the search seeds." }, { "heading": "E PRE-PROCESSING SUMMARY ON CIFAR-10", "text": "We preprocess the training images with the following techniques: padding the 32 × 32 images with 4 pixels and then randomly cropping them back to 32 × 32; randomly flipping images in the horizontal direction; and normalizing image pixels by the channel mean and standard deviation (a code sketch of this pipeline is given after Appendix F). The processed training set is split evenly: the first half serves as the final training set, and the other half serves as the validation set. SNAS relies solely on the training set to search, so its training set is not split." }, { "heading": "F CANDIDATE OPERATIONS ON CIFAR-10", "text": "Table 3 summarizes the candidate operations for mixed operations used in NAS papers. Experiment 3 uses the first row of Table 3 as its search space on CIFAR-10. “skip connect” denotes the identity operation if the stride is 1 and a ReLU-Conv-Conv-BN operation otherwise. “conv”, “sep conv”, and “dil conv” denote convolutions, depthwise-separable convolutions, and dilated depthwise-separable convolutions, respectively. “none” denotes the zero operation. Note that in our experiments the differentiable NAS baselines (DARTS, SNAS, PC-DARTS) also use the first row of Table 3 as their search space." } ]
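As referenced in Appendix E, here is a minimal torchvision sketch of the preprocessing pipeline. The normalization statistics shown are the commonly used CIFAR-10 channel means and standard deviations; the paper does not list its exact values, so they are an assumption:

```python
from torchvision import transforms

# Pad by 4 pixels, randomly crop back to 32x32, random horizontal flip,
# then normalize each channel (assumed CIFAR-10 statistics).
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2470, 0.2435, 0.2616)),
])
```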
2020
null
SP:d10957cc11891e1aad6ecac21a73d589bfac341d
[ "This paper proposes a method called Temporal Abstract Latent Dynamics (TALD). TALD is built up on RSSM (Hafner et al. 2019) but with hierarchical dynamics. The experiments are conducted on moving MNIST, GQN 3D Mazes, and KTH. Results are qualitatively better than other methods in term of maintaining long-term consistent prediction. Quantitative comparison is reported only on KTH dataset (Figure 5). Written presentation is clear and easy to understand." ]
Deep learning has shown promise for accurately predicting high-dimensional video sequences. Existing video prediction models have succeeded in generating sharp but often short video sequences. Toward improving long-term video prediction, we study hierarchical latent variable models with levels that operate at different time scales. To gain insights into the representations of such models, we study the information stored at each level of the hierarchy via the KL divergence, predictive entropy, datasets of varying speed, and generative distributions. Our analysis confirms that faster-changing details are generally captured by lower levels, while slower-changing facts are remembered by higher levels. On synthetic datasets where common methods fail after 25 frames, we show that temporally abstract latent variable models can make accurate predictions for up to 200 frames.
[]
[ { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H. Campbell", "Sergey Levine" ], "title": "Stochastic variational video", "venue": "prediction. CoRR,", "year": 2017 }, { "authors": [ "Lars Buesing", "Theophane Weber", "Sébastien Racanière", "S.M. Ali Eslami", "Danilo Jimenez Rezende", "David P. Reichert", "Fabio Viola", "Frederic Besse", "Karol Gregor", "Demis Hassabis", "Daan Wierstra" ], "title": "Learning and querying fast generative models for reinforcement learning", "venue": "CoRR, abs/1802.03006,", "year": 2018 }, { "authors": [ "Lluís Castrejón", "Nicolas Ballas", "Aaron C. Courville" ], "title": "Improved conditional vrnns for video prediction", "venue": "CoRR, abs/1904.12165,", "year": 2019 }, { "authors": [ "Silvia Chiappa", "Sébastien Racanière", "Daan Wierstra", "Shakir Mohamed" ], "title": "Recurrent environment simulators", "venue": "CoRR, abs/1704.02254,", "year": 2017 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "venue": "CoRR, abs/1406.1078,", "year": 2014 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C. Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "CoRR, abs/1506.02216,", "year": 2015 }, { "authors": [ "Junyoung Chung", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "CoRR, abs/1609.01704,", "year": 2016 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned", "venue": "prior. CoRR,", "year": 2018 }, { "authors": [ "Andreas Doerr", "Christian Daniel", "Martin Schiegg", "Duy Nguyen-Tuong", "Stefan Schaal", "Marc Toussaint", "Sebastian Trimpe" ], "title": "Probabilistic recurrent state-space models", "venue": "arXiv preprint arXiv:1801.10395,", "year": 2018 }, { "authors": [ "SM Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A Rusu", "Ivo Danihelka", "Karol Gregor" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Jean-Yves Franceschi", "Edouard Delasalles", "Mickaël Chen", "Sylvain Lamprier", "Patrick Gallinari" ], "title": "Stochastic latent residual video prediction", "venue": "arXiv preprint arXiv:2002.09219,", "year": 2020 }, { "authors": [ "Mevlana Gemici", "Chia-Chun Hung", "Adam Santoro", "Greg Wayne", "Shakir Mohamed", "Danilo Jimenez Rezende", "David Amos", "Timothy P. Lillicrap" ], "title": "Generative temporal models with memory", "venue": "CoRR, abs/1702.04649,", "year": 2017 }, { "authors": [ "William H. 
Guss", "Brandon Houghton", "Nicholay Topin", "Phillip Wang", "Cayden Codel", "Manuela Veloso", "Ruslan Salakhutdinov" ], "title": "MineRL: A large-scale dataset of Minecraft demonstrations", "venue": "Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Juan Camilo Gamboa Higuera", "David Meger", "Gregory Dudek" ], "title": "Synthesizing neural network controllers with probabilistic model based reinforcement learning", "venue": "arXiv preprint arXiv:1803.02291,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Wojciech M. Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio García Castañeda", "Charles Beattie", "Neil C. Rabinowitz", "Ari S. Morcos", "Avraham Ruderman", "Nicolas Sonnerat", "Tim Green", "Louise Deason", "Joel Z. Leibo", "David Silver", "Demis Hassabis", "Koray Kavukcuoglu", "Thore Graepel" ], "title": "Human-level performance in first-person multiplayer games with population-based deep reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick Van der Smagt" ], "title": "Deep variational bayes filters: Unsupervised learning of state space models from raw data", "venue": "arXiv preprint arXiv:1605.06432,", "year": 2016 }, { "authors": [ "Taesup Kim", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Variational temporal abstraction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Manoj Kumar", "Mohammad Babaeizadeh", "Dumitru Erhan", "Chelsea Finn", "Sergey Levine", "Laurent Dinh", "Durk Kingma" ], "title": "Videoflow: A flow-based generative model for video", "venue": "CoRR, abs/1903.01434,", "year": 2019 }, { "authors": [ "Yann LeCun", "Bernhard Boser", "John S Denker", "Donnie Henderson", "Richard E Howard", "Wayne Hubbard", "Lawrence D Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Alex X. Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video", "venue": "prediction. 
CoRR,", "year": 2018 }, { "authors": [ "Junhyuk Oh", "Xiaoxiao Guo", "Honglak Lee", "Richard L Lewis", "Satinder Singh" ], "title": "Action-conditional video prediction using deep networks in atari games", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "arXiv preprint arXiv:1505.05770,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "C. Schuldt", "I. Laptev", "B. Caputo" ], "title": "Recognizing human actions: a local svm approach", "venue": "In Proceedings of the 17th International Conference on Pattern Recognition,", "year": 2004 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue": "In Proceedings of the 30th International Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhutdinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "CoRR, abs/1502.04681,", "year": 2015 }, { "authors": [ "Thomas Unterthiner", "Sjoerd van Steenkiste", "Karol Kurach", "Raphaël Marinier", "Marcin Michalski", "Sylvain Gelly" ], "title": "Towards accurate generative models of video: A new metric & challenges", "venue": "CoRR, abs/1812.01717,", "year": 2018 }, { "authors": [ "Arash Vahdat", "Jan Kautz" ], "title": "Nvae: A deep hierarchical variational autoencoder", "venue": null, "year": 2020 }, { "authors": [ "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Generating videos with scene dynamics", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Dirk Weissenborn", "Jakob Uszkoreit", "Oscar Täckström" ], "title": "Scaling autoregressive video models", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Nevan Wichers", "Ruben Villegas", "Dumitru Erhan", "Honglak Lee" ], "title": "Hierarchical long-term video prediction without supervision", "venue": "CoRR, abs/1806.04768,", "year": 2018 }, { "authors": [ "Ziru Xu", "Yunbo Wang", "Mingsheng Long", "Jianmin Wang" ], "title": "Predcnn: Predictive learning with cascade convolutions", "venue": "In Proceedings of the 27th International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "R. Zhang", "P. Isola", "A.A. Efros", "E. Shechtman", "O. Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Learning hierarchical features from generative models", "venue": "CoRR, abs/1702.08396,", "year": 2017 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nDeep learning has enabled predicting video sequences from large datasets (Chiappa et al., 2017; Oh et al., 2015; Vondrick et al., 2016). For high-dimensional inputs such as video, there likely exists a more compact representation of the scene that facilitates long term prediction. Instead of learning dynamics in pixel space, latent dynamics models predict ahead in a more compact feature space (Doerr et al., 2018; Buesing et al., 2018; Karl et al., 2016; Hafner et al., 2019). This has the added benefit of increased computational efficiency and a lower memory footprint, allowing to predict thousands of sequences in parallel using a large batch size.\nA lot of work in deep learning has focused on spatial abstraction, following the advent of convolutional networks (LeCun et al., 1989), such as the Variational Ladder Autoencoder (Zhao et al., 2017) that learns a hierarchy of features in images using networks of different capacities, along with playing an important role in the realm of video\nprediction models (Castrejón et al., 2019). Recent sequential models have incorporated temporal abstraction for learning dependencies in temporally distant observations (Koutník et al., 2014; Chung et al., 2016). Kim et al. (2019) proposed Variational Temporal Abstraction (VTA), in which they explored one level of temporal abstraction above the latent states, the transition of which was modeled using a Bernoulli random variable. In this paper, we intend to work in a more controlled setup than VTA for a qualitative and quantitative analysis of temporally abstract latent variable models.\nIn this paper, we study the benefits of temporal abstraction using a hierarchical latent dynamics model, trained using a variational objective. Each level in the hierarchy of this model temporally abstracts the level below by an adjustable factor. This model can perform long-horizon video prediction of 200 frames, while predicting accurate low-level information for a 6 times longer duration than the baseline model. We study the information stored at different levels of the hierarchy via KL divergence, predictive entropy, datasets of varying speeds, and generative distributions. In our experiments we show that this amounts to object location and identities for the Moving MNIST dataset, and the wall or floor patterns for the GQN mazes dataset (Eslami et al., 2018), stored at different levels.\nOur key contributions are summarized as follows:\n• Temporal Abstract Latent Dynamics (TALD) We introduce a simple model with different clock speeds at every level to study the properties of variational hierarchical dynamics.\n• Accurate long-term predictions Our form of temporal abstraction substantially improves for how long the model can accurately predict video frames into the future.\n• Adaptation to sequence speed We demonstrate that our model automatically adapts the amount of information processed at each level to the speed of the video sequence.\n• Separation of information We visualize the content represented at each level of the hierarchy to find location information in lower levels and object identity in higher levels." }, { "heading": "2 RELATED WORK", "text": "Generative video models A variety of methods have successfully approached video prediction using large datasets (Chiappa et al., 2017; Oh et al., 2015; Vondrick et al., 2016; Babaeizadeh et al., 2017; Gemici et al., 2017; Ha & Schmidhuber, 2018). 
Denton & Fergus (2018) proposed a stochastic video generation model with a learned prior that transitions in time and is conditioned on past observations. Lee et al. (2018) proposed to combine an adversarial loss with a variational latent variable model to produce naturalistic images, while Kumar et al. (2019) used flow-based generative modeling to directly optimize the likelihood of a video generation model. Recently, Weissenborn et al. (2020) scaled autoregressive models for video prediction using a three-dimensional self-attention mechanism and showed competitive results on real-world video datasets. Along similar lines, Xu et al. (2018) proposed an entirely CNN-based architecture for modeling dependencies between sequential inputs.
Latent dynamics models Latent dynamics models have evolved from latent space models with access to low-dimensional features (Deisenroth & Rasmussen, 2011; Higuera et al., 2018) to models that build a compact representation of visual scenes and facilitate video prediction purely in the latent space (Doerr et al., 2018; Buesing et al., 2018; Karl et al., 2016; Franceschi et al., 2020). The Variational RNN (Chung et al., 2015) uses an auto-regressive state transition that takes inputs from observations, making it computationally expensive to use as an imagination module. Hafner et al. (2019) proposed a latent dynamics model with a combination of deterministic and stochastic states, which enables the model to deterministically remember all previous states and filter that information to obtain a distribution over the current state.
Hierarchical latent variables Learning per-frame hierarchical structures has proven helpful for generating videos over long horizons (Wichers et al., 2018). Zhao et al. (2017) proposed the Variational Ladder Autoencoder (VLAE), which uses networks of different capacities at different levels of the hierarchy, encouraging the model to store high-level image features at the top level and simple features at the bottom. Other hierarchical models use a purely bottom-up inference approach with no interaction between the inference and generative models (Kingma & Welling, 2014; Rezende & Mohamed, 2015; Rezende et al., 2014). In contrast, Sønderby et al. (2016, LVAE) and Vahdat & Kautz (2020, NVAE) proposed a combination of bottom-up and top-down inference procedures, sharing parameters between the inference and generative distributions during the top-down pass. We incorporate this conditional structure in our model design as well.
Temporal abstraction Identifying complex dependencies between temporally distant observations is a challenging task and has inspired a variety of fundamental work on recurrent models (Koutník et al., 2014; Chung et al., 2016). However, relatively few works have demonstrated modeling long-term dependencies with temporally abstract latent dynamics models (Wichers et al., 2018; Jaderberg et al., 2018). Recently, Kim et al. (2019) introduced Variational Temporal Abstraction (VTA) to learn temporally abstract latent spaces. They explored one level of temporal abstraction above the latent states, with a transition modeled by a Bernoulli random variable that chooses between ‘copy’ and ‘update’ steps. Inspired by this work, we aim to gain a deeper understanding of such temporally abstract latent dynamics models. We perform our analysis on a model that is simplified to fixed time scales at every level. 
Moreover, the lowest level is a continuing chain in our model, whereas VTA resets the transitions at the lower level whenever the higher level transitions." }, { "heading": "3 TEMPORALLY ABSTRACT LATENT DYNAMICS", "text": "Long video sequences contain both information that is local to a few frames and global information that is shared among many frames. Traditional video prediction models that predict ahead at the frame rate of the video can struggle to retain information long enough to learn such long-term dependencies. We introduce Temporally Abstract Latent Dynamics (TALD) to learn long-term correlations of videos. Our model predicts ahead on multiple time scales to learn dependencies at different temporal levels, as visualized in Figure 3. We build our work upon the recurrent state-space model (RSSM; Hafner et al., 2019), the details of which can be found in Appendix A.
TALD consists of a hierarchy of recurrent latent variables, where each level transitions at a different clock speed. We slow down the transitions exponentially as we go up in the hierarchy, i.e., every level is slower than the level below by a factor of k. We define the set of active timesteps of every level l ∈ [1, L] as the steps at which the state transition generates a new latent state,
\text{Active timesteps:}\quad \mathcal{T}_l \doteq \{\, t \in [1, T] \mid t \bmod k^{l-1} = 1 \,\}. \quad (1)
At each level, we condition every window of k latent states on a single latent variable in the level above. This can also be thought of as a hierarchy of latent variables where each level has the same clock speed but performs a state transition only every k^{l−1} timesteps and copies the same state variable otherwise, so that for all t ∉ \mathcal{T}_l:
\text{Inactive states:}\quad s^l_t \doteq s^l_{\max\{\tau \in \mathcal{T}_l \mid \tau \le t\}}. \quad (2)
A small code sketch of Eqs. 1 and 2 is given below, after the component summary.
Joint distribution We can factorize the joint distribution of a sequence of observations and (active) latents at every level into two terms: (1) a decoder term conditioned on the latent states at the lowest level, and (2) state transitions at all levels conditioned on the latent state of the last active timestep at the current level and the level above,
p(x_{1:T}, s^{1:L}_{1:T}) \doteq \Big( \prod_{t=1}^{T} p(x_t \mid s^1_t) \Big) \Big( \prod_{l=1}^{L} \prod_{t \in \mathcal{T}_l} p(s^l_t \mid s^l_{t-1}, s^{l+1}_t) \Big). \quad (3)
Inference For inference, TALD embeds the observed frames using a CNN. A hierarchical recurrent network then summarizes the input embeddings, where each (active) latent state at level l receives the embeddings of k^{l−1} observation frames (dashed lines in Figure 3). The latent state at the previous timestep of the current level and the state belief at the level above also condition the posterior belief (solid lines in Figure 3). The input embeddings, combined with this top-down and temporal context, together condition the posterior belief q^l_t over the latent state.
Generation The prior transition p^l_t is computed by conditioning on the latent state at the previous timestep of the current level and the state belief at the level above (solid lines in Figure 3).
Decoding Finally, the state beliefs at the bottom-most level are decoded using a transposed CNN to provide a training signal. To summarize, we utilize the following components in our model, for all l ∈ [1, L] and t ∈ \mathcal{T}_l:
\text{Encoder:}\quad e^l_t = e(x_{t:t+k^{l-1}-1}) \qquad \text{Posterior transition } q^l_t:\quad q(s^l_t \mid s^l_{t-1}, s^{l+1}_t, e^l_t) \qquad \text{Prior transition } p^l_t:\quad p(s^l_t \mid s^l_{t-1}, s^{l+1}_t) \qquad \text{Decoder:}\quad p(x_t \mid s^1_t). \quad (4)
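As a small illustration of Eqs. 1 and 2, the following Python sketch computes the active timesteps of a level and the index of the state that is copied at inactive steps. Note that the condition of Eq. 1 is read as t ≡ 1 (mod k^{l−1}), so the lowest level (stride 1) is active at every step:

```python
def active_timesteps(T, k, level):
    """Eq. 1: steps at which `level` transitions (1-indexed time)."""
    stride = k ** (level - 1)
    return [t for t in range(1, T + 1) if (t - 1) % stride == 0]

def state_index(t, k, level):
    """Eq. 2: the last active step tau <= t whose state is copied."""
    stride = k ** (level - 1)
    return ((t - 1) // stride) * stride + 1

# With k = 6, level 3 transitions every 36 steps: active at t = 1, 37, ...
print(active_timesteps(40, 6, 3))   # -> [1, 37]
print(state_index(35, 6, 2))        # -> 31 (level 2 active at 1, 7, ..., 31)
```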
Training objective Since we cannot compute the likelihood of the training data under the model in closed form, we use the ELBO as our training objective. This training objective optimizes a reconstruction loss at the lowest level and a KL regularizer at every level of the hierarchy, summed across the active timesteps,
\max_{e,h,q,p}\ \sum_{t=1}^{T} \mathbb{E}_{q^1_t}\big[\ln p(x_t \mid s^1_t)\big] - \sum_{l=1}^{L} \sum_{t \in \mathcal{T}_l} \mathrm{KL}\big[\,q^l_t \,\|\, p^l_t\,\big]. \quad (5)
The KL regularizer at each level limits the amount of information that filters through the encoder and stays in the posterior at that level. This encourages the model to utilize the state transitions and the context from the level above as much as possible. Since the number of active timesteps decreases as we go higher in the hierarchy, the number of KL terms per level decreases as well. Hence, it is easier for the model to push global information high up in the hierarchy and pay a smaller KL penalty, instead of transitioning those bits with an identity transformation at a lower level. (A code sketch of this objective is given at the end of this section.)
Stochastic and Deterministic Path As illustrated in Figure 3 (right), we split the state s^l_t into stochastic (z^l_t) and deterministic (h^l_t) parts (Hafner et al., 2019). The deterministic state is computed from the top-down and temporal context and then conditions the stochastic state at that level.
The stochastic states follow a diagonal Gaussian, with mean and variance predicted by a neural network. We use one GRU (Cho et al., 2014) per level to update the deterministic state at every active timestep. All components of Equation 4 are trained jointly by optimizing Equation 5 using stochastic backpropagation with reparameterized sampling. Please refer to Appendix B for architecture details." },
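The objective of Eq. 5 can be written compactly in code. Below is a minimal PyTorch-style sketch; the authors' implementation uses TensorFlow Probability, and the container layout `posteriors[l][t]`, `priors[l][t]`, the decoder interface, and the per-level lists of active steps are assumptions made for illustration:

```python
from torch.distributions import kl_divergence

def tald_elbo(frames, posteriors, priors, decoder, active):
    """Eq. 5: frame reconstruction at the bottom level plus KL
    regularizers at every level, summed over active timesteps."""
    # Level 1 (index 0) is active at every step, so it reconstructs
    # every frame from a reparameterized posterior sample.
    recon = sum(
        decoder(posteriors[0][t].rsample()).log_prob(frames[t]).sum()
        for t in range(len(frames)))
    kl = sum(
        kl_divergence(posteriors[l][t], priors[l][t]).sum()
        for l in range(len(posteriors)) for t in active[l])
    return recon - kl  # maximize this ELBO
```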
}, { "heading": "4.1 MOVING MNIST DATASET", "text": "The Moving MNIST dataset consists of two digits moving in a square with velocities sampled in the range of 2 to 6 pixels per frame. We trained different versions of TALD with 3 levels in the hierarchy and temporal abstraction factors 1, 2, 4, 6 and 8, all of which use the same number of model parameters. We compare samples of long-horizon open-loop video predictions of 900 frames with RSSM and SVG-LP in Figure 4. All samples were conditioned using posterior beliefs inferred after observing 36 context frames. Please refer to Appendix C for more details and discussion.\nWe observe that SVG typically forgets object identity within 50 timesteps, while TALD with abstraction factor 6 maintains digit identity over 900 timesteps. RSSM clearly outperforms SVG, however starts to forget object identities after 250 time frames. TALD with abstraction factor 1 (i.e. no temporal abstraction) also starts to forget object identity after around 600 frames. With regards to the object positions, TALD with abstraction factor 6 predicts accurate digit positions until around 90 steps, and predicts a plausible sequence thereafter. RSSM and TALD without temporal abstraction predict the correct location of digits for at least as long as TALD with temporal abstraction. However, SVG starts to lose track of positions much sooner. We also note that our predictions are a bit blurry compared to those generated by SVG. Please refer to Appendix D for more experimental results.\nWe report the KL divergence value per level (summed over active time steps) for our TALD models in Table 1. Each value was obtained after training over sequences of length 100, for 200 epochs. The 2 and 3-level models were trained with a temporal abstraction factor of 6, and the 4-level model with a factor of 4 (to fit into memory). Figure 1 compares the Structural Similarity index (SSIM) for different versions of TALD with RSSM and SVG-LP. We note that SSIM decreases at a lower rate for models with higher temporal abstractions. As a baseline, we compute SSIM between ground truths and random sequences from the training set. It is interesting to note that quality of video predictions from TALD stay better than random for a 6 times longer duration than SVG." }, { "heading": "4.2 KTH ACTION DATASET", "text": "We trained a 3-level TALD model with temporal abstraction factor 6 for the KTH Action dataset. In Figure 6, we report the SSIM, PSNR, and LPIPS, of TALD compared to SVGLP, RSSM, and VTA. We also illustrate open-loop video predictions in Figure 5, conditioned using 36 context frames. While TALD predicts plausible frames for 50 timesteps, we observe jumpy transitions with VTA, probably because of breaks in the transition chain at the lower level. We also observe that SVG predicts accurately for 18 frames, while switching to a different task thereafter. We also note that SVG uses the DCGAN architecture for MNIST and the much larger VGG for KTH, whereas TALD works well even with the smaller DCGAN encoder/decoder. Refer to Appendix D for more results." }, { "heading": "4.3 GQN 3D MAZES DATASET", "text": "We trained a 2-level TALD model with temporal abstraction factor 6, and compared it with RSSM and VTA, on the GQN mazes dataset. Figure 7 shows open-loop video prediction samples, conditioned using 36 context frames. We observe that while our model can maintain global information of wall and floor colors for 200 frames, RSSM starts to forget the same after ∼50 frames. 
{ "heading": "4.3 GQN 3D MAZES DATASET", "text": "We trained a 2-level TALD model with temporal abstraction factor 6 and compared it with RSSM and VTA on the GQN mazes dataset. Figure 7 shows open-loop video prediction samples, conditioned on 36 context frames. We observe that while our model maintains global information such as the wall and floor colors for 200 frames, RSSM starts to forget it after ∼50 frames. Even though the open-loop predictions from TALD differ from the ground truth in terms of camera viewpoints, the model does not forget the wall and floor patterns. We compare quantitative metrics on this dataset in Figure 12d. Please refer to Appendix D for detailed experimental results." }, { "heading": "4.4 MINERL NAVIGATE DATASET", "text": "We trained a 3-level TALD model with temporal abstraction factor 6 and compared it with RSSM, SVG-LP, and VTA on a Minecraft dataset (MineRL Navigate) (Guss et al., 2019). This dataset features videos in a variety of world environments with complex moving backgrounds. We show long-horizon open-loop video predictions of up to 420 frames (conditioned on 36 context frames) on this dataset in Figure 2. We also compare quantitative metrics on this dataset in Figure 12c in Appendix D. Please refer to Figure 15 in Appendix D for more open-loop video prediction examples." }, { "heading": "4.5 ADAPTING TO SEQUENCE SPEED", "text": "In order to understand how our model adapts to changing temporal correlations in the dataset, we trained our model on slower and faster versions of Moving MNIST, with speeds varied by factors of 3. For this experiment, our model consisted of 3 levels in the hierarchy, with each level temporally abstracting the level below by a factor of 6.
Figure 8 shows the KL divergence summed across the active timesteps at every level of the hierarchy. We observe a correlation between the KL divergence at every level and the speed at which the digits move: there is more information at level 1 when the digits move faster, and consequently less information at the levels above it. Also, even though the KL divergence at level 3 is small, it still follows the same trend as the other two levels. It is also important to note that the KL divergence between the prior and the posterior is only an upper bound on the information stored by the encoder in a posterior belief state." }, { "heading": "4.6 RESETTING INDIVIDUAL LEVELS", "text": "We visualize the information stored at a given level by replacing the posterior belief at that level with the prior belief, i.e., all but one level receive observations (Zhao et al., 2017); a sketch of this procedure is given below. Conditioned on those posterior beliefs, we sample open-loop video predictions using the trained prior model, which should show variations in the attributes learned at that level. We expect our model to store global information high up in the hierarchy, where the model performs fewer transitions over that information, making it cheaper in terms of KL divergence during training.
Figure 9 shows video predictions with different levels reset to the prior for the GQN mazes dataset. With a 3-level model and temporal abstraction factor 2, we observe that when level 3 was not fed with observations, the conditioned open-loop predictions started with the correct viewpoint but differed with respect to the wall and floor colors. This suggests that those characteristics were stored at the higher level of the hierarchy.
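To clarify the evaluation of this section, here is a schematic sketch of resetting one level to the prior during inference; as before, the model interface names are hypothetical:

```python
def infer_with_reset(model, frames, reset_level):
    """Condition all levels on observations except `reset_level`,
    whose posterior is replaced by the prior (Section 4.6)."""
    states = model.initial_states()
    for t, frame in enumerate(frames):
        for l in model.active_levels(step=t):   # levels transitioning at t
            if l == reset_level:
                states[l] = model.prior(states, level=l)       # no observation
            else:
                states[l] = model.posterior(states, frame, level=l)
    return states  # sample open-loop predictions from these beliefs
```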
On the other hand, when level 2 was not fed with observations, the samples maintained the same digit positions but produced digits with different identities in every sample. This suggests that the lower level stored digit positions, i.e., high-frequency details that changed frequently in time, while the level above it stored the digit identities, i.e., long-term information. We also observed that, when resetting level 2 to the prior, the digits started to differ in position sooner (∼60 frames) than when all levels receive observations (∼80 frames). This suggests that this level does have some information about digit positions, and that there is still mixing of information between different levels of the hierarchy.
Predictive entropy To corroborate our understanding of the latent representations, we observe the entropy of the prior distribution as it varies over time during open-loop video generation in Figure 10. With GQN mazes, the entropy at the top level remained relatively constant as the model remained certain about the high-level details. With Moving MNIST, the top two levels showed a relatively constant entropy, with level 2 storing the digit identities and level 3 suffering from posterior collapse. Please refer to Appendix D.6 for a more detailed analysis." }, { "heading": "5 DISCUSSION", "text": "In this work, we presented a hierarchical latent dynamics model with temporal abstraction (TALD), where each level in the hierarchy temporally abstracted the level below by an adjustable factor.
• We evaluated long-horizon open-loop predictions using our model, and observed that TALD was able to predict far into the future while accurately maintaining global information.
• We also observed that the amount of information at the higher levels decreased as the speed of the sequence was increased.
• We analyzed the separation of information at different levels of the hierarchy by generating open-loop video predictions with different levels reset to the prior. With Moving MNIST, the bottom level in the hierarchy stored high-frequency details (digit positions) and the level above stored more global information (digit identities). With the GQN mazes dataset, TALD stored wall and floor patterns at the top level in the hierarchy.
Temporally abstract models are an intuitive approach to obtaining high-level representations of complex datasets and environments. We hope that our work can refuel interest in temporally abstract latent dynamics models and motivate the development of effective deep learning systems for high-dimensional data more generally." }, { "heading": "A BACKGROUND", "text": "We build our work upon the recurrent state-space model (RSSM; Hafner et al., 2019) that has been shown to successfully learn environment dynamics from raw pixels in reinforcement learning. This model acts as an important baseline for our evaluation. RSSM explains the video sequence $x_{1:T}$ using a latent sequence of compact Markovian states $s_{1:T}$. Importantly, the model is autoregressive in latent space but not in image space, allowing us to predict into the future without generating images along the way,
$p(x_{1:T}, s_{1:T}) := \prod_{t=1}^{T} p(x_t \mid s_t)\, p(s_t \mid s_{t-1})$. (6)
Given a training sequence, RSSM first individually embeds the frames using a CNN. A recurrent network with deterministic and stochastic components then summarizes the image embeddings. Hafner et al.
(2019) argued that the stochastic component helps with modeling multiple futures, while the deterministic state helps remember information over many timesteps so that it is not erased by noise. Finally, the recurrent states are decoded using a transposed CNN to provide a training signal.
Encoder: $e_t = \mathrm{enc}(x_t)$; Posterior transition $q_t$: $q(s_t \mid s_{t-1}, e_t)$; Prior transition $p_t$: $p(s_t \mid s_{t-1})$; Decoder: $p(x_t \mid s_t)$. (7)
The posterior and prior transition models share the same recurrent model. The difference is that the posterior incorporates images, while the prior tries to predict ahead without knowing the corresponding images. This lets us predict ahead purely in latent space at inference time. As is typical with deep latent variable models, we cannot compute the likelihood of the training data under the model in closed form. Instead, we use the evidence lower bound (ELBO) as the training objective. The ELBO encourages the model to accurately reconstruct each image from its corresponding latent state, while regularizing the latent state distributions to stay close to the prior dynamics,
$\max_{q,p} \sum_{t=1}^{T} \mathbb{E}_{q_t}[\ln p(x_t \mid s_t)] - \sum_{t=1}^{T} \mathrm{KL}[q_t \,\|\, p_t]$. (8)
The KL regularizer limits the amount of information that the posterior transition incorporates into the latent state at each time step, thus encouraging the model to mostly rely on information from past time steps and only extract information from each image that cannot be predicted from the preceding images already. All components jointly optimize Equation 8 using stochastic backpropagation with reparameterized sampling (Kingma & Welling, 2013; Rezende et al., 2014)." }, { "heading": "B MODEL ARCHITECTURES", "text": "We use convolutional frame encoders and decoders, with architectures very similar to the DCGAN (Radford et al., 2016) discriminator and generator, respectively. To obtain the input embeddings $e^l_t$ at a particular level, $k^{l-1}$ input embeddings are pre-processed using a feed-forward network and then added up to obtain a single embedding. We also want to emphasize that we do not use any skip connections between the encoder and decoder bypassing the latent states, as we believe this would motivate the model to make better use of the temporal hierarchy." }, { "heading": "C HYPER PARAMETERS AND EXPERIMENTAL SETUP", "text": "We kept the output dimensionality of the encoder at each level of TALD as $|e^l_t| = 1024$, that of the stochastic states as $|p^l_t| = |q^l_t| = 20$, and that of the deterministic states as $|h^l_t| = 200$. All hidden layers inside the cell, both for the prior and posterior transition, were set to 200. We trained our models using the Adam optimizer (Kingma & Ba, 2014), with a learning rate of $5 \times 10^{-4}$ and $\epsilon = 10^{-4}$ for the Moving MNIST dataset. With the KTH Action and GQN mazes datasets, we lowered the learning rate to $3 \times 10^{-4}$. For all three datasets, the batch size was set to 100 sequences, each of length 100.
The decision to use 36 context frames for conditioning all open-loop video predictions reflects the minimum number of observed frames required to transition at least once at the highest level of the hierarchy. With 3 levels in the hierarchy and a temporal abstraction factor of 6, each latent state at the highest level corresponds to 36 images in the sequence, and thus its encoder network expects 36 images as input."
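To make Equations 6-8 concrete, here is a minimal PyTorch sketch of the ELBO training objective, assuming enc, post, prior, and dec are user-defined modules that return torch distributions over the stochastic state and the image; for brevity, the deterministic recurrent path is folded into the state argument, so this is a simplified sketch rather than the full RSSM implementation.

```python
import torch
from torch.distributions import kl_divergence

def rssm_elbo(frames, enc, post, prior, dec, s0):
    """ELBO of Equation 8. frames: (T, B, C, H, W); s0: initial latent state.
    post(s, e) and prior(s) return torch distributions over the next state;
    dec(s) returns a distribution over the image."""
    s, recon, kl = s0, 0.0, 0.0
    for t in range(frames.shape[0]):
        e_t = enc(frames[t])                 # frame embedding
        q_t = post(s, e_t)                   # q(s_t | s_{t-1}, e_t)
        p_t = prior(s)                       # p(s_t | s_{t-1})
        s = q_t.rsample()                    # reparameterized sample
        recon = recon + dec(s).log_prob(frames[t]).sum(dim=(1, 2, 3)).mean()
        kl = kl + kl_divergence(q_t, p_t).sum(dim=-1).mean()
    return recon - kl                        # maximize, i.e. minimize -(recon - kl)
```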
}, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "D.1 QUANTITATIVE METRICS FOR LONG-HORIZON VIDEO PREDICTION\nD.2 LONG-HORIZON PREDICTION OF MOVING MNIST\nD.3 LONG-HORIZON PREDICTION OF 3D MAZES\nD.4 LONG-HORIZON PREDICTION OF MINERL NAVIGATE\nD.5 UNDERSTANDING THE LATENT REPRESENTATIONS - 3D MAZES\nD.6 ANALYSING THE ENTROPY CURVES Figure 10b shows the entropy of the prior distribution at all levels of the 3-level TALD model with temporal abstraction factor 2, as it varies over time, with the Moving MNIST dataset. We observe that the entropy at the lowest level varies between 9 to 13 nats, whereas for the higher levels in the hierarchy, the entropy remained relatively constant. In Figure 11 we presented some visualizations that supported the claim that the higher level in the hierarchy stores global information (digit identities) which, in this case, does not change over time, and hence the model should remain rather certain about as time progresses. Figure 10b corroborates that claim showing that the model’s uncertainty at higher levels does not dither significantly over time as compared to the lowest level in the hierarchy.\nFigure 10a shows the entropy of the prior distribution at all levels of the 3-level TALD model with temporal abstraction factor 2, as it varies over time, with the GQN mazes dataset. While the bottom two levels showed a similar variance in entropy with time, the entropy of the prior distribution at the top level did not change significantly. We showed in Figure 9 how the top level in the hierarchy stores wall and floor colors, which the model should remain certain about over time. In support of this claim, here we also see that the entropy of the prior distribution at the topmost level does not change significantly over time." } ]
2020
null
SP:6082a5b51b24315dfdbfe147de1aef2c53cd113d
[ "This paper extends the Wasserstein autoencoder for learning disentangled representations from sequential data. The latent variable model considered contains separate latent variables capturing global and local information respectively, each of which is regularized by a divergence measuring the marginal posterior $Q_z$ and the prior $P_z$. An optional auxiliary discrete latent is introduced to incorporate inductive bias for discrete local features (e.g., type of actions). To estimate the divergence terms, the authors propose to use MMD for the recurrent local latents since the prior distribution evolves over time; for the global latent, the authors presented two options: discriminator-based Jenson-Shannon Divergence estimate (the same as adversarial autoencoder proposed in Makhzani et al., 2016) and scaled MMD (Arbel et al., 2018). The connection between the proposed objective and mutual information maximization is outlined in Section 4. Experimental results show that the proposed R-WAE model outperforms baseline DS-VAE/FHVAE/MocoGAN" ]
Learning disentangled representations leads to interpretable models and facilitates data generation with style transfer, which has been extensively studied on static data such as images in an unsupervised learning framework. However, only a few works have explored unsupervised disentangled sequential representation learning due to challenges of generating sequential data. In this paper, we propose recurrent Wasserstein Autoencoder (R-WAE), a new framework for generative modeling of sequential data. R-WAE disentangles the representation of an input sequence into static and dynamic factors (i.e., time-invariant and time-varying parts). Our theoretical analysis shows that, R-WAE minimizes an upper bound of a penalized form of the Wasserstein distance between model distribution and sequential data distribution, and simultaneously maximizes the mutual information between input data and different disentangled latent factors, respectively. This is superior to (recurrent) VAE which does not explicitly enforce mutual information maximization between input data and disentangled latent representations. When the number of actions in sequential data is available as weak supervision information, R-WAE is extended to learn a categorical latent representation of actions to improve its disentanglement. Experiments on a variety of datasets show that our models outperform other baselines with the same settings in terms of disentanglement and unconditional video generation both quantitatively and qualitatively.
[ { "affiliations": [], "name": "DISENTANGLED RECURRENT" }, { "affiliations": [], "name": "WASSERSTEIN AUTOEN" }, { "affiliations": [], "name": "Jun Han" }, { "affiliations": [], "name": "Martin Renqiang Min" }, { "affiliations": [], "name": "Ligong Han" }, { "affiliations": [], "name": "Li Erran Li" }, { "affiliations": [], "name": "Xuan Zhang" } ]
[ { "authors": [ "Niki Aifanti", "Christos Papachristou", "Anastasios Delopoulos" ], "title": "The mug facial expression database", "venue": "In 11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS", "year": 2010 }, { "authors": [ "Michael Arbel", "Dougal Sutherland", "Mikołaj Bińkowski", "Arthur Gretton" ], "title": "On gradient regularizers for mmd gans", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Yogesh Balaji", "Martin Renqiang Min", "Bing Bai", "Rama Chellappa", "Hans Peter Graf" ], "title": "Tfgan: Improving conditioning for text-to-video synthesis", "venue": null, "year": 2018 }, { "authors": [ "Marc G Bellemare", "Ivo Danihelka", "Will Dabney", "Shakir Mohamed", "Balaji Lakshminarayanan", "Stephan Hoyer", "Rémi Munos" ], "title": "The cramer distance as a solution to biased wasserstein gradients", "venue": "arXiv preprint arXiv:1705.10743,", "year": 2017 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Mikołaj Bińkowski", "Dougal J Sutherland", "Michael Arbel", "Arthur Gretton" ], "title": "Demystifying mmd gans", "venue": "arXiv preprint arXiv:1801.01401,", "year": 2018 }, { "authors": [ "Olivier Bousquet", "Sylvain Gelly", "Ilya Tolstikhin", "Carl-Johann Simon-Gabriel", "Bernhard Schoelkopf" ], "title": "From optimal transport to generative modeling: the vegan cookbook", "venue": "arXiv preprint arXiv:1705.07642,", "year": 2017 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": null, "year": 2019 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": null, "year": 2016 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Najim Dehak", "Patrick J Kenny", "Réda Dehak", "Pierre Dumouchel", "Pierre Ouellet" ], "title": "Front-end factor analysis for speaker verification", "venue": "IEEE Transactions on Audio, Speech, and Language Processing,", "year": 2010 }, { "authors": [ "Emily L Denton" ], "title": "Unsupervised learning of disentangled representations from video", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Emilien Dupont" ], "title": "Learning disentangled joint continuous and discrete representations", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Peyman Mohajerin Esfahani", "Daniel Kuhn" ], "title": "Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations", "venue": "Mathematical Programming,", "year": 2018 }, { "authors": [ "John S Garofolo" ], "title": "Timit acoustic phonetic continuous speech corpus", "venue": "Linguistic Data Consortium,", "year": 1993 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", 
"David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Arthur Gretton", "Karsten Borgwardt", "Malte Rasch", "Bernhard Schölkopf", "Alex J Smola" ], "title": "A kernel method for the two-sample-problem", "venue": "In NIPS,", "year": 2007 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Jiawei He", "Andreas Lehrmann", "Joseph Marino", "Greg Mori", "Leonid Sigal" ], "title": "Probabilistic video generation using holistic attribute control", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": null, "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Jun-Ting Hsieh", "Bingbin Liu", "De-An Huang", "Li F Fei-Fei", "Juan Carlos Niebles" ], "title": "Learning to decompose and disentangle representations for video prediction", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Wei-Ning Hsu", "Yu Zhang", "James Glass" ], "title": "Unsupervised learning of disentangled and interpretable representations from sequential data", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": null, "year": 2016 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": null, "year": 2014 }, { "authors": [ "Chun-Liang Li", "Wei-Cheng Chang", "Yu Cheng", "Yiming Yang", "Barnabás Póczos" ], "title": "Mmd gan: Towards deeper understanding of moment matching", "venue": null, "year": 2017 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Laurens van der 
Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": null, "year": 2008 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "arXiv preprint arXiv:1611.00712,", "year": 2016 }, { "authors": [ "Lars Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for gans do actually converge", "venue": null, "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": null, "year": 2018 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Sherjil Ozair", "Corey Lynch", "Yoshua Bengio", "Aaron Van den Oord", "Sergey Levine", "Pierre Sermanet" ], "title": "Wasserstein dependency measure for representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Henning Petzka", "Asja Fischer", "Denis Lukovnicov" ], "title": "On the regularization of wasserstein gans", "venue": null, "year": 2017 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron Van Den Oord", "Alex Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 2019 }, { "authors": [ "Guo-Jun Qi" ], "title": "Loss-sensitive generative adversarial networks on lipschitz densities", "venue": "arXiv preprint arXiv:1701.06264,", "year": 2017 }, { "authors": [ "Kevin Roth", "Aurelien Lucchi", "Sebastian Nowozin", "Thomas Hofmann" ], "title": "Stabilizing training of generative adversarial networks through regularization", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Paul K Rubenstein", "Bernhard Schoelkopf", "Ilya Tolstikhin" ], "title": "On the latent space of wasserstein auto-encoders", "venue": "arXiv preprint arXiv:1802.03761,", "year": 2018 }, { "authors": [ "Paul K Rubenstein", "Bernhard Schoelkopf", "Ilya Tolstikhin" ], "title": "Learning disentangled representations with wasserstein auto-encoders", "venue": null, "year": 2018 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Ximeng Sun", "Huijuan Xu", "Kate Saenko" ], "title": "A two-stream variational adversarial network for video generation", "venue": "arXiv preprint arXiv:1812.01037,", "year": 2018 }, { "authors": [ "Hoang Thanh-Tung", "Truyen Tran", "Svetha Venkatesh" ], "title": "Improving generalization and stability of generative adversarial networks", "venue": null, "year": 2019 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 2020 }, { "authors": [ "Sergey Tulyakov", "Ming-Yu Liu", "Xiaodong Yang", "Jan Kautz" ], "title": "Mocogan: Decomposing motion and content for video generation", "venue": null, "year": 2018 }, { "authors": [ "Li Yingzhen", "Stephan Mandt" ], "title": "Disentangled 
sequential autoencoder", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Learning hierarchical features from deep generative models", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Brock" ], "title": "2019), named \"ResBlock down\". After each Resblock, we use a FC network to get latent feature h, for i = 0", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unsupervised representation learning is an important research topic in machine learning. It embeds high-dimensional sensory data such as images and videos into a low-dimensional latent space in an unsupervised learning framework, aiming at extracting essential data variation factors to help downstream tasks such as classification and prediction (Bengio et al., 2013). In the last several years, disentangled representation learning, which further separates the latent embedding space into exclusive explainable factors such that each factor only interprets one of semantic attributes of sensory data, has received a lot of interest and achieved many empirical successes on static data such as images (Chen et al., 2016; Higgins et al., 2017; Dupont, 2018; Chen et al., 2018; Rubenstein et al., 2018b;a; Kim & Mnih, 2018). For example, the latent representation of handwritten digits can be disentangled into a content factor encoding digit identity and a style factor encoding handwriting style.\nIn spite of successes on static data, only a few works have explored unsupervised representation disentanglement of sequential data due to the challenges of developing generative models of sequential ∗Equal contribution. †Part of his work was done before joining Tencent. ‡His work was done before joining Amazon. §His work was done before joining Texas A&M University.\ndata. Learning disentangled representations of sequential data is important and has many applications. For example, the latent representation of a smiling-face video can be disentangled into a static part encoding the identity of the person (content factor) and a dynamic part encoding the smiling motion of the face (motion factor). The disentangled representation of the video can be potentially used for many downstream tasks such as classification, retrieval, and synthetic video generation with style transfer. Most of previous unsupervised representation disentanglement models for static data heavily rely on the KL-divergence regularization in a VAE framework (Higgins et al., 2017; Dupont, 2018; Chen et al., 2018; Kim & Mnih, 2018), which has been shown to be problematic due to matching individual instead of aggregated posterior distribution of the latent code to the same prior (Tolstikhin et al., 2018; Rubenstein et al., 2018b;a). Therefore, extending VAE or recurrent VAE (Chung et al., 2015) to disentangle sequential data in a generative model framework (Hsu et al., 2017; Yingzhen & Mandt, 2018) is not ideal. In addition, recent research (Locatello et al., 2019) has theoretically shown that it is impossible to perform unsupervised disentangled representation learning without inductive biases on both models and data, especially on static data. Fortunately, sequential data such as videos often have clear inductive biases for the disentanglement of content factor and motion factor as mentioned in (Locatello et al., 2019). Unlike static data, the learned static and dynamic factors of sequential data are not exchangeable.\nIn this paper, we propose a recurrent Wasserstein Autoencoder (R-WAE) to learn disentangled representations of sequential data. 
We employ a Wasserstein metric (Arjovsky et al., 2018; Gulrajani et al., 2017; Bellemare et al., 2017) induced from the optimal transport between model distribution and the underlying data distribution, which has some nicer properties (for e.g., sum invariance, scale sensitivity, applicable to distributions with non-overlapping supports, and better out-of-sample performance in the worst-case expectation (Esfahani & Kuhn, 2018)) than the KL divergence in VAE (Kingma & Welling, 2014) and β-VAE (Higgins et al., 2017). Leveraging explicit inductive biases in both sequential data and model, we encode an input sequence into two parts: a shared static latent code and a dynamic latent code, and sequentially decode each element of the sequence by combining both codes. We enforce a fixed prior distribution for the static code and learn a prior for the dynamic code to ensure the consistency of the sequence. The disentangled representations are learned by separately regularizing the posteriors of the latent codes with their corresponding priors.\nOur main contributions are summarized as follows: (1) We draw the first connection between minimizing a Wasserstein distance and maximizing mutual information for unsupervised representation disentanglement of sequential data from an information theory perspective; (2) We propose two sets of effective regularizers to learn the disentangled representation in a completely unsupervised manner with explicit inductive biases in both sequential data and models. (3) We incorporate a relaxed discrete latent variable to improve the disentangled learning of actions on real data. Experiments show that our models achieve state-of-the-art performance in both disentanglement of static and dynamic latent representations and unconditional video generation under the same settings as baselines (Yingzhen & Mandt, 2018; Tulyakov et al., 2018)." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Notation Let calligraphic letters (i.e. X ) be sets, capital letters (i.e. X) be random variables and lowercase letters be their values. Let D(PX , PG) be the divergence between the true (but unknown) data distribution PX (density p(x)) and the latent-variable generative model distribution PG specified by a prior distribution PZ (density p(z)) of latent variable Z. Let DKL be KL divergence, DJS be Jensen-Shannon divergence and MMD be Maximum Mean Discrepancy (MMD) (Gretton et al., 2007).\nOptimal Transport Between Distributions The optimal transport cost inducing a rich class of divergence between the distribution PX and the distribution PG is defined as follows,\nW (PX , PG):= inf Γ∼P(X∼PX ,Y∼PG) E(X,Y )∼Γ[c(X,Y )], (1)\nwhere c(X,Y ) is any measurable cost function and P(X ∼ PX , Y ∼ PG) is the set of joint distributions of (X, Y) with respective marginals PX and PG.\nComparison between WAE (Tolstikhin et al., 2018) and VAE (Kingma & Welling, 2014) Instead of optimizing over all couplings Γ between two random variables in X , Bousquet et al.\n(2017); Tolstikhin et al. (2018) show that it is sufficient to find Q(Z|X) such that the marginal Q(Z) := EX∼PX [Q(Z|X)] is identical to the prior P (Z), as given in the following definition,\nDefinition 1. For any deterministic PG(X|Z) and any function G : Z → X ,\nW (PX , PG) = inf Q:QZ=PZ EPXEQ(Z|X)[c(X,G(Z))]. 
(2)\nDefinition 1 leads to the following loss DWAE of WAE based on a Wasserstein distance,\ninf Q(Z|X) EPXEQ(Z|X)[c(X,G(Z))] + β D(QZ , PZ), (3)\nwhere the first term is data reconstruction loss, and the second one is a regularizer that forces the posterior QZ = ∫ Q(Z|X)dPX to match the prior PZ (Adversarial autoencoder (AAE) (Makhzani et al., 2015) shares a similar idea to WAE). In contrast, VAE has a different regularizer EX [DKL(Q(Z|X), PZ))] enforcing the latent posterior distribution of each input to match PZ . In (Rubenstein et al., 2018a;b), it is shown that WAE has better disentanglement than β-VAE (Higgins et al., 2017) on images, which inspires us to design a new representation disentanglement framework for sequential data with several innovations.\nUnsupervised disentangled representation learning Several generative models have been proposed to learn disentangled representations of sequential data (Denton et al., 2017; Hsu et al., 2017; Yingzhen & Mandt, 2018; Hsieh et al., 2018; Sun et al., 2018; Tulyakov et al., 2018). FHVAE in (Hsu et al., 2017) is a VAE-based hierarchical graphical model with factorized Gaussian priors and only focuses on speech or audio data. Our R-WAE employing a more powerful recurrent prior can be applied to both speech and video data. The models in (Sun et al., 2018; Denton et al., 2017; Hsieh et al., 2018) are based on the first several elements of a sequence to design disentanglement architectures for future sequence predictions.\nIn terms of representation learning by mutual information maximization, our work empirically demonstrates that explicit inductive biases in data and model architecture are necessary to the success of learning meaningful disentangled representations of sequential data, while the works in (Locatello et al., 2019; Poole et al., 2019; Tschannen et al., 2020; Ozair et al., 2019) are about general representation learning, especially on static data.\nThe most related works to ours are MoCoGAN (Tulyakov et al., 2018) and DS-VAE (Yingzhen & Mandt, 2018), which have the ability to disentangle variant and invariant parts of sequential data and perform unconditional sequence generation. Tulyakov et al. (2018) is a GAN-based model that can be only applied to the setting in which the number of motions is finite, and cannot encode the latent representation of sequences. Yingzhen & Mandt (2018) provides a disentangled sequential autoencoder based on VAE (Kingma & Welling, 2014). Training VAE is equivalent to minimizing a lower bound of the KL divergence between empirical data distribution and generated data distribution, which has been shown to produce inferior disentangled representations of static data than generative models employing the Wasserstein metric (Rubenstein et al., 2018a;b)." }, { "heading": "3 PROPOSED APPROACH: DISENTANGLED RECURRENT WASSERSTEIN AUTOENCODER (R-WAE)", "text": "Given a high-dimensional sequence x1:T , our goal is to learn a disentangled representation of timeinvariant latent code zc and time-variant latent code zmt , along the sequence. Let zt = (z\nc, zmt ) be the latent code of xt. Let Xt, Zt, Zc and Zmt be random variables with realizations xt, zt, z\nc and zmt respectively, and denote D = X1:T . 
To achieve this goal, we define the following probabilistic generative model by assuming Zmt and Z c are independent,\nP (X1:T , Z1:T ) = P (Z c) T∏ t=1 Pψ(Z m t |Zm<t)Pθ(Xt|Zt), (4)\nwhere P (Z1:T ) = P (Zc) ∏T t=1 Pψ(Z m t |Zm<t) is the prior in which Zt = (Zc, Zmt ), and the decoder model Pθ(Xt | Zt) is a Dirac delta distribution. In practice, P (Zc) is chosen as N (0, I)\nand Pψ(Zmt |Zm<t) = N (µψ(Zm<t),σ2ψ(Zm<t)), µψ and σψ are parameterized by Recurrent Neural Networks (RNNs). The inference model Q is defined as\nQφ(Z c, Zm1:T |X1:T ) = Qφ(Zc|X1:T ) T∏ t=1 Qφ(Z m t | Zm<t, Xt), (5)\nwhereQφ(Zc|X1:T ) andQφ(Zmt | Zm<t, Xt) are also Gaussian distributions parameterized by RNNs. The structures of the generative model (4) and the inference model (5) are provided in Fig. 1." }, { "heading": "3.1 R-WAE MINIMIZES A PENALIZED FORM OF A WASSERSTEIN DISTANCE", "text": "The optimal transport cost between two distributions PD and PG with respective sequential variables X1:T (X1:T∼PD) and Y1:T (Y1:T∼PG) is given as follows,\nW (PD, PG) := inf Γ∼P(X1:T∼PD,Y1:T∼PG) E(X1:T ,Y1:T )∼Γ[c(X1:T , Y1:T )], (6)\nP(X1:T∼PD,Y1:T∼PG) is a set of all joint distributions with marginals PD and PG respectively. When we choose c(x,y) = ‖x−y‖2 (2-Wasserstein distance) and c(X1:T , Y1:T ) = ∑ t ‖Xt−Yt‖2 by linearity, it is easy to derive the optimal transport cost for disentangled sequential variables.\nTheorem 1. With deterministic P (Xt|Zt) and any function Yt = G(Zt), we derive\nW (PD, PG) = inf Q:QZc=PZc ,QZm\n1:T =PZm 1:T ∑ t EPDEQ(Zt|Z<t,Xt)[c(Xt, G(Zt))], (7)\nwhere QZ1:T = QZcQZm1:T is the marginal distribution of Z1:T when X1:T ∼ PD and Z1:T ∼ Q(Z1:T |X1:T ) and PZ1:T is the prior. Based on the assumptions, we have an upper bound,\nW (PD, PG) ≤ inf Q∈S ∑ t EPDEQ(Zt|Z<t,Xt)[c(Xt, G(Zt))], (8)\nwhere the subset S is S = {Q : QZc = PZc , QZm1 = PZm1 , QZmt |Zm<t = PZmt |Zm<t} .\nIn practice, we have the following objective function of our proposed R-WAE based on Theorem 1,\nT∑ t=1 EQ(Zt|Z<t,Xt)[c(Xt, G(Zt))] + β1 D(QZc , PZc) + β2 T∑ t=1 D(QZmt |Zm<t , PZmt |Zm<t), (9)\nwhere D is an divergence between two distributions, and the second and third terms are, respectively, regularization terms for Zc and Zmt . In the following, we will present two different approaches to calculating the regularization terms in section 3.2 and 3.3. Because we cannot straightforwardly estimate the marginals Qφ(Zc) and Qφ(Zmt |Zm<t), we cannot directly use KL divergence in the two regularization terms, but we can optimize the RHS of (9) by likelihood-free optimizations (Gretton et al., 2007; Goodfellow et al., 2014; Nowozin et al., 2016; Arjovsky et al., 2018) when samples from all distributions are available.\n3.2 DJS PENALTY FOR Zc AND MMD PENALTY FOR Zm\nThe prior distribution of Zc is chosen as a multivariate unit-variance Gaussian, N (0, I). We can choose penalty DJS(QZc ,PZc) for Zc and apply min-max optimization by introducing a discriminator Dγ (Goodfellow et al., 2014). Instead of performing optimizations in high-dimensional input data space, we move the adversarial optimization to the latent representation space of the content with a much lower dimension. Because the prior distribution of {Zmt } is dynamically learned during training, it is challenging to use DJS to regularize {Zmt } with a discriminator, which will induce a third minimization within a min-max optimization. 
Therefore, we use MMD to regularize $\{Z^m_t\}$, as samples from both distributions are easy to obtain (the dimension of $z^m_t$ is less than 20 in our experiments on videos). With a kernel $k$, $\mathrm{MMD}_k(Q, P)$ is approximated by samples from $Q$ and $P$ (Gretton et al., 2007). The regularization terms can be summarized as follows, and we call the resulting model R-WAE(GAN) (see Algorithm 1 in Appendix for details):
$D(Q_{Z^c}, P_{Z^c}) = D_{JS}(Q_{Z^c}, P_{Z^c})$; $D(Q_{Z^m_t \mid Z^m_{<t}}, P_{Z^m_t \mid Z^m_{<t}}) = \mathrm{MMD}_k(Q_{Z^m_t \mid Z^m_{<t}}, P_{Z^m_t \mid Z^m_{<t}})$. (10)
3.3 SCALED MMD PENALTY FOR $Z^c$ AND MMD PENALTY FOR $Z^m$
MMD with neural kernels for generative modeling of real-world data (Li et al., 2017; Bińkowski et al., 2018; Arbel et al., 2018) motivates us to use only MMD as regularization in Eq. (9),
$D(Q_{Z^c}, P_{Z^c}) = \mathrm{MMD}_{k_\gamma}(Q_{Z^c}, P_{Z^c})$; $D(Q_{Z^m_t \mid Z^m_{<t}}, P_{Z^m_t \mid Z^m_{<t}}) = \mathrm{MMD}_k(Q_{Z^m_t \mid Z^m_{<t}}, P_{Z^m_t \mid Z^m_{<t}})$, (11)
where $k_\gamma$ is a parametrized family of kernels (Li et al., 2017; Bińkowski et al., 2018; Arbel et al., 2018) defined as $k_\gamma(x, y) = k(f_\gamma(x), f_\gamma(y))$, and $f_\gamma(x)$ is a feature map, which is more expressive and is used for $Z^c$, whose dimension is equal to or higher than that of $Z^m_t$. The details of optimizing the first term $\mathrm{MMD}_{k_\gamma}(Q_{Z^c}, P_{Z^c})$ in Eq. (11) are provided in Appendix D, based on scaled MMD (Arbel et al., 2018), a principled and stable technique for training an MMD-based critic. We call the resulting model R-WAE(MMD) (see Algorithm 2 in Appendix for details)." }, { "heading": "3.4 WEAKLY SUPERVISED DISENTANGLEMENT WITH A KNOWN NUMBER OF ACTIONS", "text": "When the number of actions (motions) in sequential data, denoted by $A$, is available, we incorporate a categorical latent variable $a$ (a one-hot vector whose dimension is $A$) to enhance the disentanglement of the dynamic latent codes of the motions. The inference model for $a$ is designed as $q_\phi(a \mid x_{1:T}, z^m_{1:T})$. Intuitively, the action is inferred from the motion sequence to recognize its label. Learning such a categorical distribution requires a continuous relaxation of the discrete random variable in order to backpropagate its gradient. Let $\alpha_1, \cdots, \alpha_A$ be the class probabilities; we can obtain a sample $\tilde{a} = (y_1, \cdots, y_A)$ from its continuous relaxation by first sampling $g = (g_1, \cdots, g_A)$ with $g_j \sim \mathrm{Gumbel}(0, 1)$ and then applying the transformation $\tilde{a}_j = \exp((\log \alpha_j + g_j)/\tau) / \sum_i \exp((\log \alpha_i + g_i)/\tau)$, where $\tau$ is a temperature parameter controlling the approximation. To learn the categorical distribution using the reparameterization trick, we use a regularizer $D_{KL}(q_\phi(\tilde{a} \mid x_{1:T}, z^m_{1:T}), p(\tilde{a}))$, where $p(\tilde{a})$ is a uniform Gumbel-Softmax prior distribution (Jang et al., 2016; Maddison et al., 2016). The motion variable is augmented as $z^R_t = (z^m_t, a)$, and learning joint continuous and discrete latent representations of image data has been extensively discussed in (Dupont, 2018) (see Fig. 1(c,d) for illustrations)." }, { "heading": "4 ANALYZING R-WAE FROM AN INFORMATION THEORY PERSPECTIVE", "text": "Theorem 2. If the mutual information (MI) between $Z_{1:T}$ and $X_{1:T}$ is defined in terms of the inference model $Q$, $I(Z_{1:T}; X_{1:T}) = \mathbb{E}_{Q(X_{1:T}, Z_{1:T})}[\log Q(Z_{1:T} \mid X_{1:T}) - \log Q(Z_{1:T})]$, where $Q(X_{1:T}, Z_{1:T}) = Q(Z_{1:T} \mid X_{1:T}) P(X_{1:T})$ and $Q(Z_{1:T}) = \sum_{X_{1:T}} Q(X_{1:T}, Z_{1:T})$, we have a lower bound:
$I(Z_{1:T}; X_{1:T}) \ge \sum_{t=1}^{T} \mathbb{E}_{P_D}\big[\mathbb{E}_{Q_\phi}[\log P_\theta(X_t \mid Z_t) - \log P(D)] - \mathbb{E}_{Q_\phi(Z^c \mid X_{1:T})}[\log Q_\phi(Z^c) - \log P(Z^c)]\big] - \sum_{t=1}^{T} \mathbb{E}_{P_D}\big[\mathbb{E}_{Q_\phi(Z^m_t \mid Z^m_{<t}, X_t)}[\log Q_\phi(Z^m_t \mid Z^m_{<t}) - \log P(Z^m_t \mid Z^m_{<t})]\big]$. (12)
Theorem 2 shows that R-WAE maximizes a lower bound of the mutual information between $X_{1:T}$ and $Z_{1:T}$, which theoretically guarantees that R-WAE learns semantically meaningful latent representations of input sequences.
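For reference, a sample-based estimate of the MMD regularizer used in Eqs. (10)-(11) with a mixture of RBF kernels (cf. Eq. (21) in Appendix D) can be sketched as follows; the bandwidths are illustrative, and we write the standard unbiased estimator, which carries a factor of 2 on the cross term.

```python
import torch

def rbf_mixture_kernel(x, y, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Sum of RBF kernels with several bandwidths over (n, d) and (m, d) inputs."""
    d2 = torch.cdist(x, y) ** 2                 # pairwise squared distances
    return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)

def mmd(x, y):
    """Unbiased MMD^2 between samples x ~ Q and y ~ P, each of shape (n, d)."""
    n, m = x.shape[0], y.shape[0]
    kxx = rbf_mixture_kernel(x, x)
    kyy = rbf_mixture_kernel(y, y)
    kxy = rbf_mixture_kernel(x, y)
    off_x = (kxx.sum() - kxx.diag().sum()) / (n * (n - 1))
    off_y = (kyy.sum() - kyy.diag().sum()) / (m * (m - 1))
    return off_x + off_y - 2 * kxy.mean()
```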
With constants removed, the RHS of (9) and (12) are the same if D is the KL divergence. In spite of being theoretically important, Theorem 2 with the KL divergence cannot be directly used for the regularization terms of R-WAE in practice, because we cannot straightforwardly estimate the marginals $Q_\phi(Z^c)$ and $Q_\phi(Z^m_t \mid Z^m_{<t})$, as discussed previously. From Eq. (9) and (12), we can obtain the following theorem.
Theorem 3. When the divergence D is chosen as the KL divergence, the regularization terms in Eq. (9) jointly minimize the KL divergence between the inference model $Q(Z_{1:T} \mid X_{1:T})$ and the prior model $P(Z_{1:T})$ and maximize the mutual information between $X_{1:T}$ and $Z_{1:T}$,
$\mathrm{KL}(Q(Z^c) \,\|\, P(Z^c)) = \mathbb{E}_{p_D}[\mathrm{KL}(Q(Z^c \mid X_{1:T}) \,\|\, P(Z^c))] - I(X_{1:T}; Z^c)$, (13)
$\mathrm{KL}(Q(Z^m_t \mid Z^m_{<t}) \,\|\, P(Z^m_t \mid Z^m_{<t})) = \mathbb{E}_{p_D}[\mathrm{KL}(Q(Z^m_t \mid Z^m_{<t}, X_{1:T}) \,\|\, P(Z^m_t \mid Z^m_{<t}))] - I(X_{1:T}; Z^m_t \mid Z^m_{<t})$,
where the mutual information is defined in terms of the inference model as in Theorem 2.
Theorem 3 shows that, even when adopting the KL divergence, the regularization in the loss of R-WAE still improves over the one in vanilla VAE, which only has the first term in the RHS of Eq. (13). The two mutual information terms explicitly enforce mutual information maximization between input data and unexchangeable disentangled latent representations, $Z^c$ and $Z^m_t$. Therefore, R-WAE is superior to recurrent VAE (DS-VAE)." }, { "heading": "5 EXPERIMENTS", "text": "We conduct extensive experiments on four datasets to quantitatively and qualitatively validate our methods. The baseline methods for comparison are DS-VAE (Yingzhen & Mandt, 2018) and MoCoGAN (Tulyakov et al., 2018). We train our models on the Stochastic Moving MNIST (SM-MNIST), Sprites, and TIMIT datasets under a completely unsupervised setting. The number of actions (motions) is used as prior information for all methods on the MUG facial dataset. The detailed descriptions of datasets, architectures, and hyperparameters are provided in Appendix C, D, and G, respectively." }, { "heading": "5.1 QUALITATIVE RESULTS ON DISENTANGLEMENT", "text": "We encode two original videos on the first and fourth row in Fig. 2 and generate videos on the second and third row by swapping the corresponding {zc} and {zm1:T} between videos for style transfer (a short sketch of this swapping procedure is given after the tables below). Fig. 2 (left) shows that, even when testing on long sequences (trained with T = 100), our R-WAE can disentangle content and motions exactly. In Fig. 2 (right), we do the same swapping on Sprites. We can see that the generated swapped videos have exactly the same appearances and actions as the corresponding original ones. On the MUG dataset, it is interesting to see that we can swap different motions between different persons.
Method | zc = 16 (EER ↓) | zm = 16 (EER ↑)
FHVAE | 5.06% | 22.77%
DS-VAE | 5.64% | 19.20%
R-WAE | 4.73% | 23.41%
Table 1: EER on the TIMIT speech dataset under the same dimension setting of segment-level zc and sequence-level zm for FHVAE (Hsu et al., 2017), DS-VAE (full q) (Yingzhen & Mandt, 2018) and R-WAE(MMD), respectively. A smaller EER is better for zc and a larger EER is better for zm.
Method | Sprites actions | Sprites content | SM-MNIST digits
DS-VAE(S) | 8.11% | 3.98% | 3.31%
R-WAE(S) | 5.83% | 2.45% | 1.78%
DS-VAE(C) | 10.37% | 4.86% | 4.26%
R-WAE(C) | 7.72% | 3.31% | 2.84%
Table 2: Comparison of averaged classification errors. On the Sprites dataset, we fix one encoded attribute and randomly sample the others. On the SM-MNIST dataset, we fix the encoded zc and randomly sample the motion sequence from the learned prior pψ(zmt|zm<t). We cannot quantitatively verify the motion disentanglement on SM-MNIST."
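The style-transfer swap in Section 5.1 amounts to re-pairing static and dynamic codes before decoding; a minimal sketch follows, assuming trained enc_c and enc_m (here taken to return the full inferred motion sequence) and the decoder dec, with all names illustrative.

```python
def swap_and_decode(x_a, x_b, enc_c, enc_m, dec):
    """Generate two videos that exchange content and motion between x_a and x_b."""
    zc_a, zc_b = enc_c(x_a), enc_c(x_b)        # static (content) codes
    zm_a, zm_b = enc_m(x_a), enc_m(x_b)        # motion code sequences
    # content of a combined with motion of b, and vice versa
    video_ab = [dec(zc_a, zm_t) for zm_t in zm_b]
    video_ba = [dec(zc_b, zm_t) for zm_t in zm_a]
    return video_ab, video_ba
```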
}, { "heading": "5.2 QUANTITATIVE RESULTS", "text": "SM-MNIST and Sprites Datasets We quantitatively evaluate the disentanglement of our RWAE(MMD). In Table 2, \"S\" denotes a simple encoder/decoder architecture, where the encoders in both our model and DS-VAE (Yingzhen & Mandt, 2018) only use 5 layers of convolutional and deconvolutional networks adopted from DS-VAE (Yingzhen & Mandt, 2018). \"C\" denotes a complex encoder/decoder architecture where we use Ladder network (Sønderby et al., 2016; Zhao et al., 2017) and ResBlock (He et al., 2016), provided in Appendix E. On SM-MNIST, we get the labeled latent codes {zc} of test videos {x1:T } with T = 10 and randomly sample motion variables {zm1:T } to get labeled new samples. We pretrain a classifier and predict the accuracy on these labeled new samples. The accuracy on SM-MNIST dataset is evaluated on 10000 test samples. On Sprites, the labels of each attribute(skin colors, pants, hair styles, tops and pants) are available. We get the latent codes by fixing one attribute and randomly sample other attributes. We train a classifier for each attribute and evaluate the disentanglement of each attribute. The accuracy is based on 296 × 9 test data. Both DS-VAE and R-WAE(MMD) have extremely high accuracy (99.94%) when fixing hair style attribute, which is not provided in Table 2 due to space limit. As R-WAE(GAN) and R-WAE(MMD) have similar performance on these datasets, we only provide the results and parameters of R-WAE(MMD) to save space. There are two interesting observations in Table 2. First, the simple architecture has better disentanglement than the complex architecture overall. The reason is that the simple architecture has sufficient ability to extract features and generate clear samples to be recognized by the pretrained classifiers. But the simple architecture cannot generate high-quality samples when applied to real data. Second, our proposed R-WAE(MMD) achieve better disentanglement than DS-VAE (Yingzhen & Mandt, 2018) on both corresponding architectures. The attributes within content latent variables are independent, our model can further disentangle the factors. Compared to DS-VAE, these results demonstrate the advantages of R-WAE with implicit mutual information maximization terms. Due to space limit, we also include similar comparisons on a new Moving-Shape dataset in Appendix I. As the number of possible motions in SM-MNIST is infinite and random, we cannot evaluate the disentanglement of motions by training a classifier. We fix the encoded motions {zm1:T } and randomly sample content variables {zc}. We also randomly sample a motion sequence {zm1:T } and randomly sample contents {zc}. We manually check the motions of these samples and almost all have the same corresponding motion even though the sequence is long (T = 100). TIMIT Speech Dataset We quantitatively compare our R-WAE with FHVAE and DS-VAE on the speaker verification task under the same setting as (Hsu et al., 2017; Yingzhen & Mandt, 2018). The evaluation metric is based on equal error rate (EER), which is explained in detail in Appendix C. The lower EER on zc encoding the timbre of speakers is better and the higher EER on zm encoding linguistic content is better. From Table 1, our model can disentangle zc and zm well. We can see that our R-WAE(MMD) has the best EER performance on both content attribute and motion attribute. In Appendix H we show by style transfer experiments that the learned dynamic factor encodes semantic content which is comparable to DS-VAE. 
MUG Facial Dataset We quantitatively evaluate the disentanglement and quality of generated samples. We train a 3D classifier on the MUG facial dataset with accuracy 95.11% and Inception Score 5.20 on test data (Salimans et al., 2016). We calculate the Inception Score, the intra-entropy H(y|v), where y is the predicted label and v is the generated video, and the inter-entropy H(y) (He et al., 2018). For a comprehensive quantitative evaluation, the frame-level FID score (Heusel et al., 2017) is also provided. From Table 2, our R-WAE(MMD) and R-WAE(GAN) have higher accuracy, which means the categorical variable best captures the actions and indicates that our models achieve the best disentanglement. In Table 2, the Inception Score of R-WAE(GAN) is very close to the Inception Score of the test data, which means our models have the best sample quality. Our proposed R-WAE(GAN) and R-WAE(MMD) have the best frame-level FID scores, compared with DS-VAE and MoCoGAN. The original DS-VAE (DS-VAE(NA)) (Yingzhen & Mandt, 2018), which does not leverage the number of actions, performs worst, which shows that incorporating the number of actions as prior information does enhance the disentanglement of actions." }, { "heading": "5.3 UNCONDITIONAL VIDEO GENERATION", "text": "SM-MNIST dataset Fig. 4 in Appendix E provides generated samples on the SM-MNIST dataset by randomly sampling content {zc} from the prior p(zc) and motions {zm1:T} from the learned prior pψ(zmt|zm<t). The length of our generated videos is T = 100 and we only show randomly chosen videos of T = 20 to save file size. Our R-WAE(MMD) achieves the most consistent and visually best sequences even when T = 100. Samples from MoCoGAN (Tulyakov et al., 2018) usually change digit identity along the sequence. The reason is that MoCoGAN (Tulyakov et al., 2018) requires the number of actions to be finite. Our generated Sprites videos also have the best results but are not provided due to the page limit.
MUG Facial Dataset Fig. 3 and Fig. 5 in Appendix E show generated samples on the MUG dataset by randomly sampling content {zc} from the prior p(zc) and motions zRt = (a, zmt) from the categorical prior p(a) (the latent action variable a is a one-hot vector with dimension 6) and the learned prior pψ(zmt|zm<t). We show generated videos of length T = 10. DS-VAE (Yingzhen & Mandt, 2018) used the same structure as ours. Fig. 5 shows that DS-VAE (Yingzhen & Mandt, 2018) and MoCoGAN (Tulyakov et al., 2018) have blurry beginning frames {xt} and even blurrier frames as time t evolves, while our R-WAE(GAN) has much better frame quality and more consistent video sequences. For a clear comparison among all three methods, we show the samples at time step T = 10 in Fig. 3, and we can see that DS-VAE has very blurry samples at large time steps." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose recurrent Wasserstein Autoencoder (R-WAE) to learn disentangled representations of sequential data based on the optimal transport between distributions with sequential variables. Our theoretical analysis shows that R-WAE simultaneously maximizes the mutual information between input sequential data and different disentangled latent factors. Experiments on a variety of datasets demonstrate that our models achieve state-of-the-art results on the disentanglement of static and dynamic latent representations and unconditional video generation.
Future research includes exploring our framework in self-supervised learning and conditional settings for text-to-video and video-to-video synthesis.\nAcknowledgement Jun Han thanks Dr. Chen Fang at Tencent for insightful discussions and Prof. Qiang Liu at UT Austin for invaluable support." }, { "heading": "APPENDIX FOR RECURRENT WASSERSTEIN AUTOENCODER", "text": "" }, { "heading": "APPENDIX A: PROOF OF THEOREM 1", "text": "In the following, we provide the proof of Theorem 1.\nTheorem 1 For PG defined with deterministic PG(X|Z) and any function Y = G(Z),\nW (PD, PG) = inf Q:QZc=PZc ,QZm\n1:T =PZm 1:T T∑ t=1 EPDEQ(Zt|Xt)[c(Xt, G(Zt))], (14)\nwhere QZ1:T is the marginal distribution of Z1:T when X1:T ∼ PD and Z1:T ∼ Q(Z1:T |X1:T ) and PZ1:T is the prior. Based on the assumptions, the constraint set is equivalent to\nW (PD, PG) ≤ inf Q∈S ∑ t EPDEQ(Zt|Xt)[c(Xt, G(Zt))], (15)\nwhere the set S = {Q : QZc = PZc , QZm1 = PZm1 , QZmt |Zm<t = PZmt |Zm<t}.\nProof: Consider the sequential random variables D = X1:T and Y1:T , the optimal transport between the distribution for D = X1:T and the distribution for Y1:T induces a rich class of divergence,\nW (PD, PG) := inf Γ∼P(X1:T∼PD,Y1:T∼PG) E(X1:T ,Y1:T )∼Γ[c(X1:T , Y1:T )] (16)\nwhere P(X1:T ∼ PD, Y1:T ∼ PG) is a set of all joint distributions of (X1:T , Y1:T ) with marginals PD and PG, respectively.\nWhen we choose c(x,y) = ‖x − y‖2, c(X1:T , Y1:T ) = ∑ t ‖Xt − Yt‖2 by linearity. It is easy to derive the optimal transport for distributions with sequential random variables,\nW (PD, PG) = inf Q:QZ1:T =PZ1:T ∑ t EPDEQ(Zt|Xt)[c(Xt, G(Zt))]. (17)\nBased on our assumption, the marginal distribution of the inference model satisfies Q(Z1, · · · , ZT ) = Q(Zc)Q(Zm1 , · · · , ZmT ) = Q(Zc) ∏ t Q(Zmt |Zm<t). (18) The prior distribution P (Z1, · · · , ZT ) satisfies P (Z1, · · · , ZT ) = P (Zc)P (Zm1 , · · · , ZmT ) = P (Zc)\n∏ t P (Zmt |Zm<t). (19)\nSince the set S is a subset of {Q : QZ1:T = PZ1:T }, we derive the inequality (15)." }, { "heading": "APPENDIX B: PROOF OF THEOREM 2", "text": "In the following, we provide the proof of Theorem 2. To make the notations easy to read, we use the density functions of corresponding distributions.\nThe joint generative distribution is p(x1:T , z1:T ) = pψ(z1:T )pθ(x1:T |z1:T ), where pψ(z1:T ) is the prior distribution and pθ(x1:T |z1:T ) is the decoder model. And the corresponding joint inference distribution is qφ(x1:T , z1:T ) = pD(x1:T )qφ(z1:T | x1:T ). If the MI between z1:T and x1:T is defined in terms of the inference model q, we have the following lower bound with step-by-step derivations:\nI(z1:T ;x1:T )= Eq(x1:T ,z1:T )[log qφ(z1:T |x1:T ) qφ(z1:T ) ] (20)\n= Eq(x1:T ,z1:T )[DKL(qφ(z1:T |x1:T ), p(z1:T |x1:T ))+log p(z1:T |x1:T )−log qφ(z1:T )] ≥ EpD [Eq(z1:T |x1:T )[log p(x1:T |z1:T ) + log p(z1:T )− log qφ(z1:T )−log p(D)]]\n≥ T∑ t=1 Ep(D) [ Eqφ(zt|xt)[log pθ(xt|zt)] ] −Ep(D)[Eqφ(zt|xt)[log qφ(z c)−log p(zc)] ]\n− T∑ t=1 Ep(D) [ Eqφ(zmt |xt)[log qφ(z m t |zm<t)− log p(zmt |zm<t) + log p(D) ] ,\nwhere we use Bayesian’s rule p(z1:T |x1:T ) = pθ(x1:T |z1:T )p(z1:T )p(D) . Maximizing the MI between z1:T and x1:T achieves state-of-the-art results in disentangled latent representation by using different regularizers for the static and dynamical latent variable with different priors (Hjelm et al., 2018). 
In practice, incorporating the mutual information I(zmt ,xt) between element xt and motion z m t might facilitate the disentanglement of the dynamical latent variable zmt .\nTheorem 3 When its distribution divergence is chosen as KL divergence, the regularization terms in Eq. (9) jointly minimize the KL divergence between the inference model Q(Z1:T |X1:T ) and the prior model P (Z1:T ) and maximize the mutual information between X1:T and Z1:T ,\nKL(Q(Zc)||P (Zc)) = EpD [KL(Q(Zc|X1:T )||P (Zc))]− I(X1:T ;Zc). KL(Q(Zmt |Zm<t)||P (Zmt |Zm<t)) = EpD [KL(Q(Zmt |Zm<t, X1:T )||P (Zmt |Zm<t)]− I(X1:T ;Zmt |Zm<t).\nProof: Denote XD = X1:T . As in the proof of Theorem 2, the mutual information between Z1:T and X1:T is defined in terms of the inference model Q, and we use the density functions of corresponding distributions to make the notations easy to read. Thus,\nQ(Z1:T ) = EpDq(z1:T |x1:T ).\nAccording to the definition of mutual information, we have\nI(X1:T ;Z c) = EpD ∑ zc pD(x1:T )q(z c|x1:T ) log pD(x1:T )q(z c|x1:T ) pD(x1:T )q(zc)\n= EpD ∑ zc q(zc|x1:T ) log q(zc|x1:T ) q(zc)\n= EpD ∑ zc q(zc|x1:T ) log q(zc|x1:T ) p(zc) − EpD ∑ zc q(zc|x1:T ) log q(zc) p(zc)\n= EpD ∑ zc q(zc|x1:T ) log q(zc|x1:T ) p(zc) − ∑ zc q(zc) log q(zc) p(zc) = EpD [KL(Q(Zc|X1:T )||P (Zc))]−KL(Q(Zc)||P (Zc))\nTherefore,\nKL(Q(Zc)||P (Zc)) = EpD [KL(Q(Zc|X1:T )||P (Zc))]− I(X1:T ;Zc).\nSimilarly, we can prove the second equality in the theorem." }, { "heading": "APPENDIX C: DATASETS", "text": "Stochastic Moving MNIST(SM-MNIST) Dataset Stochastic moving MNIST (SM-MNIST) consists of sequences of frames of size 64× 64× 1, containing one MNIST digit moving and bouncing off edges of the frame (walls). We use one digit instead of two digits because two moving digits may collide, which changes the content of the dynamics and is inconsistent with our assumption. The digits in SM-MNIST move with a constant velocity along a trajectory until they hit at wall at which point they bounce off with a random speed and direction.\nSprites Dataset We follow the same steps as in Yingzhen & Mandt (2018) to process Sprites dataset, which consists of animated cartoon characters whose clothing, hairstyle, skin color and action can be fully controlled. We use 6 variants in each of 4 attribute categories (skin colors, tops, pants and hair style) and there are 64 = 1296 unique characters in total, where 1000 of them are used for training and the rest of them are used for testing. We use 9 action categories (walking, casting spells and slashing, each with three different viewing angles.) The resulting dataset consists of video sequences with T = 8 frames of size 64× 64× 3. MUG Facial Dataset We use the MUG Facial Expression Database (Aifanti et al., 2010) for this experiment. The dataset consists of 86 subjects. Each video consists of 50 to 160 frames. To use the same network architecture for the whole video datasets in this paper, we cropped the face regions and scaled to the same size 64× 64× 3. We use six facial expressions (anger, fear, disgust, happiness,\nsadness, and surprise). To ensure there is sufficient change in the facial expression along a video sequence, we choose every other frame in the original video sequences to form training and test video sequences of length T = 10. 80% of the videos are used for training and 20% of the videos are used for testing.\nTIMIT Speech Dataset The TIMIT dataset (Garofolo, 1993) contains broadband 16k Hz of phonetically-balanced read speech. 
A total of 6300 utterances (5.4 hours) are presented with 10 sentences from each of 630 speakers. The data is preprocessed in the same way as in (Yingzhen & Mandt, 2018) and (Hsu et al., 2017). The raw speech waveforms are first split into sub-sequences of 200ms, and then preprocessed with sparse fast Fourier transform to obtain a 200 dimensional logmagnitude spectrum, computed every 10ms, i.e., we use T = 20 for sequence x1:T . The dimension of xt is 200.\nNow we explain the detail of the evaluation metric, equal error rate (EER), used on TIMIT dataset. Letwtest be the feature of test utterance xtest1:T andw\ntarget be the feature of test utterance xtarget1:T . The predicted identity is confirmed if the cosine similarity betweenwtest andwtarget, cos(wtest,wtarget) is greater than some threshold used in Dehak et al. (2010). The equal error rate (EER) means the false rejection rate equals the false acceptance rate (Dehak et al., 2010). In the following, we will discuss the two choices of feature wtest for evaluations of all methods,\nµc = 1\nN N∑ i=1 Eq(zc|xi1:T )[z c],\nwhich is used to evaluate the disentanglement of zc;\nµm = 1\nNT N∑ i=1 T∑ j=1 Eq(zmt |xi1:T )[z m t ],\nwhich is used to evaluate the disentanglement of zm. For more details, please refer to (Dehak et al., 2010; Yingzhen & Mandt, 2018; Hsu et al., 2017). We use the same network architecture as in Yingzhen & Mandt (2018) for a fair comparison on speech dataset. As the input dimension of speech is low, the encoder/decoder network is a 2-hidden-layer MLP with the hidden dimension 256." }, { "heading": "APPENDIX D: CHOICES OF REGULARIZERS", "text": "In the following, we will discuss the choice of regularizers in R-WAE. To make notations easy to read, we use density functions for corresponding distributions. In both R-WAE(GAN) and R-WAE(MMD), we use the same regularizer for D(q(zmt |zm<t), p(zmt |zm<t)). We also add a KLdivergence regularization term on zm to stabilize training. In the experiments, we assume inference model q(zc|x1:T ) is a Gaussian distribution with parameters mean µc and diagonal variance matrix σc. Inference model q(zmt |xt, zm<t) is a Gaussian distribution with parameters meanµm and diagonal variance matrix σm. For the prior distribution, we assume p(zmt |zm<t) is a Gaussian distribution with parameters mean µψm and diagonal covariance matrix σ ψ m. For regularizing the motion variables, we just use MMD without introducing any additional parameter, MMDk(q(zmt |zm<t), p(zmt |zm<t)), and we choose mixture of RBF kernel (Li et al., 2017), where RBF kernel is defined as k(x,y) = exp(−‖x−y‖ 2\n2σ2 ). With samples {z̃i} n i=1 from the posterior q(z̃ c) and samples {zi}ni=1 from the prior p(zc), MMDk(q(z̃ c), p(zc)) is defined as\nMMDk(q(z̃ c), p(zc))=\n1 n(n− 1) ∑ i6=j k(zi, zj)+ 1 n(n− 1) ∑ i 6=j k(z̃i, z̃j)− 1 n2 ∑ i,j k(z̃i, zj). (21)\nThe difference between R-WAE(MMD) and R-WAE(GAN) is how to choose metrics for the regularizer D(QZc ,PZc), where PZc is the prior distribution and QZc is the posterior distribution of the inference model.\nR-WAE(MMD) The regularizer D(QZc ,PZc) is chosen as,\nD(QZc ,PZc) = MMDkγ (Q(Zc),P (Zc)),\nwhere the scaled MMD MMDkγ (Q(Z c),P (Zc)) is chosen as\nMMDkγ (QZc ,PZc) = M̂MDkγ (QZc , PZc)\n1 + 10EP̂ [‖∇fγ(zc)‖2F ] ,\nwhere the function fγ(zc) is the kernel feature map and M̂MDkγ (QZc , PZc) is defined in the following. 
When we have samples {z̃ci}ni=1 from Q(Zc) and samples {zci}ni=1 from P (Zc),\nM̂MDkγ (Q(Z c), P (Zc)) =\n1 n(n− 1) ∑ i 6=j k(fγ(z c i ), fγ(z c j)) +\n1 n(n− 1) ∑ i 6=j k(fγ(z̃ c i ), fγ(z̃ c j))\n(22)\n− 1 n2 ∑ i,j k(fγ(z̃ c i ), fγ(z c j)),\nwhere the RBF kernel k is defined on scalar variables, k(x, y) = exp(−‖x−y‖ 2\n2 ). To avoid the situation where the generator gets stuck on a local optimum, we apply spectral parametrization for the weight matrix (Miyato et al., 2018). The feature map fγ is updated L steps at each iteration. To overcome posterior collapse and inference lagging, we will update the inference model per iteration of updating the decoder model for L steps during training (He et al., 2019). See Algorithm 1 for details.\nR-WAE(GAN) For the regularizer DJS(QZc ,PZc), we introduce a discriminator Dγ . The loss is as follows, L = Ezc∼p(zc)[logDγ(zc)] + Ez̃c∼q(z̃c)[log(1−Dγ(z̃c)))], (23) where p(zc) is the prior distribution and q(z̃c) is the posterior distribution of the inference model. To stabilize the training of the min-max problem in GAN-based optimization (23), a lot of stabilization techniques have been proposed (Thanh-Tung et al., 2019; Mescheder et al., 2018; Gulrajani et al., 2017; Petzka et al., 2017; Roth et al., 2017; Qi, 2017). Let samples {zc} are from the prior p(zc) and {z̃c} are from the inference posterior q(z̃c). In our R-WAE(GAN), we will adopt the regularization from Mescheder et al. (2018) and Thanh-Tung et al. (2019),\nL − λE[‖(∇Dγ)ẑc‖2], (24) where ẑc = αzc + (1− α)z̃c, α ∈ U(0, 1) and (∇Dγ)ẑc is evaluated its gradient at the point ẑc.\nAlgorithm 1 R-WAE(GAN) Input: regularization coefficient β and content prior p(zc) Goal: learn encoders qφ(zc|x1:T ) and qφ(z m t |xt, zm<t), prior pψ(zmt |zm<t), discrimi-\nnatorDγ , and decoder pθ(xt|zt), where zt = (zc, zmt ) while not converged do\nfor step 1 to L do Sample batch X = {xt} Sample {zc} from prior p(zc) and {zmt } from prior pψ Sample {z̃c, z̃mt } from encoders qφ Update discriminator Dγ and encoders qφ with loss given by (9), (10) end for Update pθ and prior pψ with loss given by (9) and (10).\nend while\nAlgorithm 2 R-WAE(MMD) Input: regularization coefficient β and content prior p(zc) Goal: learn encoders qφ(zc|x1:T ) and qφ(z m t |xt, zm<t), prior pψ(zmt |zm<t), feature\nmap fγ and decoder pθ(xt|zt), where zt = (zc, zmt ) while not converged do\nfor step 1 to L do Sample batch X = {xt} Sample {zc} from prior p(zc) and {zmt } from prior pψ Sample {z̃c, z̃mt } from encoders qφ Update feature map fγ and encoders qφ with loss given by (9), (11) end for Update pθ and prior pψ with loss given by (9) and (11).\nend while" }, { "heading": "6.1 APPENDIX E: UNCONDITIONAL VIDEO GENERATION", "text": "Fig. 4 provides generated samples on the SM-MNIST dataset by randomly sampling content {zc} from the prior p(zc) and motions {zm1:T } from the learned prior pψ(zmt |zm<t). The length of our\ngenerated videos is T = 100 and we only show randomly chosen videos of T = 20 to save file size. Our R-WAE(MMD) achieves the most consistent and visually best sequence even when T = 100. Samples from MoCoGAN (Tulyakov et al., 2018) usually change digit identity along the sequence. The reason is that MoCoGAN (Tulyakov et al., 2018) requires the number of actions be finite.\nFig. 5 shows unconditional video generation with T = 10 on MUG facial dataset. DS-VAE in (b) is improved by incorporating categorical latent variables. 
The figures should be viewed with Adobe Reader to see video.\n(a) R-WAE(GAN) (b) DS-VAE (c) MoCoGAN\nFigure 5: Unconditional video generation with T = 10 on MUG facial dataset. DS-VAE in (b) is improved by incorporating categorical latent variables. The figures should be viewed with Adobe Reader to see video." }, { "heading": "APPENDIX F: LATENT MANIFOLD VISUALIZATION", "text": "We encode the test data {x1:T } of SM-MNIST with T = 10 to get the content codes {zc} using our R-WAE(MMD). We visualize two-dimensional (2D) manifold of {zc} using t-SNE (Maaten & Hinton, 2008). In Fig. 6, different colors correspond to the digit identities of the latent codes {zc} of test videos on SM-MNIST. This indicates that {zc} encoded by our R-WAE(MMD) exactly captures the invariant information (digits) of the test data. The latent motion codes are sequential and cannot be visualized." }, { "heading": "APPENDIX G: MODEL ARCHITECTURE AND HYPER-PARAMETERS", "text": "In the inference model, we use an encoder network, defined in Fig. 7 (a) to extract latent feature ht defined in Fig.1. We use a decoder network to reconstruct x̂t from the hidden state ht, defined in Fig.1. For the discriminator Dγ in R-WAE(GAN), we use a 4-layer fully-connected neural\nnetwork (FC NN) with respective dimension (256, 256, 128, 1). For the feature map fγ with a scalar output for the RBF kernel of R-WAE(MMD), we use a 4-layer fully-connected neural network with respective dimension (256, 256, 128, 1). After encoding xt, we get extracted latent feature ht. We use Fig. 8(a) and Fig. 8(b) to infer the content variable zc and motion variables zmt . When the Gumbel latent variable is incorporated into our weakly-supervised inference model, we use Fig. 8(c) to infer the Gumbel latent variable a. The latent content variable zc and latent motion variable zmt are concatenated as input to an LSTM after an FC NN to output hidden state ht for reconstructing x̂t using the decoder. For our weakly-supervised model, the latent content variable zc, latent motion variable zmt and latent action variable a are concatenated as input to an LSTM after an FC NN to output hidden state ht for reconstructing x̂t using the decoder. We use Adam optimizer (Kingma & Ba, 2015) with β1 = 0.5 and β2 = 0.9.\nArchitecture on SM-MNIST, Sprites and TIMIT Datasets We use the same architecture on SM-MNIST and Sprites dataset, as shown in Fig. 9. The details of the parameters of the networks are provided in Fig. 9. As R-WAE(GAN) and R-WAE(MMD) have similar performance on SMMNIST and Sprites (see Sprites results in Table 4), we only provide the results and parameters of R-WAE(MMD) to save space. At each iteration of training the decoder pθ(xt|zt) and the prior pψ(z m t |zm<t), we train the encoder parameters qφ and the feature map fγ for R-WAE(MMD) with L steps. The results on SM-MNIST and Sprites datasets are evaluated after 500 epochs. On SM-MNIST dataset, we use a Bernoulli cross-entropy loss and choose L = 5. The penalty coefficients β1 and β2, are, respectively, 5 and 20. The learning rate for the decoder model is 5× 10−4 and the learning rate for the encoder is 1× 10−4. The learning rate for fγ is 1× 10−4. On Sprites dataset, we use an L2 reconstruction loss and choose L = 5 steps. The penalty coefficients β1 and β2 are, respectively, 10 and 60. The learning rate for the decoder model is 3× 10−4 and the learning rate for the encoder is 1× 10−4. The learning rate for Dγ in R-WAE(GAN) or fγ in R-WAE(MMD) is 1× 10−4. 
We use a decayed learning rate schedule on both datasets. After 50 epochs, we decrease all learning rates by a factor of 2 and after 80 epochs decrease further by a factor of 5. On TIMIT speech dataset, we use the same encoder and decoder architecture as that of DS-VAE. The dimension of hidden states is 256 and the dimensions of zc and zmt are both 16.\nResBlock1 down 64*3*3 self-attention\nResBlock2 down 128*3*3 ResBlock3 down 256*3*3 ResBlock4 down 512*3*3\nResBlock5 down 1024*3*3 Reshape output to (N, 1024× 2× 2)\nFC NN\nTable 5: Encoder Network Architecture.\nFC NN and Reshape input to (N, 2048, 2, 2) ResBlock1 up 1024*3*3 ResBlock2 up 512*3*3 ResBlock3 up 256*3*3 ResBlock4 up 128*3*3\nself-attention ResBlock5 up 64*3*3\nConv 3*3*3, activation=sigmoid\nTable 6: Decoder Network Architecture.\nTable 7: Encoder Network Architecture.\nFigure 10: Network parameters on encoder network and decoder network on MUG facial dataset. We adopt ResBlock down and up from Brock et al. (2019). The dimensions of zc, zmt , ht, a are 150, 16, 180 and 6 respectively. The batch size on MUG facial dataset are 30 and the length of video sequence for training is T = 8.\nArchitecture on MUG Facial Dataset The details of the architecture parameters of the networks for MUG facial dataset are provided in Fig. 9. The results on MUG facial dataset are evaluated after 800 epochs. For the regularizer DKL(qφ(a|x1:T , zm1:T ), p(a)), we choose the coefficient of this categorical regularizer to be 50. We use an L2 reconstruction loss and choose L = 5 steps. For R-WAE(MMD), the penalty coefficients β1 and β2 are, respectively, 10 and 50. For R-WAE(GAN), the coefficients β1 and β2 of the penalties are, respectively, 5 and 60. The learning rate for the decoder model is 5 × 10−4 and the learning rate for the encoder is 2 × 10−4. The learning rate for Dγ in R-WAE(GAN) or fγ in R-WAE(MMD) is 2× 10−4. We use the same decayed learning rate schedule as described on SM-MNIST and Sprites datasets. This architecture can be applied to improve the compression rate (?)." }, { "heading": "APPENDIX H: ADDITIONAL RESULTS ON AUDIO DATA", "text": "Swapping Static and Dynamic Factors on Audio Data Here we present results of swapping static and dynamic factors of given audio sequences. Results are given in Figure 11. Each heatmap subplot is of dimension 80 × 20 and visualizes the spectrum of 200ms of an audio clip, in which the mel-scale filter bank features are plotted in the frequency domain (x-axis represents temporal domain with 20 timesteps and y-axis is the value of frequencies). We collect these heatmaps in a matrix where the static factors in a row are kept the same and each column shares the same dynamic factor. It can be observed that in each column, the linguistic phonetic contents as reflected by the formants along x-axis are kept almost the same after swapping. Likewise, the timbres are reflected as the harmonics in the spectrum plot. This can be concluded by observing that the horizontal light stripes which represents the harmonics are kept consistent in a row. Moreover, we perform\nidentity verification experiment as conducted in DS-VAE (Yingzhen & Mandt, 2018). Similar to cross reconstruction, zcfemale and z c male (or f\nfemale and fmale in DS-VAE) are swapped for two sequences {xfemale} and {xmale}. By an informal listening test of the original-swapped speech sequence pairs, we confirm that the speech content is preserved and identity is transferred (i.e. 
female voice usually has higher frequency).\nAPPENDIX I: ADDITIONAL RESULTS ON A MOVING-SHAPE VIDEO DATA\nGeneration Results on Moving Shapes We report results on a Moving-Shape dataset in Table 9 and Fig. 12. The Moving-Shape synthetic dataset was introduced in Balaji et al. (2018) which has 5 control parameters: shape type (e.g. triangle and square), size (small and large), color (e.g. white and red), motion type (e.g. zig-zag, straight line and diagonal) and motion direction. In Table 9, TFGAN (Balaji et al., 2018) encoder and decoder architectures are considered less expressive compared with BigGAN (Brock et al., 2019) architectures. Similar to results in Table 2, with more complex and expressive architecture, learning disentangled representation is harder. The results in Table 9 and Fig. 12 demonstrate that R-WAE produces better disentanglement and generation performance than DS-VAE both quantitatively and qualitatively. Qualitative difference of fixing zm and sampling zc for DS-VAE and R-WAE is not that obvious and thus not shown." } ]
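As a companion to the regularizers of Appendix D, here is a minimal sketch of the MMD estimator of Eq. (21) with a mixture of RBF kernels (illustrative code of our own; function names and bandwidths are assumptions, and we keep the paper's $-\frac{1}{n^2}$ cross term, noting that the standard unbiased MMD² estimator carries a factor of 2 there):

```python
import torch

def rbf_kernel(x, y, sigma):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), computed for all pairs
    return torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))

def mmd(z_post, z_prior, sigmas=(0.5, 1.0, 2.0)):
    """MMD estimate between posterior samples z_post and prior samples
    z_prior, both of shape (n, d), following Eq. (21)."""
    n = z_post.shape[0]
    total = torch.tensor(0.0)
    for s in sigmas:                       # mixture of RBF kernels
        k_pp = rbf_kernel(z_prior, z_prior, s)
        k_qq = rbf_kernel(z_post, z_post, s)
        k_qp = rbf_kernel(z_post, z_prior, s)
        total = total + (k_pp.sum() - k_pp.diagonal().sum()) / (n * (n - 1)) \
                      + (k_qq.sum() - k_qq.diagonal().sum()) / (n * (n - 1)) \
                      - k_qp.sum() / n ** 2
    return total

# usage, e.g. as the content regularizer D(Q_Zc, P_Zc):
# penalty = beta * mmd(zc_samples, torch.randn_like(zc_samples))
```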
2021
null
SP:60894f74f40addd7a2a35a003dcdce6cf70ffef4
[ "The paper extends prior work on equivalence between predictive coding and backprop in layered neural networks to arbitrary computation graphs. This is empirically tested first on a simple nonlinear scalar function, and then on a few commonly used architectures (CNNs, RNNs, LSTMs), confirming the theoretical results. The importance of this advance is highlighted by noting that the demonstrated equivalence shows how in principle modern architectures could be implemented in biological neural systems, and that the highly parallel nature of predictive coding could lead to efficient implementations in neuromorphic hardware." ]
Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computation which relies solely on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs, but rather in the concept of automatic differentiation which allows for the optimisation of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding CNNs, RNNs, and the more complex LSTMs, which include a non-layer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks, while utilising only local and (mostly) Hebbian plasticity. Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry, and may also contribute to the development of completely distributed neuromorphic architectures.
[ { "affiliations": [], "name": "APPROXIMATES BACKPROP" } ]
[ { "authors": [ "Mohamed Akrout", "Collin Wilson", "Peter Humphreys", "Timothy Lillicrap", "Douglas B Tweed" ], "title": "Deep learning without weight transport", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shun-Ichi Amari" ], "title": "Information geometry of the em and em algorithms for neural networks", "venue": "Neural networks,", "year": 1995 }, { "authors": [ "Yali Amit" ], "title": "Deep learning with asymmetric connections and hebbian updates", "venue": "Frontiers in computational neuroscience,", "year": 2019 }, { "authors": [ "Brandon Amos", "Denis Yarats" ], "title": "The differentiable cross-entropy method", "venue": "arXiv preprint arXiv:1909.12830,", "year": 2019 }, { "authors": [ "Ryszard Auksztulewicz", "Karl Friston" ], "title": "Repetition suppression and its contextual determinants in predictive", "venue": "coding. cortex,", "year": 2016 }, { "authors": [ "Andre M Bastos", "W Martin Usrey", "Rick A Adams", "George R Mangun", "Pascal Fries", "Karl J Friston" ], "title": "Canonical microcircuits for predictive coding", "venue": null, "year": 2012 }, { "authors": [ "Atılım Günes Baydin", "Barak A Pearlmutter", "Alexey Andreyevich Radul", "Jeffrey Mark Siskind" ], "title": "Automatic differentiation in machine learning: a survey", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Matthew James Beal" ], "title": "Variational algorithms for approximate Bayesian inference", "venue": "university of London London,", "year": 2003 }, { "authors": [ "Yoshua Bengio", "Asja Fischer" ], "title": "Early inference in energy-based models approximates backpropagation", "venue": "arXiv preprint arXiv:1510.02777,", "year": 2015 }, { "authors": [ "Yoshua Bengio", "Thomas Mesnard", "Asja Fischer", "Saizheng Zhang", "Yuhuai Wu" ], "title": "Stdp-compatible approximation of backpropagation in an energy-based model", "venue": "Neural computation,", "year": 2017 }, { "authors": [ "David M Blei", "Alp Kucukelbir", "Jon D McAuliffe" ], "title": "Variational inference: A review for statisticians", "venue": "Journal of the American statistical Association,", "year": 2017 }, { "authors": [ "Rafal Bogacz" ], "title": "A tutorial on the free-energy framework for modelling perception and learning", "venue": "Journal of mathematical psychology,", "year": 2017 }, { "authors": [ "Christopher L Buckley", "Chang Sub Kim", "Simon McGregor", "Anil K Seth" ], "title": "The free energy principle for action and perception: A mathematical review", "venue": "Journal of Mathematical Psychology,", "year": 2017 }, { "authors": [ "Gyorgy Buzsaki" ], "title": "Rhythms of the Brain", "venue": null, "year": 2006 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Tianqi Chen", "Emily Fox", "Carlos Guestrin" ], "title": "Stochastic gradient hamiltonian monte carlo", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Francis Crick" ], "title": "The recent excitement about neural networks", "venue": "Nature, 337(6203):129–132,", "year": 1989 }, { "authors": [ "Mike Davies", "Narayan Srinivasa", "Tsung-Han Lin", "Gautham Chinya", "Yongqiang Cao", "Sri Harsha Choday", "Georgios Dimou", "Prasad Joshi", "Nabil Imam", "Shweta Jain" ], "title": "Loihi: A neuromorphic manycore processor with on-chip learning", 
"venue": "IEEE Micro,", "year": 2018 }, { "authors": [ "Jonas Degrave", "Michiel Hermans", "Joni Dambre", "Francis Wyffels" ], "title": "A differentiable physics engine for deep learning in robotics", "venue": "Frontiers in neurorobotics,", "year": 2019 }, { "authors": [ "Michael Eickenberg", "Alexandre Gramfort", "Gaël Varoquaux", "Bertrand Thirion" ], "title": "Seeing it all: Convolutional network layers map the function of the human visual system", "venue": null, "year": 2017 }, { "authors": [ "Harriet Feldman", "Karl Friston" ], "title": "Attention, uncertainty, and free-energy", "venue": "Frontiers in human neuroscience,", "year": 2010 }, { "authors": [ "Karl Friston" ], "title": "Learning and inference in the brain", "venue": "Neural Networks,", "year": 2003 }, { "authors": [ "Karl Friston" ], "title": "A theory of cortical responses", "venue": "Philosophical transactions of the Royal Society B: Biological sciences,", "year": 2005 }, { "authors": [ "Karl Friston" ], "title": "Hierarchical models in the brain", "venue": "PLoS computational biology,", "year": 2008 }, { "authors": [ "Steve B Furber", "Francesco Galluppi", "Steve Temple", "Luis A Plana" ], "title": "The spinnaker project", "venue": "Proceedings of the IEEE,", "year": 2014 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Andreas Griewank" ], "title": "On automatic differentiation", "venue": "Mathematical Programming: recent developments and applications,", "year": 1989 }, { "authors": [ "Jordan Guerguiev", "Timothy P Lillicrap", "Blake A Richards" ], "title": "Towards deep learning with segregated dendrites", "venue": "Elife, 6:e22901,", "year": 2017 }, { "authors": [ "Jeff Hawkins", "Sandra Blakeslee" ], "title": "On intelligence: How a new understanding of the brain will lead to the creation of truly intelligent", "venue": null, "year": 2007 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Eric Heiden", "David Millard", "Gaurav Sukhatme" ], "title": "Real2sim transfer using differentiable physics", "venue": "In Workshop on Closing the Reality Gap in Sim2real Transfer for Robotic Manipulation,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Jakob Hohwy", "Andreas Roepstorff", "Karl Friston" ], "title": "Predictive coding explains binocular rivalry: An epistemological review", "venue": null, "year": 2008 }, { "authors": [ "Mike Innes", "Alan Edelman", "Keno Fischer", "Chris Rackauckus", "Elliot Saba", "Viral B Shah", "Will Tebbutt" ], "title": "Zygote: A differentiable programming system to bridge machine learning and scientific computing", "venue": null, "year": 1907 }, { "authors": [ "Ryota Kanai", "Yutaka Komura", "Stewart Shipp", "Karl Friston" ], "title": "Cerebral hierarchies: predictive processing, precision and the pulvinar", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2014 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B Brown", "Benjamin Chess", "Rewon Child", 
"Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling laws for neural language models", "venue": null, "year": 2001 }, { "authors": [ "Seyed-Mahdi Khaligh-Razavi", "Nikolaus Kriegeskorte" ], "title": "Deep supervised, but not unsupervised, models may explain it cortical representation", "venue": "PLoS computational biology,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Dong-Hyun Lee", "Saizheng Zhang", "Asja Fischer", "Yoshua Bengio" ], "title": "Difference target propagation. In Joint european conference on machine learning and knowledge discovery in databases, pages 498–515", "venue": null, "year": 2015 }, { "authors": [ "Qianli Liao", "Joel Z Leibo", "Tomaso Poggio" ], "title": "How important is weight symmetry in backpropagation", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Adam Santoro" ], "title": "Backpropagation through time and the brain", "venue": "Current opinion in neurobiology,", "year": 2019 }, { "authors": [ "Timothy P Lillicrap", "Daniel Cownden", "Douglas B Tweed", "Colin J Akerman" ], "title": "Random synaptic feedback weights support error backpropagation for deep learning", "venue": "Nature communications,", "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Adam Santoro", "Luke Marris", "Colin J Akerman", "Geoffrey Hinton" ], "title": "Backpropagation and the brain", "venue": "Nature Reviews Neuroscience,", "year": 2020 }, { "authors": [ "Grace Lindsay" ], "title": "Convolutional neural networks as a model of the visual system: past, present, and future", "venue": "Journal of Cognitive Neuroscience,", "year": 2020 }, { "authors": [ "Seppo Linnainmaa" ], "title": "The representation of the cumulative rounding error of an algorithm as a taylor expansion of the local rounding errors. Master’s Thesis (in Finnish)", "venue": "Univ. 
Helsinki,", "year": 1970 }, { "authors": [ "William Lotter", "Gabriel Kreiman", "David Cox" ], "title": "Deep predictive coding networks for video prediction and unsupervised learning", "venue": "arXiv preprint arXiv:1605.08104,", "year": 2016 }, { "authors": [ "Stephan Mandt", "Matthew D Hoffman", "David M Blei" ], "title": "Stochastic gradient descent as approximate bayesian inference", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Paul A Merolla", "John V Arthur", "Rodrigo Alvarez-Icaza", "Andrew S Cassidy", "Jun Sawada", "Filipp Akopyan", "Bryan L Jackson", "Nabil Imam", "Chen Guo", "Yutaka Nakamura" ], "title": "A million spikingneuron integrated circuit with a scalable communication network and interface", "venue": null, "year": 2014 }, { "authors": [ "Beren Millidge", "Alexander Tschantz", "Anil Seth", "Christopher L Buckley" ], "title": "Relaxing the constraints on predictive coding models", "venue": "arXiv preprint arXiv:2010.01047,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Masashi Okada", "Luca Rigazio", "Takenobu Aoshima" ], "title": "Path integral networks: End-to-end differentiable optimal control", "venue": "arXiv preprint arXiv:1706.09597,", "year": 2017 }, { "authors": [ "Yann Ollivier" ], "title": "The extended kalman filter is a natural gradient descent in trajectory space", "venue": "arXiv preprint arXiv:1901.00696,", "year": 2019 }, { "authors": [ "Yann Ollivier", "Corentin Tallec", "Guillaume Charpiat" ], "title": "Training recurrent networks online without backtracking", "venue": "arXiv preprint arXiv:1507.07680,", "year": 2015 }, { "authors": [ "Alexander Ororbia", "Ankur Mali", "C Lee Giles", "Daniel Kifer" ], "title": "Continual learning of recurrent neural networks by locally aligning distributed representations", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Avik Pal" ], "title": "Raytracer. jl: A differentiable renderer that supports parameter optimization for scene reconstruction", "venue": "arXiv preprint arXiv:1907.07198,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Chris Rackauckas", "Mike Innes", "Yingbo Ma", "Jesse Bettencourt", "Lyndon White", "Vaibhav Dixit" ], "title": "Diffeqflux. 
jl-a julia library for neural differential equations", "venue": null, "year": 1902 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Rajesh PN Rao", "Dana H Ballard" ], "title": "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects", "venue": "Nature neuroscience,", "year": 1999 }, { "authors": [ "Jarrett Revels", "Miles Lubin", "Theodore Papamarkou" ], "title": "Forward-mode automatic differentiation in julia", "venue": "arXiv preprint arXiv:1607.07892,", "year": 2016 }, { "authors": [ "Blake A Richards", "Timothy P Lillicrap", "Philippe Beaudoin", "Yoshua Bengio", "Rafal Bogacz", "Amelia Christensen", "Claudia Clopath", "Rui Ponte Costa", "Archy de Berker", "Surya Ganguli" ], "title": "A deep learning framework for neuroscience", "venue": "Nature neuroscience,", "year": 2019 }, { "authors": [ "Dennis W. Ruck", "Steven K. Rogers", "Matthew Kabrisky", "Peter S. Maybeck", "Mark E. Oxley" ], "title": "Comparative analysis of backpropagation and the extended kalman filter for training multilayer perceptrons", "venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence,", "year": 1992 }, { "authors": [ "David E Rumelhart", "David Zipser" ], "title": "Feature discovery by competitive learning", "venue": "Cognitive science,", "year": 1985 }, { "authors": [ "João Sacramento", "Rui Ponte Costa", "Yoshua Bengio", "Walter Senn" ], "title": "Dendritic cortical microcircuits approximate the backpropagation algorithm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Benjamin Scellier", "Yoshua Bengio" ], "title": "Equilibrium propagation: Bridging the gap between energybased models and backpropagation", "venue": "Frontiers in computational neuroscience,", "year": 2017 }, { "authors": [ "Benjamin Scellier", "Anirudh Goyal", "Jonathan Binas", "Thomas Mesnard", "Yoshua Bengio" ], "title": "Generalization of equilibrium propagation to vector field dynamics", "venue": "arXiv preprint arXiv:1808.04873,", "year": 2018 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "H Sebastian Seung" ], "title": "Learning in spiking neural networks by reinforcement of stochastic synaptic transmission", "venue": null, "year": 2003 }, { "authors": [ "Stewart Shipp" ], "title": "Neural elements for predictive coding", "venue": "Frontiers in psychology,", "year": 2016 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Michael W Spratling" ], "title": "Reconciling predictive coding and biased competition models of cortical function", "venue": "Frontiers in computational neuroscience,", "year": 2008 }, { "authors": [ "Jochen J Steil" ], "title": "Backpropagation-decorrelation: online recurrent learning with o (n) complexity", "venue": "IEEE International Joint Conference on 
Neural Networks (IEEE Cat. No. 04CH37541),", "year": 2004 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Andrea Tacchetti", "Leyla Isik", "Tomaso Poggio" ], "title": "Invariant recognition drives neural representations of action sequences", "venue": "PLoS computational biology,", "year": 2017 }, { "authors": [ "Corentin Tallec", "Yann Ollivier" ], "title": "Unbiased online recurrent optimization", "venue": "arXiv preprint arXiv:1702.05043,", "year": 2017 }, { "authors": [ "Belinda Tzen", "Maxim Raginsky" ], "title": "Neural stochastic differential equations: Deep latent gaussian models in the diffusion limit", "venue": "arXiv preprint arXiv:1905.09883,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Eiji Watanabe", "Akiyoshi Kitaoka", "Kiwako Sakamoto", "Masaki Yasugi", "Kenta Tanaka" ], "title": "Illusory motion reproduced by deep neural networks trained for prediction", "venue": "Frontiers in psychology,", "year": 2018 }, { "authors": [ "Veith Weilnhammer", "Heiner Stuke", "Guido Hesselmann", "Philipp Sterzer", "Katharina Schmack" ], "title": "A predictive coding account of bistable perception-a model-based fmri study", "venue": "PLoS computational biology,", "year": 2017 }, { "authors": [ "Paul J Werbos" ], "title": "Applications of advances in nonlinear sensitivity analysis", "venue": "In System modeling and optimization,", "year": 1982 }, { "authors": [ "James CR Whittington", "Rafal Bogacz" ], "title": "An approximation of the error backpropagation algorithm in a predictive coding network with local hebbian synaptic plasticity", "venue": "Neural computation,", "year": 2017 }, { "authors": [ "James CR Whittington", "Rafal Bogacz" ], "title": "Theories of error back-propagation in the brain", "venue": "Trends in cognitive sciences,", "year": 2019 }, { "authors": [ "Ronald J Williams", "David Zipser" ], "title": "A learning algorithm for continually running fully recurrent neural networks", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Daniel LK Yamins", "Ha Hong", "Charles F Cadieu", "Ethan A Solomon", "Darren Seibert", "James J DiCarlo" ], "title": "Performance-optimized hierarchical models predict neural responses in higher visual cortex", "venue": "Proceedings of the National Academy of Sciences,", "year": 2014 }, { "authors": [ "Williams", "Zipser", "Lillicrap", "Santoro" ], "title": "While this is a fascinating area, we do not address it in this paper. We are solely concerned with the fact that predictive coding approximates backpropagation on feedforward computation graphs for which the unrolled RNN graph is a sufficient substrate", "venue": "Ollivier et al.,", "year": 2004 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has seen stunning successes in the last decade in computer vision (Krizhevsky et al., 2012; Szegedy et al., 2015), natural language processing and translation (Vaswani et al., 2017; Radford et al., 2019; Kaplan et al., 2020), and computer game playing (Mnih et al., 2015; Silver et al., 2017; Schrittwieser et al., 2019; Vinyals et al., 2019). While there is a great variety of architectures and models, they are all trained by gradient descent using gradients computed by automatic differentiation (AD). The key insight of AD is that it suffices to define a forward model which maps inputs to predictions according to some parameters. Then, using the chain rule of calculus, it is possible, as long as every operation of the forward model is differentiable, to differentiate back through the computation graph of the model so as to compute the sensitivity of every parameter in the model to the error at the output, and thus adjust every single parameter to best minimize the total loss. Early models were typically simple artificial neural networks where the computation graph is simply a composition of matrix multiplications and elementwise nonlinearities, and for which the implementation of automatic differentation has become known as ‘backpropagation’ (or ’backprop’). However, automatic differentiation allows for substantially more complicated graphs to be differentiated through, up to, and including, arbitrary programs (Griewank et al., 1989; Baydin et al., 2017; Paszke et al., 2017; Revels et al., 2016; Innes et al., 2019; Werbos, 1982; Rumelhart and Zipser, 1985; Linnainmaa, 1970). In recent years this has enabled the differentiation through differential equation solvers (Chen et al., 2018; Tzen and Raginsky, 2019; Rackauckas et al., 2019), physics engines (Degrave et al., 2019; Heiden et al., 2019), raytracers (Pal, 2019), and planning algorithms (Amos and Yarats, 2019; Okada et al., 2017). These advances allow the straightforward training of models which intrinsically embody complex processes and which can encode significantly more prior knowledge and structure about a given problem domain than previously possible.\nModern deep learning has also been closely intertwined with neuroscience (Hassabis et al., 2017; Hawkins and Blakeslee, 2007; Richards et al., 2019). The backpropagation algorithm itself arose\nas a technique for training multi-layer perceptrons – simple hierarchical models of neurons inspired by the brain (Werbos, 1982). Despite this origin, and its empirical successes, a consensus has emerged that the brain cannot directly implement backprop, since to do so would require biologically implausible connection rules (Crick, 1989). There are two principal problems. Firstly, backprop in the brain appears to require non-local information (since the activity of any specific neuron affects all subsequent neurons down to the final output neuron). It is difficult to see how this information could be transmitted ’backwards’ throughout the brain with the required fidelity without precise connectivity constraints. The second problem – the ‘weight transport problem’ is that backprop through MLP style networks requires identical forward and backwards weights. 
In recent years, however, a succession of models have been introduced which claim to implement backprop in MLP-style models using only biologically plausible connectivity schemes, and Hebbian learning rules (Liao et al., 2016; Guerguiev et al., 2017; Sacramento et al., 2018; Bengio and Fischer, 2015; Bengio et al., 2017; Ororbia et al., 2020; Whittington and Bogacz, 2019). Of particular significance is Whittington and Bogacz (2017) who show that predictive coding networks – a type of biologically plausible network which learn through a hierarchical process of prediction error minimization – are mathematically equivalent to backprop in MLP models. In this paper we extend this work, showing that predictive coding can not only approximate backprop in MLPs, but can approximate automatic differentiation along arbitrary computation graphs. This means that in theory there exist potentially biologically plausible algorithms for differentiating through arbitrary programs, utilizing only local connectivity. Moreover, in a class of models which we call parameter-linear, which includes many current machine learning models, the required update rules are Hebbian, raising the possibility that a wide range of current machine learning architectures may be faithfully implemented in the brain, or in neuromorphic hardware.\nIn this paper we provide two main contributions. (i) We show that predictive coding converges to automatic differentiation across arbitrary computation graphs. (ii) We showcase this result by implementing three core machine learning architectures (CNNs, RNNs, and LSTMs) in a predictive coding framework which utilises only local learning rules and mostly Hebbian plasticity." }, { "heading": "2 PREDICTIVE CODING ON ARBITRARY COMPUTATION GRAPHS", "text": "Predictive coding is an influential theory of cortical function in theoretical and computational neuroscience. Central to the theory is the idea that the core function of the brain is to minimize prediction errors between what is expected to happen and what actually happens. Predictive coding\nviews the brain as composed of multiple hierarchical layers which predict the activities of the layers below. Unpredicted activity is registered as prediction error which is then transmitted upwards for a higher layer to process. Over time, synaptic connections are adjusted so that the system improves at minimizing prediction error. Predictive coding possesses a wealth of empirical support (Friston, 2003; 2005; Bogacz, 2017; Whittington and Bogacz, 2019) and offers a single mechanism that accounts for diverse perceptual phenomena such as repetition-suppression (Auksztulewicz and Friston, 2016), endstopping (Rao and Ballard, 1999), bistable perception (Hohwy et al., 2008; Weilnhammer et al., 2017) and illusory motions (Lotter et al., 2016; Watanabe et al., 2018), and even attentional modulation of neural activity (Feldman and Friston, 2010; Kanai et al., 2015). Moreover, the central role of top-down predictions is consistent with the ubiquity, and importance of, top-down diffuse connections between cortical areas. Predictive coding is consistent with many known aspects of neurophysiology, and has been translated into biologically plausible process theories which define candidate cortical microcircuits which can implement the algorithm. 
(Spratling, 2008; Bastos et al., 2012; Kanai et al., 2015; Shipp, 2016).\nIn previous work, predictive coding has always been conceptualised as operating on hierarchies of layers (Bogacz, 2017; Whittington and Bogacz, 2017). Here we present a generalized form of predictive coding applied to arbitrary computation graphs. A computation graph $G = \{E, V\}$ is a directed acyclic graph (DAG) which can represent the computational flow of essentially any program or computable function as a composition of elementary functions. Each edge $e_i \in E$ of the graph corresponds to an intermediate step – the application of an elementary function – while each vertex $v_i \in V$ is an intermediate variable computed by applying the functions of the edges to the values of their originating vertices. In this paper, $v_i$ denotes the vector of activations within a layer and we denote the set of all vertices as $\{v_i\}$. Effectively, computation flows 'forward' from parent nodes to all their children through the edge functions until the leaf nodes give the final output of the program as a whole (see Figures 1 and 2 for an example). Given a target $T$ and a loss function $L = g(T, v_{out})$, the graph's output can be evaluated, and, if every edge function is differentiable, automatic differentiation can be performed on the computation graph.\nPredictive coding can be derived elegantly as a variational inference algorithm under a hierarchical Gaussian generative model (Friston, 2005; Buckley et al., 2017). We extend this approach to arbitrary computation graphs in a supervised setting by defining the inference problem to be solved as that of inferring the vertex value $v_i$ of each node in the graph given fixed start nodes $v_0$ (the data) and end nodes $v_N$ (the targets). We define a generative model which parametrises the value of each vertex given the feedforward prediction of its parents, $p(\{v_i\}) = p(v_0 \dots v_N) = \prod_i^N p(v_i | \mathcal{P}(v_i))$,¹ and a factorised variational posterior $Q(\{v_i\} | v_0, v_N) = Q(v_1 \dots v_{N-1} | v_0, v_N) = \prod_i^N Q(v_i | \mathcal{P}(v_i), \mathcal{C}(v_i))$, where $\mathcal{P}(v_i)$ denotes the set of parents and $\mathcal{C}(v_i)$ denotes the set of children of a given node $v_i$. From this, we can define a suitable objective functional, the variational free-energy $\mathcal{F}$ (VFE), which acts as an upper bound on the divergence between the true and variational posteriors:
$$\mathcal{F} = KL[Q(v_1 \dots v_{N-1} | v_0, v_N) \,\|\, p(v_0 \dots v_N)] \geq KL[Q(v_1 \dots v_{N-1} | v_0, v_N) \,\|\, p(v_1 \dots v_{N-1} | v_0, v_N)] \approx \sum_{i=0}^{N} \epsilon_i^T \epsilon_i \quad (1)$$
Under Gaussian assumptions for the generative model $p(\{v_i\}) = \prod_i^N \mathcal{N}(v_i; \hat{v}_i, \Sigma_i)$ and the variational posterior $Q(\{v_i\}) = \prod_i^N \mathcal{N}(v_i)$, where the 'predictions' $\hat{v}_i = f(\mathcal{P}(v_i); \theta_i)$ are defined as the feedforward value of the vertex produced by running the graph forward, and all the precisions, or inverse variances, $\Sigma_i^{-1}$ are fixed at the identity, we can write $\mathcal{F}$ as simply a sum of prediction errors (see Appendix D or (Friston, 2003; Bogacz, 2017; Buckley et al., 2017) for full derivations), with the prediction errors defined as $\epsilon_i = v_i - \hat{v}_i$. These prediction errors play a core role in the framework and, in the biological process theories (Friston, 2005; Bastos et al., 2012), are generally considered to be represented by a distinct population of 'error units'. Since $\mathcal{F}$ is an upper bound on the divergence between the true and approximate posteriors, minimizing $\mathcal{F}$ reduces this divergence, thus improving the quality of the variational posterior and approximating exact Bayesian inference.
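For completeness, a sketch of the step from the Gaussian assumptions to this sum-of-prediction-errors form (following the cited derivations; we additionally assume the variational posterior is evaluated at its mode, as is standard in predictive coding treatments):
$$\mathcal{F} \approx -\ln p(v_0 \dots v_N) + \text{const} = \sum_{i=0}^{N} \frac{1}{2}\Big[(v_i - \hat{v}_i)^T \Sigma_i^{-1} (v_i - \hat{v}_i) + \ln|\Sigma_i|\Big] + \text{const}$$
which, with $\Sigma_i = I$, reduces to $\mathcal{F} \approx \frac{1}{2}\sum_{i=0}^{N} \epsilon_i^T \epsilon_i$, i.e. Equation 1 up to an irrelevant constant factor.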
Predictive coding minimizes $\mathcal{F}$ by employing the Cauchy method of steepest descent to set the dynamics of the vertex variables $v_i$ as a gradient descent directly on $\mathcal{F}$ (Bogacz, 2017):
$$\frac{dv_i}{dt} = \frac{\partial \mathcal{F}}{\partial v_i} = \epsilon_i - \sum_{j \in \mathcal{C}(v_i)} \epsilon_j \frac{\partial \hat{v}_j}{\partial v_i} \quad (2)$$
¹This includes the prior $p(v_0)$, which simply has no parents.
The dynamics of the parameters of the edge functions $\theta$, such that $\hat{v}_i = f(\mathcal{P}(v_i); \theta)$, can also be derived as a gradient descent on $\mathcal{F}$. Importantly, these dynamics require only information (the current vertex value, prediction error, and prediction errors of child vertices) locally available at the vertex:
$$\frac{d\theta_i}{dt} = \frac{\partial \mathcal{F}}{\partial \theta_i} = \epsilon_i \frac{\partial \hat{v}_i}{\partial \theta_i} \quad (3)$$
To run generalized predictive coding in practice on a given computation graph $G = \{E, V\}$, we augment the graph with error units $\epsilon_i \in \mathcal{E}$ to obtain an augmented computation graph $\tilde{G} = \{E, V, \mathcal{E}\}$. The predictive coding algorithm then operates in two phases – a feedforward sweep and a backwards iteration phase. In the feedforward sweep, the augmented computation graph is run forward to obtain the set of predictions $\{\hat{v}_i\}$ and prediction errors $\{\epsilon_i\} = \{v_i - \hat{v}_i\}$ for every vertex. Following Whittington and Bogacz (2017), to achieve exact equivalence with the backprop gradients computed on the original computation graph, we initialize $v_i = \hat{v}_i$ in the initial feedforward sweep, so that the output error computed by the predictive coding network and the original graph are identical.\nIn the backwards iteration phase, the vertex activities $\{v_i\}$ and prediction errors $\{\epsilon_i\}$ are updated with Equation 2 for all vertices in parallel until the vertex values converge to a minimum of $\mathcal{F}$. After convergence the parameters are updated according to Equation 3. Note that we also assume, following Whittington and Bogacz (2017), that the predictions at each layer are fixed at the values assigned during the feedforward pass throughout the optimisation of the $v$s. We call this the fixed-prediction assumption. In effect, by removing the coupling between the vertex activities of the parents and the prediction at the child, this assumption separates the global optimisation problem into a local one for each vertex. We implement these dynamics with a simple forward Euler integration scheme, so that the update rule for the vertices becomes $v_i^{t+1} \leftarrow v_i^t - \eta \frac{d\mathcal{F}}{dv_i^t}$, where $\eta$ is the step-size parameter. Importantly, if the edge function linearly combines the activities and the parameters followed by an elementwise nonlinearity – a condition which we call 'parameter-linear' – then both the update rule for the vertices (Equation 2) and the parameters (Equation 3) become Hebbian. Specifically, the update rules for the vertices and weights become $\frac{dv_i}{dt} = \epsilon_i - \sum_j \epsilon_j f'(\theta_j \hat{v}_j)\theta_j^T$ and $\frac{d\theta_i}{dt} = \epsilon_i f'(\theta_i \hat{v}_i)\hat{v}_i^T$, respectively." }, { "heading": "2.1 APPROXIMATION TO BACKPROP", "text": "Here we show that at the equilibrium of the dynamics, the prediction errors $\epsilon_i^*$ converge to the correct backpropagated gradients $\frac{\partial L}{\partial v_i}$, and consequently the parameter updates (Equation 3) become precisely those of a backprop-trained network. Standard backprop works by computing the gradient of a vertex as the sum of the gradients of the child vertices.
Beginning with the gradient of the output vertex $\frac{\partial L}{\partial v_L}$, it recursively computes the gradients of vertices deeper in the graph by the chain rule:
$$\frac{\partial L}{\partial v_i} = \sum_{j \in \mathcal{C}(v_i)} \frac{\partial L}{\partial v_j} \frac{\partial v_j}{\partial v_i} \quad (4)$$
In comparison, in our predictive coding framework, at the equilibrium point ($\frac{dv_i}{dt} = 0$) the prediction errors $\epsilon_i^*$ become
$$\epsilon_i^* = \sum_{j \in \mathcal{C}(v_i)} \epsilon_j^* \frac{\partial \hat{v}_j}{\partial v_i} \quad (5)$$
Importantly, this means that the equilibrium value of the prediction error at a given vertex (Equation 5) satisfies the same recursive structure as the chain rule of backprop (Equation 4). Since this relationship is recursive, all that is needed for the prediction errors throughout the graph to converge to the backpropagated derivatives is for the prediction errors at the final layer to be equal to the output gradient: $\epsilon_L^* = \frac{\partial L}{\partial \hat{v}_L}$. To see this explicitly, consider a mean-squared-error loss function² at the output layer, $L = \frac{1}{2}(T - \hat{v}_L)^2$, with $T$ a vector of targets, and define $\epsilon_L = T - \hat{v}_L$.
²While the mean-squared-error loss function fits most nicely with the Gaussian generative model, other loss functions can be used in practice. If the loss function can be represented as a log probability distribution, then the generative model can be amended to simply set the output distribution to that distribution. If not, then there is no fully consistent generative model (although all nodes except the output remain Gaussian), but the algorithm will still work in practice. See Figure 6 in Appendix A for results for CNNs trained with a cross-entropy loss.
Algorithm 1: Generalized Predictive Coding
Data: Dataset $D = \{X, L\}$, augmented computation graph $\tilde{G} = \{E, V, \mathcal{E}\}$, inference learning rate $\eta_v$, weight learning rate $\eta_\theta$
begin
  /* For each minibatch in the dataset */
  for $(x, L) \in D$ do
    /* Fix start of graph to inputs */
    $\hat{v}_0 \leftarrow x$
    /* Forward pass to compute predictions */
    for $\hat{v}_i \in V$ do
      $\hat{v}_i \leftarrow f(\mathcal{P}(\hat{v}_i); \theta)$
    /* Compute output error */
    $\epsilon_L \leftarrow L - \hat{v}_L$
    /* Begin backwards iteration phase of the descent on the free energy */
    while not converged do
      for $(v_i, \epsilon_i) \in \tilde{G}$ do
        /* Compute prediction errors */
        $\epsilon_i \leftarrow v_i - \hat{v}_i$
        /* Update the vertex values */
        $v_i^{t+1} \leftarrow v_i^t + \eta_v \frac{d\mathcal{F}}{dv_i^t}$
    /* Update weights at equilibrium */
    for $\theta_i \in E$ do
      $\theta_i^{t+1} \leftarrow \theta_i^t + \eta_\theta \frac{d\mathcal{F}}{d\theta_i^t}$
We then consider the equilibrium value of the prediction error unit at a penultimate vertex $\epsilon_{L-1}$. By Equation 5, we can see that at equilibrium,
$$\epsilon_{L-1}^* = \epsilon_L^* \frac{\partial \hat{v}_L}{\partial v_{L-1}} = (T - \hat{v}_L^*) \frac{\partial \hat{v}_L}{\partial v_{L-1}}$$
Since $(T - \hat{v}_L) = \frac{\partial L}{\partial \hat{v}_L}$, we can then write
$$\epsilon_{L-1}^* = \frac{\partial L}{\partial \hat{v}_L} \frac{\partial \hat{v}_L}{\partial v_{L-1}} = \frac{\partial L}{\partial v_{L-1}} \quad (6)$$
Thus the prediction errors of the penultimate nodes converge to the correct backpropagated gradient. Furthermore, recursing through the graph from children to parents allows the correct gradients to be computed.³ Thus, by induction, we have shown that the fixed points of the prediction errors of the global optimization correspond exactly to the backpropagated gradients. Intuitively, if we imagine the computation graph as a chain and the error as 'tension' in the chain, backprop loads all the tension at the end (the output) and then systematically propagates it backwards. Predictive coding, however, spreads the tension throughout the entire chain until it reaches an equilibrium where the amount of tension at each link is precisely the backpropagated gradient. The full algorithm for training the predictive coding network is explicitly set out in Algorithm 1. Inference is just a forward pass through the network, and is identical to the corresponding ANN.\n³Some subtlety is needed here since $v_{L-1}$ may have many children which each contribute to the loss.
However, these different paths sum together at the node $v_{L-1}$, thus propagating the correct gradient backwards.\nBy a similar argument, it is apparent that the dynamics of the parameters $\theta_i$ as a gradient descent on $\mathcal{F}$ also exactly match the backpropagated parameter gradients:
$$\frac{d\theta_i}{dt} = \frac{d\mathcal{F}}{d\theta_i} = \epsilon_i^* \frac{d\epsilon_i^*}{d\theta_i} = \frac{dL}{d\hat{v}_i} \frac{d\hat{v}_i}{d\theta_i} = \frac{dL}{d\theta_i} \quad (7)$$
which follows from the fact that $\epsilon_i^* = \frac{dL}{d\hat{v}_i}$ and that $\frac{d\epsilon_i^*}{d\theta} = \frac{d\hat{v}_i}{d\theta_i}$." }, { "heading": "3 RELATED WORK", "text": "A number of recent works have tried to provide biologically plausible approximations to backprop. The requirement of symmetry between the forwards and backwards weights has been questioned by Lillicrap et al. (2016), who show that random fixed feedback weights suffice for effective learning. Additional recent work has shown that learning the backwards weights also helps (Amit, 2019; Akrout et al., 2019). Several schemes have also been proposed to approximate backprop using only local learning rules and/or Hebbian connectivity. These include target-prop (Lee et al., 2015), which approximates the backward gradients with trained inverse functions but fails to asymptotically compute the exact backprop gradients, and contrastive Hebbian approaches (Seung, 2003; Scellier and Bengio, 2017; Scellier et al., 2018), which do exactly approximate backprop but require two separate learning phases and the storing of information across successive phases. There are also dendritic error theories (Guerguiev et al., 2017; Sacramento et al., 2018), which are computationally similar to predictive coding (Whittington and Bogacz, 2019; Lillicrap et al., 2020). Whittington and Bogacz (2017) showed that predictive coding can approximate backprop in MLP models, and demonstrated comparable performance on MNIST. We advance upon this work by extending the proof to arbitrary computation graphs, enabling the design of predictive coding variants of a range of standard machine learning architectures, which we show perform comparably to backprop on considerably more difficult tasks than MNIST. Our algorithm evinces asymptotic (and in practice rapid) convergence to the exact backprop gradients, does not require separate learning phases, and utilises only local information and largely Hebbian plasticity." }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 NUMERICAL RESULTS", "text": "To demonstrate the correctness of our derivation and empirical convergence to the true gradients, we present a numerical test in the simple scalar case, where we use predictive coding to derive the gradients of an arbitrary, highly nonlinear test function $v_L = \tan(\sqrt{\theta v_0}) + \sin(v_0^2)$, where $\theta$ is an arbitrary parameter. For our tests, we set $v_0$ to 5 and $\theta$ to 2. The computation graph for this function is presented in Figure 2. Although simple, this is a good test of predictive coding because the function is highly nonlinear, and its computation graph does not follow a simple layer structure but includes some branching. An arbitrary target of $T = 3$ was set at the output, and the gradient of the loss $L = (v_L - T)^2$ with respect to the input $v_0$ was computed by predictive coding.
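To make the numerical experiment concrete, the following is a minimal sketch of predictive coding on this scalar graph (our own illustrative code: the step size, iteration count, and variable names are assumptions, and we adopt the $\epsilon_L = T - \hat{v}_L$ convention of Algorithm 1, so the recovered quantity is $(T - \hat{v}_L)\,\partial \hat{v}_L / \partial v_0$):

```python
import math

# Feedforward sweep of v_L = tan(sqrt(theta * v0)) + sin(v0 ** 2)
v0, theta, T = 5.0, 2.0, 3.0
v1 = theta * v0           # mu_1
v2 = math.sqrt(v1)        # mu_2
v3 = math.tan(v2)         # mu_3
v4 = v0 ** 2              # mu_4
v5 = math.sin(v4)         # mu_5
vL = v3 + v5              # mu_L

# Local derivatives d(child prediction)/d(parent value), evaluated at the
# feedforward values (the fixed-prediction assumption).
d2_1 = 0.5 / math.sqrt(v1)         # d sqrt(v1) / d v1
d3_2 = 1.0 / math.cos(v2) ** 2     # d tan(v2) / d v2
d5_4 = math.cos(v4)                # d sin(v4) / d v4

eps_L = T - vL                     # output error, clamped by the target

# Backwards iteration phase: relax each error unit towards the fixed point
# of Equation 2, eps_i = sum over children j of eps_j * d(vhat_j)/d(v_i).
eps1 = eps2 = eps3 = eps4 = eps5 = 0.0
eta = 0.1
for _ in range(500):
    eps3 += eta * (eps_L * 1.0 - eps3)   # vL = v3 + v5, so d vL/d v3 = 1
    eps5 += eta * (eps_L * 1.0 - eps5)
    eps2 += eta * (eps3 * d3_2 - eps2)
    eps4 += eta * (eps5 * d5_4 - eps4)
    eps1 += eta * (eps2 * d2_1 - eps1)

pc_signal = eps1 * theta + eps4 * 2 * v0   # error propagated to the input v0

# The chain-rule value that Equation 6 predicts the error units converge to
bp_signal = (T - vL) * (d3_2 * d2_1 * theta + d5_4 * 2 * v0)
assert abs(pc_signal - bp_signal) < 1e-6
```

The same relaxation tolerates much larger step sizes on this graph, consistent with the robustness to learning rates up to 0.5 reported below.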
We show (Figure 2) that the predictive coding optimisation rapidly converges to the exact numerical gradients computed by automatic differentiation, and that moreover this optimization is very robust and can handle even exceptionally high learning rates (up to 0.5) without divergence.\nIn summary, we have shown and numerically verified that at the equilibrium point of the global free-energy $\mathcal{F}$ on an arbitrary computation graph, the error units exactly equal the backpropagated gradients, and that this descent requires only local connectivity, does not require separate phases or a sequential backwards sweep, and, in the case of parameter-linear functions, requires only Hebbian plasticity. Our results provide a straightforward recipe for the direct implementation of predictive coding algorithms to approximate certain computation graphs, such as those found in common machine learning algorithms, in a potentially biologically plausible manner. Next, we showcase this capability by developing predictive coding variants of core machine learning architectures – convolutional neural networks (CNNs), recurrent neural networks (RNNs), and LSTMs (Hochreiter and Schmidhuber, 1997) – and show performance comparable with backprop on tasks substantially more challenging than MNIST.
[Figure 2: The computation graph of $v_L = \tan(\sqrt{\theta v_0}) + \sin(v_0^2)$, with predictions $\mu_1 = \theta v_0$, $\mu_2 = \sqrt{\hat{v}_1}$, $\mu_3 = \tan(\hat{v}_2)$, $\mu_4 = v_0^2$, $\mu_5 = \sin(\hat{v}_4)$, $\mu_L = \hat{v}_3 + \hat{v}_5$, and error units $\epsilon_1 \dots \epsilon_L$ propagating the output error back towards $\theta$ and $v_0$.]" }, { "heading": "4.2 PREDICTIVE CODING CNN, RNN, AND LSTM", "text": "First, we constructed predictive coding CNN models (see Appendix B for full implementation details). In the predictive coding CNN, each filter kernel was augmented with 'error maps' which measured the difference between the forward convolutional predictions and the backwards messages. Our CNN was composed of a convolutional layer, followed by a max-pooling layer, then two further convolutional layers followed by 3 fully-connected layers. We compared our predictive coding CNN to a backprop-trained CNN with the exact same architecture and hyperparameters. We tested our models on three image classification datasets significantly more challenging than MNIST – SVHN, CIFAR10, and CIFAR100. SVHN is a digit recognition task like MNIST, but has more naturalistic backgrounds, is in colour with continuously varying inputs, and contains distractor digits. CIFAR10 and CIFAR100 are large image datasets composed of RGB 32x32 images. CIFAR10 has 10 classes of image, while CIFAR100 is substantially more challenging with 100 possible classes. In general (Figure 3), performance was identical between the predictive coding and backprop CNNs and comparable to the standard performance of basic CNN models on these datasets. Moreover, the predictive coding gradient remained close to the true numerical gradient throughout training.\nWe also constructed predictive coding RNN and LSTM models, thus demonstrating the ability of predictive coding to scale to non-parameter-linear, branching computation graphs. The RNN was trained on a character-level name classification task, while the LSTM was trained on a next-character prediction task on the full works of Shakespeare. Full implementation details can be found in Appendices B and C. LSTMs and RNNs are recurrent networks which are trained through backpropagation through time (BPTT).
BPTT simply unrolls the network through time and backpropagates through the unrolled graph. Analogously we trained the predictive coding RNN and LSTM by applying predictive coding to the unrolled computation graph. The depth of the unrolled graph depends heavily on the sequence length, and in our tasks using a sequence length of 100 we still found that predictive coding evinced rapid convergence to the correct numerical gradient, and that the performance was approximately identical to the equivalent backprop-trained networks (Figure 3), thus showing that the algorithm is scalable even to very deep computation graphs." }, { "heading": "5 DISCUSSION", "text": "We have shown that predictive coding provides a local and potentially biologically plausible approximation to backprop on arbitrary, deep, and branching computation graphs. Moreover, convergence to the exact backprop gradients is rapid and robust, even in extremely deep graphs such as the unrolled LSTM. Our algorithm is fully parallelizable, does not require separate phases, and can produce equivalent performance to backprop in core machine-learning architectures. These results broaden the horizon of local approximations to backprop by demonstrating that they can be implemented on arbitrary computation graphs, not only simple MLP architectures. Our work prescribes a straightforward recipe for backpropagating through any computation graph with predictive coding using only local learning rules. In the future, this process could potentially be made fully automatic and translated onto neuromorphic hardware. Our results also raise the possibility that the brain may implement machine-learning type architectures much more directly than often considered. Many lines of work suggest a close correspondence between the representations and activations of CNNs and activity in higher visual areas (Yamins et al., 2014; Tacchetti et al., 2017; Eickenberg et al., 2017; Khaligh-Razavi and Kriegeskorte, 2014; Lindsay, 2020), for instance, and this similarity may be found to extend to other machine learning architectures.\nIt is important to note that predictive coding, as advanced here, still retains some biologically implausible features. Although using only local and Hebbian updates, the predictive coding algorithm still requires identical forward and backwards weights, as well as mandating a very precise oneto-one connectivity structure between value neurons vi and error neurons i. However, recent work (Millidge et al., 2020) has begun to show that these implausibilities can be relaxed using learnable backwards weights instead of requiring weight symmetry, and allowing for learnable dense connectivity between value and error neurons, without harm to performance in simple MLP settings. An additional limitation to the biological plausibility of our method is the fixed-prediction assumption, which requires that the feedforward pass values be somehow stored during the backwards\niteration phase. In biological neurons this could potentially be implemented by utilizing synaptic mechanisms for maintaining information over short periods, such as eligibility traces, or alternatively through synchronised phase locking (Buzsaki, 2006). 
Alternatively, it is important to note that this fixed-prediction assumption is only required for exact convergence to backprop, and predictive coding networks have been shown to be able to attain strong discriminative classification performance without it (Whittington and Bogacz, 2017).
Although we have implemented three core machine learning architectures as predictive coding networks, we have nevertheless focused on relatively small and straightforward networks, and thus both our backprop and predictive coding networks perform below the state of the art on the presented tasks. This is primarily because our focus was on demonstrating the theoretical convergence between the two algorithms. Nevertheless, we believe that due to the generality of our theoretical results, 'scaling up' the existing architectures to implement performance-matched predictive coding versions of more advanced machine learning architectures such as resnets (He et al., 2016), GANs (Goodfellow et al., 2014), and transformers (Vaswani et al., 2017) should be relatively straightforward.
In terms of computational cost, one inference iteration in the predictive coding network is about as costly as a backprop backwards pass. Thus, due to using 100-200 iterations for full convergence, our algorithm is substantially more expensive than backprop, which limits the scalability of our method. However, this serial cost is misleading when talking about highly parallel neural architectures. In the brain, neurons cannot wait for a sequential forward and backward sweep. By phrasing our algorithm as a global descent, our algorithm is fully parallel across layers. There is no waiting and no phases to be coordinated. Each neuron need only respond to its local driving inputs and downwards error signals. We believe that this local and parallelizable property of our algorithm may engender the possibility of substantially more efficient implementations on neuromorphic hardware (Furber et al., 2014; Merolla et al., 2014; Davies et al., 2018), which may ameliorate much of the computational overhead compared to backprop. Future work could also examine whether our method is more capable than backprop of handling the continuously varying inputs the brain is presented with in practice, rather than the artificial paradigm of being presented with a series of i.i.d. datapoints.
Our work also reveals a close connection between backprop and inference. Namely, the recursive computation of gradients is effectively a by-product of a variational-inference algorithm which infers the values of the vertices of the computation graph under a hierarchical Gaussian generative model. While the deep connections between stochastic gradient descent and inference in terms of Kalman filtering (Ruck et al., 1992; Ollivier, 2019) or MCMC sampling methods (Chen et al., 2014; Mandt et al., 2017) are known, the relation between recursive gradient computation itself and variational inference is underexplored except in the case of a single layer (Amari, 1995). Our method can provide a principled generalisation of backprop through the inverse-variance Σ^{-1} parameters of the Gaussian generative model. These parameters weight the relative contribution of different factors to the overall gradient by their uncertainty, thus naturally handling the case of backprop with differentially noisy inputs. Moreover, the Σ^{-1} parameters can be learnt as a gradient descent on F:
dΣ_i/dt = −dF/dΣ_i = Σ_i^{-1} ϵ_i ϵ_i^T Σ_i^{-1} − Σ_i^{-1}.
This specific generalisation is afforded by the Gaussian form of the generative model, however, and other generative models may yield novel optimisation algorithms able to quantify and handle uncertainties throughout the entire computational graph." }, { "heading": "APPENDIX A: PREDICTIVE CODING CNN IMPLEMENTATION DETAILS", "text": "The key concept in a CNN is that of an image convolution, where a small weight matrix is 'slid' (or convolved) across an image to produce an output image. Each patch of the output image only depends on a relatively small patch of the input image. Moreover, the weights of the filter stay the same during the convolution, so each pixel of the output image is generated using the same weights. The weight sharing implicit in the convolution operation enforces translational invariance, since different image patches are all processed with the same weights.
The forward equation of a convolutional layer for a specific output pixel is
v_{i,j} = Σ_{k=i−f}^{i+f} Σ_{l=j−f}^{j+f} θ_{k,l} x_{i+k,j+l}
where v_{i,j} is the (i,j)th element of the output, x_{i,j} is the corresponding element of the input image, and θ_{k,l} is a weight element of a feature map. To set up a predictive coding CNN, we augment each intermediate x_i and v_i with error units ϵ_i of the same dimension as the output of the convolutional layer.
Predictions v̂ are projected forward using the forward equations. Prediction errors also need to be transmitted backwards for the architecture to work. To achieve this, prediction errors are transmitted upwards by a 'backwards convolution'. We thus define the backwards prediction errors ϵ̂ as follows:
ϵ̂_{i,j} = Σ_{k=i−f}^{i+f} Σ_{l=j−f}^{j+f} θ_{k,l} ϵ̃_{i+k,j+l}
where ϵ̃ is an error map zero-padded to ensure the correct convolutional output size. Inference in the predictive coding network then proceeds by updating the intermediate values of each layer as follows:
dv_l/dt = ϵ_l − ϵ̂_{l+1}
Since the CNN is also parameter-linear, weights can be updated using the simple Hebbian rule of the multiplication of the pre- and post-synaptic potentials:
dθ_l/dt = Σ_{i,j} ϵ_{l,i,j} v_{l−1,i,j}^T
There is an additional biological implausibility here due to the weight sharing of the CNN. Since the same weights are copied for each position on the image, the weight updates have contributions from all parts of the image simultaneously, which violates the locality condition. A simple fix, which makes the scheme plausible, is to give each position on the image a filter with separate weights, thus removing the weight sharing implicit in the CNN. In effect this gives each patch of pixels a local receptive field with its own set of weights. The performance and scalability of such a locally connected predictive coding architecture would be an interesting avenue for future work, as this architecture has substantial homologies with the structure of the visual cortex.
In our experiments we used a relatively simple CNN architecture consisting of one convolutional layer of kernel size 5, and a filter bank of 6 filters. This was followed by a max-pooling layer with a (2,2) kernel and a further convolutional layer with a (5,5) kernel and a filter bank of 16 filters. This was then followed by three fully connected layers of 200, 150, and 10 (or 100 for CIFAR100) output units. Each convolutional and fully connected layer used the relu activation function, except the output layer which was linear.
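To make the update rules above concrete, the following is a minimal NumPy sketch of one predictive coding convolutional layer (single channel, stride 1, no pooling), with the layer above clamped to a target that it is treated as predicting through the identity map. All names, shapes, and learning rates are our own illustrative assumptions, not the authors' released code.

import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, theta):
    # 'valid' convolution: out[i, j] = sum_{a, b} theta[a, b] * x[i + a, j + b]
    k = theta.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(theta * x[i:i + k, j:j + k])
    return out

def backward_errors(eps, theta):
    # 'backwards convolution': zero-pad the error map and convolve with the
    # flipped kernel, sending each error back to the inputs that produced it
    k = theta.shape[0]
    return conv2d(np.pad(eps, k - 1), theta[::-1, ::-1])

def kernel_update(eps, x, k):
    # Hebbian update: correlate pre-synaptic input with post-synaptic errors,
    # d_theta[a, b] = sum_{i, j} eps[i, j] * x[i + a, j + b]
    d = np.zeros((k, k))
    h, w = eps.shape
    for a in range(k):
        for b in range(k):
            d[a, b] = np.sum(eps * x[a:a + h, b:b + w])
    return d

x = rng.normal(size=(8, 8))            # clamped input from the layer below
theta = 0.1 * rng.normal(size=(3, 3))  # filter weights
target = rng.normal(size=(6, 6))       # clamped activity of the layer above

v = conv2d(x, theta)                   # value units start at the prediction
for _ in range(100):                   # relax the inference dynamics
    eps = v - conv2d(x, theta)         # this layer's prediction error
    eps_above = target - v             # error of the clamped layer above
    v += 0.1 * (-eps + eps_above)      # follow -dF/dv, built from local errors only

eps = v - conv2d(x, theta)
theta += 1e-3 * kernel_update(eps, x, 3)   # Hebbian weight update at equilibrium
eps_below = backward_errors(eps, theta)    # error message for the layer below
print(eps_below.shape)                     # (8, 8), matching the input

Note that only locally available quantities appear in each update: the value unit, its own error, and the errors arriving from the layer above.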
Although this architecture is far smaller than state of the art for convolutional networks, the primary point of our paper was to demonstrate the equivalence of predictive coding and backprop. Further work could investigate scaling up predictive coding to more state-of-the-art architectures.
Our datasets consisted of 32x32 RGB images. We normalised the values of all pixels of each image to lie between 0 and 1, but otherwise performed no other image preprocessing. We did not use data augmentation of any kind. We set the weight learning rate for the predictive coding and backprop networks to 0.0001. A minibatch size of 64 was used. These parameters were chosen without any detailed hyperparameter search and so are likely suboptimal. The magnitude of the gradient updates was clamped to lie between -50 and 50 in all of our models. This was done to prevent divergences, which occasionally occurred in the LSTM networks, likely due to exploding gradients.
The predictive coding scheme converged to the exact backprop gradients very precisely within 100 inference iterations using an inference learning rate of 0.1. This gives the predictive coding CNN approximately a 100x computational overhead compared to backprop. The divergence between the true and approximate gradients remained approximately constant throughout training, as shown by Figure 5, which shows the mean divergence for each layer of the CNN over the course of an example training run on the CIFAR10 dataset. The training losses of the predictive coding and backprop networks for SVHN, CIFAR10 and CIFAR100 are presented in Figure 4.
While the experiments in the main paper all used the mean-squared-error loss function, it is also possible to use alternative loss functions. In Figure 6, we show that the performance of the CNN on the CIFAR and SVHN datasets is also very close to backprop when trained with a multi-class cross-entropy loss L = Σ_i T_i ln v_{L,i}. In this case the output layer used a softmax function as its nonlinearity, to ensure that the logits passed to the cross-entropy loss were valid probabilities. The cross-entropy loss is also straightforward to fit into the predictive coding framework, since the gradient with respect to the pre-activations of the output is also just the negative prediction error ∂L/∂v_L = T − v_L, although the softmax function itself may be challenging to implement neurally since it is non-local: its normalisation coefficient requires the exponentiated activities of all neurons in a layer. Nevertheless, this demonstrates that predictive coding can approximate backprop for any given loss function, not simply mean-squared-error." }, { "heading": "APPENDIX B: PREDICTIVE CODING RNN", "text": "The computation graph of RNNs is relatively straightforward. We consider only a single-layer RNN here, although the architecture can be straightforwardly extended to hierarchically stacked RNNs. An RNN is similar to a feedforward network except that it possesses an additional hidden state h which is maintained and updated over time as a function of both the current input x and the previous hidden state.
[Figure 5: mean divergence between the true numerical and predictive coding gradients over the course of training, per layer: (a) Conv Layer 1, (b) Conv Layer 2, (c) FC Layer 1, (d) FC Layer 2. The divergence follows a largely random-walk pattern, is generally negligible, and does not grow over training, implying that errors from slightly incorrect gradients do not compound.]
[Figure 6: training and test accuracies of the CNN on the SVHN and CIFAR datasets using the cross-entropy loss: (a) SVHN training and test accuracy, (b) CIFAR training accuracy, (c) CIFAR test accuracy. Performance remains very close to backprop, demonstrating that our predictive coding algorithm can be used with different loss functions, not just mean-squared-error.]
The output of the network y is a function of h. By considering the RNN at a single timestep we obtain the following equations:
h_t = f(θ_h h_{t−1} + θ_x x_t),   y_t = g(θ_y h_t)   (8)
where f and g are elementwise nonlinear activation functions, and θ_h, θ_x, θ_y are weight matrices for each specific input. To predict a sequence the RNN simply rolls forward the above equations to generate new predictions and hidden states at each timestep.
RNNs are typically trained through an algorithm called backpropagation through time (BPTT), which essentially just unrolls the RNN into a single feedforward computation graph and then performs backpropagation through this unrolled graph. To train the RNN using predictive coding we take the same approach and simply apply predictive coding to the unrolled graph.
It is important to note that this is an additional aspect of biological implausibility that we do not address in this paper. BPTT requires updates to proceed backwards through time from the end of the sequence to the beginning. Ignoring any biological implausibility with the rules themselves, this updating sequence is clearly not biologically plausible, as naively it requires maintaining the entire sequence of predictions and prediction errors perfectly in memory until the end of the sequence, and waiting until the sequence ends before making any updates. There is a small literature on trying to produce biologically plausible, or forward-looking, approximations to BPTT which do not require updates to be propagated back through time (Williams and Zipser, 1989; Lillicrap and Santoro, 2019; Steil, 2004; Ollivier et al., 2015; Tallec and Ollivier, 2017). While this is a fascinating area, we do not address it in this paper. We are solely concerned with the fact that predictive coding approximates backpropagation on feedforward computation graphs, for which the unrolled RNN graph is a sufficient substrate.
To learn a predictive coding RNN, we first augment each of the variables h_t and y_t of the original graph with additional error units ϵ_{h_t} and ϵ_{y_t}. Predictions ŷ_t, ĥ_t are generated according to the feedforward rules (8). A sequence of true labels {T_1 ... T_T} is then presented to the network, and inference proceeds by recursively applying the following rules backwards through time until convergence:
ϵ_{y_t} = T_t − ŷ_t
ϵ_{h_t} = h_t − ĥ_t
dh_t/dt = ϵ_{h_t} − ϵ_{y_t} θ_y^T − ϵ_{h_{t+1}} θ_h^T
Upon convergence the weights are updated according to the following rules:
dθ_y/dt = Σ_{t=0}^{T} ϵ_{y_t} (∂g(θ_y h_t)/∂θ_y) h_t^T
dθ_x/dt = Σ_{t=0}^{T} ϵ_{h_t} (∂f(θ_h h_{t−1} + θ_x x_t)/∂θ_x) x_t^T
dθ_h/dt = Σ_{t=0}^{T} ϵ_{h_t} (∂f(θ_h h_{t−1} + θ_x x_t)/∂θ_h) h_{t+1}^T
Since the RNN feedforward updates are parameter-linear, these rules are Hebbian, only requiring the multiplication of pre- and post-synaptic potentials. This means that the predictive coding updates proposed here are biologically plausible and could in theory be implemented in the brain.
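As a concrete illustration of these rules, here is a minimal NumPy sketch that relaxes the hidden-state value units of an unrolled RNN and then applies the weight updates from products of pre-synaptic activity and post-synaptic error. It is our own simplified rendering of the scheme above: dimensions, learning rates, the linear choice of g, and the use of the previous hidden state as the pre-synaptic term for θ_h are illustrative assumptions, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)
D_in, D_h, D_out, T = 4, 8, 3, 5
theta_x = 0.1 * rng.normal(size=(D_h, D_in))
theta_h = 0.1 * rng.normal(size=(D_h, D_h))
theta_y = 0.1 * rng.normal(size=(D_out, D_h))

xs = rng.normal(size=(T, D_in))        # clamped input sequence
Ts = rng.normal(size=(T, D_out))       # clamped target sequence

# feedforward sweep to initialise the value units at the predictions
v_h = np.zeros((T + 1, D_h))
for t in range(T):
    v_h[t + 1] = np.tanh(theta_h @ v_h[t] + theta_x @ xs[t])

# inference: relax every hidden-state value unit over the unrolled graph
for _ in range(200):
    h_hat = np.stack([np.tanh(theta_h @ v_h[t] + theta_x @ xs[t]) for t in range(T)])
    eps_h = v_h[1:] - h_hat            # hidden-state prediction errors
    eps_y = Ts - v_h[1:] @ theta_y.T   # output errors (g taken as linear here)
    for t in range(T):
        # dF/dv_h[t+1]: own error minus the errors this unit helps to predict
        grad = eps_h[t] - eps_y[t] @ theta_y
        if t + 1 < T:
            pre = theta_h @ v_h[t + 1] + theta_x @ xs[t + 1]
            grad -= (eps_h[t + 1] * (1.0 - np.tanh(pre) ** 2)) @ theta_h
        v_h[t + 1] -= 0.05 * grad

# weight updates: outer products of post-synaptic error and pre-synaptic input
fprime = 1.0 - np.tanh(v_h[:-1] @ theta_h.T + xs @ theta_x.T) ** 2
theta_y += 1e-3 * sum(np.outer(eps_y[t], v_h[t + 1]) for t in range(T))
theta_x += 1e-3 * sum(np.outer(eps_h[t] * fprime[t], xs[t]) for t in range(T))
theta_h += 1e-3 * sum(np.outer(eps_h[t] * fprime[t], v_h[t]) for t in range(T))

At the fixed point of the inner loop, the error units should approach the corresponding BPTT gradients, mirroring the equivalence result stated in the main text.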
The only remaining biological implausibility is the BPTT learning scheme.
Our RNN was trained on a simple character-level name-origin dataset which can be found here: https://download.pytorch.org/tutorial/data.zip. The RNN was presented with sequences of characters representing names and had to predict the national origin of the name – French, Spanish, Russian, etc. The characters were presented to the network as one-hot-encoded vectors without any embedding. The output categories were also presented as a one-hot vector. The RNN had a hidden size of 256 units. A tanh nonlinearity was used between hidden states and the output layer was linear. The network was trained on randomly selected name-category pairs from the dataset. The training loss for the predictive coding and backprop RNNs, averaged over 5 seeds, is presented below (Figure 7)." }, { "heading": "APPENDIX C: PREDICTIVE CODING LSTM IMPLEMENTATION DETAILS", "text": "Unlike the other two models, the LSTM possesses a complex and branching internal computation graph, and is thus a good opportunity to make explicit the predictive coding 'recipe' for approximating backprop on arbitrary computation graphs. The computation graph for a single LSTM cell is shown (with backprop updates) in Figure 8. Prediction for the LSTM occurs by simply rolling forward a copy of the LSTM cell for each timestep. The LSTM cell receives its hidden state h_t and cell state c_t from the previous timestep. During training we compute derivatives on the unrolled computation graph and receive backwards derivatives (or prediction errors) from the LSTM cell at time t+1.
The equations that specify the computation graph of the LSTM cell are as follows:
v_1 = h_t ⊕ x_t
v_2 = σ(θ_i v_1)
v_3 = c_t v_2
v_4 = σ(θ_inp v_1)
v_5 = tanh(θ_c v_1)
v_6 = v_4 v_5
v_7 = v_3 + v_6
v_8 = σ(θ_o v_1)
v_9 = tanh(v_7)
v_10 = v_8 v_9
y = σ(θ_y v_10)
The recipe to convert this computation graph into a predictive coding algorithm is straightforward. We first rewire the connectivity so that the predictions are set to the forward functions of their parents. We then compute the errors between the vertices and the predictions:
v̂_1 = h_t ⊕ x_t, v̂_2 = σ(θ_i v_1), v̂_3 = c_t v_2, v̂_4 = σ(θ_inp v_1), v̂_5 = tanh(θ_c v_1), v̂_6 = v_4 v_5, v̂_7 = v_3 + v_6, v̂_8 = σ(θ_o v_1), v̂_9 = tanh(v_7), v̂_10 = v_8 v_9, v̂_y = σ(θ_y v_10)
ϵ_1 = v_1 − v̂_1, ϵ_2 = v_2 − v̂_2, ϵ_3 = v_3 − v̂_3, ϵ_4 = v_4 − v̂_4, ϵ_5 = v_5 − v̂_5, ϵ_6 = v_6 − v̂_6, ϵ_7 = v_7 − v̂_7, ϵ_8 = v_8 − v̂_8, ϵ_9 = v_9 − v̂_9, ϵ_10 = v_10 − v̂_10
During inference, the inputs h_t, x_t and the output y_t are fixed. The vertices and then the prediction errors are updated according to Equation 2. This recipe is straightforward and can easily be extended to other more complex machine learning architectures. The full augmented computation graph, including the vertex update rules, is presented in Figure 9.
Empirically, we observed rapid convergence to the exact backprop gradients even in the case of very deep computation graphs (such as an unrolled LSTM with a sequence length of 100). Although convergence was slower than for CNNs or shorter sequence lengths, it was still straightforward to achieve convergence to the exact numerical gradients with sufficient iterations.
Below we plot the mean divergence between the predictive coding and true numerical gradients as a function of sequence length (and hence depth of graph) for a fixed computational budget of 200 iterations with an inference learning rate of 0.05. As can be seen, the divergence increases roughly linearly with sequence length.
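As an aside, to make the cell equations above concrete, here is a small NumPy sketch of the forward vertices and error units of one predictive coding LSTM cell. Shapes, names, and initialisations are our own illustrative assumptions rather than the authors' code; during inference the vertices v_1 ... v_10 would then be relaxed exactly as in Equation 2.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_vertices(h_t, x_t, c_t, params):
    # forward pass, returning every intermediate vertex v1..v10 and the output y
    th_i, th_inp, th_c, th_o, th_y = params
    v = [None] * 11
    v[1] = np.concatenate([h_t, x_t])  # v1 = h_t (+) x_t
    v[2] = sigmoid(th_i @ v[1])        # gate on the old cell state
    v[3] = c_t * v[2]
    v[4] = sigmoid(th_inp @ v[1])      # input gate
    v[5] = np.tanh(th_c @ v[1])        # candidate cell update
    v[6] = v[4] * v[5]
    v[7] = v[3] + v[6]                 # new cell state
    v[8] = sigmoid(th_o @ v[1])        # output gate
    v[9] = np.tanh(v[7])
    v[10] = v[8] * v[9]                # new hidden state
    y = sigmoid(th_y @ v[10])
    return v, y

rng = np.random.default_rng(0)
D_h, D_x = 4, 3
D = D_h + D_x
params = tuple(0.1 * rng.normal(size=s)
               for s in [(D_h, D), (D_h, D), (D_h, D), (D_h, D), (D_h, D_h)])
h0, x0, c0 = np.zeros(D_h), rng.normal(size=D_x), np.zeros(D_h)

v, y = lstm_vertices(h0, x0, c0, params)
v_hat, _ = lstm_vertices(h0, x0, c0, params)       # predictions from the parents
eps = {i: v[i] - v_hat[i] for i in range(1, 11)}   # zero until v is relaxed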
Importantly, even with long sequences, the divergence is not especially large, and can be decreased further by increasing the computational budget. As the increase is linear, we believe that predictive coding approaches should be scalable even for backpropagating through very deep and complex graphs.
We also plot the number of iterations required to reach a given convergence threshold (here taken to be 0.005) as a function of sequence length (Figure 11). We see that the number of iterations required increases sublinearly with the sequence length, and likely asymptotes at about 300 iterations. Although this is a lot of iterations, the sublinear growth nevertheless shows that the method can scale to even extremely deep graphs.
Our architecture consisted of a single LSTM layer (more complex architectures would consist of multiple stacked LSTM layers). The LSTM was trained on a character-level next-character prediction task. The dataset was the full works of Shakespeare, downloadable from Tensorflow. The text was shuffled and split into sequences of 50 characters, which were fed to the LSTM one character at a time. The LSTM was then trained to predict the next character, so as to ultimately be able to generate text. The characters were presented as one-hot-encoded vectors. The LSTM had a hidden size and a cell size of 1056 units. A minibatch size of 64 was used, and a weight learning rate of 0.0001 was used for both predictive coding and backprop networks. To achieve sufficient numerical convergence to the correct gradient, we used 200 variational iterations with an inference learning rate of 0.1. This rendered the predictive coding LSTM approximately 200x as costly as the backprop LSTM to run. A graph of the LSTM training loss for both predictive coding and backprop LSTMs, averaged over 5 random seeds, can be found below (Figure 12)." }, { "heading": "APPENDIX D: DERIVATION OF THE FREE ENERGY FUNCTIONAL", "text": "Here we derive in detail the form of the free-energy functional used in sections 2 and 4. We also expand upon the assumptions required and the precise form of the generative model and variational density. Much of this material is presented with considerably more detail in Buckley et al. (2017), and more approachably in Bogacz (2017).
Consider an arbitrary computation graph with vertices {y_i}, which we treat as random variables. Here we treat explicitly an important fact that we glossed over for notational convenience in the introduction: the v_i's which are optimized in the free-energy functional are technically the mean parameters of the variational density Q(y_i; v_i, σ_i) – i.e., they represent the mean (variational) belief of the value of the vertex. The vertex values in the model, which we here denote as {y_i}, are technically separate. However, due to our Gaussian assumptions, and the expectation under the variational density, in effect we end up replacing the y_i with the v_i and optimizing the v_i's, so in the interests of space and notational simplicity we began as if the v_i's were variables in the generative model, but they are not. They are parameters of the variational distribution.
Given an input y_0 and a target y_N (the multiple input and/or output case is a straightforward generalization), we wish to infer the posterior p(y_{1:N−1} | y_0, y_N). We approximate this intractable posterior with variational inference. Variational inference proceeds by defining an approximate posterior Q(y_{1:N−1}; φ) with some arbitrary parameters φ.
We then wish to minimize the KL divergence between the true and approximate posteriors:
argmin_φ KL[Q(y_{1:N−1}; φ) ‖ p(y_{1:N−1} | y_0, y_N)]
Although this KL is itself intractable, since it includes the intractable posterior, we can derive a tractable bound on it, called the variational free-energy:
KL[Q(y_{1:N−1}; φ) ‖ p(y_{1:N−1} | y_0, y_N)] = KL[Q(y_{1:N−1}; φ) ‖ p(y_{1:N−1}, y_0, y_N) / p(y_0, y_N)]
= KL[Q(y_{1:N−1}; φ) ‖ p(y_{1:N−1}, y_0, y_N)] + ln p(y_0, y_N)
⇒ −F := KL[Q(y_{1:N−1}; φ) ‖ p(y_{1:N−1}, y_0, y_N)] ≤ KL[Q(y_{1:N−1}; φ) ‖ p(y_{1:N−1} | y_0, y_N)]   (9)
We define the negative free-energy −F = KL[Q(y_{1:N−1}; φ) ‖ p(y_{1:N−1}, y_0, y_N)], which is a lower bound on the divergence between the true and approximate posteriors. By thus maximizing the negative free-energy (which is identical to the ELBO (Beal et al., 2003; Blei et al., 2017)), or equivalently minimizing the free-energy, we decrease this divergence and make the variational distribution a better approximation to the true posterior.
To proceed further, it is necessary to define an explicit form of the generative model p(y_0, y_{1:N−1}, y_N) and the approximate posterior Q(y_{1:N−1}; φ). In predictive coding, we define a hierarchical Gaussian generative model which mirrors the exact structure of the computation graph:
p(y_{0:N}) = N(y_0; ȳ_0, Σ_0) Π_{i=1}^{N} N(y_i; f(P(y_i); θ), Σ_i)
where essentially each vertex y_i is a Gaussian whose mean is a function of the predictions of all the parents of the vertex and the parameters θ of their edge-functions. ȳ_0 is effectively an "input-prior" which is set to 0 throughout and ignored. The output vertices y_N = T are set to the target T.
We also define the variational density to be Gaussian with means v_{1:N−1} and variances σ_{1:N−1}, but under a mean-field approximation, so that the approximation at each node is independent of all others (note the variational variance is denoted σ while the variance of the generative model is denoted Σ; the lower-case σ is not used to denote a scalar variable – both variances can be multivariate – but to distinguish between variational and generative variances):
Q(y_{1:N−1}; v_{1:N−1}, σ_{1:N−1}) = Π_{i=1}^{N−1} N(y_i; v_i, σ_i)
We can now express the free-energy functional concretely. First we decompose it as the sum of an energy and an entropy:
−F = KL[Q(y_{1:N−1}; v_{1:N−1}, σ_{1:N−1}) ‖ p(y_0, y_{1:N−1}, y_N)]
= −E_Q[ln p(y_0, y_{1:N−1}, y_N)] (Energy) + E_Q[ln Q(y_{1:N−1}; v_{1:N−1}, σ_{1:N−1})] (Entropy)
Then, taking the entropy term first, we can express it concretely in terms of normal distributions:
E_Q[ln Q(y_{1:N−1}; v_{1:N−1}, σ_{1:N−1})] = Σ_{i=1}^{N−1} E_{Q(y_i; v_i, σ_i)}[ln N(y_i; v_i, σ_i)]
= Σ_{i=1}^{N−1} E_{Q(y_i; v_i, σ_i)}[−(1/2) ln det(2πσ_i)] + E_{Q(y_i; v_i, σ_i)}[(y_i − v_i)^2 / (2σ_i)]
= Σ_{i=1}^{N−1} −(1/2) ln det(2πσ_i) + σ_i/(2σ_i)
= N/2 + Σ_{i=1}^{N−1} −(1/2) ln det(2πσ_i)
The entropy of a multivariate Gaussian has a simple analytical form depending only on the variance. Next we turn to the energy term, which is more complex. To derive a clean analytical result, we must make a further assumption, the Laplace approximation, which requires the variational density to be tightly peaked around the mean, so that the only non-negligible contribution to the expectation is from regions around the mean. This means that we can successfully approximate the approximate posterior with a second-order Taylor expansion around the mean. From the first line onwards we ignore the ln p(y_0) and ln p(y_N | P(y_N)) terms, which lie outside the expectation.
E_Q[ln p(y_{0:N})] = ln p(y_0) + ln p(y_N | P(y_N)) + Σ_{i=1}^{N−1} E_{Q(y_i; v_i, σ_i)}[ln p(y_i | P(y_i))]
= Σ_{i=1}^{N} E_Q[ln p(v_i | P(y_i))] + E_Q[(∂ ln p(y_i | P(y_i))/∂y_i)(v_i − y_i)] + E_Q[(∂^2 ln p(v_i | P(y_i))/∂y_i^2)(v_i − y_i)^2]
= Σ_{i=1}^{N} ln p(v_i | P(y_i)) + (∂^2 ln p(v_i | P(y_i))/∂y_i^2) σ_i
where the second term in the Taylor expansion evaluates to 0 since E_Q[y_i − v_i] = (v_i − v_i) = 0, and the third term contains the expression for the variance E_Q[(y_i − v_i)^2] = σ_i. We can then write out the full Laplace-encoded free-energy as:
−F = Σ_{i=1}^{N} [ ln p(v_i | P(y_i)) + (∂^2 ln p(v_i | P(y_i))/∂y_i^2) σ_i − (1/2) ln det(2πσ_i) ]
We wish to minimize F with respect to the variational parameters v_i and σ_i. There is in fact a closed-form expression for the optimal variational variance, which can be obtained simply by differentiating and setting the derivative to 0:
∂F/∂σ_i = ∂^2 ln p(v_i | P(y_i))/∂y_i^2 − σ_i^{-1}
∂F/∂σ_i = 0 ⇒ σ_i* = (∂^2 ln p(v_i | P(y_i))/∂y_i^2)^{-1}
Because of this analytical result for the variational variance, we do not need to consider it further in the optimisation problem, and only consider minimizing the variational means v_i. This renders all the terms in the free-energy except the ln p(v_i | P(y_i)) terms constant with respect to the variational parameters. This allows us to write:
−F ≈ ln p(y_N | P(y_N)) + Σ_{i=1}^{N} ln p(v_i | P(y_i))   (10)
as presented in section 2. The first term ln p(y_N | P(y_N)) is effectively the loss at the output (y_N = T), so it becomes an additional prediction error ln p(y_N | P(y_N)) ∝ (T − v̂_N)^T Σ_N^{-1} (T − v̂_N) which can be absorbed into the sum over the other prediction errors. Crucially, although the variational variances have an analytical form, the variances of the generative model (the precisions Σ_i) do not, and can be optimised directly to improve the log model-evidence. These precisions allow for a kind of 'uncertainty-aware' backprop." }, { "heading": "DERIVATION OF VARIATIONAL UPDATE RULES AND FIXED POINTS", "text": "Here, starting from Equation 10, we show how to obtain the variational update rule for the v_i's (Equation 2) and the fixed-point equations (Equation 5) (Friston, 2008; 2005; Bogacz, 2017). We first reduce the free-energy to a sum of prediction errors:
−F ≈ Σ_{i=1}^{N} ln p(v_i | P(v_i))
≈ Σ_{i=1}^{N} (v_i − f(P(v_i)))^T Σ_i^{-1} (v_i − f(P(v_i))) + ln 2πΣ_i^{-1}
= Σ_{i=1}^{N} ϵ_i^T ϵ_i + ln 2πΣ_i^{-1}
where ϵ_i = v_i − f(P(v_i)), and we have utilized the assumption made in section 2 that Σ^{-1} = I. By setting all precisions to the identity, we are implicitly assuming that all datapoints and vertices of the computational graph have equal variance. Next we assume that the dynamics of each vertex v_i follow a gradient descent on the free-energy:
−dv_i/dt = ∂F/∂v_i = ∂/∂v_i [ Σ_{j=1}^{N} ϵ_j^T ϵ_j ]
= ϵ_i ∂ϵ_i/∂v_i + Σ_{j∈C(v_i)} ϵ_j ∂ϵ_j/∂v_i
= ϵ_i − Σ_{j∈C(v_i)} ϵ_j ∂v̂_j/∂v_i
where we have used the fact that ∂ϵ_i/∂v_i = 1 and ∂ϵ_j/∂v_i = −∂v̂_j/∂v_i. To obtain the fixed point of the dynamics, we simply solve for dv_i/dt = 0:
dv_i/dt = ∂F/∂v_i = 0
⇒ 0 = ϵ_i − Σ_{j∈C(v_i)} ϵ_j ∂v̂_j/∂v_i
⇒ ϵ_i* = Σ_{j∈C(v_i)} ϵ_j ∂v̂_j*/∂v_i*
Similarly, since ϵ_i* = v_i* − v̂_i*, we have v_i* = ϵ_i* + v̂_i*. So:
v_i* = ϵ_i* + v̂_i* = v̂_i* + Σ_{j∈C(v_i)} ϵ_j ∂v̂_j*/∂v_i*" } ]
2020
null
SP:9e6b5b7d9e7459c015130f4b80f7bc75424de050
[ "This paper proposes a simple scheme for training with multiple augmentations of training data in one iteration and reweighting the instances by their relative loss. As authors note in their related works, the idea of reweighting examples based on their relative loss has been widely studied in a variety of machine learning problems. In contrast, this work proposes the reweighting only within augmentations of a single sample. They derive their particular reweighting scheme by proposing an alternative risk (Eq. 3). The new objective is a function of both model parameters and the distribution of augmentations. They propose to find the model parameters that minimize the alternative risk for the hardest distribution of augmentations that maximizes their alternative risk (Eq. 4). Then they consider the distribution of augmentations that are a function of model parameters and input and show that for fixed model parameters, the optimal distribution on a fixed finite set of augmentations is determined by the softmax on the loss of the model for each augmented input. In section 3.2, they propose two variations of their loss using the ground-truth label to evaluate the loss (hard loss) versus using the prediction of the model for the original raw input (soft loss). In section 3.3, they propose specific considerations for augmenting text data. They provide experiments on image and text data with ablations studies." ]
Data augmentation is an effective technique to improve the generalization of deep neural networks. However, previous data augmentation methods usually treat the augmented samples equally, without considering their individual impacts on the model. To address this, we propose to assign different weights to the augmented samples generated from the same training example. We construct the maximal expected loss, which is the supremum over any reweighted loss on augmented samples. Inspired by adversarial training, we minimize this maximal expected loss (MMEL) and obtain a simple and interpretable closed-form solution: more attention should be paid to augmented samples with large loss values (i.e., harder examples). Minimizing this maximal expected loss enables the model to perform well under any reweighting strategy. The proposed method can generally be applied on top of any data augmentation method. Experiments are conducted on both natural language understanding tasks with token-level data augmentation, and image classification tasks with commonly-used image augmentation techniques like random crop and horizontal flip. Empirical results show that the proposed method improves the generalization performance of the model.
[ { "affiliations": [], "name": "ING THE" }, { "affiliations": [], "name": "MAXIMAL EXPECTED LOSS" }, { "affiliations": [], "name": "Mingyang Yi" }, { "affiliations": [], "name": "Lu Hou" }, { "affiliations": [], "name": "Lifeng Shang" }, { "affiliations": [], "name": "Xin Jiang" }, { "affiliations": [], "name": "Qun Liu" }, { "affiliations": [], "name": "Zhi-Ming Ma" } ]
[ { "authors": [ "S. Behpour", "K. Kitani", "B. Ziebart" ], "title": "Ada: Adversarial data augmentation for object detection", "venue": "In IEEE Winter Conference on Applications of Computer Vision,", "year": 2019 }, { "authors": [ "Y. Bengio", "J. Louradour", "R. Collobert", "J. Weston" ], "title": "Curriculum learning", "venue": "In International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Y. Cheng", "L. Jiang", "W. Macherey" ], "title": "Robust neural machine translation with doubly adversarial inputs", "venue": "In Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Y. Cheng", "L. Jiang", "W. Macherey", "J. Eisenstein" ], "title": "Advaug: Robust adversarial augmentation for neural machine translation", "venue": "In Annual Conference of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "D. Csiba", "P. Richtárik" ], "title": "Importance sampling for minibatches", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "E.D. Cubuk", "B. Zoph", "D. Mane", "V. Vasudevan", "Q.V. Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "Preprint arXiv:1805.09501,", "year": 2018 }, { "authors": [ "A.P. Dempster", "N.M. Laird", "D.B. Rubin" ], "title": "Maximum likelihood from incomplete data via the em algorithm", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1977 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L. Li", "K. Li", "L. Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "J. Devlin", "M. Chang", "K. Lee", "K. Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In North American Chapter of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "T. DeVries", "G.W. Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "Preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Y. Freund", "R. Schapire" ], "title": "A short introduction to boosting", "venue": "Journal-Japanese Society For Artificial Intelligence,", "year": 1999 }, { "authors": [ "P. Goyal", "K. He" ], "title": "Focal loss for dense object detection", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "H. Guo", "Y. Mao", "R. Zhang" ], "title": "Augmenting data with mixup for sentence classification: An empirical study", "venue": "Preprint arXiv:1905.08941,", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "L. Jiang", "D. Meng", "S. Yu", "Z. Lan", "S. Shan", "A. Hauptmann" ], "title": "Self-paced learning with diversity", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "L. Jiang", "Z. Zhou", "T. Leung", "L. Li", "F. Li" ], "title": "Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "L. Jiang", "D. Huang", "M. Liu", "W. 
Yang" ], "title": "Beyond synthetic noise: Deep learning on controlled noisy labels", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "H.P. Jiang", "He", "W. Chen", "X. Liu", "J. Gao", "T. Zhao" ], "title": "Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization", "venue": "In Annual Conference of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "X. Jiao", "Y. Yin", "L. Shang", "X. Jiang", "X. Chen", "L. Li", "F. Wang", "Q. Liu" ], "title": "Tinybert: Distilling bert for natural language understanding", "venue": null, "year": 1909 }, { "authors": [ "A. Katharopoulos", "F. Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "A. Krizhevsky", "V. Nair", "G. Hinton" ], "title": "The cifar-10 dataset", "venue": "online: http://www. cs. toronto. edu/kriz/cifar. html,", "year": 2014 }, { "authors": [ "V. Kumar", "A. Choudhary", "E. Cho" ], "title": "Data augmentation using pre-trained transformer models", "venue": "Preprint arXiv:2003.02245,", "year": 2020 }, { "authors": [ "J. Li", "B. Ziebart", "B. Berger-Wolf" ], "title": "A game-theoretic adversarial approach to dynamic network prediction", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2018 }, { "authors": [ "S. Lim", "I. Kim", "T. Kim", "C. Kim", "S. Kim" ], "title": "Fast autoaugment", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "T. Lin", "P. Goyal", "R. Girshick", "K. He", "P. Dollár" ], "title": "Focal loss for dense object detection", "venue": "In IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "I. Loshchilov", "F. Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "T. Malisiewicz", "A. Gupta", "A.A. Efros" ], "title": "Ensemble of exemplar-svms for object detection and beyond", "venue": "In International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "J. Martens" ], "title": "New insights and perspectives on the natural gradient method", "venue": "Preprint arXiv:1412.1193,", "year": 2019 }, { "authors": [ "D. Needell", "R. Ward", "N. Srebro" ], "title": "Stochastic gradient descent, weighted sampling, and the randomized kaczmarz algorithm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "D.S. Park", "W. Chan", "Y. Zhang", "C. Chiu", "B. Zoph", "E.D. Cubuk", "Q.V. Le" ], "title": "Specaugment: A simple data augmentation method for automatic speech recognition", "venue": null, "year": 2019 }, { "authors": [ "A. Radford", "J. Wu", "R. Child", "D. Luan", "D. Amodei", "I. Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "A. 
Raghunathan", "S.M. Xie", "F. Yang", "J. C Duchi", "P. Liang" ], "title": "Adversarial training can hurt generalization", "venue": "Preprint arXiv:1906.06032,", "year": 2019 }, { "authors": [ "M. Ren", "W. Zeng", "B. Yang", "R. Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "J. Shu", "Q. Xie", "L. Yi", "Q. Zhao", "S. Zhou", "Z. Xu", "D. Meng" ], "title": "Meta-weight-net: Learning an explicit mapping for sample weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "I. Sutskever", "O. Vinyals", "Q.V. Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "T. Tran", "T. Pham", "G. Carneiro", "L. Palmer", "I. Reid" ], "title": "A bayesian data augmentation approach for learning deep models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A. N Gomez", "Ł. Kaiser", "I. Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A. Wang", "A. Singh", "J. Michael", "F. Hill", "O. Levy", "S.R. Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "W.Y.D. Wang", "Yang" ], "title": "That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using# petpeeve tweets", "venue": "In Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "J. Wei", "K. Zou" ], "title": "Eda: Easy data augmentation techniques for boosting performance on text classification tasks", "venue": "In Conference on Empirical Methods in Natural Language Processing,", "year": 2019 }, { "authors": [ "Q. Xie", "Z. Dai", "E. Hovy", "M. Luong", "Q.V. Le" ], "title": "Unsupervised data augmentation for consistency training", "venue": "Preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "Z. Xie", "S.I. Wang", "J. Li", "D. Lévy", "A. Nie", "D. Jurafsky", "A.Y. Ng" ], "title": "Data noising as smoothing in neural network language models", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Y. Yang", "L. Huang", "M. Ma" ], "title": "Breaking the beam search curse: A study of (re-) scoring methods and stopping criteria for neural machine translation", "venue": "In Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "H. Zhang", "M. Cisse", "Y.N. Dauphin", "D. Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "X. Zhang", "J. Zhao", "Y. LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "P. Zhao", "T. Zhang" ], "title": "Accelerating minibatch stochastic gradient descent using stratified sampling", "venue": null, "year": 2014 }, { "authors": [ "C. Zhu", "Y. Cheng", "Z. Gan", "S. Sun", "T. Goldstein", "J. 
Liu" ], "title": "Freelb: Enhanced adversarial training for natural language understanding", "venue": "In International Conference on Learning Representations,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have achieved state-of-the-art results in various tasks in natural language processing (NLP) tasks (Sutskever et al., 2014; Vaswani et al., 2017; Devlin et al., 2019) and computer vision (CV) tasks (He et al., 2016; Goodfellow et al., 2016). One approach to improve the generalization performance of deep neural networks is data augmentation (Xie et al., 2019; Jiao et al., 2019; Cheng et al., 2019; 2020). However, there are some problems if we directly incorporate these augmented samples into the training set. Minimizing the average loss on all these samples means treating them equally, without considering their different implicit impacts on the loss.\nTo address this, we propose to minimize a reweighted loss on these augmented samples to make the model utilize them in a cleverer way. Example reweighting has previously been explored extensively in curriculum learning (Bengio et al., 2009; Jiang et al., 2014), boosting algorithms (Freund & Schapire, 1999), focal loss (Lin et al., 2017) and importance sampling (Csiba & Richtárik, 2018). However, none of them focus on the reweighting of augmented samples instead of the original training samples. A recent work (Jiang et al., 2020a) also assigns different weights on augmented samples. But weights in their model are predicted by a mentor network while we obtain the weights from the closed-form solution by minimizing the maximal expected loss (MMEL). In addition, they focus on image samples with noisy labels, while our method can generally be applied to also textual data as well as image data. Tran et al. (2017) propose to minimize the loss on the augmented samples under the framework of Expectation-Maximization algorithm. But they mainly focus on the generation of augmented samples.\n∗This work is done when Mingyang Yi is an intern at Huawei Noah’s Ark Lab.\nUnfortunately, in practise there is no way to directly access the optimal reweighting strategy. Thus, inspired by adversarial training (Madry et al., 2018), we propose to minimize the maximal expected loss (MMEL) on augmented samples from the same training example. Since the maximal expected loss is the supremum over any possible reweighting strategy on augmented samples’ losses, minimizing this supremum makes the model perform well under any reweighting strategy. More importantly, we derive a closed-form solution of the weights, where augmented samples with larger training losses have larger weights. Intuitively, MMEL allows the model to keep focusing on augmented samples that are harder to train.\nThe procedure of our method is summarized as follows. We first generate the augmented samples with commonly-used data augmentation technique, e.g., lexical substitution for textual input (Jiao et al., 2019), random crop and horizontal flip for image data (Krizhevsky et al., 2012). Then we explicitly derive the closed-form solution of the weights on each of the augmented samples. After that, we update the model parameters with respect to the reweighted loss. The proposed method can generally be applied above any data augmentation methods in various domains like natural language processing and computer vision. Empirical results on both natural language understanding tasks and image classification tasks show that the proposed reweighting strategy consistently outperforms the counterpart of without using it, as well as other reweighting strategies like uniform reweighting." }, { "heading": "2 RELATED WORK", "text": "Data augmentation. 
Data augmentation has proven to be an effective technique for improving generalization on various tasks, e.g., natural language processing (Xie et al., 2019; Zhu et al., 2020; Jiao et al., 2019), computer vision (Krizhevsky et al., 2014), and speech recognition (Park et al., 2019). For image data, baseline augmentation methods like random crop, flip, scaling, and color augmentation (Krizhevsky et al., 2012) have been widely used. Other heuristic data augmentation techniques like Cutout (DeVries & Taylor, 2017), which masks image patches, and Mixup (Zhang et al., 2018), which combines pairs of examples and their labels, were later proposed. Automatically searching for augmentation policies (Cubuk et al., 2018; Lim et al., 2019) has recently been proposed to further improve performance. For textual data, Zhang et al. (2015), Wei & Zou (2019) and Wang (2015) use lexical substitution based on the embedding space. Jiao et al. (2019); Cheng et al. (2019); Kumar et al. (2020) generate augmented samples with a pre-trained language model. Some other techniques like back translation (Xie et al., 2019), random noise injection (Xie et al., 2017) and data mixup (Guo et al., 2019; Cheng et al., 2020) have also proven useful.
Adversarial training. Adversarial learning is used to enhance the robustness of models (Madry et al., 2018) by dynamically constructing augmented adversarial samples with projected gradient descent across training. Although adversarial training hurts generalization on the task of image classification (Raghunathan et al., 2019), it has been shown that adversarial training can be used as data augmentation to help generalization in neural machine translation (Cheng et al., 2019; 2020) and natural language understanding (Zhu et al., 2020; Jiang et al., 2020b). Our proposed method differs from adversarial training in that we adversarially decide the weight on each augmented sample, while traditional adversarial training adversarially generates augmented input samples.
In (Behpour et al., 2019), adversarial learning is used as data augmentation in object detection. The adversarial samples (i.e., bounding boxes that are maximally different from the ground truth) are reweighted to form the underlying annotation distribution. However, besides the difference in the model and task, their training objective and the resultant solution are also different from ours.
Sample reweighting. Minimizing a reweighted loss on training samples has been widely explored in the literature. Curriculum learning (Bengio et al., 2009; Jiang et al., 2014) feeds first easier and then harder data into the model to accelerate training. Zhao & Zhang (2014); Needell et al. (2014); Csiba & Richtárik (2018); Katharopoulos & Fleuret (2018) use importance sampling to reduce the variance of stochastic gradients and achieve a faster convergence rate. Boosting algorithms (Freund & Schapire, 1999) choose harder examples to train subsequent classifiers. Similarly, hard example mining (Malisiewicz et al., 2011) downsamples the majority class and exploits the most difficult examples. Focal loss (Lin et al., 2017; Goyal & He, 2018) focuses on harder examples by reshaping the standard cross-entropy loss in object detection. Ren et al. (2018); Jiang et al. (2018); Shu et al. (2019) use meta-learning methods to reweight examples to handle the noisy-label problem. Unlike all these existing methods, in this work, we reweight the augmented samples' losses instead of the training samples' losses."
}, { "heading": "3 MINIMIZE THE MAXIMAL EXPECTED LOSS", "text": "In this section, we derive our reweighting strategy on augmented samples from the perspective of maximal expected loss. We first give a derivation of the closed-form solution of the weights on augmented samples. Then we describe two kinds of loss under this formulation. Finally, we give the implementation details using the natural language understanding task as an example." }, { "heading": "3.1 WHY MAXIMAL EXPECTED LOSS", "text": "Consider a classification task with N training samples. For the i-th training sample xi, its label is denoted as yxi . Let fθ(·) be the model with parameter θ which outputs the classification probabilities. `(·, ·) denotes the loss function, e.g. the cross-entropy loss between outputs fθ(xi) and the groundtruth label yxi . Given an original training sample xi, the set of augmented samples generated by some method isB(xi). Without loss of generality, we assume xi ∈ B(xi). The conventional training objective is to minimize the loss on every augmented sample z in B(xi) as\nmin θ\n1\nN N∑ i=1 1 |B(xi)| ∑ (z,yz)∈B(xi) `(fθ(z), yz) , (1) where yz is the label of z ∈ B(xi), and can be different with yxi . |B(xi)| is the number of augmented samples in B(xi), which is assumed to be finite.\nIn equation (1), for each given xi, the weights on its augmented samples are the same (i.e., 1/|B(xi)|). However, different samples have different implicit impacts on the loss, and we can assign different weights on them to facilitate training. Note that computing the weighted sum of losses of each augmented sample in B(xi) can be viewed as taking expectation of loss on augmented samples z ∈ B(xi) under a certain distribution. When the augmented samples generated from the same training sample are drawn from a uniform distribution, the loss in equation (1) can be rewritten as\nmin θ Rθ(PU ) = min θ\n1\nN N∑ i=1 [ Ez∼PU (·|xi) [`(fθ(z), yz)]− λPKL(PU (· | xi) ‖ PU (· | xi)) ] , (2)\nwhere the Kullback–Leibler (KL) divergence KL(PU (· | xi) ‖ PU (· | xi)) equals zero. Here PU (· | xi) denotes the uniform distribution on B(xi). When the augmented samples are drawn from a more general distribution PB(· | ·)1 instead of the uniform distribution, we can generalize PU (· | ·) here to some other conditional distribution PB .\nmin θ Rθ(PB) = min θ\n1\nN N∑ i=1 [ Ez∼PB(·|xi) [`(fθ(z), yz)]− λPKL(PB(· | xi) ‖ PU (· | xi)) ] . (3)\nRemark 1. When PB(· | xi) reduces to the uniform distribution PU (· | xi) for any xi, since KL(PU (· | xi) ‖ PU (· | xi)) = 0, the objective in equation (3) reduces to the one in equation (1).\nThe KL divergence term in equation (3) is used as a regularizer to encourage PB close to PU (see Remark 2). From equation (3), the conditional distribution PB determines the weights of each augmented sample in B(xi). There may exist an optimal formulation of PB in some regime, e.g. corresponding to the optimal generalization ability of model. Unfortunately, we can not explicitly characterize such an unknown optimal PB . To address this, we borrow the idea from adversarial training (Madry et al., 2018) and minimize the maximal reweighted loss on augmented samples. Then, the model is guaranteed to perform well under any reweighting strategy, including the underlying optimal one. Specifically, let the conditional distribution PB be P∗θ = arg supPB Rθ(PB). Our objective is to minimize the following reweighted loss\nmin θ Rθ(P∗θ) = min\nθ sup PB\nRθ(PB). 
The following Remark 2 discusses the KL divergence term in equation (3). ¹In the following, we simplify P_B(·|·) as P_B when there is no ambiguity.
Remark 2. Since we take a supremum over P_B in equation (4), the regularizer KL(P_B ‖ P_U) encourages P_B to be close to P_U, because it reaches its minimal value of zero when P_B = P_U. Thus the regularizer controls the diversity among the augmented samples by constraining the discrepancy between P_B and the uniform distribution P_U; e.g., a larger λ_P promotes a larger diversity among the augmented samples.
The following Theorem 1 gives the explicit formulation of R_θ(P*_θ).
Theorem 1. Let R_θ(P_B) and R_θ(P*_θ) be defined as in equations (3) and (4). Then we have
R_θ(P*_θ) = (1/N) Σ_{i=1}^{N} Σ_{z∈B(x_i)} [ P*_θ(z|x_i) ℓ(f_θ(z), y_z) − λ_P P*_θ(z|x_i) log(|B(x_i)| P*_θ(z|x_i)) ],   (5)
where
P*_θ(z|x_i) = exp(ℓ(f_θ(z), y_z)/λ_P) / Σ_{z'∈B(x_i)} exp(ℓ(f_θ(z'), y_{z'})/λ_P) = Softmax_z((1/λ_P) ℓ(f_θ(B(x_i)), y_{B(x_i)})),   (6)
where Softmax_z((1/λ_P) ℓ(f_θ(B(x_i)), y_{B(x_i)})) denotes the output probability for z of the softmax over the vector ((1/λ_P) ℓ(f_θ(z_1), y_{z_1}), ..., (1/λ_P) ℓ(f_θ(z_{|B(x_i)|}), y_{z_{|B(x_i)|}})).
Remark 3. If we ignore the KL divergence term in equation (3), then, due to the equivalence of minimizing the cross-entropy loss and the MLE loss (Martens, 2019), the proposed MMEL also falls into the generalized Expectation-Maximization (GEM) framework (Dempster et al., 1977). Specifically, given a training example, its augmented samples can be viewed as latent variables, and any reweighting on these augmented samples corresponds to a specific conditional distribution of these augmented samples given the training sample. In the expectation step (E-step), we explicitly derive the closed-form solution of the weights on each of these augmented samples according to (6). In the maximization step, since there is no analytical solution for deep neural networks, following (Tran et al., 2017), we update the model parameters with respect to the reweighted loss by one step of gradient descent.
The proof of this theorem can be found in Appendix A. From Theorem 1, the loss of each augmented sample z ∈ B(x_i) decides its weight, and the weight is normalized by a Softmax over all augmented samples in B(x_i). The reweighting strategy allows more attention to be paid to augmented samples with higher loss values. The strategy is similar to those in (Lin et al., 2017; Zhao & Zhang, 2014), but they apply it to training samples." }, { "heading": "3.2 TWO TYPES OF LOSS", "text": "For an augmented sample z ∈ B(x_i), instead of computing the discrepancy between the output probability f_θ(z) and the hard label y_z as in equation (5), one can also compute the discrepancy between f_θ(z) and the “soft” probability f_θ(x_i), in the absence of ground-truth labels on augmented samples, as in (Xie et al., 2019). In the following, we use the superscript “hard” for the loss in equation (5):
R^hard_θ(P*_θ, x_i) = Σ_{z∈B(x_i)} [ P*_θ(z|x_i) ℓ(f_θ(z), y_z) − λ_P P*_θ(z|x_i) log(|B(x_i)| P*_θ(z|x_i)) ],   (7)
to distinguish it from the following objective, which uses the “soft” probability:
R^soft_θ(P*_θ, x_i) = ℓ(f_θ(x_i), y_{x_i}) + λ_T Σ_{z∈B(x_i); z≠x_i} [ P*_θ(z|x_i) ℓ(f_θ(z), f_θ(x_i)) − λ_P P*_θ(z|x_i) log((|B(x_i)| − 1) P*_θ(z|x_i)) ].   (8)
The two terms in R^soft_θ(P*_θ, x_i) respectively correspond to the loss on the original training sample x_i and the reweighted loss on the augmented samples. The reweighted loss promotes a small discrepancy between the augmented samples and the original training sample.
λT > 0 is the coefficient used to balance the two loss terms, and P∗θ(z | xi) is defined similar to (6) as\nP∗θ(z | xi) = exp\n( 1 λP `(fθ(z), fθ(xi)) ) ∑\nz∈B(xi);z 6=xi exp ( 1 λP `(fθ(z), fθ(xi)) ) . (9)\nAlgorithm 1 Minimize the Maximal Expected Loss (MMEL) Input: Training set {(x1, yx1), · · · , (xN , yxN )}, batch size S, learning rate η, number of training iterations T , Rθ equals Rhardθ or R soft θ .\n1: for i in {1, 2, · · · , N} do . generate augmented samples 2: Generating B(xi) using some data augmentation method. 3: end for 4: for t = 1, · · · , T do . minimize the maximal expected loss 5: Randomly sample a mini-batch S = {(xi1 , yxi1 ), · · · , (xiS , yxiS )} from training set. 6: Fetch the augmented samples B(xi1), B(xi2), · · · , B(xiS ). 7: Compute P∗θ according to (6) or (9). 8: Update model parameters θt+1 = θt − ηS ∑ x∈S ∇θRθ(P∗θ,x). 9: end for\nThe two losses are shown in Figure 1. Summing over all the training samples, we get the two kinds of reweighted training objectives.\nRemark 4. The proposed MMEL-S tries to reduce the discrepancy between fθ(z) and fθ(xi) for z ∈ B(xi). However, if the prediction fθ(xi) is inaccurate, such misleading supervision for z may lead to the degraded performance of MMEL-S. More details are in Appendix B." }, { "heading": "3.3 EXAMPLE: MMEL IMPLEMENTATION ON NATURAL LANGUAGE UNDERSTANDING TASKS", "text": "In this section, we elaborate on implementing the proposed method using textual data in natural language understanding tasks as an example. Our method is separated into two phases. In the first phase, we generate augmented samples. Then in the second phase, with these augmented samples, we update the model parameters under these augmented samples with respect to the hard reweighted loss (7) or the soft counterpart (8). The generation and training procedure can be decoupled, and the augmented samples are offline generated in the first phase by only once. On the other hand, in the second phase, since we have the explicit solution of weights on augmented samples and the multiple forward and backward passes on these augmented samples can be computed in parallel, the whole training time is similar to the regular training counterpart for an appropriate number of augmented samples. The whole training process is shown in Algorithm 1.\nGeneration of Textual Augmented Data. Various methods have been proposed to generate augmented samples for textual data. Recently, large-scale pre-trained language models like BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) learn contextualized representations and have been used widely in generating high-quality augmented sentences (Jiao et al., 2019; Kumar et al., 2020). In this paper, we use a pre-trained BERT trained from masked language modeling to generate augmented samples. For each original input sentence, we randomly mask k tokens. Then we do a forward propagation of the BERT to predict the tokens in those masked positions by greedy search. Details can be found in Algorithm 2 in Appendix C.\nMismatching Label. For Rhardθ in equation (7), the loss term `(fθ(z), yz) on augmented sample z ∈ B(xi) for some xi relies on its label yz . Unlike image data, where conventional augmentation methods like random crop and horizontal flip of an image do not change its label, substituting even one word in a sentence can drastically change its meaning. For instance, suppose the original sentence is “She is my daughter”, and the word “She” is masked. 
}, { "heading": "3.3 EXAMPLE: MMEL IMPLEMENTATION ON NATURAL LANGUAGE UNDERSTANDING TASKS", "text": "In this section, we illustrate the implementation of the proposed method on textual data in natural language understanding tasks. Our method proceeds in two phases. In the first phase, we generate the augmented samples. In the second phase, with these augmented samples, we update the model parameters with respect to the hard reweighted loss (7) or its soft counterpart (8). The generation and training procedures can be decoupled: the augmented samples are generated offline in the first phase, only once. In the second phase, since we have the explicit solution for the weights on the augmented samples, and the multiple forward and backward passes over these augmented samples can be computed in parallel, the whole training time is similar to that of regular training for an appropriate number of augmented samples. The whole training process is shown in Algorithm 1.\nGeneration of Textual Augmented Data. Various methods have been proposed to generate augmented samples for textual data. Recently, large-scale pre-trained language models like BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), which learn contextualized representations, have been widely used to generate high-quality augmented sentences (Jiao et al., 2019; Kumar et al., 2020). In this paper, we use a BERT pre-trained with masked language modeling to generate augmented samples. For each original input sentence, we randomly mask k tokens and then do a forward pass of BERT to predict the tokens at the masked positions by greedy search. Details can be found in Algorithm 2 in Appendix C.\nMismatching Label. For R^{hard}_θ in equation (7), the loss term ℓ(fθ(z), yz) on an augmented sample z ∈ B(xi) relies on its label yz. Unlike image data, where conventional augmentation methods like random crop and horizontal flip do not change an image's label, substituting even one word in a sentence can drastically change its meaning. For instance, suppose the original sentence is “She is my daughter” and the word “She” is masked. The top five words predicted by the pre-trained BERT are “This, She, That, It, He”. For the linguistic acceptability task, replacing “She” with “He” can change the label from linguistically “acceptable” to “unacceptable”. Thus, for textual input, instead of directly setting yz to yxi in the term ℓ(fθ(z), yz) of the hard loss (7) (Zhu et al., 2020), we replace yz with the output probability of a trained teacher model. On the other hand, for the soft loss in equation (8), if an augmented sample z ∈ B(xi) is predicted by the teacher model to belong to a different class than xi, it is unreasonable to still minimize the discrepancy between fθ(z) and fθ(xi). In this case, we replace fθ(xi) in the loss term λT Σ_{z∈B(xi), z≠xi} P∗θ(z | xi) ℓ(fθ(z), fθ(xi)) with the output probability from the teacher model." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the efficacy of the proposed MMEL algorithm with both the hard loss (MMEL-H) and the soft loss (MMEL-S). Experiments are conducted on the image classification tasks CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2014) with ResNet models (He et al., 2016), and on the General Language Understanding Evaluation (GLUE) tasks (Wang et al., 2019) with the BERT model (Devlin et al., 2019)." }, { "heading": "4.1 EXPERIMENTS ON IMAGE CLASSIFICATION TASKS", "text": "Data. CIFAR (Krizhevsky et al., 2014) is a benchmark dataset for image classification. We use both CIFAR-10 and CIFAR-100 in our experiments; both consist of color images, with 50,000 training samples and 10,000 validation samples, drawn from 10 and 100 object classes, respectively.\nSetup. The models we use are ResNets (He et al., 2016) of different depths. We use random crop and horizontal flip (Krizhevsky et al., 2012) to augment the original training images. Since these operations do not change the label, we directly adopt the original training sample's label for all of its augmented samples. Following (He et al., 2016), we use SGD with momentum to train each model for 200 epochs. The learning rate starts from 0.1 and decays by a factor of 0.2 at epochs 60, 120 and 160. The batch size is 128, and the weight decay is 5e-4. For each xi, |B(xi)| = 10. The KL regularization coefficient λP is 1.0 for both MMEL-H and MMEL-S. The λT in equation (8) for MMEL-S is selected from {0.5, 1.0, 2.0}.
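For concreteness, this augmentation setup can be reproduced with standard torchvision transforms, as in the sketch below; the crop padding of 4 is a common convention we assume here rather than a detail reported in this section.

```python
# Sketch of the CIFAR augmentation above: random crop + horizontal flip.
# The crop padding of 4 is an assumed convention, not specified in the paper.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

def build_bank(image, m=10):
    """Return B(x): m augmented views of one PIL image, sharing x's label."""
    return [augment(image) for _ in range(m)]
```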
We compare our proposed MMEL with conventional training with data augmentation (abbreviated as “Baseline(DA)”) under the same number of epochs. Though MMEL can be computed efficiently in parallel, it processes |B(xi)| = 10 times more training data. For a fair comparison, we also compare with two other baselines that use 10 times more data: (i) naive training with data augmentation but with 10 times more training epochs than MMEL (abbreviated as “Baseline(DA+Long)”); in this case, the learning rate accordingly decays at epochs 600, 1200 and 1600; and (ii) training with data augmentation under the MMEL framework but with uniform weights on the augmented samples (abbreviated as “Baseline(DA+UNI)”).\nMain Results. The results are shown in Table 1. As can be seen, for both CIFAR-10 and CIFAR-100, MMEL-H and MMEL-S significantly outperform Baseline(DA), with over 0.5 points higher accuracy on all four architectures. Compared to Baseline(DA+Long), the proposed MMEL-H and MMEL-S also achieve comparable or better performance while being much more efficient in training. This is because our backward pass only computes the gradient of the weighted loss instead of the separate loss of each example. Compared to Baseline(DA+UNI), which has the same computational cost as MMEL-H and MMEL-S, the proposed methods also perform better. This indicates the efficacy of the proposed maximal-expected-loss-based reweighting strategy.\nWe further evaluate the proposed method on the large-scale dataset ImageNet (Deng et al., 2009); detailed results are in Appendix B.\nVarying the Number of Augmented Samples. One hyperparameter of the proposed method is the number of augmented samples |B(xi)|. In Table 2, we evaluate the effect of |B(xi)| on the CIFAR datasets. We vary |B(xi)| in {2, 5, 10, 20} for both MMEL-H and MMEL-S, with the other settings unchanged. As can be seen, the performance of MMEL improves with more augmented samples when |B(xi)| is small. However, the performance gain begins to saturate once |B(xi)| reaches 5 or 10 in some cases. Since a larger |B(xi)| also brings more training cost, one should choose a proper number of augmented samples rather than increasing it indefinitely." }, { "heading": "4.2 RESULTS ON NATURAL LANGUAGE UNDERSTANDING TASKS", "text": "Data. GLUE is a benchmark containing various natural language understanding tasks, including textual entailment (RTE and MNLI), question answering (QNLI), similarity and paraphrase (MRPC, QQP, STS-B), sentiment analysis (SST-2) and linguistic acceptability (CoLA). Among them, STS-B is a regression task, CoLA and SST-2 are single-sentence classification tasks, and the rest are sentence-pair classification tasks. Following (Devlin et al., 2019), on the development set we report the Spearman correlation for STS-B, the Matthews correlation for CoLA, and accuracy for the other tasks. On the test sets of QQP and MRPC, we report F1.\nSetup. The backbone model is BERTBASE (Devlin et al., 2019). We use the method in Section 3.3 to generate augmented samples. For the problem of mismatching labels described in Section 3.3, we use a BERTBASE model fine-tuned on the downstream task as the teacher model to predict the label of each generated sample z ∈ B(xi). For each xi, |B(xi)| = 5. The fraction of masked tokens in each sentence is 0.4. The KL regularization coefficient λP is 1.0 for both MMEL-H and MMEL-S. The λT in equation (8) for MMEL-S is 1.0. The other detailed training hyperparameters can be found in Appendix D.\nThe derivation of MMEL in Section 3 is based on classification tasks, while STS-B is a regression task. Hence, we generalize our loss function to regression tasks as follows. For the hard loss in equation (7), we directly replace yz ∈ R with the teacher model's prediction on z. For the soft loss (8), for each entry of fθ(xi) in the loss term λT Σ_{z∈B(xi), z≠xi} P∗θ(z | xi) MSE(fθ(z), fθ(xi)), we replace it with the teacher model's prediction if the difference between them is larger than 0.5.
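The following sketch spells out this target-selection rule for the soft loss. The description above does not fully pin down every detail (e.g., whether the class comparison uses xi's label or the teacher's prediction on xi), so the functions below are one plausible reading, with our own hypothetical names; the detaching of fθ(xi) is likewise an implementation assumption.

```python
# One plausible reading of the target-selection rule above; names are ours.
import torch

def soft_target(f_xi, teacher_probs_z, y_xi):
    """Target used in l(f(z), .) of equation (8) for classification.
    If the teacher assigns z to a different class than x_i, fall back to
    the teacher's output probability; otherwise use f(x_i) as a fixed
    (detached) target."""
    if teacher_probs_z.argmax().item() != int(y_xi):
        return teacher_probs_z
    return f_xi.detach()

def soft_target_regression(f_xi, teacher_pred):
    """Regression variant (e.g., STS-B): trust the teacher's prediction
    when its gap to f(x_i) exceeds 0.5."""
    if (teacher_pred - f_xi).abs().item() > 0.5:
        return teacher_pred
    return f_xi.detach()
```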
Similar to Section 4.1, we compare with three baselines. However, we change the first baseline to naive training without data augmentation (abbreviated as “Baseline”), since data augmentation is not used by default in NLP tasks. The other two baselines are similar to those in Section 4.1: (i) “Baseline(DA+Long)”, which fine-tunes BERT with data augmentation with the same batch size; and (ii) “Baseline(DA+UNI)”, which fine-tunes BERT on augmented samples using the average loss. We also compare with another recent data augmentation technique, SMART (Jiang et al., 2020b).\nMain Results. The development and test set results on the GLUE benchmark are shown in Table 3. The development set results for the BERT baseline are from our re-implementation, which is comparable to or better than the results reported in the original paper (Devlin et al., 2019). The results for SMART are taken from (Jiang et al., 2020b), which reports no test set results. As can be seen, data augmentation significantly improves generalization on the GLUE tasks. Compared to the baseline without data augmentation (Baseline), MMEL-H and MMEL-S consistently achieve better performance, especially on small datasets like CoLA and RTE. Similar to the observations on image classification in Section 4.1, the proposed MMEL-H and MMEL-S are more efficient and perform better than Baseline(DA+Long). MMEL-H and MMEL-S also outperform Baseline(DA+UNI), indicating the superiority of the proposed reweighting strategy. In addition, our proposed method beats SMART in both accuracy and efficiency, since SMART uses PGD-k (Madry et al., 2018) to construct adversarial augmented samples, which requires nearly k times more training cost. Figure 2 shows the development set accuracy over the course of training. As can be seen, training with MMEL-H or MMEL-S converges faster and reaches better accuracy, except on SST-2 and RTE, where the performance is similar.\nEffect of Predicted Labels. For augmented samples from the same origin, we use a fine-tuned task-specific BERTBASE teacher model to predict their labels, as mentioned in Section 3.3, to handle the problem of mismatching labels. In Table 4, we compare using the label of the original sample against using the predicted labels. As can be seen, using the predicted labels significantly improves the performance; comparing with the results in Table 3, using the label of the original sample even hurts the performance." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose to minimize a reweighted loss over augmented samples that directly accounts for their implicit impact on the loss. Since we cannot access the optimal reweighting strategy, we propose to minimize the supremum of the loss over all reweighting strategies and derive a closed-form solution for the optimal weights. Our method can be applied on top of any data augmentation method. Experiments on both image classification tasks and natural language understanding tasks show that the proposed method improves the generalization performance of the model while being efficient in training." }, { "heading": "A PROOF OF THEOREM 1", "text": "Proof. For any given xi and B(xi), we aim to find Pθ(· | xi) on B(xi) that solves\n\max_{P_\theta(\cdot \mid x_i)} \sum_{z \in B(x_i)} \Big[ P_\theta(z \mid x_i)\, \ell(f_\theta(z), y_z) - \lambda_P\, P_\theta(z \mid x_i) \log\big( |B(x_i)|\, P_\theta(z \mid x_i) \big) \Big] \quad \text{s.t.} \quad \sum_{z \in B(x_i)} P_\theta(z \mid x_i) = 1. \quad (10)\nSince the objective is concave in Pθ(· | xi), by the method of Lagrange multipliers, let\nL(P_\theta, \lambda) = \sum_{z \in B(x_i)} \Big[ P_\theta(z \mid x_i)\, \ell(f_\theta(z), y_z) - \lambda_P\, P_\theta(z \mid x_i) \log\big( |B(x_i)|\, P_\theta(z \mid x_i) \big) \Big] + \lambda \Big( \sum_{z \in B(x_i)} P_\theta(z \mid x_i) - 1 \Big). \quad (11)\nSetting ∇_{Pθ} L(Pθ, λ) = ∇_λ L(Pθ, λ) = 0, for any pair zu, zv ∈ B(xi) we have\n\ell(f_\theta(z_u), y_{z_u}) - \lambda_P \big( \log |B(x_i)| + 1 + \log P_\theta(z_u \mid x_i) \big) = \ell(f_\theta(z_v), y_{z_v}) - \lambda_P \big( \log |B(x_i)| + 1 + \log P_\theta(z_v \mid x_i) \big). \quad (12)\nHence,\nP_\theta(z_v \mid x_i) = P_\theta(z_u \mid x_i) \exp\Big( \frac{\ell(f_\theta(z_v), y_{z_v}) - \ell(f_\theta(z_u), y_{z_u})}{\lambda_P} \Big). \quad (13)\nSumming over zv ∈ B(xi) and using the constraint gives\nP_\theta(z_u \mid x_i) \sum_{z_v \in B(x_i)} \exp\Big( \frac{\ell(f_\theta(z_v), y_{z_v}) - \ell(f_\theta(z_u), y_{z_u})}{\lambda_P} \Big) = 1, \quad (14)\nwhich is exactly the softmax form in (6). This completes the proof."
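As a quick numerical sanity check on Theorem 1 (not part of the original proof), the snippet below verifies that the softmax weights of equation (6) attain a value of the objective in (10) no smaller than that of random distributions on the simplex; the loss values and variable names are ours.

```python
# Numerical check: the softmax weights (6) maximize the objective in (10).
import numpy as np

rng = np.random.default_rng(0)
losses = rng.uniform(0.0, 3.0, size=8)   # l(f(z), y_z) for 8 augmented samples
lam_p = 1.0

def objective(p):
    # sum_z [ p(z) l(z) - lam_p * p(z) * log(|B| * p(z)) ], as in (10)
    return np.sum(p * losses - lam_p * p * np.log(len(p) * p))

p_star = np.exp(losses / lam_p)
p_star /= p_star.sum()                   # closed form from equation (6)

worst_gap = max(objective(rng.dirichlet(np.ones(8))) - objective(p_star)
                for _ in range(10_000))
print(f"max gap over random P: {worst_gap:.3e}")  # should be <= 0
```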
}, { "heading": "B MMEL ON LARGE-SCALE DATASET", "text": "In this section, we evaluate the proposed method MMEL on large-scale image classification task ImageNet(Deng et al., 2009).\nData. ImageNet is a benchmark dataset which contains colorful images with over 1 million training samples and 50000 validation samples from 1000 categories.\nSetup. The model we used is ResNet for ImageNet with three different depths (He et al., 2016). All these experiments are conducted for 100 epochs, and the learning rate decays at epochs 30, 60, and 90. We set batch size as 256, and |B(xi)| = 10 for each xi. The other experimental settings follow Section 4.1, expect for the following hyperparameters. We compare the proposed method with “Baseline(DA)”.\nMain Results. The results are shown in Table 5. From the results, the proposed MMEL-H improves the performance of the model for all three depths. However, the proposed MMEL-S is beaten by the baseline method. We speculate this is due to the relatively larger proportion of inaccurate prediction of original training samples on the large-scale dataset. More specifically, as in equation (8), for each augmented sample z ∈ B(xi), the proposed MMEL-S encourages the model to fit the output of original training sample fθ(xi). However, the accuracy of the original training samples in the ImageNet dataset can not reach 100% e.g., about 80% for ResNet50 on ImageNet. The inaccurate prediction fθ(xi) can be a misleading supervision for augmented sample z ∈ B(xi), leading to degraded performance of the proposed MMEL-S. Thus, we suggest using the MMEL-H if the accuracy of the original training samples is relative low." }, { "heading": "C GENERATING AUGMENTED SAMPLES FOR TEXTUAL SAMPLES", "text": "In this section, we elaborate the procedure of generating augmented sentences using greedy-based and beam-based method for a sequence. For each original input sentence, we randomly mask k tokens (which is obtained by rounding the product of masking ratio and length of the sequence to the nearest number) and then we do a forward propagation of the BERT to predict the tokens in those masked positions using greedy search. The detailed procedure is shown in Algorithm 2. We also use beam search (Yang et al., 2018) to generate augmented data. The details of beam search can be referred to (Yang et al., 2018). For sentence-pair tasks, we treat the two sentences separately and generate augmented samples for each of them.\nAlgorithm 2 Augmented Sample Generation by Greedy Search Input: Pre-trained language model BertModel, original sentence x, number of augmented samples |B(x)| − 1, number of masked tokens k. Output: Augmented samples B(x) = {z1, z2, · · · , z|B(x)|−1}.\n1: Randomly sample k positions {p1, · · · , pk} and get xmask. 2: for i = 1, 2, · · · |B(x)| − 1 do . Generate the i-th augmented sample 3: zi ← xmask. 4: zi[p1]← the ith most likely word predicted by BertModel(zi[p1]|zi). 5: for j in {2, 3. · · · , k} do 6: zi[pj ]← the most likely word predicted by BertModel(zi[pj ]|zi). 7: end for 8: end for\nIn the following, we vary the factors that may affect the quality of the generated augmented samples. These factors include\n1. The number of masked tokens, which equals the replacement proportion multiplied with the sentence length. This affects the diversity of augmented samples, i.e., replacing a large proportion of tokens makes the augmented sample less similar to the original one.\n2. 
In the following, we vary the factors that may affect the quality of the generated augmented samples. These factors include:\n1. The number of masked tokens, which equals the replacement proportion multiplied by the sentence length. This affects the diversity of the augmented samples; i.e., replacing a larger proportion of tokens makes an augmented sample less similar to the original one.\n2. Treating the two sentences separately in sentence-pair tasks when generating augmented examples, or concatenating them into a single sentence.\n3. Different generation methods, such as greedy search (Algorithm 2) and beam search.\nThe results are shown in Table 6. As can be seen, compared with the Baseline without data augmentation, MMEL-H and MMEL-S achieve higher accuracy under all hyperparameter configurations, showing the efficacy of data augmentation and the proposed reweighting strategy. There is no significant difference between using greedy search and beam search to generate the augmented samples. On this natural language understanding task, training with augmented samples generated with a suitably larger replacement proportion (i.e., larger diversity) performs slightly better. For sentence-pair tasks, treating the two sentences separately and generating augmented samples for each of them performs slightly better. In the experiments in Section 4.2, we therefore use greedy search with a masking proportion of 0.4, and generate augmented samples for each sentence in sentence-pair tasks." }, { "heading": "D HYPERPARAMETERS FOR THE EXPERIMENTS ON THE GLUE BENCHMARK", "text": "The optimizer we use is AdamW (Loshchilov & Hutter, 2018). The hyperparameters for the BERTBASE model are listed in Table 7." } ]
2021
REWEIGHTING AUGMENTED SAMPLES BY MINIMIZING THE MAXIMAL EXPECTED LOSS