# Datasets: allenai/mup-full

Dataset Preview
Fields: paper_id (string), summaries (json), abstractText (string), authors (json), references (json), sections (json), year (int), title (string)
SP:4d08cdb2de2044bcb574a425b42963b83fbebfbc
[ "This paper investigates kernel ridge-less regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds have been widely adopting in machine learning. However, related studies on kernel ridge-less regression are still sparse. The present study fills this gap, which, in my opinion, is also one of the main contributions of the present study. " ]
We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm minimizes a bound on CVloo stability, which in turn is controlled by the condition number of the empirical kernel matrix. The latter can be characterized in the asymptotic regime where both the dimension and cardinality of the data go to infinity. Under the assumption of random kernel matrices, the corresponding test error should be expected to follow a double descent curve.
[]
[ { "authors": [ "Jerzy K Baksalary", "Oskar Maria Baksalary", "Götz Trenkler" ], "title": "A revisitation of formulae for the moore–penrose inverse of modified matrices", "venue": "Linear Algebra and Its Applications,", "year": 2003 }, { "authors": [ "Peter L. Bartlett", "Philip M. Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "CoRR, abs/1906.11300,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Stéphane Boucheron", "Olivier Bousquet", "Gábor Lugosi" ], "title": "Theory of classification: A survey of some recent advances", "venue": "ESAIM: probability and statistics,", "year": 2005 }, { "authors": [ "O. Bousquet", "A. Elisseeff" ], "title": "Stability and generalization", "venue": "Journal Machine Learning Research,", "year": 2001 }, { "authors": [ "Peter Bühlmann", "Sara Van De Geer" ], "title": "Statistics for high-dimensional data: methods, theory and applications", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Noureddine El Karoui" ], "title": "The spectrum of kernel random matrices", "venue": "arXiv e-prints, art", "year": 2010 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J. Tibshirani" ], "title": "Surprises in HighDimensional Ridgeless Least Squares Interpolation", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "S. Kutin", "P. Niyogi" ], "title": "Almost-everywhere algorithmic stability and generalization error", "venue": "Technical report TR-2002-03,", "year": 2002 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin", "Xiyu Zhai" ], "title": "On the Risk of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin" ], "title": "Just interpolate: Kernel “ridgeless” regression can generalize", "venue": "Annals of Statistics,", "year": 2020 }, { "authors": [ "V.A. Marchenko", "L.A. Pastur" ], "title": "Distribution of eigenvalues for some sets of random matrices", "venue": "Mat. Sb. (N.S.),", "year": 1967 }, { "authors": [ "Song Mei", "Andrea Montanari" ], "title": "The generalization error of random features regression: Precise asymptotics and double descent curve", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Carl Meyer" ], "title": "Generalized inversion of modified matrices", "venue": "SIAM J. Applied Math,", "year": 1973 }, { "authors": [ "C.A. Micchelli" ], "title": "Interpolation of scattered data: distance matrices and conditionally positive definite functions", "venue": "Constructive Approximation,", "year": 1986 }, { "authors": [ "Sayan Mukherjee", "Partha Niyogi", "Tomaso Poggio", "Ryan Rifkin" ], "title": "Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization", "venue": "Advances in Computational Mathematics,", "year": 2006 }, { "authors": [ "T. Poggio", "R. Rifkin", "S. Mukherjee", "P. Niyogi" ], "title": "General conditions for predictivity in learning theory", "venue": "Nature,", "year": 2004 }, { "authors": [ "T. Poggio", "G. Kur", "A. 
Banburski" ], "title": "Double descent in the condition number", "venue": "Technical report, MIT Center for Brains Minds and Machines,", "year": 2019 }, { "authors": [ "Tomaso Poggio" ], "title": "Stable foundations for learning. Center for Brains, Minds and Machines", "venue": "(CBMM) Memo No", "year": 2020 }, { "authors": [ "Alexander Rakhlin", "Xiyu Zhai" ], "title": "Consistency of Interpolation with Laplace Kernels is a HighDimensional Phenomenon", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Lorenzo Rosasco", "Silvia Villa" ], "title": "Learning with incremental iterative regularization", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding Machine Learning: From Theory to Algorithms", "venue": null, "year": 2014 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Nathan Srebro", "Karthik Sridharan" ], "title": "Learnability, stability and uniform convergence", "venue": "J. Mach. Learn. Res.,", "year": 2010 }, { "authors": [ "Ingo Steinwart", "Andreas Christmann" ], "title": "Support vector machines", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "CV" ], "title": "Note that the absolute value is not needed for ERM since almost positivity holds Mukherjee et al. (2006), that is V (fSi", "venue": null, "year": 2006 }, { "authors": [ "Mukherjee" ], "title": "Indeed, a main result in Mukherjee et al. (2006) shows that CVloo stability is equivalent to consistency of ERM", "venue": null, "year": 2002 }, { "authors": [ "Mukherjee" ], "title": "For ERM and bounded loss functions, CVloo stability in probability with β", "venue": null, "year": 2006 }, { "authors": [ "Mukherjee" ], "title": "zi)− V (fS", "venue": null, "year": 2006 } ]
[ { "heading": "1 INTRODUCTION", "text": "Statistical learning theory studies the learning properties of machine learning algorithms, and more fundamentally, the conditions under which learning from finite data is possible. In this context, classical learning theory focuses on the size of the hypothesis space in terms of different complexity measures, such as combinatorial dimensions, covering numbers and Rademacher/Gaussian complexities (Shalev-Shwartz & Ben-David, 2014; Boucheron et al., 2005). Another more recent approach is based on defining suitable notions of stability with respect to perturbation of the data (Bousquet & Elisseeff, 2001; Kutin & Niyogi, 2002). In this view, the continuity of the process that maps data to estimators is crucial, rather than the complexity of the hypothesis space. Different notions of stability can be considered, depending on the data perturbation and metric considered (Kutin & Niyogi, 2002). Interestingly, the stability and complexity approaches to characterizing the learnability of problems are not at odds with each other, and can be shown to be equivalent as shown in Poggio et al. (2004) and Shalev-Shwartz et al. (2010).\nIn modern machine learning overparameterized models, with a larger number of parameters than the size of the training data, have become common. The ability of these models to generalize is well explained by classical statistical learning theory as long as some form of regularization is used in the training process (Bühlmann & Van De Geer, 2011; Steinwart & Christmann, 2008). However, it was recently shown - first for deep networks (Zhang et al., 2017), and more recently for kernel methods (Belkin et al., 2019) - that learning is possible in the absence of regularization, i.e., when perfectly fitting/interpolating the data. Much recent work in statistical learning theory has tried to find theoretical ground for this empirical finding. Since learning using models that interpolate is not exclusive to deep neural networks, we study generalization in the presence of interpolation in the case of kernel methods. We study both linear and kernel least squares problems in this paper.\nOur Contributions:\n• We characterize the generalization properties of interpolating solutions for linear and kernel least squares problems using a stability approach. While the (uniform) stability properties of regularized kernel methods are well known (Bousquet & Elisseeff, 2001), we study interpolating solutions of the unregularized (\"ridgeless\") regression problems.\n• We obtain an upper bound on the stability of interpolating solutions, and show that this upper bound is minimized by the minimum norm interpolating solution. This also means that among all interpolating solutions, the minimum norm solution has the best test error. In\nparticular, the same conclusion is also true for gradient descent, since it converges to the minimum norm solution in the setting we consider, see e.g. Rosasco & Villa (2015). • Our stability bounds show that the average stability of the minimum norm solution is\ncontrolled by the condition number of the empirical kernel matrix. It is well known that the numerical stability of the least squares solution is governed by the condition number of the associated kernel matrix (see the discussion of why overparametrization is “good” in Poggio et al. (2019)). 
Our results show that the condition number also controls stability (and hence, test error) in a statistical sense.

Organization: In section 2, we introduce basic ideas in statistical learning and empirical risk minimization, as well as the notation used in the rest of the paper. In section 3, we briefly recall some definitions of stability. In section 4, we study the stability of interpolating solutions to kernel least squares and show that the minimum norm solutions minimize an upper bound on the stability. In section 5 we discuss our results in the context of recent work on high dimensional regression. We conclude in section 6." }, { "heading": "2 STATISTICAL LEARNING AND EMPIRICAL RISK MINIMIZATION", "text": "We begin by recalling the basic ideas in statistical learning theory. In this setting, $X$ is the space of features, $Y$ is the space of targets or labels, and there is an unknown probability distribution $\mu$ on the product space $Z = X \times Y$. In the following, we consider $X = \mathbb{R}^d$ and $Y = \mathbb{R}$. The distribution $\mu$ is fixed but unknown, and we are given a training set $S$ consisting of $n$ samples (thus $|S| = n$) drawn i.i.d. from the probability distribution on $Z^n$, $S = (z_i)_{i=1}^n = (x_i, y_i)_{i=1}^n$. Intuitively, the goal of supervised learning is to use the training set $S$ to “learn” a function $f_S$ such that, evaluated at a new value $x_{new}$, it predicts the associated value $y_{new}$, i.e. $y_{new} \approx f_S(x_{new})$. The loss is a function $V : \mathcal{F} \times Z \to [0, \infty)$, where $\mathcal{F}$ is the space of measurable functions from $X$ to $Y$, that measures how well a function performs on a data point. We define a hypothesis space $\mathcal{H} \subseteq \mathcal{F}$ where algorithms search for solutions. With the above notation, the expected risk of $f$ is defined as $I[f] = \mathbb{E}_z V(f, z)$, which is the expected loss on a new sample drawn according to the data distribution $\mu$. In this setting, statistical learning can be seen as the problem of finding an approximate minimizer of the expected risk given a training set $S$. A classical approach to derive an approximate solution is empirical risk minimization (ERM), where we minimize the empirical risk $I_S[f] = \frac{1}{n} \sum_{i=1}^n V(f, z_i)$.

A natural error measure for our ERM solution $f_S$ is the expected excess risk $\mathbb{E}_S[I[f_S] - \min_{f \in \mathcal{H}} I[f]]$. Another common error measure is the expected generalization error/gap given by $\mathbb{E}_S[I[f_S] - I_S[f_S]]$. These two error measures are closely related, since the expected excess risk is easily bounded by the expected generalization error (see Lemma 5)." }, { "heading": "2.1 KERNEL LEAST SQUARES AND MINIMUM NORM SOLUTION", "text": "The focus in this paper is on the kernel least squares problem. We assume the loss function $V$ is the square loss, that is, $V(f, z) = (y - f(x))^2$. The hypothesis space is assumed to be a reproducing kernel Hilbert space, defined by a positive definite kernel $K : X \times X \to \mathbb{R}$ or an associated feature map $\Phi : X \to \mathcal{H}$, such that $K(x, x') = \langle \Phi(x), \Phi(x') \rangle_{\mathcal{H}}$ for all $x, x' \in X$, where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ is the inner product in $\mathcal{H}$. In this setting, functions are linearly parameterized, that is, there exists $w \in \mathcal{H}$ such that $f(x) = \langle w, \Phi(x) \rangle_{\mathcal{H}}$ for all $x \in X$. The ERM problem typically has multiple solutions, one of which is the minimum norm solution:

$$f^\dagger_S = \operatorname*{arg\,min}_{f \in \mathcal{M}} \|f\|_{\mathcal{H}}, \qquad \mathcal{M} = \operatorname*{arg\,min}_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (f(x_i) - y_i)^2. \quad (1)$$

Here $\|\cdot\|_{\mathcal{H}}$ is the norm on $\mathcal{H}$ induced by the inner product. The minimum norm solution can be shown to be unique and to satisfy a representer theorem, that is, for all $x \in X$:

$$f^\dagger_S(x) = \sum_{i=1}^n K(x, x_i) c_S[i], \qquad c_S = K^\dagger y \quad (2)$$

where $c_S = (c_S[1], \ldots, c_S[n])$, $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$, $K$ is the $n \times n$ matrix with entries $K_{ij} = K(x_i, x_j)$, $i, j = 1, \ldots, n$, and $K^\dagger$ is the Moore-Penrose pseudoinverse of $K$. If we assume $n \le d$ and that we have $n$ linearly independent data features, that is, the rank of $X$ is $n$, then it is possible to show that for many kernels one can replace $K^\dagger$ by $K^{-1}$ (see Remark 2). Note that invertibility is necessary and sufficient for interpolation. That is, if $K$ is invertible, $f^\dagger_S(x_i) = y_i$ for all $i = 1, \ldots, n$, in which case the training error in (1) is zero.
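To make Eq. (1)-(2) concrete, here is a minimal numpy sketch (ours, not the paper's code) that builds an RBF kernel matrix on random data and computes the minimum norm interpolant via the Moore-Penrose pseudoinverse; the Gaussian data, kernel choice and bandwidth are assumptions made for the illustration.

```python
import numpy as np

SIGMA = 10.0  # assumed RBF bandwidth for this illustration

def rbf_kernel(A, B):
    # Pairwise squared distances between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * SIGMA ** 2))

rng = np.random.default_rng(0)
n, d = 50, 100                         # n <= d: the interpolating regime
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

K = rbf_kernel(X, X)                   # empirical kernel matrix (n x n)
c = np.linalg.pinv(K) @ y              # c_S = K^dagger y, as in Eq. (2)

f = lambda x_new: rbf_kernel(x_new, X) @ c   # f(x) = sum_i K(x, x_i) c_S[i]
print(np.abs(f(X) - y).max())          # ~0: the minimum norm solution interpolates
```

Since the RBF kernel matrix on distinct points is invertible (Remark 2), the pseudoinverse coincides with the inverse here and the training error is exactly zero.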
Remark 1 (Pseudoinverse for underdetermined linear systems) A simple yet relevant example is given by linear functions $f(x) = w^\top x$, which correspond to $\mathcal{H} = \mathbb{R}^d$ and $\Phi$ the identity map. If the rank of $X \in \mathbb{R}^{d \times n}$ is $n$, then any interpolating solution $w_S$ satisfies $w_S^\top x_i = y_i$ for all $i = 1, \ldots, n$, and the minimum norm solution, also called the Moore-Penrose solution, is given by $(w^\dagger_S)^\top = y^\top X^\dagger$, where the pseudoinverse $X^\dagger$ takes the form $X^\dagger = X^\top (XX^\top)^{-1}$.

Remark 2 (Invertibility of translation invariant kernels) Translation invariant kernels are a family of kernel functions given by $K(x_1, x_2) = k(x_1 - x_2)$, where $k$ is an even function on $\mathbb{R}^d$. Translation invariant kernels are Mercer kernels (positive semidefinite) if the Fourier transform of $k(\cdot)$ is non-negative. For Radial Basis Function kernels ($K(x_1, x_2) = k(\|x_1 - x_2\|)$) we have the additional property, due to Theorem 2.3 of Micchelli (1986), that for distinct points $x_1, x_2, \ldots, x_n \in \mathbb{R}^d$ the kernel matrix $K$ is non-singular and thus invertible.

The above discussion is directly related to regularization approaches.

Remark 3 (Stability and Tikhonov regularization) Tikhonov regularization is used to prevent potentially unstable behaviors. In the above setting, it corresponds to replacing Problem (1) by $\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (f(x_i) - y_i)^2 + \lambda \|f\|_{\mathcal{H}}^2$, where the corresponding unique solution is given by $f^\lambda_S(x) = \sum_{i=1}^n K(x, x_i) c[i]$, $c = (K + \lambda I_n)^{-1} y$. In contrast to ERM solutions, the above approach prevents interpolation. The properties of the corresponding estimator are well known. In this paper, we complement these results focusing on the case $\lambda \to 0$.

Finally, we end by recalling the connection between the minimum norm solution and gradient descent.

Remark 4 (Minimum norm and gradient descent) In our setting, it is well known that both batch and stochastic gradient iterations converge exactly to the minimum norm solution when multiple solutions exist, see e.g. Rosasco & Villa (2015). Thus, a study of the properties of the minimum norm solution explains the properties of the solution to which gradient descent converges. In particular, when ERM has multiple interpolating solutions, gradient descent converges to a solution that minimizes a bound on stability, as we show in this paper." },
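Remark 4 can be checked directly. A hedged numpy sketch (ours; the step size and iteration count are arbitrary choices) runs plain gradient descent from a zero initialization on an underdetermined least squares problem and compares the limit with the Moore-Penrose solution of Remark 1:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 200, 50                        # overparameterized: d > n
X = rng.normal(size=(d, n))           # columns are samples, as in Remark 1
y = rng.normal(size=n)

w = np.zeros(d)                       # zero init keeps iterates in the span of the data
lr = 1.0 / np.linalg.norm(X, 2) ** 2  # step below 1/L for the loss (1/2)||X^T w - y||^2
for _ in range(5000):
    w -= lr * X @ (X.T @ w - y)       # gradient step

w_pinv = np.linalg.pinv(X.T) @ y      # minimum norm solution w = (X^T)^dagger y
print(np.linalg.norm(w - w_pinv))     # ~0: GD converges to the minimum norm solution
```

Because every update lies in the column space of $X$ and the initialization is zero, the iterates never acquire a null-space component, which is exactly why the limit is the minimum norm interpolant.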
{ "heading": "3 ERROR BOUNDS VIA STABILITY", "text": "In this section, we recall basic results relating the learning and stability properties of Empirical Risk Minimization (ERM). Throughout the paper, we assume that ERM achieves a minimum, albeit the extension to almost minimizers is possible (Mukherjee et al., 2006) and important for exponential-type loss functions (Poggio, 2020). We do not assume the expected risk to achieve a minimum. Since we will be considering leave-one-out stability in this section, we look at solutions to ERM over the complete training set $S = \{z_1, z_2, \ldots, z_n\}$ and the leave one out training set $S^i = \{z_1, z_2, \ldots, z_{i-1}, z_{i+1}, \ldots, z_n\}$.

The excess risk of ERM can be easily related to its stability properties. Here, we follow the definition laid out in Mukherjee et al. (2006) and say that an algorithm is Cross-Validation leave-one-out (CVloo) stable in expectation if there exists $\beta_{CV} > 0$ such that for all $i = 1, \ldots, n$,

$$\mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)] \le \beta_{CV}. \quad (3)$$

This definition is justified by the following result that bounds the excess risk of a learning algorithm by its average CVloo stability (Shalev-Shwartz et al., 2010; Mukherjee et al., 2006).

Lemma 5 (Excess Risk & CVloo Stability) For all $i = 1, \ldots, n$,

$$\mathbb{E}_S[I[f_{S^i}] - \inf_{f \in \mathcal{H}} I[f]] \le \mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)]. \quad (4)$$

Remark 6 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists $\beta_u > 0$ such that for all $i = 1, \ldots, n$, $\sup_{z \in Z} |V(f_{S^i}, z) - V(f_S, z)| \le \beta_u$. Clearly this is a strong notion implying most other definitions of stability. We note that there are a number of different notions of stability. We refer the interested reader to Kutin & Niyogi (2002) and Mukherjee et al. (2006).

We recall the proof of Lemma 5 in Appendix A.2 due to lack of space. In Appendix A, we also discuss other definitions of stability and their connections to concepts in statistical learning theory like generalization and learnability." }, { "heading": "4 CVloo STABILITY OF KERNEL LEAST SQUARES", "text": "In this section we analyze the expected CVloo stability of interpolating solutions to the kernel least squares problem, and obtain an upper bound on their stability. We show that this upper bound on the expected CVloo stability is smallest for the minimum norm interpolating solution (1) when compared to other interpolating solutions to the kernel least squares problem.

We have a dataset $S = \{(x_i, y_i)\}_{i=1}^n$ and we want to find a mapping $f \in \mathcal{H}$ that minimizes the empirical least squares risk. Here $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS) defined by a positive definite kernel $K : X \times X \to \mathbb{R}$. All interpolating solutions are of the form $\hat{f}_S(\cdot) = \sum_{j=1}^n \hat{c}_S[j] K(x_j, \cdot)$, where $\hat{c}_S = K^\dagger y + (I - K^\dagger K)v$. Similarly, all interpolating solutions on the leave one out dataset $S^i$ can be written as $\hat{f}_{S^i}(\cdot) = \sum_{j=1, j \ne i}^n \hat{c}_{S^i}[j] K(x_j, \cdot)$, where $\hat{c}_{S^i} = K_{S^i}^\dagger y_i + (I - K_{S^i}^\dagger K_{S^i}) v_i$. Here $K, K_{S^i}$ are the empirical kernel matrices on the original and leave one out datasets respectively. We note that when $v = 0$ and $v_i = 0$, we obtain the minimum norm interpolating solutions on the datasets $S$ and $S^i$.

Theorem 7 (Main Theorem) Consider the kernel least squares problem with a bounded kernel and bounded outputs $y$, that is, there exist $\kappa, M > 0$ such that

$$K(x, x') \le \kappa^2, \quad |y| \le M, \quad (5)$$

almost surely. Then for any interpolating solutions $\hat{f}_{S^i}, \hat{f}_S$,

$$\mathbb{E}_S[V(\hat{f}_{S^i}, z_i) - V(\hat{f}_S, z_i)] \le \beta_{CV}(K^\dagger, y, v, v_i). \quad (6)$$

This bound $\beta_{CV}$ is minimized when $v = v_i = 0$, which corresponds to the minimum norm interpolating solutions $f^\dagger_S, f^\dagger_{S^i}$. For the minimum norm solutions we have $\beta_{CV} = C_1 \beta_1 + C_2 \beta_2$, where

$$\beta_1 = \mathbb{E}_S\left[\|K^{\frac{1}{2}}\|_{op} \|K^\dagger\|_{op} \times \mathrm{cond}(K) \times \|y\|\right], \qquad \beta_2 = \mathbb{E}_S\left[\|K^{\frac{1}{2}}\|_{op}^2 \|K^\dagger\|_{op}^2 \times (\mathrm{cond}(K))^2 \times \|y\|^2\right],$$

and $C_1, C_2$ are absolute constants that do not depend on either $d$ or $n$.

In the above theorem $\|K\|_{op}$ refers to the operator norm of the kernel matrix $K$, $\|y\|$ refers to the standard 2-norm for $y \in \mathbb{R}^n$, and $\mathrm{cond}(K)$ is the condition number of the matrix $K$.
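For intuition about the quantities appearing in Theorem 7, the following sketch (ours, with an assumed Gaussian data distribution and RBF bandwidth) evaluates $\|K^{1/2}\|_{op}$, $\|K^\dagger\|_{op}$, $\mathrm{cond}(K)$ and $\|y\|$ on one random draw; up to the unspecified constants $C_1, C_2$, these are the only data-dependent ingredients of $\beta_1$ and $\beta_2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma = 80, 100, 10.0            # assumed sizes and bandwidth
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * sigma ** 2))     # RBF kernel matrix

eig = np.linalg.eigvalsh(K)            # ascending; K is symmetric PSD
k_half_op = np.sqrt(eig[-1])           # ||K^{1/2}||_op = sqrt(largest eigenvalue)
k_pinv_op = 1.0 / eig[eig > 1e-12][0]  # ||K^dagger||_op = 1 / smallest nonzero eigenvalue
cond_K = eig[-1] / eig[eig > 1e-12][0] # condition number of K

beta1 = k_half_op * k_pinv_op * cond_K * np.linalg.norm(y)
print(f"cond(K) = {cond_K:.2f}, beta_1 up to constants = {beta1:.2f}")
```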
We can combine the above result with Lemma 5 to obtain the following bound on the excess risk for minimum norm interpolating solutions to the kernel least squares problem:

Corollary 8 The excess risk of the minimum norm interpolating kernel least squares solution can be bounded as

$$\mathbb{E}_S\left[I[f^\dagger_{S^i}] - \inf_{f \in \mathcal{H}} I[f]\right] \le C_1 \beta_1 + C_2 \beta_2,$$

where $\beta_1, \beta_2$ are as defined previously.

Remark 9 (Underdetermined Linear Regression) In the case of underdetermined linear regression, i.e., linear regression where the dimensionality is larger than the number of samples in the training set, we can prove a version of Theorem 7 with $\beta_1 = \mathbb{E}_S[\|X^\dagger\|_{op} \|y\|]$ and $\beta_2 = \mathbb{E}_S[\|X^\dagger\|_{op}^2 \|y\|^2]$. Due to space constraints, we present the proof of the results in the linear regression case in Appendix B." }, { "heading": "4.1 KEY LEMMAS", "text": "In order to prove Theorem 7 we make use of the following lemmas to bound the CVloo stability using the norms and the difference of the solutions.

Lemma 10 Under assumption (5), for all $i = 1, \ldots, n$, it holds that

$$\mathbb{E}_S[V(\hat{f}_{S^i}, z_i) - V(\hat{f}_S, z_i)] \le \mathbb{E}_S\left[\left(2M + \kappa\left(\|\hat{f}_S\|_{\mathcal{H}} + \|\hat{f}_{S^i}\|_{\mathcal{H}}\right)\right) \times \kappa \|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}}\right].$$

Proof We begin by recalling that the square loss is locally Lipschitz, that is, for all $y, a, a' \in \mathbb{R}$,

$$|(y - a)^2 - (y - a')^2| \le (2|y| + |a| + |a'|)|a - a'|.$$

If we apply this result to $f, f'$ in an RKHS $\mathcal{H}$,

$$|(y - f(x))^2 - (y - f'(x))^2| \le \kappa(2M + \kappa(\|f\|_{\mathcal{H}} + \|f'\|_{\mathcal{H}})) \|f - f'\|_{\mathcal{H}},$$

using the basic property of an RKHS that for all $f \in \mathcal{H}$,

$$|f(x)| \le \|f\|_\infty = \sup_x |f(x)| = \sup_x |\langle f, K_x \rangle_{\mathcal{H}}| \le \kappa \|f\|_{\mathcal{H}}. \quad (7)$$

In particular, we can plug $\hat{f}_{S^i}$ and $\hat{f}_S$ into the above inequality, and the almost positivity of ERM (Mukherjee et al., 2006) allows us to drop the absolute value on the left hand side. Finally, the desired result follows by taking the expectation over $S$.

Now that we have bounded the CVloo stability using the norms and the difference of the solutions, we can find a bound on the difference between the solutions to the kernel least squares problem. This is our main stability estimate.

Lemma 11 Let $\hat{f}_S, \hat{f}_{S^i}$ be any interpolating kernel least squares solutions on the full and leave one out datasets (as defined at the top of this section). Then $\|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}} \le B_{CV}(K^\dagger, y, v, v_i)$, and $B_{CV}$ is minimized when $v = v_i = 0$, which corresponds to the minimum norm interpolating solutions $f^\dagger_S, f^\dagger_{S^i}$. Also, for some absolute constant $C$,

$$\|f^\dagger_S - f^\dagger_{S^i}\|_{\mathcal{H}} \le C \times \|K^{\frac{1}{2}}\|_{op} \|K^\dagger\|_{op} \times \mathrm{cond}(K) \times \|y\|. \quad (8)$$

Since the minimum norm interpolating solutions minimize both $\|\hat{f}_S\|_{\mathcal{H}} + \|\hat{f}_{S^i}\|_{\mathcal{H}}$ and $\|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}}$ (from Lemmas 10, 11), we can put them together to prove Theorem 7. In the following section we provide the proof of Lemma 11.

Remark 12 (Zero training loss) In Lemma 10 we use the locally Lipschitz property of the squared loss function to bound the leave one out stability in terms of the difference between the norms of the solutions. Under interpolating conditions, if we set the term $V(\hat{f}_S, z_i) = 0$, the leave one out stability reduces to

$$\mathbb{E}_S[V(\hat{f}_{S^i}, z_i) - V(\hat{f}_S, z_i)] = \mathbb{E}_S[V(\hat{f}_{S^i}, z_i)] = \mathbb{E}_S[(\hat{f}_{S^i}(x_i) - y_i)^2] = \mathbb{E}_S[(\hat{f}_{S^i}(x_i) - \hat{f}_S(x_i))^2] = \mathbb{E}_S[\langle \hat{f}_{S^i}(\cdot) - \hat{f}_S(\cdot), K_{x_i}(\cdot) \rangle^2] \le \mathbb{E}_S\left[\|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}}^2 \times \kappa^2\right].$$

We can plug in the bound from Lemma 11 to obtain similar qualitative and quantitative (up to constant factors) results as in Theorem 7.

Simulation: In order to illustrate that the minimum norm interpolating solution is the best performing interpolating solution, we ran a simple experiment on a linear regression problem.
We synthetically generated data from a linear model $y = w^\top X$, where $X \in \mathbb{R}^{d \times n}$ was i.i.d. $\mathcal{N}(0, 1)$. The dimension of the data was $d = 1000$ and there were $n = 200$ samples in the training dataset. A held out dataset of 50 samples was used to compute the test mean squared error (MSE). Interpolating solutions were computed as $\hat{w}^\top = y^\top X^\dagger + v^\top(I - XX^\dagger)$ and the norm of $v$ was varied to obtain the plot. The results are shown in Figure 1, where we can see that the training loss is 0 for all interpolants, but the test MSE increases as $\|v\|$ increases, with $(w^\dagger)^\top = y^\top X^\dagger$ having the best performance. The figure reports results averaged over 100 trials." },
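A minimal numpy sketch of this simulation (our reconstruction of the described setup; details such as the noise-free labels and the direction of $v$ are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, n_test = 1000, 200, 50
w_true = rng.normal(size=d)
X = rng.normal(size=(d, n))                  # training data, columns are samples
y = w_true @ X                               # y = w^T X, noise-free linear model
X_test = rng.normal(size=(d, n_test))
y_test = w_true @ X_test

X_pinv = np.linalg.pinv(X)                   # X^dagger, shape (n, d)
P_null = np.eye(d) - X @ X_pinv              # projector onto the null-space component

v = rng.normal(size=d)
for scale in [0.0, 0.5, 1.0, 2.0]:
    # w^T = y^T X^dagger + v^T (I - X X^dagger): every choice interpolates
    w_hat = y @ X_pinv + scale * v @ P_null
    train_mse = np.mean((w_hat @ X - y) ** 2)        # ~0 for every interpolant
    test_mse = np.mean((w_hat @ X_test - y_test) ** 2)
    print(f"||v|| scale {scale}: train {train_mse:.2e}, test {test_mse:.3f}")
```

As in the paper's Figure 1, the training loss stays at zero for all interpolants while the test MSE grows with the null-space component, so the minimum norm solution (scale 0) performs best.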
{ "heading": "4.2 PROOF OF LEMMA 11", "text": "We can write any interpolating solution to the kernel regression problem as $\hat{f}_S(x) = \sum_{i=1}^n \hat{c}_S[i] K(x_i, x)$, where $\hat{c}_S = K^\dagger y + (I - K^\dagger K)v$, $K \in \mathbb{R}^{n \times n}$ is the kernel matrix on $S$ (i.e. $K_{ij} = K(x_i, x_j)$), $v$ is any vector in $\mathbb{R}^n$, and $y \in \mathbb{R}^n$ is the vector $y = [y_1, \ldots, y_n]^\top$. Similarly, the coefficient vector for the corresponding interpolating solution to the problem over the leave one out dataset $S^i$ is $\hat{c}_{S^i} = (K_{S^i})^\dagger y_i + (I - (K_{S^i})^\dagger K_{S^i}) v_i$, where $y_i = [y_1, \ldots, 0, \ldots, y_n]^\top$ and $K_{S^i}$ is the kernel matrix $K$ with the $i$th row and column set to zero, which is the kernel matrix for the leave one out training set.

We define $a = [-K(x_1, x_i), \ldots, -K(x_n, x_i)]^\top \in \mathbb{R}^n$ and $b \in \mathbb{R}^n$ as a one-hot column vector with all zeros apart from the $i$th component, which is 1. Let $a_* = a + K(x_i, x_i) b$. Then we have:

$$K_* = K + b a_*^\top, \qquad K_{S^i} = K_* + a b^\top. \quad (9)$$

That is, we can write $K_{S^i}$ as a rank-2 update to $K$. This can be verified by simple algebra, using the fact that $K$ is a symmetric kernel. Now we are interested in bounding $\|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}}$. For a function $h(\cdot) = \sum_{i=1}^m p_i K(x_i, \cdot) \in \mathcal{H}$ we have $\|h\|_{\mathcal{H}} = \sqrt{p^\top K p} = \|K^{\frac{1}{2}} p\|$. So we have:

$$\begin{aligned} \|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}} &= \|K^{\frac{1}{2}}(\hat{c}_S - \hat{c}_{S^i})\| \\ &= \|K^{\frac{1}{2}}(K^\dagger y + (I - K^\dagger K)v - (K_{S^i})^\dagger y_i - (I - (K_{S^i})^\dagger K_{S^i}) v_i)\| \\ &= \|K^{\frac{1}{2}}(K^\dagger y - (K_{S^i})^\dagger y + y_i (K_{S^i})^\dagger b + (I - K^\dagger K)(v - v_i) + (K^\dagger K - (K_{S^i})^\dagger K_{S^i}) v_i)\| \\ &= \|K^{\frac{1}{2}}[(K^\dagger - (K_{S^i})^\dagger) y + (I - K^\dagger K)(v - v_i) - (K^\dagger K - (K_{S^i})^\dagger K_{S^i}) v_i]\|. \end{aligned} \quad (10)$$

Here we make use of the fact that $(K_{S^i})^\dagger b = 0$. If $K$ has full rank (as in Remark 2), we see that $b$ lies in the column space of $K$ and $a_*$ lies in the column space of $K^\top$. Furthermore, $\beta_* = 1 + a_*^\top K^\dagger b = 1 + a^\top K^\dagger b + K(x_i, x_i) b^\top K^\dagger b = K_{ii}(K^\dagger)_{ii} \ne 0$. Using equation 2.2 of Baksalary et al. (2003) we obtain:

$$\begin{aligned} K_*^\dagger &= K^\dagger - (K_{ii}(K^\dagger)_{ii})^{-1} K^\dagger b a_*^\top K^\dagger \\ &= K^\dagger - (K_{ii}(K^\dagger)_{ii})^{-1} K^\dagger b a^\top K^\dagger - ((K^\dagger)_{ii})^{-1} K^\dagger b b^\top K^\dagger \\ &= K^\dagger + (K_{ii}(K^\dagger)_{ii})^{-1} K^\dagger b b^\top - ((K^\dagger)_{ii})^{-1} K^\dagger b b^\top K^\dagger. \end{aligned} \quad (11)$$

Here we make use of the fact that $a^\top K^\dagger = -b$. Also, using the corresponding formula from List 2 of Baksalary et al. (2003), we have $K_*^\dagger K_* = K^\dagger K$.

Next, we see that since $K_*$ has the same rank as $K$, $a$ lies in the column space of $K_*$, and $b$ lies in the column space of $K_*^\top$. Furthermore, $\beta = 1 + b^\top K_*^\dagger a = 0$. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al. (2003)) to obtain the expression for $(K_{S^i})^\dagger$, with $k = K_*^\dagger a$ and $h = b^\top K_*^\dagger$:

$$\begin{aligned} (K_{S^i})^\dagger &= K_*^\dagger - k k^\dagger K_*^\dagger - K_*^\dagger h^\dagger h + (k^\dagger K_*^\dagger h^\dagger) k h \\ \implies (K_{S^i})^\dagger - K_*^\dagger &= (k^\dagger K_*^\dagger h^\dagger) k h - k k^\dagger K_*^\dagger - K_*^\dagger h^\dagger h \\ \implies \|(K_{S^i})^\dagger - K_*^\dagger\|_{op} &\le 3 \|K_*^\dagger\|_{op}. \end{aligned} \quad (12)$$

Above, we use the fact that the operator norm of a rank 1 matrix is given by $\|u v^\top\|_{op} = \|u\| \times \|v\|$. Also, using the corresponding formula from List 2 of Baksalary et al. (2003), we have:

$$(K_{S^i})^\dagger K_{S^i} = K_*^\dagger K_* - k k^\dagger \implies K^\dagger K - (K_{S^i})^\dagger K_{S^i} = k k^\dagger. \quad (13)$$

Putting the two parts together we obtain the bound on $\|(K_{S^i})^\dagger - K^\dagger\|_{op}$:

$$\begin{aligned} \|K^\dagger - (K_{S^i})^\dagger\|_{op} &= \|K^\dagger - K_*^\dagger + K_*^\dagger - (K_{S^i})^\dagger\|_{op} \le 3\|K_*^\dagger\|_{op} + \|K^\dagger - K_*^\dagger\|_{op} \\ &\le 3\|K^\dagger\|_{op} + 4(K_{ii}(K^\dagger)_{ii})^{-1}\|K^\dagger\|_{op} + 4((K^\dagger)_{ii})^{-1}\|K^\dagger\|_{op}^2 \\ &\le \|K^\dagger\|_{op}(3 + 8\|K^\dagger\|_{op}\|K\|_{op}). \end{aligned} \quad (14)$$

The last step follows from $(K_{ii})^{-1} \le \|K^\dagger\|_{op}$ and $((K^\dagger)_{ii})^{-1} \le \|K\|_{op}$. Plugging these calculations into equation (10) we get:

$$\begin{aligned} \|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}} &= \|K^{\frac{1}{2}}[(K^\dagger - (K_{S^i})^\dagger) y + (I - K^\dagger K)(v - v_i) - (K^\dagger K - (K_{S^i})^\dagger K_{S^i}) v_i]\| \\ &\le \|K^{\frac{1}{2}}\|_{op}\left(\|(K^\dagger - (K_{S^i})^\dagger) y\| + \|(I - K^\dagger K)(v - v_i)\| + \|k k^\dagger v_i\|\right) \\ &\le \|K^{\frac{1}{2}}\|_{op}(B_0 + \|I - K^\dagger K\|_{op}\|v - v_i\| + \|v_i\|). \end{aligned} \quad (15)$$

We see that the right hand side is minimized when $v = v_i = 0$. We have also computed $B_0 = C \times \|K^\dagger\|_{op} \times \mathrm{cond}(K) \times \|y\|$, which concludes the proof of Lemma 11." }, { "heading": "5 REMARK AND RELATED WORK", "text": "In the previous section we obtained bounds on the CVloo stability of interpolating solutions to the kernel least squares problem. Our kernel least squares results can be compared with stability bounds for regularized ERM (see Remark 3). Regularized ERM has a strong stability guarantee in terms of a uniform stability bound, which turns out to be inversely proportional to the regularization parameter $\lambda$ and the sample size $n$ (Bousquet & Elisseeff, 2001). However, this estimate becomes vacuous as $\lambda \to 0$. In this paper, we establish a bound on average stability, and show that this bound is minimized when the minimum norm ERM solution is chosen. We study average stability since one can expect worst case scenarios where the minimum norm is arbitrarily large (when $n \approx d$). One of our key findings is the relationship between minimizing the norm of the ERM solution and minimizing a bound on stability.

[Figure 2: Typical double descent of the condition number (y axis) of a radial basis function kernel $K(x, x') = \exp(-\|x - x'\|^2 / 2\sigma^2)$ built from a random data matrix distributed as $\mathcal{N}(0, 1)$: as in the linear case, the condition number is worse when $n = d$, better if $n > d$ (on the right of $n = d$) and also better if $n < d$ (on the left of $n = d$). The parameter $\sigma$ was chosen to be 5. From Poggio et al. (2019).]

This leads to a second observation, namely, that we can consider the limit of our risk bounds as both the sample size ($n$) and the dimensionality of the data ($d$) go to infinity, with the ratio $\frac{d}{n} \to \gamma > 1$ as $n, d \to \infty$. This is a classical setting in statistics which allows us to use results from random matrix theory (Marchenko & Pastur, 1967). In particular, for linear kernels the behavior of the smallest eigenvalue of the kernel matrix (which appears in our bounds) can be characterized in this asymptotic limit. In fact, under appropriate distributional assumptions, our bound for linear regression can be computed as $(\|X^\dagger\| \times \|y\|)^2 \approx \frac{\sqrt{n}}{\sqrt{d} - \sqrt{n}} \to \frac{1}{\sqrt{\gamma} - 1}$. Here the dimension of the data coincides with the number of parameters in the model. Interestingly, analogous results hold for more general kernels (inner product and RBF kernels) (El Karoui, 2010), where the asymptotics are taken with respect to the number and dimensionality of the data. These results predict a double descent curve for the condition number as found in practice, see Figure 2.
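A small sketch (ours, not the paper's code) that reproduces the flavor of Figure 2 under the stated assumptions (Gaussian data, RBF kernel with $\sigma = 5$), sweeping $n$ for a fixed $d$ and printing the condition number:

```python
import numpy as np

rng = np.random.default_rng(4)
d, sigma = 60, 5.0

for n in [20, 40, 60, 80, 120, 140]:        # sweep the sample size around n = d
    X = rng.normal(size=(n, d))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))       # RBF kernel matrix
    # The paper reports that cond(K) peaks near n = d and improves on either side.
    print(f"n = {n:3d}: cond(K) = {np.linalg.cond(K):.1f}")
```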
While it may seem that our bounds in Theorem 7 diverge if $d$ is held constant and $n \to \infty$, this case is not covered by our theorem, since when $n > d$ we no longer have interpolating solutions.

Recently, there has been a surge of interest in studying linear and kernel least squares models, since classical results focus on situations where constraints or penalties that prevent interpolation are added to the empirical risk. For example, high dimensional linear regression is considered in Mei & Montanari (2019); Hastie et al. (2019); Bartlett et al. (2019), and “ridgeless” kernel least squares is studied in Liang et al. (2019); Rakhlin & Zhai (2018) and Liang et al. (2020). While these papers study upper and lower bounds on the risk of interpolating solutions to the linear and kernel least squares problems, ours are the first to be derived using stability arguments. While it might be possible to obtain tighter excess risk bounds through careful analysis of the minimum norm interpolant, our simple approach helps us establish a link between stability in the statistical and in the numerical sense.

Finally, we can compare our results with observations made in Poggio et al. (2019) on the condition number of random kernel matrices. The condition number of the empirical kernel matrix is known to control the numerical stability of the solution to a kernel least squares problem. Our results show that the statistical stability is also controlled by the condition number of the kernel matrix, providing a natural link between numerical and statistical stability." }, { "heading": "6 CONCLUSIONS", "text": "In summary, minimizing a bound on cross validation stability minimizes the expected error in both the classical and the modern regime of ERM. In the classical regime ($d < n$), CVloo stability implies generalization and consistency for $n \to \infty$. In the modern regime ($d > n$), as described in this paper, CVloo stability can account for the double descent curve in kernel interpolants (Belkin et al., 2019) under appropriate distributional assumptions. The main contribution of this paper is characterizing the stability of interpolating solutions, in particular deriving excess risk bounds via a stability argument. In the process, we show that among all the interpolating solutions, the one with minimum norm also minimizes a bound on stability. Since the excess risk bounds of the minimum norm interpolant depend on the pseudoinverse of the kernel matrix, we establish here an elegant link between numerical and statistical stability. This also holds for solutions computed by gradient descent, since gradient descent converges to minimum norm solutions in the case of “linear” kernel methods. Our approach is simple and combines basic stability results with matrix inequalities." }, { "heading": "A EXCESS RISK, GENERALIZATION, AND STABILITY", "text": "We use the same notation as introduced in Section 2 for the various quantities considered in this section. That is, in the supervised learning setup $V(f, z)$ is the loss incurred by hypothesis $f$ on the sample $z$, and $I[f] = \mathbb{E}_z[V(f, z)]$ is the expected error of hypothesis $f$. Since we are interested in different forms of stability, we will consider learning problems over the original training set $S = \{z_1, z_2, \ldots, z_n\}$, the leave one out training set $S^i = \{z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_n\}$, and the replace one training set $(S^i, z) = \{z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_n, z\}$.
A.1 REPLACE ONE AND LEAVE ONE OUT ALGORITHMIC STABILITY

Similar to the definition of expected CVloo stability in equation (3) of the main paper, we say an algorithm is cross validation replace one stable (in expectation), denoted as CVro, if there exists $\beta_{ro} > 0$ such that

$$\mathbb{E}_{S,z}[V(f_S, z) - V(f_{(S^i, z)}, z)] \le \beta_{ro}.$$

We can strengthen the above stability definition by introducing the notion of replace one algorithmic stability (in expectation) (Bousquet & Elisseeff, 2001): there exists $\alpha_{ro} > 0$ such that for all $i = 1, \ldots, n$,

$$\mathbb{E}_{S,z}[\|f_S - f_{(S^i, z)}\|_\infty] \le \alpha_{ro}.$$

We make two observations. First, if the loss is Lipschitz, that is, if there exists $C_V > 0$ such that for all $f, f' \in \mathcal{H}$,

$$\|V(f, z) - V(f', z)\| \le C_V \|f - f'\|,$$

then replace one algorithmic stability implies CVro stability with $\beta_{ro} = C_V \alpha_{ro}$. Moreover, the same result holds if the loss is locally Lipschitz and there exists $R > 0$ such that $\|f_S\|_\infty \le R$ almost surely. In this latter case the Lipschitz constant will depend on $R$. Later, we illustrate this situation for the square loss.

Second, we have for all $i = 1, \ldots, n$, $S$ and $z$,

$$\mathbb{E}_{S,z}[\|f_S - f_{(S^i, z)}\|_\infty] \le \mathbb{E}_{S,z}[\|f_S - f_{S^i}\|_\infty] + \mathbb{E}_{S,z}[\|f_{(S^i, z)} - f_{S^i}\|_\infty].$$

This observation motivates the notion of leave one out algorithmic stability (in expectation) (Bousquet & Elisseeff, 2001):

$$\mathbb{E}_{S,z}[\|f_S - f_{S^i}\|_\infty] \le \alpha_{loo}.$$

Clearly, leave one out algorithmic stability implies replace one algorithmic stability with $\alpha_{ro} = 2\alpha_{loo}$, and it also implies CVro stability with $\beta_{ro} = 2 C_V \alpha_{loo}$.

A.2 EXCESS RISK AND CVloo, CVro STABILITY

We recall the statement of Lemma 5 in Section 3, which bounds the excess risk using the CVloo stability of a solution.

Lemma 13 (Excess Risk & CVloo Stability) For all $i = 1, \ldots, n$,

$$\mathbb{E}_S[I[f_{S^i}] - \inf_{f \in \mathcal{H}} I[f]] \le \mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)]. \quad (16)$$

In this section, two properties of ERM are useful, namely symmetry and a form of unbiasedness.

Symmetry. A key property of ERM is that it is symmetric with respect to the data set $S$, meaning that it does not depend on the order of the data in $S$.

A second property relates the expected ERM with the minimum of the expected risk.

ERM Bias. The following inequality holds:

$$\mathbb{E}[I_S[f_S]] - \min_{f \in \mathcal{H}} I[f] \le 0. \quad (17)$$

To see this, note that $I_S[f_S] \le I_S[f]$ for all $f \in \mathcal{H}$ by definition of ERM, so that taking the expectation of both sides, $\mathbb{E}_S[I_S[f_S]] \le \mathbb{E}_S[I_S[f]] = I[f]$ for all $f \in \mathcal{H}$. This implies $\mathbb{E}_S[I_S[f_S]] \le \min_{f \in \mathcal{H}} I[f]$, and hence (17) holds.

Remark 14 Note that the same argument gives, more generally, that

$$\mathbb{E}[\inf_{f \in \mathcal{H}} I_S[f]] - \inf_{f \in \mathcal{H}} I[f] \le 0. \quad (18)$$

Given the above premise, the proof of Lemma 5 is simple.

Proof [of Lemma 5] Adding and subtracting $\mathbb{E}_S[I_S[f_S]]$ from the expected excess risk we have that

$$\mathbb{E}_S[I[f_{S^i}] - \min_{f \in \mathcal{H}} I[f]] = \mathbb{E}_S[I[f_{S^i}] - I_S[f_S] + I_S[f_S] - \min_{f \in \mathcal{H}} I[f]], \quad (19)$$

and since $\mathbb{E}_S[I_S[f_S]] - \min_{f \in \mathcal{H}} I[f]$ is less than or equal to zero, see (18),

$$\mathbb{E}_S[I[f_{S^i}] - \min_{f \in \mathcal{H}} I[f]] \le \mathbb{E}_S[I[f_{S^i}] - I_S[f_S]]. \quad (20)$$

Moreover, for all $i = 1, \ldots, n$,

$$\mathbb{E}_S[I[f_{S^i}]] = \mathbb{E}_S[\mathbb{E}_{z_i} V(f_{S^i}, z_i)] = \mathbb{E}_S[V(f_{S^i}, z_i)]$$

and

$$\mathbb{E}_S[I_S[f_S]] = \frac{1}{n} \sum_{i=1}^n \mathbb{E}_S[V(f_S, z_i)] = \mathbb{E}_S[V(f_S, z_i)].$$

Plugging these last two expressions into (20) and (19) leads to (4).

We can prove a similar result relating excess risk with CVro stability.

Lemma 15 (Excess Risk & CVro Stability) Given the above definitions, the following inequality holds for all $i = 1, \ldots, n$,

$$\mathbb{E}_S[I[f_S] - \inf_{f \in \mathcal{H}} I[f]] \le \mathbb{E}_S[I[f_S] - I_S[f_S]] = \mathbb{E}_{S,z}[V(f_S, z) - V(f_{(S^i, z)}, z)]. \quad (21)$$
Proof The first inequality is clear: adding and subtracting $I_S[f_S]$ from the expected risk $I[f_S]$ we have that

$$\mathbb{E}_S[I[f_S] - \min_{f \in \mathcal{H}} I[f]] = \mathbb{E}_S[I[f_S] - I_S[f_S] + I_S[f_S] - \min_{f \in \mathcal{H}} I[f]],$$

and we recall (18). The main step in the proof is showing that for all $i = 1, \ldots, n$,

$$\mathbb{E}[I_S[f_S]] = \mathbb{E}[V(f_{(S^i, z)}, z)] \quad (22)$$

to be compared with the trivial equality $\mathbb{E}[I_S[f_S]] = \mathbb{E}[V(f_S, z_i)]$. To prove Equation (22), we have for all $i = 1, \ldots, n$,

$$\mathbb{E}_S[I_S[f_S]] = \mathbb{E}_{S,z}\left[\frac{1}{n} \sum_{i=1}^n V(f_S, z_i)\right] = \frac{1}{n} \sum_{i=1}^n \mathbb{E}_{S,z}[V(f_{(S^i, z)}, z)] = \mathbb{E}_{S,z}[V(f_{(S^i, z)}, z)],$$

where we used the fact that, by the symmetry of the algorithm, $\mathbb{E}_{S,z}[V(f_{(S^i, z)}, z)]$ is the same for all $i = 1, \ldots, n$. The proof is concluded noting that $\mathbb{E}_S[I[f_S]] = \mathbb{E}_{S,z}[V(f_S, z)]$.

A.3 DISCUSSION ON STABILITY AND GENERALIZATION

Below we discuss some more aspects of stability and its connection to other quantities in statistical learning theory.

Remark 16 (CVloo stability in expectation and in probability) In Mukherjee et al. (2006), CVloo stability is defined in probability, that is, there exist $\beta_{CV}^P > 0$ and $0 < \delta_{CV}^P \le 1$ such that

$$\mathbb{P}_S\{|V(f_{S^i}, z_i) - V(f_S, z_i)| \ge \beta_{CV}^P\} \le \delta_{CV}^P.$$

Note that the absolute value is not needed for ERM since almost positivity holds (Mukherjee et al., 2006), that is, $V(f_{S^i}, z_i) - V(f_S, z_i) > 0$. CVloo stability in probability and in expectation are then clearly related, and indeed equivalent for bounded loss functions. CVloo stability in expectation (3) is what we study in the following sections.

Remark 17 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists $\beta_u > 0$ such that for all $i = 1, \ldots, n$, $\sup_{z \in Z} |V(f_{S^i}, z) - V(f_S, z)| \le \beta_u$. Clearly this is a strong notion implying most other definitions of stability. We note that there are a number of different notions of stability. We refer the interested reader to Kutin & Niyogi (2002) and Mukherjee et al. (2006).

Remark 18 (CVloo Stability & Learnability) A natural question is to which extent suitable notions of stability are not only sufficient but also necessary for controlling the excess risk of ERM. Classically, the latter is characterized in terms of a uniform version of the law of large numbers, which itself can be characterized in terms of suitable complexity measures of the hypothesis class. Uniform stability is too strong to characterize consistency, while CVloo stability turns out to provide a suitably weak definition, as shown in Mukherjee et al. (2006); see also Kutin & Niyogi (2002). Indeed, a main result in Mukherjee et al. (2006) shows that CVloo stability is equivalent to consistency of ERM:

Theorem 19 (Mukherjee et al. (2006)) For ERM and bounded loss functions, CVloo stability in probability with $\beta_{CV}^P$ converging to zero for $n \to \infty$ is equivalent to consistency and generalization of ERM.

Remark 20 (CVloo stability & in-sample/out-of-sample error) Let $(S, z) = \{z_1, \ldots, z_n, z\}$ ($z$ is a data point drawn according to the same distribution) with corresponding ERM solution $f_{(S,z)}$. Then (4) can be equivalently written as

$$\mathbb{E}_S[I[f_S] - \inf_{f \in \mathcal{F}} I[f]] \le \mathbb{E}_{S,z}[V(f_S, z) - V(f_{(S,z)}, z)].$$

Thus CVloo stability measures how much the loss changes when we test on a point that is present in the training set compared to when it is absent from it.
In this view, it can be seen as an average measure of the difference between the in-sample and out-of-sample error.

Remark 21 (CVloo stability and generalization) A common error measure is the (expected) generalization gap $\mathbb{E}_S[I[f_S] - I_S[f_S]]$. For non-ERM algorithms, CVloo stability by itself is not sufficient to control this term, and further conditions are needed (Mukherjee et al., 2006), since

$$\mathbb{E}_S[I[f_S] - I_S[f_S]] = \mathbb{E}_S[I[f_S] - I_S[f_{S^i}]] + \mathbb{E}_S[I_S[f_{S^i}] - I_S[f_S]].$$

The second term becomes, for all $i = 1, \ldots, n$,

$$\mathbb{E}_S[I_S[f_{S^i}] - I_S[f_S]] = \frac{1}{n} \sum_{i=1}^n \mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)] = \mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)]$$

and hence is controlled by CV stability. The first term is called the expected leave one out error in Mukherjee et al. (2006) and is controlled for ERM as $n \to \infty$, see Theorem 19 above." }, { "heading": "B CVloo STABILITY OF LINEAR REGRESSION", "text": "We have a dataset $S = \{(x_i, y_i)\}_{i=1}^n$ and we want to find a mapping $w \in \mathbb{R}^d$ that minimizes the empirical least squares risk. All interpolating solutions are of the form $\hat{w}_S = y^\top X^\dagger + v^\top(I - XX^\dagger)$. Similarly, all interpolating solutions on the leave one out dataset $S^i$ can be written as $\hat{w}_{S^i} = y_i^\top (X_i)^\dagger + v_i^\top (I - X_i (X_i)^\dagger)$. Here $X, X_i \in \mathbb{R}^{d \times n}$ are the data matrices for the original and leave one out datasets respectively. We note that when $v = 0$ and $v_i = 0$, we obtain the minimum norm interpolating solutions on the datasets $S$ and $S^i$.

In this section we want to estimate the CVloo stability of the minimum norm solution to the ERM problem in the linear regression case. This is the case outlined in Remark 9 of the main paper. In order to prove Remark 9, we only need to combine Lemma 10 with the linear regression analogue of Lemma 11. We state and prove that result in this section. This result predicts a double descent curve for the norm of the pseudoinverse as found in practice, see Figure 3.

Lemma 22 Let $\hat{w}_S, \hat{w}_{S^i}$ be any interpolating least squares solutions on the full and leave one out datasets $S, S^i$. Then $\|\hat{w}_S - \hat{w}_{S^i}\| \le B_{CV}(X^\dagger, y, v, v_i)$, and $B_{CV}$ is minimized when $v = v_i = 0$, which corresponds to the minimum norm interpolating solutions $w^\dagger_S, w^\dagger_{S^i}$. Also,

$$\|w^\dagger_S - w^\dagger_{S^i}\| \le 3 \|X^\dagger\|_{op} \times \|y\|. \quad (23)$$
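As a quick sanity check of Lemma 22 (ours, under an assumed Gaussian data distribution), one can compare the minimum norm solutions on $S$ and $S^i$ directly against the bound of Eq. (23):

```python
import numpy as np

rng = np.random.default_rng(5)
d, n, i = 100, 30, 0
X = rng.normal(size=(d, n))              # columns are samples
y = rng.normal(size=n)

Xi, yi = X.copy(), y.copy()              # leave one out: zero the i-th column/entry
Xi[:, i], yi[i] = 0.0, 0.0

w_S = y @ np.linalg.pinv(X)              # minimum norm solution (v = 0)
w_Si = yi @ np.linalg.pinv(Xi)           # minimum norm solution on S^i

lhs = np.linalg.norm(w_S - w_Si)
rhs = 3 * np.linalg.norm(np.linalg.pinv(X), 2) * np.linalg.norm(y)
print(lhs <= rhs, lhs, rhs)              # Eq. (23) should hold, usually with slack
```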
Thus ab> is a d × n matrix composed of all zeros apart from the ith column which is equal to a.\nWe also have yi = y − yib. Now per Lemma 10 we are interested in bounding the quantity ||ŵSi − ŵS || = ||(ŵSi)> − (ŵS)>||. This simplifies to:\n||ŵSi − ŵS || = ||y>i (Xi)† − y>X† + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||(y> − yib>)(Xi)† − y>X† + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||y>((Xi)† −X†) + yib>(Xi)† + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||y>((Xi)† −X†) + v>i − v> + v>XX† − v>i Xi(Xi)†|| = ||y>((Xi)† −X†) + (v>i − v>)(I−XX†)− v>i (XX† −Xi(Xi)†)|| (28)\nIn the above equation we make use of the fact that b>(Xi)† = 0. We use an old formula (Meyer, 1973; Baksalary et al., 2003) to compute (Xi)† from X†. We use the development of pseudo-inverses of perturbed matrices in Meyer (1973). We see that a = −xi is a vector in the column space of X and b is in the range space of XT (provided X has full column rank), with β = 1 + b>X†a = 1− b>X†xi = 0. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al. (2003)) to obtain the expression for (Xi)†\n(Xi) † = X† − kk†X† −X†h†h + (k†X†h†)kh (29)\nwhere k = X†a, and h = b>X†, and u† = u >\n||u||2 for any non-zero vector u.\n(Xi) † −X† = (k†X†h†)kh− kk†X† −X†h†h\n= a>(X†)>X†(X†)>b× kh ||k||2||h||2 − kk†X† −X†h†h\n=⇒ ||(Xi)† −X†||op ≤ |a>(X†)>X†(X†)>b| ||X†a||||b>X†|| + 2||X†||op\n≤ ||X †||op||X†a||||b>X†|| ||X†a||||b>X†|| + 2||X†||op\n= 3||X†||op\n(30)\nThe above set of inequalities follows from the fact that the operator norm of a rank 1 matrix is given by ||uv>||op = ||u|| × ||v||\nAlso, from List 2 of Baksalary et al. (2003) we have that Xi(Xi)† = XX† − h†h. Plugging in these calculations into equation 28 we get:\n||ŵSi − ŵS || = ||y>((Xi)† −X†) + (v>i − v>)(I−XX†)− v>i (XX† −Xi(Xi)†)|| ≤ B0 + ||I−XX†||op||v − vi||+ ||vi|| × ||h†h||op ≤ B0 + 2||v − vi||+ ||vi|| (31)\nWe see that the right hand side is minimized when v = vi = 0. We can also compute B0 = 3||X†||op||y||, which concludes the proof of Lemma 22." } ]
2020
[ "This paper presents a novel way of making full use of compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. Authors gave insightful analysis on the influence of gradient diversity to the performance of continual learning, and proposed a regularization that connects metric learning and continual learning. However, there are still some issues to be addressed as below." ]
The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting. In recent studies, several gradient-based approaches have been developed to make more efficient use of compact episodic memories, which constrain the gradients resulting from new samples with those from memorized samples, aiming to reduce the diversity of gradients from different tasks. In this paper, we reveal the relation between diversity of gradients and discriminativeness of representations, demonstrating connections between Deep Metric Learning and continual learning. Based on these findings, we propose a simple yet highly efficient method – Discriminative Representation Loss (DRL) – for continual learning. In comparison with several state-of-the-art methods, DRL shows effectiveness with low computational cost on multiple benchmark experiments in the setting of online continual learning.
[]
[ { "heading": "1 INTRODUCTION", "text": "In the real world, we are often faced with situations where data distributions are changing over time, and we would like to update our models by new data in time, with bounded growth in system size. These situations fall under the umbrella of “continual learning”, which has many practical applications, such as recommender systems, retail supply chain optimization, and robotics (Lesort et al., 2019; Diethe et al., 2018; Tian et al., 2018). Comparisons have also been made with the way that humans are able to learn new tasks without forgetting previously learned ones, using common knowledge shared across different skills. The fundamental problem in continual learning is catastrophic forgetting (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017), i.e. (neural network) models have a tendency to forget previously learned tasks while learning new ones.\nThere are three main categories of methods for alleviating forgetting in continual learning: i) regularization-based methods which aim in preserving knowledge of models of previous tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018) ii) architecture-based methods for incrementally evolving the model by learning task-shared and task-specific components (Schwarz et al., 2018; Hung et al., 2019); iii) replay-based methods which focus in preserving knowledge of data distributions of previous tasks, including methods of experience replay by episodic memories or generative models (Shin et al., 2017; Rolnick et al., 2019), methods for generating compact episodic memories (Chen et al., 2018; Aljundi et al., 2019), and methods for more efficiently using episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Riemer et al., 2019; Farajtabar et al., 2020).\nGradient-based approaches using episodic memories, in particular, have been receiving increasing attention. The essential idea is to use gradients produced by samples from episodic memories to constrain the gradients produced by new samples, e.g. by ensuring the inner product of the pair of gradients is non-negative (Lopez-Paz & Ranzato, 2017) as follows:\n〈gt, gk〉 = 〈 ∂L(xt, θ)\n∂θ , ∂L(xk, θ) ∂θ\n〉 ≥ 0, ∀k < t (1)\nwhere t and k are time indices, xt denotes a new sample from the current task, and xk denotes a sample from the episodic memory. Thus, the updates of parameters are forced to preserve the performance on previous tasks as much as possible.\nIn Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017), gt is projected to a direction that is closest to it in L2-norm whilst also satisfying Eq. (1): ming̃ 12 ||gt − g̃|| 2 2, s.t.〈g̃, gk〉 ≥ 0, ∀k < t. Optimization of this objective requires a high-dimensional quadratic program and thus is computationally expensive. Averaged-GEM (A-GEM) (Chaudhry et al., 2019a) alleviates the computational burden of GEM by using the averaged gradient over a batch of samples instead of individual gradients of samples in the episodic memory. This not only simplifies the computation, but also obtains comparable performance with GEM. Orthogonal Gradient Descent (OGD) (Farajtabar et al., 2020) projects gt to the direction that is perpendicular to the surface formed by {gk|k < t}. Moreover, Aljundi et al. (2019) propose Gradient-based Sample Selection (GSS), which selects samples that produce most diverse gradients with other samples into episodic memories. Here diversity is measured by the cosine similarity between gradients. 
Orthogonal Gradient Descent (OGD) (Farajtabar et al., 2020) projects $g_t$ to the direction that is perpendicular to the surface formed by $\{g_k | k < t\}$. Moreover, Aljundi et al. (2019) propose Gradient-based Sample Selection (GSS), which selects samples that produce the most diverse gradients with other samples into episodic memories. Here diversity is measured by the cosine similarity between gradients. Since the cosine similarity is computed using the inner product of two normalized gradients, GSS embodies the same principle as other gradient-based approaches with episodic memories. Although GSS suggests that the samples with the most diverse gradients are important for generalization across tasks, Chaudhry et al. (2019b) show that the average gradient over a small set of random samples may be able to obtain good generalization as well.

In this paper, we answer the following questions: i) Which samples tend to produce diverse gradients that strongly conflict with other samples, and why are such samples able to help with generalization? ii) Why does a small set of randomly chosen samples also help with generalization? iii) Can we reduce the diversity of gradients in a more efficient way? Our answers reveal the relation between the diversity of gradients and the discriminativeness of representations, and further show connections between Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) and continual learning. Drawing on these findings we propose a new approach, Discriminative Representation Loss (DRL), for classification tasks in continual learning. Our methods show improved performance with relatively low computational cost in terms of time and RAM when compared to several state-of-the-art (SOTA) methods across multiple benchmark tasks in the setting of online continual learning." }, { "heading": "2 A NEW PERSPECTIVE OF REDUCING DIVERSITY OF GRADIENTS", "text": "According to Eq. (1), negative cosine similarities between gradients produced by current and previous tasks result in worse performance in continual learning. This can be interpreted from the perspective of constrained optimization, as discussed by Aljundi et al. (2019). Moreover, the diversity of gradients relates to the Gradient Signal to Noise Ratio (GSNR) (Liu et al., 2020), which plays a crucial role in the model's generalization ability. Intuitively, when more of the gradients point in diverse directions, the variance will be larger, leading to a smaller GSNR, which indicates that reducing the diversity of gradients can improve generalization. This finding leads to the conclusion that samples with the most diverse gradients contain the most critical information for generalization, which is consistent with Aljundi et al. (2019)." }, { "heading": "2.1 THE SOURCE OF GRADIENT DIVERSITY", "text": "We first conducted a simple experiment on classification tasks of 2-D Gaussian distributions, and tried to identify samples with the most diverse gradients in the 2-D feature space. We trained a linear model on the first task to discriminate between two classes (blue and orange dots in Fig. 1a). We then applied the algorithm Gradient-based Sample Selection with Integer Quadratic Programming (GSS-IQP) (Aljundi et al., 2019) to select 10% of the samples of the training data that produce gradients with the lowest similarity (black dots in Fig. 1a), and denote this set of samples as

$$\hat{M} = \min_M \sum_{i,j \in M} \frac{\langle g_i, g_j \rangle}{\|g_i\| \cdot \|g_j\|}.$$

It is clear from Fig. 1a that the samples in $\hat{M}$ are mostly around the decision boundary between the two classes. Increasing the size of $\hat{M}$ results in the inclusion of samples that trace the outer edges of the data distributions of each class. Clearly the gradients can be strongly opposed when samples from different classes are very similar. Samples close to decision boundaries are most likely to exhibit this characteristic.
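To make the selection criterion concrete, here is a small greedy sketch (our simplification; GSS-IQP itself solves an integer quadratic program) that scores candidates by their summed gradient cosine similarity to the rest and keeps the lowest-scoring 10% for a linear softmax model; the per-sample gradient factorization $g_n = x_n (p_n - y_n)^\top$ it uses is the identity behind Lemma 1 below.

```python
import numpy as np

def per_sample_grads(W, X, Y):
    # Softmax cross entropy gradients for a linear model, one flattened row
    # per sample: g_n = x_n (p_n - y_n)^T.
    logits = X @ W
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    return np.einsum('nd,nk->ndk', X, P - Y).reshape(len(X), -1)

def select_diverse(W, X, Y, frac=0.1):
    G = per_sample_grads(W, X, Y)
    G /= np.linalg.norm(G, axis=1, keepdims=True) + 1e-12
    scores = (G @ G.T).sum(1)            # low score = most diverse gradients
    m = max(1, int(frac * len(X)))
    return np.argsort(scores)[:m]        # indices of the selected memory samples

rng = np.random.default_rng(6)
X = np.vstack([rng.normal([0, 0], 1, (100, 2)), rng.normal([3, 3], 1, (100, 2))])
Y = np.repeat(np.eye(2), 100, axis=0)    # one-hot labels for the two classes
W = 0.1 * rng.normal(size=(2, 2))        # stand-in weights; train before selecting
print(select_diverse(W, X, Y))           # for a trained model: mostly boundary samples
```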
Intuitively, storing the decision boundaries of previously learned classes should be an effective way to preserve classification performance on those classes. However, if the episodic memory only includes samples representing the learned boundaries, it may miss important information when the model is required to incrementally learn new classes. We show this by introducing a second task - training the model above on a third class (green dots). We display the decision boundaries (which split the feature space in a one vs. all manner) learned by the model after\n4 2 0 2 4 6 x\n4\n2\n0\n2\n4\n6\n8\ny\nclass 0 class 1 M\n(a) Samples with most diverse gradients (M̂ ) after learning task 1, the green line is the decision boundary.\n4 2 0 2 4 6 x\n4\n2\n0\n2\n4\n6\n8\ny\nclass 0 class 1 class 2 memory\n(b) Learned decision boundaries (purple lines) after task 2. Here the episodic memory includes samples in M̂ .\n4 2 0 2 4 6 x\n4\n2\n0\n2\n4\n6\n8\ny\nclass 0 class 1 class 2 memory\n(c) Learned decision boundaries (purple lines) after task 2. Here the episodic memory consists of random samples.\n(a) Splitting samples into several subsets in a 3-class classification task. Dots in different colors are from different classes.\n(b) Estimated distributions of β when drawing negative pairs from different subsets of samples.\n(c) Estimated distributions of α− δ when drawing negative pairs from different subsets of samples.\nFigure 2: Illustration of how Pr(2β > α − δ) in Theorem 1 behaves in various cases by drawing negative pairs from different subsets of a 3-class feature space which are defined in Fig. 2a. The classifier is a linear model. y-axis in the right side of (b) & (c) is for the case of x ∈ S1 ∪ S2. We see that α− δ behaves in a similar way with β but in a smaller range which makes β the key in studying Pr(2β > α − δ). In the case of x ∈ S3 the distribution of β has more mass on larger values than other cases because the predicted probabilities are mostly on the two classes in a pair, and it causes all 〈gn,gm〉 having the opposite sign of 〈xn,xm〉 as shown in Tab. 1.\ntask 2 with M̂ (Fig. 1b) and a random set of samples (Fig. 1c) from task 1 as the episodic memory. The random episodic memory shows better performance than the one selected by GSS-IQP, since the new decision boundaries rely on samples not included in M̂ . It explains why randomly selected memories may generalize better in continual learning. Ideally, with M̂ large enough, the model can remember all edges of each class, and hence learn much more accurate decision boundaries sequentially. However, memory size is often limited in practice, especially for high-dimensional data. A more efficient way could be learning more informative representations. The experimental results indicate that: 1) more similar representations in different classes result in more diverse gradients. 2) more diverse representations in a same class help with learning new boundaries incrementally.\nNow we formalise the connection between the diversity of gradients and the discriminativeness of representations for the linear model (proofs are in Appx. A). Notations: Negative pair represents two samples from different classes. Positive pair represents two samples from a same class. Let L represent the softmax cross entropy loss, W ∈ RD×K is the weight matrix of the linear model, and xn ∈ RD denotes the input data, yn ∈ RK is a one-hot vector that denotes the label of xn, D is the dimension of representations, K is the number of classes. 
Let pn = softmax(on), where on = Wᵀxn, and the gradient gn = ∇W L(xn, yn; W). xn, xm are two different samples when n ≠ m.\nLemma 1. Let εn = pn − yn. Then: 〈gn, gm〉 = 〈xn, xm〉〈εn, εm〉.\nTheorem 1. Suppose yn ≠ ym, and let cn denote the class index of xn (i.e. yn,cn = 1, yn,i = 0 ∀i ≠ cn). Let α ≜ ||pn||² + ||pm||², β ≜ pn,cm + pm,cn and δ ≜ ||pn − pm||₂². Then:\nPr(sign(〈gn, gm〉) = sign(−〈xn, xm〉)) = Pr(2β > α − δ).\nTheorem 2. Suppose yn = ym. When 〈gn, gm〉 ≠ 0, we have: sign(〈gn, gm〉) = sign(〈xn, xm〉).\nFor a better understanding of the theorems, we conduct an empirical study by partitioning the feature space of three classes into several subsets as shown in Fig. 2a, and examine four cases of pairwise samples by these subsets: 1) x ∈ S0: both samples in a pair are near the intersection of the three classes; 2) x ∈ S0 ∪ S1: one sample is close to decision boundaries and the other is far away from the boundaries; 3) x ∈ S3: both samples are close to the decision boundary between their true classes but away from the third class; 4) x ∈ S1 ∪ S2: both samples are far away from the decision boundaries. Theorem 1 says that for samples from different classes, 〈gn, gm〉 gets an opposite sign of 〈xn, xm〉 with a probability that depends on the predictions pn and pm. This probability of flipping the sign especially depends on β, which reflects how likely each sample is to be misclassified as the other's class. We show the empirical distributions of β and (α − δ) obtained by a linear model in Figs. 2b and 2c, respectively. In general, (α − δ) shows similar behavior to β in the four cases but in a smaller range, which makes 2β > (α − δ) tend to be true except when β is around zero. Basically, a subset including more samples close to decision boundaries leads to more probability mass on large values of β, and the case of x ∈ S3 results in the largest mass on large values of β because the predicted probabilities mostly concentrate on the two classes in a pair. As shown in Tab. 1, more mass on large values of β leads to larger probabilities of flipping the sign. These results demonstrate that samples with the most diverse gradients (i.e., whose gradients have largely negative similarities with other samples) are close to decision boundaries, because they tend to have large β and 〈xn, xm〉 tends to be positive. In the case of x ∈ S1 ∪ S2 the probability of flipping the sign is zero because β concentrates around zero. According to Lemma 1, 〈gn, gm〉 is very close to zero in this case because the predictions are close to the true labels; hence, such samples are not considered to have the most diverse gradients.\nTheorem 2 says 〈gn, gm〉 has the same sign as 〈xn, xm〉 when the two samples are from the same class. We can see that the results of positive pairs in Tab. 1 match Theorem 2. In the case of S0 ∪ S1 the two probabilities do not add up to exactly 1 because the implementation of the cross-entropy loss in TensorFlow smooths the function by a small value to prevent numerical issues, which slightly changes the gradients. As 〈xn, xm〉 is mostly positive for positive pairs, 〈gn, gm〉 is hence also mostly positive, which explains why samples with the most diverse gradients are not sufficient to preserve information within classes in the experiments of Fig. 1. On the other hand, if 〈xn, xm〉 is negative then 〈gn, gm〉 will be negative, which indicates that representations within a class should not be too diverse.
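As a sanity check, Lemma 1 can be verified numerically: for the softmax cross-entropy loss, the gradient has the closed form gn = xn·εnᵀ (an outer product), so the Frobenius inner product 〈gn, gm〉 factorizes exactly as the lemma states. A minimal NumPy sketch, with all names ours:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 5, 3
W = rng.normal(size=(D, K))

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def grad(x, y):
    """Closed-form gradient of softmax cross-entropy w.r.t. W: outer(x, p - y)."""
    p = softmax(W.T @ x)
    return np.outer(x, p - y), p - y

xn, xm = rng.normal(size=D), rng.normal(size=D)
yn, ym = np.eye(K)[0], np.eye(K)[1]
gn, eps_n = grad(xn, yn)
gm, eps_m = grad(xm, ym)

lhs = np.sum(gn * gm)              # <gn, gm> (Frobenius inner product)
rhs = (xn @ xm) * (eps_n @ eps_m)  # <xn, xm> * <eps_n, eps_m>
assert np.allclose(lhs, rhs)       # Lemma 1 holds
```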
Extending this theoretical analysis based on a linear model, we also provide an empirical study of non-linear models (Multi-layer Perceptrons (MLPs)). As demonstrated in Tab. 1, the probability of flipping the sign in MLPs is very similar to that of the linear model, since it only depends on the predictions and all models have learned reasonable decision boundaries. The probability of getting\nnegative 〈gn, gm〉 is also similar to the linear model, except in the case of S1 ∪ S2 for negative pairs, in which the MLP with ReLU gets much fewer negative 〈gn, gm〉. As the MLP with tanh activations is still consistent with the linear model in this case, we attribute the difference to the representations always being positive due to ReLU activations. These results demonstrate that non-linear models exhibit similar behaviors to linear models that mostly align with the theorems.\nSince only negative 〈gn, gm〉 may cause conflicts, reducing the diversity of gradients hence relies on reducing negative 〈gn, gm〉. We consider reducing negative 〈gn, gm〉 in two ways: 1) minimizing the representation inner product of negative pairs, which pushes the inner product to be negative or zero (for positive representations); 2) optimizing the predictions to decrease the probability of flipping the sign. In this sense, decreasing the representation similarity of negative pairs might help in both ways. In addition, according to Fig. 2b, x ∼ S3 gets larger prediction similarity than x ∼ S0 because the predictions put most probability mass on both classes of a pair, which indicates that decreasing the similarity of predictions may decrease the probability of flipping the sign. Hence, we include logits in the representations. We verify this idea by training two binary classifiers for two groups of MNIST classes ({0, 1} and {7, 9}). The classifiers have two hidden layers, each with 100 hidden units and ReLU activations. We randomly chose 100 test samples from each group to compute the pairwise cosine similarities. Representations are obtained by concatenating the outputs of all layers (including logits) of the neural network; gradients are computed over all parameters of the model. We display the similarities in Figs. 3a and 3b. The correlation coefficients between the gradient and representation similarities of negative pairs are -0.86 and -0.85; those of positive pairs are 0.71 and 0.79. In all cases, the similarities of representations show strong correlations with the similarities of gradients. The classifier for classes 0 and 1 gets smaller representation similarities and much fewer negative gradient similarities for negative pairs (blue dots), and it also attains a higher accuracy than the other classifier (99.95% vs. 96.25%), which illustrates the potential of reducing the gradient diversity by decreasing the representation similarity of negative pairs." }, { "heading": "2.2 CONNECTING DEEP METRIC LEARNING TO CONTINUAL LEARNING", "text": "Reducing the representation similarity between classes shares the same concept as learning larger margins, which has been an active research area for a few decades. For example, Kernel Fisher Discriminant analysis (KFD) (Mika et al., 1999) and distance metric learning (Weinberger et al., 2006) aim to learn kernels that can obtain larger margins in an implicit representation space, whereas Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) leverages deep neural networks to learn embeddings that maximize margins in an explicit representation space.
In this sense, DML has the potential to help with reducing the diversity of gradients in continual learning.\nHowever, the usual concepts in DML may not be entirely appropriate for continual learning, as they also aim at learning compact representations within classes (Schroff et al., 2015; Wang et al., 2017; Deng et al., 2019). In continual learning, information unused by the current task might be important for a future task; e.g., in the experiments of Fig. 1 the y-dimension is not useful for task 1 but is useful for task 2. This indicates that learning compact representations in a current task might omit dimensions of the representation space that are important for a future task. In this case, even if we store diverse samples in the memory, the learned representations may be difficult to generalize to future tasks, as the omitted dimensions can only be relearned from the limited samples in the memory. We demonstrate this by training a model with and without L1 regularization on the first two tasks of split-MNIST and split-Fashion-MNIST. The results are shown in Tab. 2. We see that with L1 regularization the model learns much more compact representations and gives similar performance on task 1 but much worse performance on task 2 compared to training without L1 regularization. The results suggest that continual learning shares the interest of maximizing margins with DML but prefers a less compact representation space to preserve necessary information for future tasks. We suggest the opposite treatment of within-class compactness: minimizing the similarities within the same class to obtain a less compact representation space. Roth et al. (2020) proposed a ρ-spectrum metric to measure the information entropy contained in the representation space (details are provided in Appx. D) and introduced a ρ-regularization method to restrain over-compression of representations. The ρ-regularization method randomly replaces negative pairs by positive pairs with a pre-selected probability pρ. Nevertheless, switching pairs is inefficient and may be detrimental to performance in an online setting because some negative pairs may never be learned in this way. Thus, we propose a different way to restrain the compression of representations, which will be introduced in the following." }, { "heading": "3 DISCRIMINATIVE REPRESENTATION LOSS", "text": "Based on our findings in the above section, we propose an auxiliary objective, Discriminative Representation Loss (DRL), for classification tasks in continual learning, which is straightforward, robust, and efficient. Instead of explicitly re-projecting gradients during the training process, DRL helps with decreasing gradient diversity by optimizing the representations. As defined in Eq. (2), DRL consists of two parts: one is for minimizing the similarities of representations from different classes (Lbt), which can reduce the diversity of gradients from different classes; the other is for minimizing the similarities of representations from the same class (Lwi), which helps preserve discriminative information for future tasks in continual learning.\nmin_Θ L_DRL = min_Θ (Lbt + α·Lwi), α > 0,\nLbt = (1/Nbt) ∑_{i=1}^{B} ∑_{j≠i, yj≠yi} 〈hi, hj〉, Lwi = (1/Nwi) ∑_{i=1}^{B} ∑_{j≠i, yj=yi} 〈hi, hj〉, (2)\nwhere Θ denotes the parameters of the model and B is the training batch size. Nbt and Nwi are the numbers of negative and positive pairs, respectively. α is a hyperparameter controlling the strength of Lwi, hi is the representation of xi, and yi is the label of xi.
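A minimal PyTorch sketch of Eq. (2), assuming `h` is a batch of (concatenated) representations and `y` holds integer class labels; the function name and masking details are ours, not the authors' released code.

```python
import torch

def drl_loss(h, y, alpha=1.0):
    """Discriminative Representation Loss, Eq. (2).
    h: (B, H) batch of representations, y: (B,) integer labels."""
    sim = h @ h.t()                            # pairwise inner products <h_i, h_j>
    same = y.unsqueeze(0) == y.unsqueeze(1)    # (B, B) label-equality mask
    off_diag = ~torch.eye(len(y), dtype=torch.bool, device=y.device)
    neg_mask = ~same                           # negative pairs (different classes)
    pos_mask = same & off_diag                 # positive pairs (same class, i != j)
    l_bt = sim[neg_mask].sum() / neg_mask.sum().clamp(min=1)
    l_wi = sim[pos_mask].sum() / pos_mask.sum().clamp(min=1)
    return l_bt + alpha * l_wi
```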
The final loss function combines the commonly used softmax cross-entropy loss for classification tasks (L) with DRL (L_DRL) as shown in Eq. (3),\nL̂ = L + λ·L_DRL, λ > 0, (3)\nwhere λ is a hyperparameter controlling the strength of L_DRL, which is larger for increased resistance to forgetting and smaller for greater elasticity. We provide experimental results to verify the effects of DRL and an ablation study on Lbt and Lwi (Tab. 7) in Appx. E, according to which Lbt and Lwi have shown effectiveness in improving forgetting and the ρ-spectrum, respectively. We will show the correlation between the ρ-spectrum and model performance in Sec. 5.\nThe computational complexity of DRL is O(B²H), where B is the training batch size and H is the dimension of representations. B is small (10 or 20 in our experiments) and commonly H ≪ W, where W is the number of network parameters. In comparison, the computational complexities of A-GEM and GSS-greedy are O(Br·W) and O(B·Bm·W), respectively, where Br is the reference batch size in A-GEM and Bm is the memory batch size in GSS-greedy. The computational complexity discussed here is additional to the cost of common backpropagation. We compare the training time of all methods on MNIST tasks in Tab. 9 in Appx. H, which shows that the representation-based methods require much lower computational cost than gradient-based approaches." }, { "heading": "4 ONLINE MEMORY UPDATE AND BALANCED EXPERIENCE REPLAY", "text": "We follow the online setting of continual learning as was done for other gradient-based approaches with episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Aljundi et al., 2019), in which the model is trained with only one epoch on the training data.\nWe update the episodic memories by the basic ring buffer strategy: keep the last nc samples of class c in the memory buffer, where nc is the memory size of a seen class c. We deploy episodic memories with a fixed size, implying a fixed budget for the memory cost. Further, we maintain a uniform distribution over all seen classes in the memory. The buffer may not be evenly allocated to each class before enough samples are acquired for newly arriving classes. We show pseudo-code of the memory update strategy in Alg. 1 in Appx. B for a clearer explanation. For class-incremental learning, this strategy can work without knowing task boundaries. Since DRL and DML methods depend on the pairwise similarities of samples, we would prefer the training batch to include as wide a variety of different classes as possible to obtain sufficient discriminative information. Hence, we adjust the Experience Replay (ER) strategy (Chaudhry et al., 2019b) for the needs of such methods. The idea is to uniformly sample from seen classes in the memory buffer to form a training batch, so that this batch can contain as many seen classes as possible. Moreover, we ensure the training batch includes at least one positive pair of each selected class (minimum 2 samples in each class) to enable the loss terms computed from positive pairs. In addition, we also ensure the training batch includes at least one class from the current task. We call this Balanced Experience Replay (BER). The pseudo-code is in Alg. 2 of Appx. B. Note that we update the memory and form the training batch based on the task ID instead of the class ID for instance-incremental tasks (e.g. permuted MNIST tasks), as in this case each task always includes the same set of classes."
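Eq. (3) then combines the classification loss with DRL in a standard training step. A sketch, assuming a model that returns both logits and representations, and reusing `drl_loss` from the snippet above:

```python
import torch.nn.functional as F

def training_step(model, optimizer, x, y, lam=0.1, alpha=1.0):
    """One step on L-hat = L + lambda * L_DRL (Eq. 3). `model` is assumed to
    return (logits, representations); lam and alpha are hyperparameters."""
    logits, h = model(x)
    loss = F.cross_entropy(logits, y) + lam * drl_loss(h, y, alpha)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```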
}, { "heading": "5 EXPERIMENTS", "text": "In this section we evaluate our methods on multiple benchmark tasks by comparing with several baseline methods in the setting of online continual learning.\nBenchmark tasks: We have conducted experiments on the following benchmark tasks: Permuted MNIST (10 tasks and each task includes the same 10 classes with different permutation of features), Split MNIST, Split Fashion-MNIST, and Split CIFAR-10 (all three having 5 tasks with two classes in each task), Split CIFAR-100 (10 tasks with 10 classes in each task), Split TinyImageNet (20 tasks with 10 classes in each task). All split tasks include disjoint classes. For tasks of MNIST (LeCun et al., 2010) and Fashion-MNIST (Xiao et al., 2017), the training size is 1000 samples per task, for CIFAR10 (Krizhevsky et al., 2009) the training size is 3000 per task, for CIFAR-100 and TinyImageNet (Le & Yang, 2015) it is 5000 per task. N.B.: We use single-head (shared output) models in all of our experiments, meaning that we do not require a task identifier at testing time. Such settings are more difficult for continual learning but more practical in real applications.\nBaselines: We compare our methods with: two gradient-based approaches (A-GEM (Chaudhry et al., 2019a) and GSS-greedy (Aljundi et al., 2019)), two standalone experience replay methods (ER (Chaudhry et al., 2019b) and BER), two SOTA methods of DML (Multisimilarity (Wang et al., 2019) and R-Margin (Roth et al., 2020)). We also trained a single task over all classes with one epoch for all benchmarks which performance can be viewed as a upper bound of each benchmark. N.B.: We deploy the losses of Multisimilarity and R-Margin as auxiliary objectives as the same as DRL\nbecause using standalone such losses causes difficulties of convergence in our experimental settings. We provide the definitions of these two losses in Appx. D.\nPerformance measures: We use the Average accuracy, Average forgetting, Average intransigence to evaluate the performance of all methods, the definition of these measures are provided in Appx. C\nExperimental settings: We use the vanilla SGD optimizer for all experiments without any scheduling. For tasks on MNIST and Fashion-MNIST, we use a MLP with two hidden layers and ReLU activations, and each layer has 100 hidden units. For tasks on CIFAR datasets and TinyImageNet, we use the same reduced Resnet18 as used in Chaudhry et al. (2019a). All networks are trained from scratch without regularization scheme. For the MLP, representations are the concatenation of outputs of all layers including logits; for reduced Resnet18, representations are the concatenation of the input of the final linear layer and output logits. We concatenate outputs of all layers as we consider they behave like different levels of representation, and when higher layers (layers closer to the input) generate more discriminative representations it would be easier for lower layers to learn more discriminative representations as well. This method also improves the performance of MLPs. For reduced ResNet18 we found that including outputs of all hidden layers performs almost the same as only including the final representations, so we just use the final layer for lower computational cost. We deploy BER as the replay strategy for DRL, Multisimilarity, and R-Margin. The memory size for tasks on MNIST and Fashion-MNIST is 300 samples. For tasks on CIFAR-10 and CIFAR-100 the memory size is 2000 and 5000 samples, respectively. For TinyImageNet it is also 5000 samples. 
The standard deviations shown in all results are evaluated over 10 runs with different random seeds. We use 10% of the training set as a validation set for choosing hyperparameters by cross-validation. More details of the experimental settings and hyperparameters are given in Appx. I.\nTabs. 3 to 5 give the average accuracy, forgetting, and intransigence of all methods on all benchmark tasks, respectively. As we can see, forgetting and intransigence often conflict with each other, which is a common phenomenon in continual learning. Our method DRL is able to achieve a better trade-off between them and thus outperforms other methods on most benchmark tasks in terms of average accuracy. This could be because DRL facilitates good intransigence and ρ-spectrum through Lwi and good forgetting through Lbt. In DRL the two terms are complementary to each other, and combining them brings benefits on both sides (an ablation study on the two terms is provided in Appx. E). According to Tabs. 4 and 5, Multisimilarity obtains better avg. intransigence and similar avg. forgetting on CIFAR-10 compared with DRL, which indicates that Multisimilarity learns better representations for generalizing to new classes in this case. Roth et al. (2020) also suggest that Multisimilarity is a very strong baseline in deep metric learning, outperforming their proposed R-Margin on several datasets. We use the hyperparameters of Multisimilarity recommended in Roth et al. (2020), which generally perform well on multiple complex datasets. TinyImageNet gets much worse performance than the other benchmarks because it has more classes (200), a longer task sequence (20 tasks), and a larger feature space (64 × 64 × 3); the accuracy of the single task on it is just about 17.8%. According to Tab. 3, a longer task sequence, more classes, and a larger feature space all increase the gap between the performance of the single task and continual learning.\nAs shown in Tab. 6, the ρ-spectrum shows a high correlation with average accuracy on most benchmarks, since it may help with learning new decision boundaries across tasks. Split MNIST shows a low correlation between the ρ-spectrum and avg. accuracy because the ρ-spectrum highly correlates with avg. intransigence and consequently affects avg. forgetting in the opposite direction, causing a cancellation of effects on avg. accuracy. In addition, we found that GSS often obtains a smaller ρ than other methods without getting better performance. In general, a smaller ρ-spectrum is better because it indicates that the representations are more informative. However, it may be detrimental to performance when ρ is too small, as the learned representations are then too noisy. DRL is more robust to this issue because ρ remains relatively stable once α exceeds a certain value, as shown in Fig. 4c in Appx. E." }, { "heading": "6 CONCLUSION", "text": "The two fundamental problems of continual learning with small episodic memories are: (i) how to make the best use of episodic memories; and (ii) how to construct the most representative episodic memories. Gradient-based approaches have shown that the diversity of gradients computed on data from different tasks is key to generalization over these tasks. In this paper we demonstrate that the most diverse gradients come from samples that are close to class boundaries. We formally connect the diversity of gradients to the discriminativeness of representations, which leads to an alternative way to reduce the diversity of gradients in continual learning.
We subsequently exploit ideas from DML for learning more discriminative representations, and furthermore identify the shared and differing interests between continual learning and DML. In continual learning we prefer larger margins between classes, just as in DML. The difference is that continual learning requires less compact representations for better compatibility with future tasks. Based on these findings, we provide a simple yet efficient approach to solving the first problem listed above. Our findings also shed light on the second problem: it would be better for the memorized samples to preserve as much variance as possible. In most of our experiments, randomly chosen samples outperform those selected by gradient diversity (GSS) due to the limit on memory size in practice. It could be helpful to select memorized samples by separately considering the representativeness of inter- and intra-class samples, i.e., those representing margins and edges. We leave this for future work." }, { "heading": "A PROOF OF THEOREMS", "text": "Notations: Let L represent the softmax cross-entropy loss, W ∈ R^{D×K} is the weight matrix of the linear model, xn ∈ R^D denotes the input data, yn ∈ R^K is a one-hot vector that denotes the label of xn, D is the dimension of representations, and K is the number of classes. Let pn = softmax(on), where on = Wᵀxn, and the gradient gn = ∇W L(xn, yn; W). xn, xm are two different samples when n ≠ m.\nLemma 1. Let εn = pn − yn. Then 〈gn, gm〉 = 〈xn, xm〉〈εn, εm〉.\nProof. Let ε′n = ∂L(xn, yn; W)/∂on. By the chain rule, we have:\n〈gn, gm〉 = 〈xn, xm〉〈ε′n, ε′m〉.\nBy the definition of L, we can find:\nε′n = pn − yn. (4)\nTheorem 1. Suppose yn ≠ ym, and let cn denote the class index of xn (i.e. yn,cn = 1, yn,i = 0 ∀i ≠ cn). Let α ≜ ||pn||² + ||pm||², β ≜ pn,cm + pm,cn and δ ≜ ||pn − pm||₂². Then:\nPr(sign(〈gn, gm〉) = sign(−〈xn, xm〉)) = Pr(2β > α − δ).\nProof. According to Lemma 1 and yn ≠ ym, we have\n〈ε′n, ε′m〉 = 〈pn, pm〉 − pn,cm − pm,cn.\nAnd\n〈pn, pm〉 = (1/2)(||pn||² + ||pm||² − ||pn − pm||²) = (1/2)(α − δ),\nwhich gives 〈ε′n, ε′m〉 = (1/2)(α − δ) − β. When 2β > α − δ, we must have 〈ε′n, ε′m〉 < 0. According to Lemma 1, we prove this theorem.\nTheorem 2. Suppose yn = ym. When 〈gn, gm〉 ≠ 0, we have:\nsign(〈gn, gm〉) = sign(〈xn, xm〉).\nProof. Because ∑_{k=1}^{K} pn,k = 1, pn,k ≥ 0 ∀k, and cn = cm = c,\n〈ε′n, ε′m〉 = ∑_{k≠c} pn,k·pm,k + (pn,c − 1)(pm,c − 1) ≥ 0. (5)\nAccording to Lemma 1, we prove the theorem." }, { "heading": "B ALGORITHMS OF ONLINE MEMORY UPDATE", "text": "We provide the details of the online ring buffer update and Balanced Experience Replay (BER) in Algs. 1 to 3. We directly load new data batches into the memory buffer without a separate buffer for the current task. The memory buffer works like a sliding window for each class in the data stream, and we draw training batches from the memory buffer instead of directly from the data stream. In this case, a sample may be seen more than once as long as it stays in the memory buffer. This strategy is a more efficient use of the memory when |B| < nc, where |B| is the loading batch size of the data stream (i.e. the number of new samples added into the memory buffer at each iteration); we set |B| to 1 in all experiments (see Appx. I for a discussion of this).
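Before the formal pseudo-code of Algs. 1 to 3 below, here is a runnable Python sketch of the ring buffer update of Alg. 1, assuming the memory is kept as a dict from class label to a list of samples; eviction repeatedly drops the oldest sample of the currently largest class until the total size is back to K:

```python
from collections import defaultdict

def ring_buffer_update(memory, batch, K):
    """Alg. 1 sketch: memory is {class: [samples]}, batch is [(x, c), ...],
    K is the fixed total buffer size."""
    for x, c in batch:
        memory[c].append(x)                  # append new samples per class
    overflow = sum(len(v) for v in memory.values()) - K
    while overflow > 0:
        c_max = max(memory, key=lambda c: len(memory[c]))
        memory[c_max].pop(0)                 # evict the oldest sample of the largest class
        overflow -= 1
    return memory

# usage: memory = defaultdict(list); ring_buffer_update(memory, [(img, label)], K=300)
```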
Algorithm 1 Ring Buffer Update with Fixed Buffer Size\nInput: Bt - current data batch of the data stream, Ct - the set of classes in Bt, M - memory buffer, C - the set of classes in M, K - memory buffer size.\nfor c in Ct do\n  Get Bt,c - samples of class c in Bt, and Mc - samples of class c in M\n  if c in C then Mc = Mc ∪ Bt,c\n  else Mc = Bt,c, C = C ∪ {c}\n  end if\nend for\nR = |M| + |B| − K\nwhile R > 0 do\n  c′ = arg max_c |Mc|\n  remove the first (oldest) sample in Mc′, R = R − 1\nend while\nreturn M\nAlgorithm 2 Balanced Experience Replay\nInput: M - memory buffer, C - the set of classes in M, B - training batch size, Θ - model parameters, LΘ - loss function, Bt - current data batch from the data stream, Ct - the set of classes in Bt, K - memory buffer size.\nM ← MemoryUpdate(Bt, Ct, M, C, K)\nnc, Cs, Cr ← ClassSelection(Ct, C, B)\nBtrain = ∅\nfor c in Cs do\n  if c in Cr then mc = nc + 1 else mc = nc end if\n  Get Mc - samples of class c in M; Bc ∼ Mc ▷ sample mc samples from Mc\n  Btrain = Btrain ∪ Bc\nend for\nΘ ← Optimizer(Btrain, Θ, LΘ)\nAlgorithm 3 Class Selection for BER\nInput: Ct - the set of classes in current data batch Bt, C - the set of classes in M, B - training batch size, mp - minimum number of positive pairs of each selected class (mp ∈ {0, 1}).\nBtrain = ∅, nc = ⌊B/|C|⌋, rc = B mod |C|\nif nc > 1 or mp == 0 then\n  Cr ∼ C ▷ sample rc classes from all seen classes without replacement\n  Cs = C\nelse\n  Cr = ∅, nc = 2, ns = ⌊B/2⌋ − |Ct| ▷ we ensure the training batch includes samples from the current task\n  Cs ∼ (C − Ct) ▷ sample ns classes from all seen classes except classes in Ct\n  Cs = Cs ∪ Ct\n  if B mod 2 > 0 then\n    Cr ∼ Cs ▷ sample one class in Cs to have an extra sample\n  end if\nend if\nReturn: nc, Cs, Cr" }, { "heading": "C DEFINITION OF PERFORMANCE MEASURES", "text": "We use the following measures to evaluate the performance of all methods:\nAverage accuracy, which is evaluated after learning all tasks: ā_t = (1/t) ∑_{i=1}^{t} a_{t,i}, where t is the index of the latest task and a_{t,i} is the accuracy on task i after learning task t.\nAverage forgetting (Chaudhry et al., 2018), which measures the average accuracy drop of all tasks after learning the whole task sequence: f̄_t = (1/(t−1)) ∑_{i=1}^{t−1} max_{j∈{i,...,t−1}} (a_{j,i} − a_{t,i}).\nAverage intransigence (Chaudhry et al., 2018), which measures the inability of a model to learn new tasks: Ī_t = (1/t) ∑_{i=1}^{t} (a*_i − a_i), where a_i is the accuracy on task i at time i. We use the best accuracy among all compared models as a*_i instead of the accuracy obtained by an extra model that is solely trained on task i." }, { "heading": "D RELATED METHODS FROM DML", "text": "ρ-spectrum metric (Roth et al., 2020): ρ = KL(U || S_ΦX), which is proposed to measure the information entropy contained in the representation space. The ρ-spectrum computes the KL-divergence between a discrete uniform distribution U and the spectrum of data representations S_ΦX, where S_ΦX contains the normalized and sorted singular values of Φ(X), Φ denotes the representation extractor (e.g. a neural network), and X is the input data. Lower values of ρ indicate higher variance of the representations and hence more information entropy retained.\nMultisimilarity (Wang et al., 2019): we adopt the loss function of Multisimilarity as an auxiliary objective in classification tasks of continual learning; the batch mining process is omitted because we use labels for choosing positive and negative pairs. So the loss function is L̂ = L + λ·L_multi, and:\nL_multi = (1/B) ∑_{i=1}^{B} { (1/α) log[1 + ∑_{j≠i, yj=yi} exp(−α(s_c(hi, hj) − γ))] + (1/β) log[1 + ∑_{yj≠yi} exp(β(s_c(hi, hj) − γ))] } (6)\nwhere s_c(·, ·) is the cosine similarity and α, β, γ are hyperparameters. In all of our experiments we set α = 2, β = 40, γ = 0.5, following Roth et al. (2020).
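A PyTorch sketch of the Multisimilarity auxiliary loss in Eq. (6), assuming `h` are representations and `y` integer labels, with cosine similarity as s_c; this mirrors the equation above rather than the reference implementation of Wang et al. (2019):

```python
import torch
import torch.nn.functional as F

def multisimilarity_loss(h, y, alpha=2.0, beta=40.0, gamma=0.5):
    """Eq. (6): h is (B, H) representations, y is (B,) integer labels."""
    s = F.normalize(h, dim=1) @ F.normalize(h, dim=1).t()  # cosine similarities
    B = len(y)
    same = y.unsqueeze(0) == y.unsqueeze(1)
    eye = torch.eye(B, dtype=torch.bool, device=y.device)
    loss = h.new_zeros(())
    for i in range(B):
        pos = same[i] & ~eye[i]     # positive pairs of sample i (j != i)
        neg = ~same[i]              # negative pairs of sample i
        pos_term = torch.log1p(torch.exp(-alpha * (s[i][pos] - gamma)).sum()) / alpha
        neg_term = torch.log1p(torch.exp(beta * (s[i][neg] - gamma)).sum()) / beta
        loss = loss + pos_term + neg_term
    return loss / B
```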
R-Margin (Roth et al., 2020): we similarly deploy R-Margin for continual learning as above, which uses the Margin loss (Wu et al., 2017) with the ρ-regularization (Roth et al., 2020) introduced in Sec. 2.2. So the loss function is L̂ = L + λ·L_margin, and:\nL_margin = ∑_{i=1}^{B} ∑_{j=1}^{B} [γ + I_{j≠i, yj=yi}·(d(hi, hj) − β) − I_{yj≠yi}·(d(hi, hj) − β)] (7)\nwhere d(·, ·) is the Euclidean distance, β is a trainable variable, and γ is a hyperparameter. We follow the settings in Roth et al. (2020): γ = 0.2 and β is initialized to 0.6. We set pρ = 0.2 in the ρ-regularization." }, { "heading": "E ABLATION STUDY ON DRL", "text": "We verify the effects of L_DRL by training a model with/without L_DRL on Split-MNIST tasks: Fig. 4a shows that L_DRL notably reduces the similarities of representations from different classes while making representations from the same class less similar; Fig. 4b shows the analogous effect on gradients from different classes and the same class. Fig. 4c demonstrates that increasing α can effectively decrease the ρ-spectrum to a low level, where lower values of ρ indicate higher variance of the representations and hence more information entropy retained.\nTab. 7 provides the results of an ablation study on the effects of the two terms in DRL. In general, Lbt gets better performance in terms of forgetting, Lwi gets better performance in terms of intransigence and a lower ρ-spectrum, and both of them show improvements over BER (without any regularization terms). Overall, combining the two terms obtains better performance on forgetting than standalone Lbt and keeps the advantage on intransigence brought by Lwi. This indicates that preventing over-compact representations while maximizing margins can yield representations that generalize more easily over previous and new tasks. In addition, we found that with standalone Lbt we can only use a smaller λ, otherwise the gradients explode; using Lwi together stabilizes the gradients. We notice that a lower ρ-spectrum does not necessarily lead to higher accuracy, as its correlation coefficient with accuracy depends on the dataset and is usually larger than -1.\nFigure 4: Effects of L_DRL on reducing the diversity of gradients and the ρ-spectrum. (a) Similarities of representations with and without L_DRL; (b) similarities of gradients with and without L_DRL; (c) relation between α and the ρ-spectrum. sDRh and sh denote similarities of representations with and without L_DRL, respectively; sDRg and sg denote similarities of gradients with and without L_DRL, respectively. Panel (c) demonstrates that increasing α in L_DRL can reduce ρ effectively.
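The ρ-spectrum used throughout the ablation can be computed directly from its definition in Appx. D. A NumPy sketch, assuming `features` stacks the representations Φ(X) row-wise:

```python
import numpy as np

def rho_spectrum(features):
    """features: (N, H) matrix of representations Phi(X).
    Returns KL(U || S) between a discrete uniform distribution and the
    normalized, sorted singular-value spectrum."""
    s = np.linalg.svd(features, compute_uv=False)
    s = np.sort(s)[::-1]
    p = s / s.sum()                    # normalized spectrum
    u = np.full_like(p, 1.0 / len(p))  # discrete uniform distribution
    return float(np.sum(u * np.log(u / (p + 1e-12))))
```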
Meanwhile, the fixed memory size (M = 300) gets much better performance than M = 50 per task in most tasks of Permuted MNIST and it takes less cost of the memory after task 6. Since the setting of fixed memory size takes larger memory buffer in early tasks, the results indicate better generalization of early tasks can benefit later tasks, especially for more homogeneous tasks such as Permuted MNIST. The results also align with findings about Reservoir sampling (which also has fixed buffer size) in Chaudhry et al. (2019b) and we also believe a hybrid memory strategy can obtain better performance as suggested in Chaudhry et al. (2019b)." }, { "heading": "G COMPARING DIFFERENT REPLAY STRATEGY", "text": "We compare DRL with different memory replay strategies in Tab. 8 to show DRL has general improvement based on the applied replay strategy." }, { "heading": "H COMPARING TRAINING TIME", "text": "Tab. 9 compares the training time of MNIST tasks. All representation-based methods are much faster than gradient-based methods and close to the replay-based methods." }, { "heading": "I HYPER-PARAMETERS IN EXPERIMENTS", "text": "To make a fair comparison of all methods, we use following settings: i) The configurations of GSS-greedy are as suggested in Aljundi et al. (2019), with batch size set to 10 and each batch receives 10 iterations. ii) For the other methods, we use the ring buffer memory as described in Alg. 1, the loading batch size is set to 1, following with one iteration, the training batch size is provided in Tab. 10. More hyperparameters are given in Tab. 10 as well.\nIn the setting of limited training data in online continual learning, we either use a small batch size or iterate on one batch several times to obtain necessary steps for gradient optimization. We chose a small batch size with one iteration instead of larger batch size with multiple iterations because by our memory update strategy (Alg. 1) it achieves similar performance with fewer hyperparameters. Since GSS-greedy has a different strategy for updating memories, we leave it at its default settings.\nRegarding the two terms in DRL, a larger weight on Lwi is for less compact representations within classes, but a too dispersed representation space may include too much noise. For datasets that present more difficulty in learning compact representations, we would prefer a smaller weight on Lwi, we therefore set smaller α for CIFAR datasets in our experiments. A larger weight on Lbt is more resistant to forgetting but may be less capable of transferring to a new task, for datasets that are less compatible between tasks a smaller weight on Lbt would be preferred, as we set the largest λ on Permuted MNIST and the smallest λ on CIFAR-100 in our experiments." } ]
2,020
SP:09f2fe6a482bbd6f9bd2c62aa841f995171ba939
[ "This paper proposes a new framework that computes the task-specific representations to modulate the model parameters during the multi-task learning (MTL). This framework uses a single model with shared representations for learning multiple tasks together. Also, explicit task information may not be always available, in such cases the proposed framework is useful. The proposed framework is evaluated on various datasets spanning multiple modalities, where the MTL model even achieves state-of-the-art results on some datasets. " ]
[]
2,020
SP:a1e2218e6943bf138aeb359e23628676b396ed66
[ "This work proposes a deep reinforcement learning-based optimization strategy to the fuel optimization problem for the hybrid electric vehicle. The problem has been formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A continuous time representation of the problem is also used compared to conventional techniques which mostly use a discrete time formulation. " ]
This paper deals with the fuel optimization problem for hybrid electric vehicles in a reinforcement learning framework. Firstly, considering the hybrid electric vehicle as a completely observable non-linear system with uncertain dynamics, we solve an open-loop deterministic optimization problem to determine a nominal optimal state. This is followed by the design of a deep reinforcement learning based optimal controller for the non-linear system using a concurrent learning based system identifier, such that the actual states and the control policy are able to track the optimal state and optimal policy autonomously, even in the presence of external disturbances, modeling errors, uncertainties and noise, while significantly reducing the computational complexity at the same time. This is in sharp contrast to conventional methods like PID and Model Predictive Control (MPC), as well as traditional RL approaches like ADP, DDP and DQN, which mostly depend on a set of pre-defined rules and provide sub-optimal solutions under similar conditions. The low value of the H-infinity (H∞) performance index of the proposed optimization algorithm addresses the robustness issue. The optimization technique thus proposed is compared with traditional fuel optimization strategies for hybrid electric vehicles to illustrate the efficacy of the proposed method.
[]
2,020
A ROBUST FUEL OPTIMIZATION STRATEGY FOR HYBRID ELECTRIC VEHICLES: A DEEP REINFORCEMENT LEARNING BASED CONTINUOUS TIME DESIGN APPROACH
SP:43e525fb3fa611df7fd44bd3bc9843e57b154c66
[ "This paper proposes 3 deep generative models based on VAEs (with different encoding schemes for RNA secondary structure) for the generation of RNA secondary structures. They test each model on 3 benchmark tasks: unsupervised generation, semi-supervised learning and targeted generation. This paper has many interesting contributions — a comparison of VAE models that use different RNA secondary structure encoding schemes, including traditional dot-bracket notation and a more complex hierarchical encoding, and they also introduce various decoding schemes to encourage valid secondary structures. " ]
Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence their cellular activities and functions. The design of large scale and complex biological structures spurs dedicated graph-based deep generative modeling techniques, which represents a key but underappreciated aspect of computational drug discovery. In this work, we investigate the principles behind representing and generating different RNA structural modalities, and propose a flexible framework to jointly embed and generate these molecular structures along with their sequence in a meaningful latent space. Equipped with a deep understanding of RNA molecular structures, our most sophisticated encoding and decoding methods operate on the molecular graph as well as the junction tree hierarchy, integrating strong inductive bias about RNA structural regularity and folding mechanism such that high structural validity, stability and diversity of generated RNAs are achieved. Also, we seek to adequately organize the latent space of RNA molecular embeddings with regard to the interaction with proteins, and targeted optimization is used to navigate in this latent space to search for desired novel RNA molecules.
[ { "affiliations": [], "name": "Zichao Yan" }, { "affiliations": [], "name": "William L. Hamilton" } ]
2,021
RNA SECONDARY STRUCTURES
SP:0bd749fe44c37b521bd40f701e1428890aaa9c95
[ "This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty lies in the relatively large scale, spanning three translation directions, four discourse phenomena, and 150-5000 data points per language and phenomenon. A relatively large number of systems from previous work is benchmarked on each test set, and agreement with human judgments is measured." ]
Despite increasing instances of machine translation (MT) systems including extrasentential context information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like BLEU are not expressive or sensitive enough to capture quality improvements or drops that are minor in size but significant in perception. We introduce the first of their kind MT benchmark testsets that aim to track and hail improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks, and evaluate several competitive baseline MT systems on the curated datasets. Surprisingly, we find that the complex context-aware models that we test do not improve discourse-related translations consistently across languages and phenomena. Our evaluation benchmark is available as a leaderboard at <dipbenchmark1.github.io>.
[ { "affiliations": [], "name": "MARKS FOR" }, { "affiliations": [], "name": "DISCOURSE PHENOMENA" } ]
[ { "authors": [ "Rachel Bawden", "Rico Sennrich", "Alexandra Birch", "Barry Haddow" ], "title": "Evaluating discourse phenomena in neural machine translation", "venue": null, "year": 2018 }, { "authors": [ "Peter Bourgonje", "Manfred Stede" ], "title": "The potsdam commentary corpus 2.2: Extending annotations for shallow discourse parsing", "venue": "In LREC,", "year": 2020 }, { "authors": [ "Marine Carpuat" ], "title": "One translation per discourse", "venue": "SEW@NAACL-HLT,", "year": 2012 }, { "authors": [ "Mauro Cettolo", "Niehues Jan", "Stüker Sebastian", "Luisa Bentivogli", "R. Cattoni", "Marcello Federico" ], "title": "The iwslt 2016 evaluation campaign", "venue": null, "year": 2016 }, { "authors": [ "Mauro Cettolo", "Marcello Federico", "Luisa Bentivogli", "Niehues Jan", "Stüker Sebastian", "Sudoh Katsuitho", "Yoshino Koichiro", "Federmann Christian" ], "title": "Overview of the iwslt 2017 evaluation campaign", "venue": null, "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Liane Guillou" ], "title": "Improving pronoun translation for statistical machine translation", "venue": "In EACL,", "year": 2012 }, { "authors": [ "Liane Guillou" ], "title": "Analysing lexical consistency in translation", "venue": "In Proceedings of the Workshop on Discourse in Machine Translation,", "year": 2013 }, { "authors": [ "Liane Guillou", "Christian Hardmeier" ], "title": "PROTEST: A test suite for evaluating pronouns in machine translation", "venue": "In Proceedings of the Tenth International Conference on Language Resources and Evaluation", "year": 2016 }, { "authors": [ "Liane Guillou", "Christian Hardmeier" ], "title": "Automatic reference-based evaluation of pronoun translation misses the point", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Najeh Hajlaoui", "Andrei Popescu-Belis" ], "title": "Assessing the accuracy of discourse connective translations: Validation of an automatic metric", "venue": "In CICLing,", "year": 2013 }, { "authors": [ "Christian Hardmeier", "Marcello Federico" ], "title": "Modelling pronominal anaphora in statistical machine translation", "venue": "In Proceedings of the 2010 International Workshop on Spoken Language Translation, IWSLT", "year": 2010 }, { "authors": [ "Hany Hassan", "Anthony Aue", "Chang Chen", "Vishal Chowdhary", "Jonathan R. Clark", "Christian Federmann", "Xuedong Huang", "Marcin Junczys-Dowmunt", "William Lewis", "Mu Li", "Shujie Liu", "T.M. Liu", "Renqian Luo", "Arul Menezes", "Tao Qin", "Frank Seide", "Xu Tan", "Fei Tian", "Lijun Wu", "Shuangzhi Wu", "Yingce Xia", "Dongdong Zhang", "Zhirui Zhang", "Ming Zhou" ], "title": "Achieving human parity on automatic chinese to english news", "venue": "translation. ArXiv,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Kyle P. 
Johnson", "Patrick Burns", "John Stewart", "Todd Cook" ], "title": "Cltk: The classical language toolkit, 2014–2020", "venue": "URL https://github.com/cltk/cltk", "year": 2020 }, { "authors": [ "Prathyusha Jwalapuram", "Shafiq Joty", "Irina Temnikova", "Preslav Nakov" ], "title": "Evaluating pronominal anaphora in machine translation: An evaluation measure and a test suite", "venue": "EMNLP-IJCNLP,", "year": 2019 }, { "authors": [ "Yunsu Kim", "Thanh Tran", "Hermann Ney" ], "title": "When and why is document-level context useful in neural machine translation? ArXiv", "venue": null, "year": 1910 }, { "authors": [ "Ekaterina Lapshinova-Koltunski", "Christian Hardmeier", "Pauline Krielke" ], "title": "ParCorFull: a parallel corpus annotated with full coreference", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018),", "year": 2018 }, { "authors": [ "Samuel Läubli", "Rico Sennrich", "Martin Volk" ], "title": "Has machine translation achieved human parity? a case for document-level evaluation", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Kazem Lotfipour-Saedi" ], "title": "Lexical cohesion and translation equivalence", "venue": null, "year": 1997 }, { "authors": [ "Sameen Maruf", "Gholamreza Haffari" ], "title": "Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Thomas Meyer", "Andrei Popescu-Belis", "N. Hajlaoui", "Andrea Gesmundo" ], "title": "Machine translation of labeled discourse connectives", "venue": "AMTA", "year": 2012 }, { "authors": [ "Lesly Miculicich", "Dhananjay Ram", "Nikolaos Pappas", "James Henderson" ], "title": "Document-level neural machine translation with hierarchical attention networks", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Lesly Miculicich Werlen", "Andrei Popescu-Belis" ], "title": "Validation of an automatic metric for the accuracy of pronoun translation (APT)", "venue": "In Proceedings of the Third Workshop on Discourse in Machine Translation,", "year": 2017 }, { "authors": [ "Han Cheol Moon", "Tasnim Mohiuddin", "Shafiq R. Joty", "Xiaofei Chi" ], "title": "A unified neural coherence model", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Jane Morris", "Graeme Hirst" ], "title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text", "venue": "Computational Linguistics,", "year": 1991 }, { "authors": [ "Maria Nadejde", "Alexandra Birch", "Philipp Koehn" ], "title": "Proceedings of the first conference on machine translation, volume 1: Research papers. 
The Association for Computational Linguistics, 2016", "venue": null, "year": 2016 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Emily Pitler", "Ani Nenkova" ], "title": "Revisiting readability: A unified framework for predicting text quality", "venue": "In EMNLP,", "year": 2008 }, { "authors": [ "M. Popel", "M. Tomková", "J. Tomek", "Łukasz Kaiser", "Jakob Uszkoreit", "Ondrej Bojar", "Z. Žabokrtský" ], "title": "Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals", "venue": "Nature Communications,", "year": 2020 }, { "authors": [ "Rashmi Prasad", "Bonnie L. Webber", "Aravind K. Joshi" ], "title": "Reflections on the penn discourse treebank, comparable corpora, and complementary annotation", "venue": "Computational Linguistics,", "year": 2014 }, { "authors": [ "Rashmi Prasad", "Bonnie L. Webber", "Alan Lee" ], "title": "Annotation in the pdtb : The next generation", "venue": null, "year": 2018 }, { "authors": [ "Rico Sennrich" ], "title": "Why the time is ripe for discourse in machine translation", "venue": "http://homepages. 
inf.ed.ac.uk/rsennric/wnmt2018.pdf,", "year": 2018 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016).,", "year": 2016 }, { "authors": [ "Karin Sim Smith", "Wilker Aziz", "Lucia Specia" ], "title": "A proposal for a coherence corpus in machine translation", "venue": "In DiscoMT@EMNLP,", "year": 2015 }, { "authors": [ "Karin Sim Smith", "Wilker Aziz", "Lucia Specia" ], "title": "The trouble with machine translation coherence", "venue": "In Proceedings of the 19th Annual Conference of the European Association for Machine Translation,", "year": 2016 }, { "authors": [ "Swapna Somasundaran", "Jill Burstein", "Martin Chodorow" ], "title": "Lexical chaining for measuring discourse coherence quality in test-taker essays", "venue": "In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers,", "year": 2014 }, { "authors": [ "Jörg Tiedemann", "Yves Scherrer" ], "title": "Neural machine translation with extended context", "venue": "In Proceedings of the Third Workshop on Discourse in Machine Translation,", "year": 2017 }, { "authors": [ "Jörg Tiedemann" ], "title": "Parallel data, tools and interfaces in opus", "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12),", "year": 2012 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Elena Voita", "Pavel Serdyukov", "Rico Sennrich", "Ivan Titov" ], "title": "Context-aware neural machine translation learns anaphora resolution", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Elena Voita", "Pavel Serdyukov", "Rico Sennrich", "Ivan Titov" ], "title": "Context-aware neural machine translation learns anaphora resolution", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Elena Voita", "Rico Sennrich", "Ivan Titov" ], "title": "When a Good Translation is Wrong in Context: ContextAware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence,", "year": 2019 }, { "authors": [ "W. Wagner. Steven" ], "title": "bird, ewan klein and edward loper: Natural language processing with python, analyzing text with the natural language toolkit", "venue": "Language Resources and Evaluation,", "year": 2010 }, { "authors": [ "KayYen Wong", "Sameen Maruf", "Gholamreza Haffari" ], "title": "Contextual neural machine translation improves translation of cataphoric pronouns", "venue": "In ACL,", "year": 2020 }, { "authors": [ "H. Xiong", "Zhongjun He", "Hua Wu", "H. 
Wang" ], "title": "Modeling coherence for discourse neural machine translation", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Jiacheng Zhang", "Huanbo Luan", "Maosong Sun", "Feifei Zhai", "Jingfang Xu", "Min Zhang", "Yang Liu" ], "title": "Improving the transformer translation model with document-level context", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Y. Zhou", "N. Xue" ], "title": "The chinese discourse treebank: a chinese corpus annotated with discourse relations", "venue": "Language Resources and Evaluation,", "year": 2015 }, { "authors": [ "Michał Ziemski", "Marcin Junczys-Dowmunt", "Bruno Pouliquen" ], "title": "The united nations parallel corpus v1.0", "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016),", "year": 2016 } ]
2,020
DIP BENCHMARK TESTS: EVALUATION BENCHMARKS FOR DISCOURSE PHENOMENA
[ "The authors present a framework that uses a combination of VAE and GAN to recover private user images using Side channel analysis of memory access . A VAE-LP model first reconstructs a coarse image from side channel information which is reshaped and processed using a convolutional network. The output of the VAE-LP model is refined using a GAN to add fine details. Compelling results are demonstrated for recovery of private information and state of art metrics are reported. " ]
System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to infer program secrets based on observed side channel logs. Given the ever-growing adoption of machine learning as a service (MLaaS), image analysis software on cloud platforms has been exploited by reconstructing private user images from system side channels. Nevertheless, to date, SCA is still highly challenging, requiring technical knowledge of victim software’s internal operations. For existing SCA attacks, comprehending such internal operations requires heavyweight program analysis or manual efforts. This research proposes an attack framework to reconstruct private user images processed by media software via system side channels. The framework forms an effective workflow by incorporating convolutional networks, variational autoencoders, and generative adversarial networks. Our evaluation of two popular side channels shows that the reconstructed images consistently match user inputs, making privacy leakage attacks more practical. We also show surprising results that even one-bit data read/write pattern side channels, which are deemed minimally informative, can be used to reconstruct quality images using our framework.
[ { "affiliations": [], "name": "Yuanyuan Yuan" }, { "affiliations": [], "name": "Shuai Wang" }, { "affiliations": [], "name": "Junping Zhang" } ]
2,021
SP:7fb11c941e8d79248ce5ff7caa0535a466303395
[ "This paper proposes a method of learning ensembles that adhere to an \"ensemble version\" of the information bottleneck principle. Whereas the information bottleneck principle says the representation should avoid spurious correlations between the representation (Z) and the training data (X) that is not useful for predicting the labels (Y), i.e. I(X;Z) or I(X;Z|Y), this paper proposes that ensembles should additionally avoid spurious correlations between the ensemble members that aren't useful for predicting Y, i.e. I(Z_i; Z_j| Y). They show empirically that the coefficient on this term increases diversity at the expense of decreasing accuracy of individual members of the ensemble." ]
Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information, we introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features. The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant. Therefore, besides the classification loss with information bottleneck, we adversarially prevent features from being conditionally predictable from each other. We manage to reduce simultaneous errors while protecting class information. We obtain state-of-the-art accuracy results on CIFAR-10/100: for example, an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently. We further analyze the consequences on calibration, uncertainty estimation, out-of-distribution detection and online co-distillation.
[ { "affiliations": [], "name": "Alexandre Rame" } ]
2,021
SP:5561773ab024b083be4e362db079e371abf79653
[ "The paper proposed a new training framework, namely GSL, for novel content synthesis. And GSL enables learning of disentangled representations of tangible attributes and achieve novel image synthesis by recombining those swappable components under a zero-shot setting. The framework leverages the underlying semantic links across samples which could be instantiated as a multigraph. Cycle-consistent reconstruction loss as well as reconstruction loss are computed on synthetic samples from swapped latent representations." ]
Visual cognition of primates is superior to that of artificial neural networks in its ability to “envision” a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc. To aid neural networks in envisioning objects with different attributes, we propose a family of objective functions, expressed on groups of examples, as a novel learning framework that we term Group-Supervised Learning (GSL). GSL allows us to decompose inputs into a disentangled representation with swappable components that can be recombined to synthesize new samples. For instance, images of red boats & blue cars can be decomposed and recombined to synthesize novel images of red cars. We propose an implementation based on an auto-encoder, termed group-supervised zero-shot synthesis network (GZS-Net), trained with our learning framework, which can produce a high-quality red car even if no such example is witnessed during training. We test our model and learning framework on existing benchmarks, in addition to a new dataset that we open-source. We qualitatively and quantitatively demonstrate that GZS-Net trained with GSL outperforms state-of-the-art methods.
[ { "affiliations": [], "name": "Yunhao Ge" }, { "affiliations": [], "name": "Sami Abu-El-Haija" }, { "affiliations": [], "name": "Gan Xin" }, { "affiliations": [], "name": "Laurent Itti" } ]
[ { "authors": [ "Yuval Atzmon", "Gal Chechik" ], "title": "Probabilistic and-or attribute grouping for zero-shot learning", "venue": "In Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "A. Borji", "S. Izadi", "L. Itti" ], "title": "ilab-20m: A large-scale controlled object dataset to investigate deep learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in beta-vae", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "Ricky T.Q. Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yunhao Ge", "Jiaping Zhao", "Laurent Itti" ], "title": "Pose augmentation: Class-agnostic object pose transformation for object recognition", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": null, "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial networks", "venue": "In Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "I. Higgins", "L. Matthey", "A. Pal", "C. Burgess", "X. Glorot", "M. Botvinick", "S. Mohamed", "A. Lerchner" ], "title": "β-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Seunghoon Hong", "Dingdong Yang", "Jongwook Choi", "Honglak Lee" ], "title": "Inferring semantic layout for hierarchical text-to-image synthesis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "arXiv preprint arXiv:1802.05983,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Kevin Lai", "Liefeng Bo", "Xiaofeng Ren", "Dieter Fox" ], "title": "A large-scale hierarchical multi-view rgb-d object dataset", "venue": "In 2011 IEEE international conference on robotics and automation,", "year": 2011 }, { "authors": [ "C.H. 
Lampert" ], "title": "Learning to detect unseen object classes by between-class attribute transfer", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Oliver Langner", "Ron Dotsch", "Gijsbert Bijlstra", "Daniel HJ Wigboldus", "Skyler T Hawk", "AD Van Knippenberg" ], "title": "Presentation and validation of the radboud faces database", "venue": "Cognition and emotion,", "year": 2010 }, { "authors": [ "Nikos K. Logothetis", "Jon Pauls", "Tomaso Poggiot" ], "title": "Shape representation in the inferior temporal cortex of monkeys", "venue": "In Current Biology,", "year": 1995 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dsprites: Disentanglement testing sprites dataset", "venue": null, "year": 2017 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Luan Tran", "Xi Yin", "Xiaoming Liu" ], "title": "Disentangled representation learning gan for pose-invariant face recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Taihong Xiao", "Jiapeng Hong", "Jinwen Ma" ], "title": "Elegant: Exchanging latent encodings with gan for transferring multiple face attributes", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zhuoqian Yang", "Wentao Zhu", "Wayne Wu", "Chen Qian", "Qiang Zhou", "Bolei Zhou", "Chen Change Loy" ], "title": "Transmomo: Invariance-driven unsupervised video motion retargeting", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Han Zhang", "Tao Xu", "Hongsheng Li", "Shaoting Zhang", "Xiaogang Wang", "Xiaolei Huang", "Dimitris N Metaxas" ], "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In International Conference on Computer Vision", "year": 2017 } ]
2021
ZERO-SHOT SYNTHESIS WITH GROUP-SUPERVISED LEARNING
SP:9f70871f0111b58783f731748d8750c635998f32
[ "This paper presents an approach to learn goal conditioned policies by relying on self-play which sets the goals and discovers a curriculum of tasks for learning. Alice and Bob are the agents. Alice's task is to set a goal by following a number of steps in the environment and she is rewarded when the goal is too challenging for Bob to solve. Bob's task is to solve the task by trying to reproduce the end state of Alice's demonstration. As a result, the learned policy performs various tasks and can work in zero-shot settings." ]
We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. We rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. We show that this method can discover highly diverse and complex goals without any human priors. Bob can be trained with only sparse rewards, because the interaction between Alice and Bob results in a natural curriculum, and Bob can learn from Alice’s trajectory when it is relabeled as a goal-conditioned demonstration. Finally, our method scales, resulting in a single policy that can generalize to many unseen tasks such as setting a table, stacking blocks, and solving simple puzzles. Videos of a learned policy are available at https://robotics-self-play.github.io.
[]
2020
SP:038a1d3066f8273977337262e975d7a7aab5002f
[ "The paper introduces a theoretical framework for analyzing GNN transferability. The main idea is to view a graph as subgraph samples with the information of both the connections and the features. Based on this view, the authors define EGI score of a graph as a learnable function that needs to be optimized by maximizing the mutual information between the subgraph and the GNN output embedding of the center node. Then, the authors give an upper bound for the difference of EGI scores of two graphs based on the difference of eigenvalues of the graph Laplacian of the subgraph samples from the two graphs. The implication is that if the difference of the eigenvalues is small, then the EGI scores are similar, which means the GNN has a similar ability to encode the structure of the two graphs. ", "This paper develops a novel measure for assessing the transferability of graph neural network models to new data sets. The measure is based on a decomposition of graphs into 'ego networks' (essentially, a distribution of $k$-hop subgraph, extracted from a given larger graph). Transferability is then assessed by means of a spectral criterion using the graph Laplacian. Experiments demonstrate the utility in assessing transferability in such a manner, as the new measure appears to be aligned with improvements in predictive performance.", "This work aims to provide fundamental understanding towards the mechanism and transferability of GNNs, and develops an unsupervised GNN training objective based on their understanding. Novel theoretical analysis has been done to support the design of EGI and establish its transferability bound, while the effectiveness of EGI and the utility of the transferability bound are verified by extensive experiments. The whole story looks new, comprehensive and convincing to me." ]
Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work has started to study the pre-training of GNNs. However, none of it provides theoretical insights into the design of their frameworks, or clear requirements and guarantees for their transferability. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. Firstly, we propose a novel view of the essential graph information and advocate capturing it as the goal of transferable GNN training, which motivates the design of EGI (Ego-Graph Information maximization) to analytically achieve this goal. Secondly, when node features are structure-relevant, we conduct an analysis of EGI transferability regarding the difference between the local graph Laplacians of the source and target graphs. We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct transferring, while those on large-scale knowledge graphs show promising results in the more practical setting of transferring with fine-tuning.
[ { "affiliations": [], "name": "Qi Zhu" }, { "affiliations": [], "name": "Carl Yang" }, { "affiliations": [], "name": "Yidan Xu" }, { "affiliations": [], "name": "Haonan Wang" }, { "affiliations": [], "name": "Chao Zhang" }, { "affiliations": [], "name": "Jiawei Han" } ]
2022
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
SP:40cba7b6c04d7e44709baed351382c27fa89a129
[ "The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions on the data which admit a compact definition. Compact definitions are those that are formed by composition of a small number of predefined (prior) set of mathematical operations. Projection and lifting operations are defined to relate descriptions of partition cells to one another through rules. The quality of a description is measured by the divergence between the data and the (special) lifting of the rule set, under the constraint that rules satisfy an upper bound on their entropy." ]
Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize the signal, we need several disentangled rules arranged in a hierarchy, formalized by a lattice structure. ILL focuses on explainability and generalizability from “small data”, and aims for rules akin to those humans distill from experience (rather than a representation optimized for a specific task like classification). This paper focuses on a mathematical and algorithmic presentation of ILL, then demonstrates how ILL addresses the core question “what makes X an X” or “what makes X different from Y” to create effective, rule-based explanations designed to help human learners understand. The key part here is the what, rather than tasks like generating X or predicting the labels X, Y. Typical applications of ILL are presented for artistic and scientific knowledge discovery. These use ILL to learn music theory from scores and chemical laws from molecule data, revealing relationships between domains. We include initial benchmarks and assessments of ILL to demonstrate its efficacy.
[]
[ { "authors": [ "Amina Adadi", "Mohammed Berrada" ], "title": "Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "Karell Bertet", "Michel Morvan" ], "title": "Computing the sublattice of a lattice generated by a set of elements", "venue": "In Proc. 3rd Int. Conf. Orders, Algorithms Appl.,", "year": 1999 }, { "authors": [ "Karell Bertet", "Michel Morvan", "Lhouari Nourine" ], "title": "Lazy completion of a partial order to the smallest lattice", "venue": "In Proc. 2nd Int. Symp. Knowl. Retr., Use and Storage for Effic. (KRUSE", "year": 1997 }, { "authors": [ "Christina Bodurow" ], "title": "Music and chemistry—what’s the connection", "venue": "Chem. Eng. News,", "year": 2018 }, { "authors": [ "Nathalie Caspard", "Bruno Leclerc", "Bernard Monjardet" ], "title": "Finite Ordered Sets: Concepts, Results and Uses. Number 144 in Encyclopedia of Mathematics and its Applications", "venue": null, "year": 2012 }, { "authors": [ "Gregory J Chaitin" ], "title": "Algorithmic Information Theory", "venue": null, "year": 1987 }, { "authors": [ "Nick Chater", "Paul Vitányi" ], "title": "Simplicity: A unifying principle in cognitive science", "venue": "Trends Cogn. Sci.,", "year": 2003 }, { "authors": [ "François Chollet" ], "title": "On the measure of intelligence", "venue": "arXiv:1911.01547v2 [cs.AI],", "year": 2019 }, { "authors": [ "Erhan Çınlar" ], "title": "Probability and Stochastics, volume 261", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Thomas M Cover", "Joy A Thomas" ], "title": "Elements of Information Theory", "venue": null, "year": 2012 }, { "authors": [ "Constantinos Daskalakis", "Richard M Karp", "Elchanan Mossel", "Samantha J Riesenfeld", "Elad Verbin" ], "title": "Sorting and selection in posets", "venue": "SIAM J. Comput.,", "year": 2011 }, { "authors": [ "Brian A Davey", "Hilary A Priestley" ], "title": "Introduction to Lattices and Order", "venue": null, "year": 2002 }, { "authors": [ "Benjamin Eva" ], "title": "Principles of indifference", "venue": "J. Philos.,", "year": 2019 }, { "authors": [ "Ruma Falk", "Clifford Konold" ], "title": "Making sense of randomness: Implicit encoding as a basis for judgment", "venue": "Psychol. Rev.,", "year": 1997 }, { "authors": [ "Bernhard Ganter", "Rudolf Wille" ], "title": "Formal Concept Analysis: Mathematical Foundations", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Vijay K Garg" ], "title": "Introduction to Lattice Theory with Computer Science Applications", "venue": "Wiley Online Library,", "year": 2015 }, { "authors": [ "Lejaren Hiller", "Leonard Maxwell Isaacson" ], "title": "Illiac Suite, for String Quartet", "venue": "New Music Edition,", "year": 1957 }, { "authors": [ "Steven Holtzen", "Todd Millstein", "Guy Van den Broeck" ], "title": "Generating and sampling orbits for lifted probabilistic inference", "venue": "arXiv:1903.04672v3 [cs.AI],", "year": 2019 }, { "authors": [ "Anubhav Jain", "Shyue Ping Ong", "Geoffroy Hautier", "Wei Chen", "William Davidson Richards", "Stephen Dacek", "Shreyas Cholia", "Dan Gunter", "David Skinner", "Gerbrand Ceder", "Kristin A. 
Persson" ], "title": "The Materials Project: A materials genome approach to accelerating materials innovation", "venue": "APL Materials,", "year": 2013 }, { "authors": [ "Michael I Jordan" ], "title": "Artificial intelligence—the revolution hasn’t happened yet", "venue": "Harvard Data Science Review,", "year": 2019 }, { "authors": [ "David Kaiser", "Jonathan Moreno" ], "title": "Self-censorship is not", "venue": "enough. Nature,", "year": 2012 }, { "authors": [ "Martin Kauer", "Michal Krupka" ], "title": "Subset-generated complete sublattices as concept lattices", "venue": "In Proc. 12th Int. Conf. Concept Lattices Appl., pp", "year": 2015 }, { "authors": [ "Kristian Kersting" ], "title": "Lifted probabilistic inference", "venue": "In Proc. 20th European Conf. Artif. Intell. (ECAI", "year": 2012 }, { "authors": [ "Risi Kondor", "Shubhendu Trivedi" ], "title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups", "venue": "[stat.ML],", "year": 2018 }, { "authors": [ "Holbrook Mann MacNeille" ], "title": "Partially ordered sets", "venue": "Trans. Am. Math. Soc.,", "year": 1937 }, { "authors": [ "Gary Marcus" ], "title": "Innateness, AlphaZero, and artificial intelligence", "venue": "[cs.AI],", "year": 2018 }, { "authors": [ "Christoph Molnar" ], "title": "Interpretable Machine Learning", "venue": "Lulu.com,", "year": 2019 }, { "authors": [ "Andreas D Pape", "Kenneth J Kurtz", "Hiroki Sayama" ], "title": "Complexity measures and concept learning", "venue": "J. Math. Psychol.,", "year": 2015 }, { "authors": [ "Uta Priss" ], "title": "Formal concept analysis in information science", "venue": "Ann. Rev. Inform. Sci. Tech.,", "year": 2006 }, { "authors": [ "Anna Rogers", "Olga Kovaleva", "Anna Rumshisky" ], "title": "A primer in BERTology: What we know about how BERT works", "venue": "[cs.CL],", "year": 2020 }, { "authors": [ "Andrew D Selbst", "Danah Boyd", "Sorelle A Friedler", "Suresh Venkatasubramanian", "Janet Vertesi" ], "title": "Fairness and abstraction in sociotechnical systems", "venue": "In Proc. Conf. Fairness, Account., and Transpar.,", "year": 2019 }, { "authors": [ "Claude Shannon" ], "title": "The lattice theory of information", "venue": "Trans. IRE Prof. Group Inf. Theory,", "year": 1953 }, { "authors": [ "Charles Percy Snow" ], "title": "The Two Cultures", "venue": null, "year": 1959 }, { "authors": [ "Harini Suresh", "John V Guttag" ], "title": "A framework for understanding unintended consequences of machine learning", "venue": "[cs.LG],", "year": 2019 }, { "authors": [ "Dmitri Tymoczko" ], "title": "A Geometry of Music: Harmony and Counterpoint in the Extended Common Practice", "venue": null, "year": 2010 }, { "authors": [ "Haizi Yu", "Lav R. Varshney" ], "title": "Towards deep interpretability (MUS-ROVER II): Learning hierarchical representations of tonal music", "venue": "In Proc. 5th Int. Conf. Learn. Represent", "year": 2017 }, { "authors": [ "Haizi Yu", "Lav R Varshney", "Guy E Garnett", "Ranjitha Kumar" ], "title": "MUS-ROVER: A self-learning system for musical compositional rules", "venue": "In Proc. 4th Int. Workshop Music. Metacreation (MUME", "year": 2016 }, { "authors": [ "Haizi Yu", "Tianxi Li", "Lav R Varshney" ], "title": "Probabilistic rule realization and selection", "venue": "In Proc. 31th Annu. Conf. Neural Inf. Process. Syst. (NeurIPS 2017),", "year": 2017 }, { "authors": [], "title": "PX}. 
That is, the subset lattice is also the lattice comprising all concepts from all partitions of X , which can be then called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many", "venue": null, "year": 2016 }, { "authors": [ "Sokol (Sokol" ], "title": "2016), a music professor at York University, which we quote below: “The idea of Figured Soprano is simply a way of taking this thinking from the top-down and bringing it into greater prominence as a creative gesture. So these exercises are not anything new in their ideation, but they can bring many new ideas, chord progressions and much else. It’s a somewhat neglected area of harmonic study and it’s a lot of fun to play with.", "venue": null, "year": 2016 }, { "authors": [ "transparency", "explainability" ], "title": "Extensions to ILL could enable it to better cooperate with other models, e.g., as a pre-processing or a post-interpretation tool to achieve superior task performance as well as controllability and interpretability. One such possibility could leverage ILL to analyze the attention matrices (as signals) learned from a Transformer-based NLP model like BERT or GPT (Rogers et al., 2020)", "venue": null, "year": 2020 } ]
2020
SP:1ee00313e354c4594bbf6cf8bdbe33e3ec8df62f
[ "This paper proposes searching for an architecture generator that outputs good student architectures for a given teacher. The authors claim that by learning the parameters of the generator instead of relying directly on the search space, it is possible to explore the search space of architectures more effectively, increasing the diversity of the architectures explored. They show that this approach combined with the standard knowledge distillation loss is able to learn good student architectures requiring substantially less samples and achieving competitive performances when comparing to other knowledge distillation algorithms." ]
State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models. However, widespread use is constrained by device hardware limitations, resulting in a substantial performance gap between state-of-the-art models and those that can be effectively deployed on small devices. While Knowledge Distillation (KD) theoretically enables small student models to emulate larger teacher models, in practice selecting a good student architecture requires considerable human expertise. Neural Architecture Search (NAS) appears as a natural solution to this problem but most approaches can be inefficient, as most of the computation is spent comparing architectures sampled from the same distribution, with negligible differences in performance. In this paper, we propose to instead search for a family of student architectures sharing the property of being good at learning from a given teacher. Our approach AutoKD, powered by Bayesian Optimization, explores a flexible graph-based search space, enabling us to automatically learn the optimal student architecture distribution and KD parameters, while being 20× more sample efficient compared to existing state-of-the-art. We evaluate our method on 3 datasets; on large images specifically, we reach the teacher performance while using 3× less memory and 10× less parameters. Finally, while AutoKD uses the traditional KD loss, it outperforms more advanced KD variants using hand-designed students.
[]
2020
SP:eea3b3ec32cce61d6b6df8574cf7ce9376f2230a
[ "The paper proposes a defense that works by adding multiple targeted adversarial perturbations (with random classes) on the input sample before classifying it. There is little theoretical reasoning for why this is a sensible defense. More importantly though, the defense is only evaluated in an oblivious threat model where the attacker is unaware of the defense mechanism. As has been argued again and again in the literature and in community guidelines such as [1, 2], the oblivious threat model is trivial and yields absolutely no insights into the effectiveness of a defense (e.g. you can just manipulate the backpropagated gradient in random ways to prevent any gradient-based attack from finding adversarial perturbations). The problem with oblivious attacks is clearly visible in the results section where more PGD iterations are less effective than fewer iterations - a clear red flag that the evaluation is ineffective. The paper also fails to point out that Pang et al. 2020, one of the methods they combine their method with, has been shown to be ineffective [2]." ]
Studies show that neural networks are susceptible to adversarial attacks. This exposes a potential threat to neural network-based artificial intelligence systems. We observe that the probability of the neural network outputting the correct result increases when small perturbations generated for non-predicted class labels are applied to adversarial examples. Based on this observation, we propose a method of counteracting adversarial perturbations to resist adversarial examples. In our method, we randomly select a number of class labels and generate small perturbations for these selected labels. The generated perturbations are added together and then clamped onto a specified space. The obtained perturbation is finally added to the adversarial example to counteract the adversarial perturbation contained in the example. The proposed method is applied at inference time and does not require retraining or fine-tuning the model. We validate the proposed method on CIFAR-10 and CIFAR-100. The experimental results demonstrate that our method effectively improves the defense performance of the baseline methods, especially against strong adversarial examples generated with more iterations.
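Read procedurally, the abstract describes a simple inference-time loop. The following PyTorch sketch is our own illustration of that description, not the authors' code: the targeted FGSM-style step, the step size and clamping bound, the [0, 1] input range, and the batch-of-one assumption are all ours.

```python
import torch
import torch.nn.functional as F

def counteract_and_classify(model, x, num_classes, k=3, step=2/255, bound=8/255):
    """Inference-time defense sketch for a single image x in [0, 1] (shape [1, C, H, W]).

    Generates a small targeted perturbation for each of k randomly selected
    non-predicted labels, sums them, clamps the sum, adds it to the input,
    and re-classifies.
    """
    model.eval()
    with torch.no_grad():
        pred = model(x).argmax(dim=1).item()           # possibly adversarial prediction
    others = [c for c in range(num_classes) if c != pred]
    chosen = torch.randperm(len(others))[:k].tolist()  # k random non-predicted labels
    total = torch.zeros_like(x)
    for i in chosen:
        target = torch.tensor([others[i]], device=x.device)
        x_in = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_in), target).backward()
        total = total - step * x_in.grad.sign()        # one targeted FGSM-style step
    total = total.clamp(-bound, bound)                 # clamp the summed perturbation
    with torch.no_grad():
        return model((x + total).clamp(0, 1)).argmax(dim=1)
```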
[]
2020
[ { "heading": "1 INTRODUCTION", "text": "Mixed Integer Programming (MIP) has been applied widely in many real-world problems, such as scheduling (Barnhart et al., 2003) and transportation (Melo & Wolsey, 2012). Branch and Bound (B&B) is a general and widely used paradigm for solving MIP problems (Wolsey & Nemhauser, 1999). B&B recursively partitions the solution space into a search tree and compute relaxation bounds along the way to prune subtrees that provably can not contain an optimal solution. This iterative process requires sequential decision makings: node selection: selecting the next solution space to evaluate, variable selection: selecting the variable by which to partition the solution space (Achterberg & Berthold, 2009). In this work, we focus on learning a variable selection strategy, which is the core of the B&B algorithm (Achterberg & Wunderling, 2013).\nVery often, instances from the same MIP problem family are solved repeatedly in industry, which gives rise to the opportunity for learning to improve the variable selection policy (Bengio et al., 2020). Based on the human-designed heuristics, Di Liberto et al. (2016) learn a classifier that dynamically selects an existing rule to perform variable selection; Balcan et al. (2018) consider a weighted score of multiple heuristics and analyse the sample complexity of finding such a good weight. The first step towards learning a variable selection policy was taken by Khalil et al. (2016), who learn an instance customized policy in an online fashion, as well as Alvarez et al. (2017) and Hansknecht et al. (2018) who learn a branching rule offline on a collection of similar instances. Those methods need extensively feature engineering and require strong domain knowledge in MIP. To avoid that, Gasse et al. (2019) propose a graph convolutional neural network approach to obtain competitive performance, only requiring raw features provided by the solver. In each case, the branching policy is learned by imitating the decision of strong branching as it consistently leads to the smallest B&B trees empirically (Achterberg et al., 2005).\nIn this work, we argue that strong branching is not a good expert to imitate. The excellent performance (the smallest B&B tree) of strong branching relies mostly on the information obtained in solving branch linear programming (LP) rather than the decision it makes. This factor prevents learning a good policy by imitating only the decision made by strong branching. To obtain more effective and non-myopic policies,i.e. minimizing the total solving nodes rather than maximizing the immediate duality gap gap, we use reinforcement learning (RL) and model the variable selection process as a Markov Decision Process (MDP). Though the MDP formulation for MIP has been mentioned in the previous works (Gasse et al., 2019; Etheve et al., 2020), the advantage of RL has not been demonstrated clearly in literature.\nThe challenges of using RL are multi-fold. First, the state space is a complex search tree, which can involve hundreds or thousands of nodes (with a linear program on each node) and evolve over time. In the meanwhile, the objective of MIP is to solve problems faster. Hence a trade-off between decision quality and computation time is required when representing the state and designing a policy based on this state representation. Second, learning a branching policy by RL requires rolling out on a distribution of instances. 
Moreover, for each instance, the solving trajectory can contain thousands of steps, and actions can have long-lasting effects; these result in a large variance in gradient estimation. Third, each step of variable selection can have hundreds of candidates. The large action set makes exploration in MIP very hard.\nIn this work, we address these challenges by designing a policy network inspired by primal-dual iteration and employing a novelty search evolution strategy (NS-ES) to improve the policy. For the efficiency-effectiveness trade-off, the primal-dual policy ignores redundant information and makes high-quality decisions on the fly. For reducing variance, the ES algorithm is an attractive choice, as its gradient estimation is independent of the trajectory length (Salimans et al., 2017). For exploration, we introduce a new representation of the B&B solving process, employed by novelty search (Conti et al., 2018) to encourage visiting new states.\nWe evaluate our RL-trained agent over a range of problems (namely, set covering, maximum independent set, and capacitated facility location). The experiments show that our approach significantly outperforms state-of-the-art human-designed heuristics (Achterberg & Berthold, 2009) as well as imitation-based learning methods (Khalil et al., 2016; Gasse et al., 2019). In the ablation study, we compare our primal-dual policy net with GCN (Gasse et al., 2019), and our novelty-based ES with vanilla ES (Salimans et al., 2017). The results confirm that both our policy network and the novelty search evolution strategy are indispensable for the success of the RL agent. In summary, our main contributions are the following:\n• We point out the overestimation of the decision quality of strong branching and suggest that methods other than imitating strong branching are needed to find better variable selection policies.\n• We model the variable selection process as an MDP and design a novel policy net based on primal-dual iteration over a reduced LP relaxation.\n• We introduce a novel set representation and optimal transport distance for the branching process associated with a policy, based on which we train our RL agent using the novelty search evolution strategy and obtain substantial improvements in empirical evaluation." }, { "heading": "2 BACKGROUND", "text": "Mixed Integer Programming. MIP is an optimization problem, typically formulated as\n$\min_{x \in \mathbb{R}^n} \{ c^T x : Ax \leq b,\ \ell \leq x \leq u,\ x_j \in \mathbb{Z},\ \forall j \in J \}$ (1)\nwhere $c \in \mathbb{R}^n$ is the objective vector, $A \in \mathbb{R}^{m \times n}$ is the constraint coefficient matrix, $b \in \mathbb{R}^m$ is the constraint vector, and $\ell, u \in \mathbb{R}^n$ are the variable bounds. The set $J \subseteq \{1, \cdots, n\}$ is the index set of integer variables. We denote the feasible region of $x$ as $\mathcal{X}$.\nLinear Programming Relaxation. LP relaxation is an important building block for solving MIP problems, where the integer constraints are removed:\n$\min_{x \in \mathbb{R}^n} \{ c^T x : Ax \leq b,\ \ell \leq x \leq u \}$. (2)\nAlgorithm 1: Branch and Bound\nInput: A MIP P in the form of Equation 1\nOutput: An optimal solution x* and optimal value c*\n1. Initialize the problem set S := {P_LP}, where P_LP is in the form of Equation 2. Set x* = ∅, c* = ∞.\n2. If S = ∅, exit by returning x* and c*.\n3. Select and pop an LP relaxation Q ∈ S.\n4. Solve Q with optimal solution x̂ and optimal value ĉ.\n5. If ĉ ≥ c*, go to 2.\n6. If x̂ ∈ X, set x* = x̂, c* = ĉ, and go to 2.\n7. Select a variable j, split Q into two subproblems Q_j^+ and Q_j^-, add them to S, and go to 3.\nBranch and Bound. LP-based B&B is the most successful method for solving MIP. A typical LP-based B&B algorithm for solving MIP is given as Algorithm 1 (Achterberg et al., 2005). It consists of two major decisions: node selection, in line 3, and variable selection, in line 7. In this paper, we focus on variable selection. Given an LP relaxation and its optimal solution $\hat{x}$, variable selection means selecting an index $j$. Branching then splits the current problem into two subproblems, each representing the original LP relaxation with a new constraint: $x_j \leq \lfloor \hat{x}_j \rfloor$ for $Q_j^-$ and $x_j \geq \lceil \hat{x}_j \rceil$ for $Q_j^+$, respectively. This procedure can be visualized as a binary tree, commonly called the search tree. We give a simple visualization in Section A.1.
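To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch of the generic B&B loop. It is our own illustration, not the paper's implementation: `solve_lp`, `select_node`, `select_variable`, `is_integral`, and `branch` are hypothetical placeholders for the solver components named in the text.

```python
import math

def branch_and_bound(root_lp, solve_lp, select_node, select_variable, is_integral, branch):
    """Generic B&B loop mirroring Algorithm 1 (minimization).

    solve_lp(Q)               -> (x_hat, c_hat), or None if Q is infeasible
    select_node(S)            -> pop one open LP relaxation from S      (line 3)
    select_variable(Q, x_hat) -> index j of a fractional variable       (line 7)
    branch(Q, j, x_hat)       -> (Q_minus, Q_plus), adding x_j <= floor(x_hat_j)
                                 and x_j >= ceil(x_hat_j), respectively
    """
    S = [root_lp]                            # line 1: S := {P_LP}
    x_star, c_star = None, math.inf          # line 1: incumbent and upper bound
    while S:                                 # line 2: stop when S is empty
        Q = select_node(S)                   # line 3: node selection
        result = solve_lp(Q)                 # line 4: solve the relaxation
        if result is None:
            continue                         # infeasible subproblem: prune
        x_hat, c_hat = result
        if c_hat >= c_star:
            continue                         # line 5: bound exceeded, prune subtree
        if is_integral(x_hat):
            x_star, c_star = x_hat, c_hat    # line 6: new incumbent found
            continue
        j = select_variable(Q, x_hat)        # line 7: variable selection (this paper's focus)
        S.extend(branch(Q, j, x_hat))        # push Q_j^- and Q_j^+
    return x_star, c_star
```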
A typical LP based B&B algorithm for solving MIP looks as Algorithm 1 (Achterberg et al., 2005).\nIt consists of two major decisions: node selection, in line 3, and variable selection, in line 7. In this paper, we will focus on the variable selection. Given a LP relaxation and its optimal solution x̂, the variable selection means selecting an index j. Then, branching splits the current problem into two subproblems, each representing the original LP relaxation with a new constraint xj ≤ bx̂jc for Q−j and xj ≥ dx̂je for Q + j respectively. This procedure can be visualized by a binary tree, which is commonly called search tree. We give a simple visualization in Section A.1.\nEvolution Strategy. Evolution Strategies (ES) is a class of black box optimization algorithm (Rechenberg, 1978). In this work, we refer to the definition in Natural Evolution Strategies (NES) (Wierstra et al., 2008). NES represents the population as a distribution of parameter vectors θ characterized by parameters φ : pφ(θ). NES optimizes φ to maximize the expectation of a fitness f(θ) over the population Eθ∼pφ [f(θ)]. In recent work, Salimans et al. (2017) outlines a version of NES applied to standard RL benchmark problems, where θ parameterizes the policy πθ, φt = (θt, σ) parameterizes a Gaussian distribution pφ(θ) = N (θt, σ2I) and f(θ) is the cumulative reward R(θ) over a full agent interaction. At every iteration, Salimans et al. (2017) apply n additive Gaussian noises to the current parameter and update the population as\nθt+1 = θt + α 1\nnσ n∑ i=1 f(θt + σ i) i (3)\nTo encourage exploration, Conti et al. (2018) propose Novelty Search Evolution Strategy (NS-ES). In NSES, the fitness function f(θ) = λN(θ)+(1−λ)R(θ) is selected as a combination of domain specific novelty score N and cumulative reward R, where λ is the balancing weight." }, { "heading": "3 WHY IMITATING STRONG BRANCHING IS NOT GOOD", "text": "Strong branching is a human-designed heuristic, which solves all possible branch LPs Q+j , Q − j ahead of branching. As strong branching usually produces the smallest B&B search trees (Achterberg, 2009), many learning-based variable selection policy are trained by mimicking strong branching (Gasse et al., 2019; Khalil et al., 2016; Alvarez et al., 2017; Hansknecht et al., 2018). However, we claim that strong branching is not a good expert: the reason strong branching can produce a small search tree is the reduction obtained in solving branch LP, rather than its decision quality. Specifically, (i) Strong branching can check lines 5, 6 in Algorithm 1 before branching. If the pruning condition is satisfied, strong branching does not need to add the subproblem into the problem set S. (ii) Strong branching can strengthen other LP relaxations in the problem set S via domain propagation (Rodosek et al., 1999) and conflict analysis (Achterberg, 2007). For example, if strong branching finds x1 ≥ 1 and x2 ≥ 1 can be pruned during solving branch LP, then any other LP relaxations containing x1 ≥ 1 can be strengthened by adding x2 ≤ 0. These two reductions are\nthe direct consequence of solving branch LP, and they can not be learned by a variable selection policy. (iii) Strong branching activates primal heuristics (Berthold, 2006) after solving LPs.\nTo examine the decision quality of strong branching, we employ vanilla full strong branching (Gamrath et al., 2020), which takes the same decision as full strong branching, while the side-effect of solving branch LP is switched off. 
{ "heading": "3 WHY IMITATING STRONG BRANCHING IS NOT GOOD", "text": "Strong branching is a human-designed heuristic that solves all possible branch LPs $Q_j^+, Q_j^-$ ahead of branching. As strong branching usually produces the smallest B&B search trees (Achterberg, 2009), many learning-based variable selection policies are trained by mimicking strong branching (Gasse et al., 2019; Khalil et al., 2016; Alvarez et al., 2017; Hansknecht et al., 2018). However, we claim that strong branching is not a good expert: the reason strong branching can produce a small search tree is the reduction obtained in solving the branch LPs, rather than its decision quality. Specifically, (i) strong branching can check lines 5 and 6 of Algorithm 1 before branching; if the pruning condition is satisfied, it does not need to add the subproblem to the problem set S. (ii) Strong branching can strengthen other LP relaxations in the problem set S via domain propagation (Rodosek et al., 1999) and conflict analysis (Achterberg, 2007). For example, if strong branching finds that $x_1 \geq 1$ and $x_2 \geq 1$ can be pruned while solving a branch LP, then any other LP relaxation containing $x_1 \geq 1$ can be strengthened by adding $x_2 \leq 0$. These two reductions are the direct consequence of solving the branch LPs, and they cannot be learned by a variable selection policy. (iii) Strong branching activates primal heuristics (Berthold, 2006) after solving LPs.\nTo examine the decision quality of strong branching, we employ vanilla full strong branching (Gamrath et al., 2020), which takes the same decisions as full strong branching while the side effects of solving the branch LPs are switched off. Experiments in Section 5.2 show that vanilla full strong branching has poor decision quality. Hence, imitating strong branching is not a wise choice for learning a variable selection policy." }, { "heading": "4 METHOD", "text": "Due to line 5 in Algorithm 1, a good variable selection policy