paper_id (string) | summaries (sequence) | abstractText (string) | authors (list) | references (list) | sections (list) | year (int64) | title (string)
[ "This paper investigates kernel ridge-less regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds has been widely adopted in machine learning. However, related studies on kernel ridge-less regression are still sparse. The present study fills this gap, which, in my opinion, is also one of its main contributions. " ]
"We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm minimizes a bound on CVloo stability, which in turn is controlled by the condition number of the empirical kernel matrix. The latter can be characterized in the asymptotic regime where both the dimension and cardinality of the data go to infinity. Under the assumption of random kernel matrices, the corresponding test error is expected to follow a double descent curve."
[ { "authors": [ "Jerzy K Baksalary", "Oskar Maria Baksalary", "Götz Trenkler" ], "title": "A revisitation of formulae for the Moore–Penrose inverse of modified matrices", "venue": "Linear Algebra and Its Applications,", "year": 2003 }, { "authors": [ "Peter L. Bartlett", "Philip M. Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "CoRR, abs/1906.11300,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Stéphane Boucheron", "Olivier Bousquet", "Gábor Lugosi" ], "title": "Theory of classification: A survey of some recent advances", "venue": "ESAIM: probability and statistics,", "year": 2005 }, { "authors": [ "O. Bousquet", "A. Elisseeff" ], "title": "Stability and generalization", "venue": "Journal of Machine Learning Research,", "year": 2001 }, { "authors": [ "Peter Bühlmann", "Sara Van De Geer" ], "title": "Statistics for high-dimensional data: methods, theory and applications", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Noureddine El Karoui" ], "title": "The spectrum of kernel random matrices", "venue": "arXiv e-prints, art", "year": 2010 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J. Tibshirani" ], "title": "Surprises in High-Dimensional Ridgeless Least Squares Interpolation", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "S. Kutin", "P.
Niyogi" ], "title": "Almost-everywhere algorithmic stability and generalization error", "venue": "Technical report TR-2002-03,", "year": 2002 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin", "Xiyu Zhai" ], "title": "On the Risk of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin" ], "title": "Just interpolate: Kernel “ridgeless” regression can generalize", "venue": "Annals of Statistics,", "year": 2020 }, { "authors": [ "V.A. Marchenko", "L.A. Pastur" ], "title": "Distribution of eigenvalues for some sets of random matrices", "venue": "Mat. Sb. (N.S.),", "year": 1967 }, { "authors": [ "Song Mei", "Andrea Montanari" ], "title": "The generalization error of random features regression: Precise asymptotics and double descent curve", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Carl Meyer" ], "title": "Generalized inversion of modified matrices", "venue": "SIAM J. Applied Math,", "year": 1973 }, { "authors": [ "C.A. Micchelli" ], "title": "Interpolation of scattered data: distance matrices and conditionally positive definite functions", "venue": "Constructive Approximation,", "year": 1986 }, { "authors": [ "Sayan Mukherjee", "Partha Niyogi", "Tomaso Poggio", "Ryan Rifkin" ], "title": "Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization", "venue": "Advances in Computational Mathematics,", "year": 2006 }, { "authors": [ "T. Poggio", "R. Rifkin", "S. Mukherjee", "P. Niyogi" ], "title": "General conditions for predictivity in learning theory", "venue": "Nature,", "year": 2004 }, { "authors": [ "T. Poggio", "G. Kur", "A. Banburski" ], "title": "Double descent in the condition number", "venue": "Technical report, MIT Center for Brains Minds and Machines,", "year": 2019 }, { "authors": [ "Tomaso Poggio" ], "title": "Stable foundations for learning. 
Center for Brains, Minds and Machines", "venue": "(CBMM) Memo No", "year": 2020 }, { "authors": [ "Alexander Rakhlin", "Xiyu Zhai" ], "title": "Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Lorenzo Rosasco", "Silvia Villa" ], "title": "Learning with incremental iterative regularization", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding Machine Learning: From Theory to Algorithms", "venue": null, "year": 2014 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Nathan Srebro", "Karthik Sridharan" ], "title": "Learnability, stability and uniform convergence", "venue": "J. Mach. Learn. Res.,", "year": 2010 }, { "authors": [ "Ingo Steinwart", "Andreas Christmann" ], "title": "Support vector machines", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Statistical learning theory studies the learning properties of machine learning algorithms, and more fundamentally, the conditions under which learning from finite data is possible. In this context, classical learning theory focuses on the size of the hypothesis space in terms of different complexity measures, such as combinatorial dimensions, covering numbers and Rademacher/Gaussian complexities (Shalev-Shwartz & Ben-David, 2014; Boucheron et al., 2005). Another, more recent, approach is based on defining suitable notions of stability with respect to perturbations of the data (Bousquet & Elisseeff, 2001; Kutin & Niyogi, 2002). In this view, what matters is the continuity of the process that maps data to estimators, rather than the complexity of the hypothesis space. Different notions of stability can be considered, depending on the data perturbation and metric considered (Kutin & Niyogi, 2002). Interestingly, the stability and complexity approaches to characterizing the learnability of problems are not at odds with each other, and can in fact be shown to be equivalent (Poggio et al., 2004; Shalev-Shwartz et al., 2010).\nIn modern machine learning, overparameterized models, with a larger number of parameters than the size of the training data, have become common. The ability of these models to generalize is well explained by classical statistical learning theory as long as some form of regularization is used in the training process (Bühlmann & Van De Geer, 2011; Steinwart & Christmann, 2008). However, it was recently shown - first for deep networks (Zhang et al., 2017), and more recently for kernel methods (Belkin et al., 2019) - that learning is possible in the absence of regularization, i.e., when perfectly fitting/interpolating the data. Much recent work in statistical learning theory has tried to find theoretical ground for this empirical finding.
Since learning using models that interpolate is not exclusive to deep neural networks, we study generalization in the presence of interpolation in the case of kernel methods. We study both linear and kernel least squares problems in this paper.\nOur Contributions:\n• We characterize the generalization properties of interpolating solutions for linear and kernel least squares problems using a stability approach. While the (uniform) stability properties of regularized kernel methods are well known (Bousquet & Elisseeff, 2001), we study interpolating solutions of the unregularized (\"ridgeless\") regression problems.\n• We obtain an upper bound on the stability of interpolating solutions, and show that this upper bound is minimized by the minimum norm interpolating solution. This also means that among all interpolating solutions, the minimum norm solution has the best test error. In\nparticular, the same conclusion is also true for gradient descent, since it converges to the minimum norm solution in the setting we consider, see e.g. Rosasco & Villa (2015). • Our stability bounds show that the average stability of the minimum norm solution is\ncontrolled by the condition number of the empirical kernel matrix. It is well known that the numerical stability of the least squares solution is governed by the condition number of the associated kernel matrix (see the discussion of why overparametrization is “good” in Poggio et al. (2019)). Our results show that the condition number also controls stability (and hence, test error) in a statistical sense.\nOrganization: In section 2, we introduce basic ideas in statistical learning and empirical risk minimization, as well as the notation used in the rest of the paper. In section 3, we briefly recall some definitions of stability. In section 4, we study the stability of interpolating solutions to kernel least squares and show that the minimum norm solutions minimize an upper bound on the stability. 
In section 5 we discuss our results in the context of recent work on high dimensional regression. We conclude in section 6." }, { "heading": "2 STATISTICAL LEARNING AND EMPIRICAL RISK MINIMIZATION", "text": "We begin by recalling the basic ideas in statistical learning theory. In this setting, $X$ is the space of features, $Y$ is the space of targets or labels, and there is an unknown probability distribution $\mu$ on the product space $Z = X \times Y$. In the following, we consider $X = \mathbb{R}^d$ and $Y = \mathbb{R}$. The distribution $\mu$ is fixed but unknown, and we are given a training set $S$ consisting of $n$ samples (thus $|S| = n$) drawn i.i.d. from $\mu$, $S = (z_i)_{i=1}^n = ((x_i, y_i))_{i=1}^n$. Intuitively, the goal of supervised learning is to use the training set $S$ to “learn” a function $f_S$ that, evaluated at a new value $x_{new}$, predicts the associated value $y_{new}$, i.e. $y_{new} \approx f_S(x_{new})$. The loss is a function $V : \mathcal{F} \times Z \to [0, \infty)$, where $\mathcal{F}$ is the space of measurable functions from $X$ to $Y$, that measures how well a function performs on a data point. We define a hypothesis space $\mathcal{H} \subseteq \mathcal{F}$ where algorithms search for solutions. With the above notation, the expected risk of $f$ is defined as $I[f] = \mathbb{E}_z V(f, z)$, which is the expected loss on a new sample drawn according to the data distribution $\mu$. In this setting, statistical learning can be seen as the problem of finding an approximate minimizer of the expected risk given a training set $S$. A classical approach to deriving an approximate solution is empirical risk minimization (ERM), where we minimize the empirical risk $I_S[f] = \frac{1}{n} \sum_{i=1}^n V(f, z_i)$.\nA natural error measure for our ERM solution $f_S$ is the expected excess risk $\mathbb{E}_S[I[f_S] - \min_{f \in \mathcal{H}} I[f]]$. Another common error measure is the expected generalization error/gap given by $\mathbb{E}_S[I[f_S] - I_S[f_S]]$. These two error measures are closely related, since the expected excess risk is easily bounded by the expected generalization error (see Lemma 5)."
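The ERM setup above can be sketched numerically. A minimal illustration under an assumed Gaussian linear data model (the data, noise level, and sizes are arbitrary choices for illustration, not from the paper): it computes the empirical risk $I_S[f]$ for the square loss and its ERM minimizer over linear hypotheses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data (an assumed Gaussian linear model, for illustration only):
# rows of X are the inputs x_i in R^d, y holds the targets y_i.
n, d = 50, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def empirical_risk(f, X, y):
    """I_S[f] = (1/n) sum_i V(f, z_i) with the square loss V(f, z) = (y - f(x))^2."""
    return np.mean((y - f(X)) ** 2)

# ERM over linear hypotheses: ordinary least squares minimizes I_S[f].
w_erm, *_ = np.linalg.lstsq(X, y, rcond=None)
f_erm = lambda A: A @ w_erm

# The ERM solution cannot have larger empirical risk than any other linear
# hypothesis, in particular not larger than the generating parameter w_true.
assert empirical_risk(f_erm, X, y) <= empirical_risk(lambda A: A @ w_true, X, y)
```

Here $n > d$, so the least squares minimizer is unique; the interpolating regime studied in the paper is the opposite case $d > n$.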
}, { "heading": "2.1 KERNEL LEAST SQUARES AND MINIMUM NORM SOLUTION", "text": "The focus in this paper is on the kernel least squares problem. We assume the loss function $V$ is the square loss, that is, $V(f, z) = (y - f(x))^2$. The hypothesis space is assumed to be a reproducing kernel Hilbert space, defined by a positive definite kernel $K : X \times X \to \mathbb{R}$ or an associated feature map $\Phi : X \to \mathcal{H}$, such that $K(x, x') = \langle \Phi(x), \Phi(x') \rangle_{\mathcal{H}}$ for all $x, x' \in X$, where $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ is the inner product in $\mathcal{H}$. In this setting, functions are linearly parameterized, that is, there exists $w \in \mathcal{H}$ such that $f(x) = \langle w, \Phi(x) \rangle_{\mathcal{H}}$ for all $x \in X$. The ERM problem typically has multiple solutions, one of which is the minimum norm solution:\n$$f^\dagger_S = \arg\min_{f \in M} \|f\|_{\mathcal{H}}, \qquad M = \arg\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (f(x_i) - y_i)^2. \quad (1)$$\nHere $\|\cdot\|_{\mathcal{H}}$ is the norm on $\mathcal{H}$ induced by the inner product. The minimum norm solution can be shown to be unique and to satisfy a representer theorem, that is, for all $x \in X$:\n$$f^\dagger_S(x) = \sum_{i=1}^n K(x, x_i) c_S[i], \qquad c_S = K^\dagger y, \quad (2)$$\nwhere $c_S = (c_S[1], \ldots, c_S[n])$, $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$, $K$ is the $n \times n$ matrix with entries $K_{ij} = K(x_i, x_j)$, $i, j = 1, \ldots, n$, and $K^\dagger$ is the Moore-Penrose pseudoinverse of $K$. If we assume $n \le d$ and that we have $n$ linearly independent data features, that is, the rank of $X$ is $n$, then it is possible to show that for many kernels one can replace $K^\dagger$ by $K^{-1}$ (see Remark 2). Note that invertibility is necessary and sufficient for interpolation: if $K$ is invertible, $f^\dagger_S(x_i) = y_i$ for all $i = 1, \ldots, n$, in which case the training error in (1) is zero.\nRemark 1 (Pseudoinverse for underdetermined linear systems) A simple yet relevant example is that of linear functions $f(x) = w^\top x$, which correspond to $\mathcal{H} = \mathbb{R}^d$ and $\Phi$ the identity map. If the rank of $X \in \mathbb{R}^{d \times n}$ is $n$, then any interpolating solution $w_S$ satisfies $w_S^\top x_i = y_i$ for all $i = 1, \ldots, n$, and the minimum norm solution, also called the Moore-Penrose solution, is given by $(w^\dagger_S)^\top = y^\top X^\dagger$, where the pseudoinverse $X^\dagger$ takes the form $X^\dagger = X^\top (X X^\top)^{-1}$.\nRemark 2 (Invertibility of translation invariant kernels) Translation invariant kernels are a family of kernel functions given by $K(x_1, x_2) = k(x_1 - x_2)$, where $k$ is an even function on $\mathbb{R}^d$. Translation invariant kernels are Mercer kernels (positive semidefinite) if the Fourier transform of $k(\cdot)$ is non-negative. For Radial Basis Function kernels ($K(x_1, x_2) = k(\|x_1 - x_2\|)$) we have the additional property, due to Theorem 2.3 of Micchelli (1986), that for distinct points $x_1, x_2, \ldots, x_n \in \mathbb{R}^d$ the kernel matrix $K$ is non-singular and thus invertible.\nThe above discussion is directly related to regularization approaches.\nRemark 3 (Stability and Tikhonov regularization) Tikhonov regularization is used to prevent potentially unstable behavior. In the above setting, it corresponds to replacing Problem (1) by $\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^n (f(x_i) - y_i)^2 + \lambda \|f\|^2_{\mathcal{H}}$, where the corresponding unique solution is given by $f^\lambda_S(x) = \sum_{i=1}^n K(x, x_i) c[i]$, $c = (K + \lambda I_n)^{-1} y$. In contrast to ERM solutions, the above approach prevents interpolation. The properties of the corresponding estimator are well known. In this paper, we complement these results focusing on the case $\lambda \to 0$.\nFinally, we end by recalling the connection between the minimum norm solution and gradient descent.\nRemark 4 (Minimum norm and gradient descent) In our setting, it is well known that both batch and stochastic gradient iterations converge exactly to the minimum norm solution when multiple solutions exist, see e.g. Rosasco & Villa (2015). Thus, a study of the properties of the minimum norm solution explains the properties of the solution to which gradient descent converges. In particular, when ERM has multiple interpolating solutions, gradient descent converges to a solution that minimizes a bound on stability, as we show in this paper."
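Remark 1 is easy to verify numerically. A minimal sketch (synthetic data; note a transposed convention relative to the paper, with one sample per row so that the matrix `A` below plays the role of $X^\top$): the Moore-Penrose solution interpolates the data, and every other interpolant of the form $A^\dagger y + (I - A^\dagger A)v$ has strictly larger norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined setting of Remark 1 (d > n). Transposed convention:
# A has one row per sample x_i^T, so A corresponds to X^T in the paper.
n, d = 20, 100
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum norm (Moore-Penrose) interpolant: w = A^+ y.
A_pinv = np.linalg.pinv(A)
w_min = A_pinv @ y

# Any other interpolant: w = A^+ y + (I - A^+ A) v, for an arbitrary v.
v = rng.standard_normal(d)
w_other = w_min + (np.eye(d) - A_pinv @ A) @ v

# Both fit the training data exactly ...
assert np.allclose(A @ w_min, y)
assert np.allclose(A @ w_other, y)
# ... but the pseudoinverse solution has the smallest norm.
assert np.linalg.norm(w_min) < np.linalg.norm(w_other)
```

The correction term $(I - A^\dagger A)v$ lies in the null space of `A`, which is why both vectors interpolate while only `w_min` is orthogonal to that null space.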
}, { "heading": "3 ERROR BOUNDS VIA STABILITY", "text": "In this section, we recall basic results relating the learning and stability properties of Empirical Risk Minimization (ERM). Throughout the paper, we assume that ERM achieves a minimum, albeit the extension to almost minimizers is possible (Mukherjee et al., 2006) and important for exponential-type loss functions (Poggio, 2020). We do not assume the expected risk to achieve a minimum. Since we will be considering leave-one-out stability in this section, we look at solutions to ERM over the complete training set $S = \{z_1, z_2, \ldots, z_n\}$ and the leave-one-out training set $S^i = \{z_1, z_2, \ldots, z_{i-1}, z_{i+1}, \ldots, z_n\}$. The excess risk of ERM can be easily related to its stability properties. Here, we follow the definition laid out in Mukherjee et al. (2006) and say that an algorithm is Cross-Validation leave-one-out ($CV_{loo}$) stable in expectation if there exists $\beta_{CV} > 0$ such that for all $i = 1, \ldots, n$,\n$$\mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)] \le \beta_{CV}. \quad (3)$$\nThis definition is justified by the following result that bounds the excess risk of a learning algorithm by its average $CV_{loo}$ stability (Shalev-Shwartz et al., 2010; Mukherjee et al., 2006).\nLemma 5 (Excess Risk & $CV_{loo}$ Stability) For all $i = 1, \ldots, n$,\n$$\mathbb{E}_S[I[f_{S^i}] - \inf_{f \in \mathcal{H}} I[f]] \le \mathbb{E}_S[V(f_{S^i}, z_i) - V(f_S, z_i)]. \quad (4)$$\nRemark 6 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists $\beta_u > 0$ such that for all $i = 1, \ldots, n$, $\sup_{z \in Z} |V(f_{S^i}, z) - V(f_S, z)| \le \beta_u$. Clearly this is a strong notion implying most other definitions of stability. We note that there are a number of different notions of stability. We refer the interested reader to Kutin & Niyogi (2002) and Mukherjee et al. (2006).\nWe recall the proof of Lemma 5 in Appendix A.2 due to lack of space.
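The average $CV_{loo}$ stability in (3) can be estimated numerically for minimum norm kernel interpolation by looping over the leave-one-out sets $S^i$. A hedged sketch (synthetic Gaussian data; the RBF kernel and its bandwidth are arbitrary choices, not the paper's experiment), which also illustrates the almost positivity of ERM mentioned later:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Synthetic data; sigma = 1 is an arbitrary bandwidth choice.
n, d = 30, 10
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

K = rbf(X, X)
c = np.linalg.pinv(K) @ y              # minimum norm interpolant f_S

gaps = []
for i in range(n):
    idx = np.delete(np.arange(n), i)
    c_i = np.linalg.pinv(rbf(X[idx], X[idx])) @ y[idx]       # f_{S^i}
    f_Si = (rbf(X[i:i + 1], X[idx]) @ c_i).item()
    f_S = (rbf(X[i:i + 1], X) @ c).item()
    gaps.append((y[i] - f_Si) ** 2 - (y[i] - f_S) ** 2)      # V(f_{S^i}, z_i) - V(f_S, z_i)

beta_hat = np.mean(gaps)   # empirical analogue of the left-hand side of (3)
assert beta_hat >= 0       # almost positivity of ERM (Mukherjee et al., 2006)
```

Since $f_S$ interpolates, $V(f_S, z_i)$ vanishes up to numerical error, and the estimate reduces to the average leave-one-out error of the interpolant.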
In Appendix A, we also discuss other definitions of stability and their connections to concepts in statistical learning theory like generalization and learnability.\n4 $CV_{loo}$ STABILITY OF KERNEL LEAST SQUARES\nIn this section we analyze the expected $CV_{loo}$ stability of interpolating solutions to the kernel least squares problem, and obtain an upper bound on their stability. We show that this upper bound on the expected $CV_{loo}$ stability is smallest for the minimum norm interpolating solution (1) when compared to other interpolating solutions to the kernel least squares problem.\nWe have a dataset $S = \{(x_i, y_i)\}_{i=1}^n$ and we want to find a mapping $f \in \mathcal{H}$ that minimizes the empirical least squares risk. Here $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS) defined by a positive definite kernel $K : X \times X \to \mathbb{R}$. All interpolating solutions are of the form $\hat{f}_S(\cdot) = \sum_{j=1}^n \hat{c}_S[j] K(x_j, \cdot)$, where $\hat{c}_S = K^\dagger y + (I - K^\dagger K)v$. Similarly, all interpolating solutions on the leave-one-out dataset $S^i$ can be written as $\hat{f}_{S^i}(\cdot) = \sum_{j=1, j \ne i}^n \hat{c}_{S^i}[j] K(x_j, \cdot)$, where $\hat{c}_{S^i} = K_{S^i}^\dagger y^i + (I - K_{S^i}^\dagger K_{S^i}) v^i$. Here $K, K_{S^i}$ are the empirical kernel matrices on the original and leave-one-out datasets respectively. We note that when $v = 0$ and $v^i = 0$, we obtain the minimum norm interpolating solutions on the datasets $S$ and $S^i$.\nTheorem 7 (Main Theorem) Consider the kernel least squares problem with a bounded kernel and bounded outputs $y$, that is, there exist $\kappa, M > 0$ such that\n$$K(x, x') \le \kappa^2, \qquad |y| \le M, \quad (5)$$\nalmost surely. Then for any interpolating solutions $\hat{f}_{S^i}, \hat{f}_S$,\n$$\mathbb{E}_S[V(\hat{f}_{S^i}, z_i) - V(\hat{f}_S, z_i)] \le \beta_{CV}(K^\dagger, y, v, v^i). \quad (6)$$\nThis bound $\beta_{CV}$ is minimized when $v = v^i = 0$, which corresponds to the minimum norm interpolating solutions $f^\dagger_S, f^\dagger_{S^i}$. For the minimum norm solutions we have $\beta_{CV} = C_1 \beta_1 + C_2 \beta_2$, where $\beta_1 = \mathbb{E}_S\left[\|K^{\frac{1}{2}}\|_{op} \|K^\dagger\|_{op} \times \mathrm{cond}(K) \times \|y\|\right]$ and $\beta_2 = \mathbb{E}_S\left[\|K^{\frac{1}{2}}\|_{op}^2 \|K^\dagger\|_{op}^2 \times (\mathrm{cond}(K))^2 \times \|y\|^2\right]$, and $C_1, C_2$ are absolute constants that do not depend on either $d$ or $n$.\nIn the above theorem $\|K\|_{op}$ refers to the operator norm of the kernel matrix $K$, $\|y\|$ refers to the standard $\ell_2$ norm for $y \in \mathbb{R}^n$, and $\mathrm{cond}(K)$ is the condition number of the matrix $K$. We can combine the above result with Lemma 5 to obtain the following bound on the excess risk for minimum norm interpolating solutions to the kernel least squares problem:\nCorollary 8 The excess risk of the minimum norm interpolating kernel least squares solution can be bounded as $\mathbb{E}_S\left[I[f^\dagger_{S^i}] - \inf_{f \in \mathcal{H}} I[f]\right] \le C_1 \beta_1 + C_2 \beta_2$, where $\beta_1, \beta_2$ are as defined previously.\nRemark 9 (Underdetermined Linear Regression) In the case of underdetermined linear regression, i.e., linear regression where the dimensionality is larger than the number of samples in the training set, we can prove a version of Theorem 7 with $\beta_1 = \mathbb{E}_S\left[\|X^\dagger\|_{op} \|y\|\right]$ and $\beta_2 = \mathbb{E}_S\left[\|X^\dagger\|_{op}^2 \|y\|^2\right]$. Due to space constraints, we present the proof of the results in the linear regression case in Appendix B." }, { "heading": "4.1 KEY LEMMAS", "text": "In order to prove Theorem 7 we make use of the following lemmas to bound the $CV_{loo}$ stability using the norms and the difference of the solutions.\nLemma 10 Under assumption (5), for all $i = 1, \ldots, n$, it holds that\n$$\mathbb{E}_S[V(\hat{f}_{S^i}, z_i) - V(\hat{f}_S, z_i)] \le \mathbb{E}_S\left[\left(2M + \kappa\left(\|\hat{f}_S\|_{\mathcal{H}} + \|\hat{f}_{S^i}\|_{\mathcal{H}}\right)\right) \times \kappa \|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}}\right].$$\nProof We begin by recalling that the square loss is locally Lipschitz, that is, for all $y, a, a' \in \mathbb{R}$,\n$$|(y - a)^2 - (y - a')^2| \le (2|y| + |a| + |a'|) |a - a'|.$$\nApplying this result to $f, f'$ in an RKHS $\mathcal{H}$,\n$$|(y - f(x))^2 - (y - f'(x))^2| \le \kappa (2M + \kappa(\|f\|_{\mathcal{H}} + \|f'\|_{\mathcal{H}})) \|f - f'\|_{\mathcal{H}},$$\nusing the basic property of an RKHS that for all $f \in \mathcal{H}$,\n$$|f(x)| \le \|f\|_\infty = \sup_x |f(x)| = \sup_x |\langle f, K_x \rangle_{\mathcal{H}}| \le \kappa \|f\|_{\mathcal{H}}. \quad (7)$$\nIn particular, we can plug $\hat{f}_{S^i}$ and $\hat{f}_S$ into the above inequality, and the almost positivity of ERM (Mukherjee et al., 2006) allows us to drop the absolute value on the left hand side. Finally, the desired result follows by taking the expectation over $S$.\nNow that we have bounded the $CV_{loo}$ stability using the norms and the difference of the solutions, we can find a bound on the difference between the solutions to the kernel least squares problem. This is our main stability estimate.\nLemma 11 Let $\hat{f}_S, \hat{f}_{S^i}$ be any interpolating kernel least squares solutions on the full and leave-one-out datasets (as defined at the top of this section). Then $\|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}} \le B_{CV}(K^\dagger, y, v, v^i)$, and $B_{CV}$ is minimized when $v = v^i = 0$, which corresponds to the minimum norm interpolating solutions $f^\dagger_S, f^\dagger_{S^i}$. Also, for some absolute constant $C$,\n$$\|f^\dagger_S - f^\dagger_{S^i}\|_{\mathcal{H}} \le C \times \|K^{\frac{1}{2}}\|_{op} \|K^\dagger\|_{op} \times \mathrm{cond}(K) \times \|y\|. \quad (8)$$\nSince the minimum norm interpolating solutions minimize both $\|\hat{f}_S\|_{\mathcal{H}} + \|\hat{f}_{S^i}\|_{\mathcal{H}}$ and $\|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}}$ (from Lemmas 10 and 11), we can put them together to prove Theorem 7. In the following section we provide the proof of Lemma 11.\nRemark 12 (Zero training loss) In Lemma 10 we use the locally Lipschitz property of the squared loss function to bound the leave-one-out stability in terms of the norms and the difference of the solutions.
Under interpolating conditions, if we set the term $V(\hat{f}_S, z_i) = 0$, the leave-one-out stability reduces to\n$$\mathbb{E}_S[V(\hat{f}_{S^i}, z_i) - V(\hat{f}_S, z_i)] = \mathbb{E}_S[V(\hat{f}_{S^i}, z_i)] = \mathbb{E}_S[(\hat{f}_{S^i}(x_i) - y_i)^2] = \mathbb{E}_S[(\hat{f}_{S^i}(x_i) - \hat{f}_S(x_i))^2] = \mathbb{E}_S[\langle \hat{f}_{S^i}(\cdot) - \hat{f}_S(\cdot), K_{x_i}(\cdot) \rangle^2] \le \mathbb{E}_S\left[\|\hat{f}_S - \hat{f}_{S^i}\|^2_{\mathcal{H}} \times \kappa^2\right].$$\nWe can plug in the bound from Lemma 11 to obtain similar qualitative and quantitative (up to constant factors) results as in Theorem 7.\nSimulation: In order to illustrate that the minimum norm interpolating solution is the best performing interpolating solution, we ran a simple experiment on a linear regression problem. We synthetically generated data from a linear model $y = w^\top X$, where $X \in \mathbb{R}^{d \times n}$ was i.i.d. $\mathcal{N}(0, 1)$. The dimension of the data was $d = 1000$ and there were $n = 200$ samples in the training dataset. A held out dataset of 50 samples was used to compute the test mean squared error (MSE). Interpolating solutions were computed as $\hat{w}^\top = y^\top X^\dagger + v^\top(I - X X^\dagger)$ and the norm of $v$ was varied to obtain the plot. The results are shown in Figure 1, where we can see that the training loss is 0 for all interpolants, but the test MSE increases as $\|v\|$ increases, with $(w^\dagger)^\top = y^\top X^\dagger$ having the best performance. The figure reports results averaged over 100 trials." }, { "heading": "4.2 PROOF OF LEMMA 11", "text": "We can write any interpolating solution to the kernel regression problem as $\hat{f}_S(x) = \sum_{i=1}^n \hat{c}_S[i] K(x_i, x)$, where $\hat{c}_S = K^\dagger y + (I - K^\dagger K)v$, $K \in \mathbb{R}^{n \times n}$ is the kernel matrix on $S$, i.e. $K_{ij} = K(x_i, x_j)$, $v$ is any vector in $\mathbb{R}^n$, and $y \in \mathbb{R}^n$ is the vector $y = [y_1 \ldots y_n]^\top$. Similarly, the coefficient vector for the corresponding interpolating solution to the problem over the leave-one-out dataset $S^i$ is $\hat{c}_{S^i} = (K_{S^i})^\dagger y^i + (I - (K_{S^i})^\dagger K_{S^i}) v^i$, where $y^i = [y_1, \ldots, 0, \ldots, y_n]^\top$ and $K_{S^i}$ is the kernel matrix $K$ with the $i$th row and column set to zero, which is the kernel matrix for the leave-one-out training set.\nWe define $a = [-K(x_1, x_i), \ldots, -K(x_n, x_i)]^\top \in \mathbb{R}^n$ and $b \in \mathbb{R}^n$ as a one-hot column vector with all zeros apart from the $i$th component, which is 1. Let $a_* = a + K(x_i, x_i) b$. Then we have:\n$$K_* = K + b a_*^\top, \qquad K_{S^i} = K_* + a b^\top. \quad (9)$$\nThat is, we can write $K_{S^i}$ as a rank-2 update to $K$. This can be verified by simple algebra, using the fact that $K$ is a symmetric kernel. Now we are interested in bounding $\|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}}$. For a function $h(\cdot) = \sum_{i=1}^m p_i K(x_i, \cdot) \in \mathcal{H}$ we have $\|h\|_{\mathcal{H}} = \sqrt{p^\top K p} = \|K^{\frac{1}{2}} p\|$. So we have:\n$$\begin{aligned} \|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}} &= \|K^{\frac{1}{2}}(\hat{c}_S - \hat{c}_{S^i})\| \\ &= \|K^{\frac{1}{2}}(K^\dagger y + (I - K^\dagger K)v - (K_{S^i})^\dagger y^i - (I - (K_{S^i})^\dagger K_{S^i}) v^i)\| \\ &= \|K^{\frac{1}{2}}(K^\dagger y - (K_{S^i})^\dagger y + y_i (K_{S^i})^\dagger b + (I - K^\dagger K)(v - v^i) - (K^\dagger K - (K_{S^i})^\dagger K_{S^i}) v^i)\| \\ &= \|K^{\frac{1}{2}}[(K^\dagger - (K_{S^i})^\dagger) y + (I - K^\dagger K)(v - v^i) - (K^\dagger K - (K_{S^i})^\dagger K_{S^i}) v^i]\|. \quad (10) \end{aligned}$$\nHere we make use of the fact that $(K_{S^i})^\dagger b = 0$. If $K$ has full rank (as in Remark 2), we see that $b$ lies in the column space of $K$ and $a_*$ lies in the column space of $K^\top$. Furthermore, $\beta_* = 1 + a_*^\top K^\dagger b = 1 + a^\top K^\dagger b + K(x_i, x_i) b^\top K^\dagger b = K_{ii}(K^\dagger)_{ii} \ne 0$. Using equation 2.2 of Baksalary et al. (2003) we obtain:\n$$\begin{aligned} K_*^\dagger &= K^\dagger - (K_{ii}(K^\dagger)_{ii})^{-1} K^\dagger b a_*^\top K^\dagger \\ &= K^\dagger - (K_{ii}(K^\dagger)_{ii})^{-1} K^\dagger b a^\top K^\dagger - ((K^\dagger)_{ii})^{-1} K^\dagger b b^\top K^\dagger \\ &= K^\dagger + (K_{ii}(K^\dagger)_{ii})^{-1} K^\dagger b b^\top - ((K^\dagger)_{ii})^{-1} K^\dagger b b^\top K^\dagger. \quad (11) \end{aligned}$$\nHere we make use of the fact that $a^\top K^\dagger = -b^\top$. Also, using the corresponding formula from List 2 of Baksalary et al. (2003), we have $K_*^\dagger K_* = K^\dagger K$.\nNext, we see that since $K_*$ has the same rank as $K$, $a$ lies in the column space of $K_*$, and $b$ lies in the column space of $K_*^\top$. Furthermore, $\beta = 1 + b^\top K_*^\dagger a = 0$. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al. (2003)) to obtain the expression for $(K_{S^i})^\dagger$, with $k = K_*^\dagger a$ and $h = b^\top K_*^\dagger$:\n$$\begin{aligned} (K_{S^i})^\dagger &= K_*^\dagger - k k^\dagger K_*^\dagger - K_*^\dagger h^\dagger h + (k^\dagger K_*^\dagger h^\dagger) k h \\ \implies (K_{S^i})^\dagger - K_*^\dagger &= (k^\dagger K_*^\dagger h^\dagger) k h - k k^\dagger K_*^\dagger - K_*^\dagger h^\dagger h \\ \implies \|(K_{S^i})^\dagger - K_*^\dagger\|_{op} &\le 3 \|K_*^\dagger\|_{op}. \quad (12) \end{aligned}$$\nAbove, we use the fact that the operator norm of a rank-1 matrix is given by $\|u v^\top\|_{op} = \|u\| \times \|v\|$. Also, using the corresponding formula from List 2 of Baksalary et al. (2003), we have:\n$$(K_{S^i})^\dagger K_{S^i} = K_*^\dagger K_* - k k^\dagger \implies K^\dagger K - (K_{S^i})^\dagger K_{S^i} = k k^\dagger. \quad (13)$$\nPutting the two parts together we obtain the bound on $\|(K_{S^i})^\dagger - K^\dagger\|_{op}$:\n$$\begin{aligned} \|K^\dagger - (K_{S^i})^\dagger\|_{op} &= \|K^\dagger - K_*^\dagger + K_*^\dagger - (K_{S^i})^\dagger\|_{op} \le 3\|K_*^\dagger\|_{op} + \|K^\dagger - K_*^\dagger\|_{op} \\ &\le 3\|K^\dagger\|_{op} + 4(K_{ii}(K^\dagger)_{ii})^{-1}\|K^\dagger\|_{op} + 4((K^\dagger)_{ii})^{-1}\|K^\dagger\|_{op}^2 \\ &\le \|K^\dagger\|_{op}(3 + 8\|K^\dagger\|_{op}\|K\|_{op}). \quad (14) \end{aligned}$$\nThe last step follows from $(K_{ii})^{-1} \le \|K^\dagger\|_{op}$ and $((K^\dagger)_{ii})^{-1} \le \|K\|_{op}$. Plugging these calculations into equation (10) we get:\n$$\begin{aligned} \|\hat{f}_S - \hat{f}_{S^i}\|_{\mathcal{H}} &= \|K^{\frac{1}{2}}[(K^\dagger - (K_{S^i})^\dagger) y + (I - K^\dagger K)(v - v^i) - (K^\dagger K - (K_{S^i})^\dagger K_{S^i}) v^i]\| \\ &\le \|K^{\frac{1}{2}}\|_{op}\left(\|(K^\dagger - (K_{S^i})^\dagger) y\| + \|(I - K^\dagger K)(v - v^i)\| + \|k k^\dagger v^i\|\right) \\ &\le \|K^{\frac{1}{2}}\|_{op}(B_0 + \|I - K^\dagger K\|_{op}\|v - v^i\| + \|v^i\|). \quad (15) \end{aligned}$$\nWe see that the right hand side is minimized when $v = v^i = 0$. We have also computed $B_0 = C \times \|K^\dagger\|_{op} \times \mathrm{cond}(K) \times \|y\|$, which concludes the proof of Lemma 11." }, { "heading": "5 REMARK AND RELATED WORK", "text": "In the previous section we obtained bounds on the $CV_{loo}$ stability of interpolating solutions to the kernel least squares problem. Our kernel least squares results can be compared with stability bounds for regularized ERM (see Remark 3). Regularized ERM has a strong stability guarantee in the form of a uniform stability bound, which turns out to be inversely proportional to the regularization parameter $\lambda$ and the sample size $n$ (Bousquet & Elisseeff, 2001). However, this estimate becomes vacuous as $\lambda \to 0$. In this paper, we establish a bound on average stability, and show that this bound is minimized when the minimum norm ERM solution is chosen.
We study average stability since one can expect worst-case scenarios where the minimum norm is arbitrarily large (when $n \approx d$). One of our key findings is the relationship between minimizing the norm of the ERM solution and minimizing a bound on stability.\n[Figure 2: Typical double descent of the condition number (y axis) of a radial basis function kernel $K(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right)$ built from a random data matrix distributed as $\mathcal{N}(0, 1)$: as in the linear case, the condition number is worse when $n = d$, better if $n > d$ (on the right of $n = d$), and also better if $n < d$ (on the left of $n = d$). The parameter $\sigma$ was chosen to be 5. From Poggio et al. (2019).]\nThis leads to a second observation, namely, that we can consider the limit of our risk bounds as both the sample size $n$ and the dimensionality of the data $d$ go to infinity, with the ratio $d/n \to \gamma > 1$ as $n, d \to \infty$. This is a classical setting in statistics which allows us to use results from random matrix theory (Marchenko & Pastur, 1967). In particular, for linear kernels the behavior of the smallest eigenvalue of the kernel matrix (which appears in our bounds) can be characterized in this asymptotic limit. In fact, under appropriate distributional assumptions, the quantity $\|X^\dagger\|_{op}\|y\|$ appearing in our bound for linear regression behaves as $\frac{\sqrt{n}}{\sqrt{d} - \sqrt{n}} \to \frac{1}{\sqrt{\gamma} - 1}$. Here the dimension of the data coincides with the number of parameters in the model. Interestingly, analogous results hold for more general kernels (inner product and RBF kernels) (El Karoui, 2010), where the asymptotics are taken with respect to the number and dimensionality of the data. These results predict a double descent curve for the condition number, as found in practice; see Figure 2.
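The peak of the condition number near $n = d$ is easy to probe numerically. A hedged sketch (the dimension, number of trials, and data distribution are arbitrary choices made here, not the exact setup of Poggio et al. (2019), though $\sigma = 5$ matches Figure 2):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, sigma=5.0):
    """Gaussian RBF kernel matrix of a data matrix X (one sample per row)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

d = 50                       # assumed dimension; sigma = 5 as in Figure 2
conds = {}
for n in (25, 50, 100):      # n < d, n = d, n > d
    trials = [np.linalg.cond(rbf_kernel(rng.standard_normal((n, d))))
              for _ in range(5)]
    conds[n] = float(np.mean(trials))
    print(f"n = {n:3d}: cond(K) ~ {conds[n]:.1f}")

# On the left of n = d, conditioning worsens as n grows toward d.
assert conds[25] < conds[50]
```

Only the left-of-peak increase is asserted, since it follows from eigenvalue interlacing for nested kernel matrices; the behavior for $n > d$ is left to inspection of the printed values.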
While it may seem that our bounds in Theorem 7 diverge if $d$ is held constant and $n \to \infty$, this case is not covered by our theorem, since when $n > d$ we no longer have interpolating solutions.\nRecently, there has been a surge of interest in studying linear and kernel least squares models, since classical results focus on situations where constraints or penalties that prevent interpolation are added to the empirical risk. For example, high dimensional linear regression is considered in Mei & Montanari (2019); Hastie et al. (2019); Bartlett et al. (2019), and “ridgeless” kernel least squares is studied in Liang et al. (2019); Rakhlin & Zhai (2018) and Liang et al. (2020). While these papers study upper and lower bounds on the risk of interpolating solutions to the linear and kernel least squares problems, ours are the first derived using stability arguments. While it might be possible to obtain tighter excess risk bounds through careful analysis of the minimum norm interpolant, our simple approach helps us establish a link between stability in the statistical and the numerical sense.\nFinally, we can compare our results with observations made in Poggio et al. (2019) on the condition number of random kernel matrices. The condition number of the empirical kernel matrix is known to control the numerical stability of the solution to a kernel least squares problem. Our results show that the statistical stability is also controlled by the condition number of the kernel matrix, providing a natural link between numerical and statistical stability." }, { "heading": "6 CONCLUSIONS", "text": "In summary, minimizing a bound on cross validation stability minimizes the expected error in both the classical and the modern regime of ERM. In the classical regime ($d < n$), $CV_{loo}$ stability implies generalization and consistency for $n \to \infty$.
In the modern regime (d > n), as described in this paper, CVloo stability can account for the double descent curve in kernel interpolants (Belkin et al., 2019) under appropriate distributional assumptions. The main contribution of this paper is characterizing stability of interpolating solutions, in particular deriving excess risk bounds via a stability argument. In the process, we show that among all the interpolating solutions, the one with minimum norm also minimizes a bound on stability. Since the excess risk bounds of the minimum norm interpolant depend on the pseudoinverse of the kernel matrix, we establish here an elegant link between numerical and statistical stability. This also holds for solutions computed by gradient descent, since gradient descent converges to minimum norm solutions in the case of “linear” kernel methods. Our approach is simple and combines basic stability results with matrix inequalities." }, { "heading": "A EXCESS RISK, GENERALIZATION, AND STABILITY", "text": "We use the same notation as introduced in Section 2 for the various quantities considered in this section. That is in the supervised learning setup V (f, z) is the loss incurred by hypothesis f on the sample z, and I[f ] = Ez[V (f, z)] is the expected error of hypothesis f . Since we are interested in different forms of stability, we will consider learning problems over the original training set S = {z1, z2, . . . , zn}, the leave one out training set Si = {z1, . . . , zi−1, zi+1, . . . , zn}, and the replace one training set (Si, z) = {z1, . . . , zi−1, zi+1, . . . 
, zn, z}.

A.1 REPLACE ONE AND LEAVE ONE OUT ALGORITHMIC STABILITY

Similar to the definition of expected CVloo stability in equation (3) of the main paper, we say an algorithm is cross validation replace one stable (in expectation), denoted as CVro, if there exists βro > 0 such that

ES,z[V(fS, z) − V(f(Si,z), z)] ≤ βro.

We can strengthen the above stability definition by introducing the notion of replace one algorithmic stability (in expectation) Bousquet & Elisseeff (2001): there exists αro > 0 such that for all i = 1, . . . , n,

ES,z[‖fS − f(Si,z)‖∞] ≤ αro.

We make two observations. First, if the loss is Lipschitz, that is, if there exists CV > 0 such that for all f, f′ ∈ H

‖V(f, z) − V(f′, z)‖ ≤ CV ‖f − f′‖,

then replace one algorithmic stability implies CVro stability with βro = CV αro. Moreover, the same result holds if the loss is locally Lipschitz and there exists R > 0 such that ‖fS‖∞ ≤ R almost surely. In this latter case the Lipschitz constant will depend on R. Later, we illustrate this situation for the square loss.

Second, we have for all i = 1, . . . , n, S and z,

ES,z[‖fS − f(Si,z)‖∞] ≤ ES,z[‖fS − fSi‖∞] + ES,z[‖f(Si,z) − fSi‖∞].

This observation motivates the notion of leave one out algorithmic stability (in expectation) Bousquet & Elisseeff (2001):

ES,z[‖fS − fSi‖∞] ≤ αloo.

Clearly, leave one out algorithmic stability implies replace one algorithmic stability with αro = 2αloo, and it also implies CVro stability with βro = 2CV αloo.

A.2 EXCESS RISK AND CVloo, CVro STABILITY

We recall the statement of Lemma 5 in Section 3, which bounds the excess risk using the CVloo stability of a solution.

Lemma 13 (Excess Risk & CVloo Stability) For all i = 1, . . . , n,

ES[I[fSi] − inf_{f∈H} I[f]] ≤ ES[V(fSi, zi) − V(fS, zi)]. (16)

In this section, two properties of ERM are useful, namely symmetry and a form of unbiasedness.

Symmetry.
A key property of ERM is that it is symmetric with respect to the data set S, meaning that it does not depend on the order of the data in S.

A second property relates the expected ERM with the minimum of the expected risk.

ERM Bias. The following inequality holds:

ES[IS[fS]] − min_{f∈H} I[f] ≤ 0. (17)

To see this, note that IS[fS] ≤ IS[f] for all f ∈ H by the definition of ERM, so that taking the expectation of both sides gives

ES[IS[fS]] ≤ ES[IS[f]] = I[f]

for all f ∈ H. This implies ES[IS[fS]] ≤ min_{f∈H} I[f], and hence (17) holds.

Remark 14 Note that the same argument gives, more generally, that

ES[inf_{f∈H} IS[f]] − inf_{f∈H} I[f] ≤ 0. (18)

Given the above premise, the proof of Lemma 5 is simple.

Proof [of Lemma 5] Adding and subtracting ES[IS[fS]] from the expected excess risk, we have that

ES[I[fSi] − min_{f∈H} I[f]] = ES[I[fSi] − IS[fS] + IS[fS] − min_{f∈H} I[f]], (19)

and since ES[IS[fS]] − min_{f∈H} I[f] is less than or equal to zero, see (18), we get

ES[I[fSi] − min_{f∈H} I[f]] ≤ ES[I[fSi] − IS[fS]]. (20)

Moreover, for all i = 1, . . . , n,

ES[I[fSi]] = ES[E_{zi} V(fSi, zi)] = ES[V(fSi, zi)]   and   ES[IS[fS]] = (1/n) Σ_{i=1}^{n} ES[V(fS, zi)] = ES[V(fS, zi)].

Plugging these last two expressions into (20) and (19) leads to (4).

We can prove a similar result relating excess risk with CVro stability.

Lemma 15 (Excess Risk & CVro Stability) Given the above definitions, the following inequality holds for all i = 1, . . . , n,

ES[I[fS] − inf_{f∈H} I[f]] ≤ ES[I[fS] − IS[fS]] = ES,z[V(fS, z) − V(f(Si,z), z)]. (21)

Proof The first inequality follows by adding and subtracting IS[fS] from the expected risk I[fS],

ES[I[fS] − min_{f∈H} I[f]] = ES[I[fS] − IS[fS] + IS[fS] − min_{f∈H} I[f]],

and recalling (18). The main step in the proof is showing that, for all i = 1, . . . , n,

ES[IS[fS]] = ES,z[V(f(Si,z), z)], (22)

to be compared with the trivial equality ES[IS[fS]] = ES[V(fS, zi)].
To prove Equation (22), we have for all i = 1, . . . , n,

ES[IS[fS]] = ES,z[(1/n) Σ_{i=1}^{n} V(fS, zi)] = (1/n) Σ_{i=1}^{n} ES,z[V(f(Si,z), z)] = ES,z[V(f(Si,z), z)],

where we used the fact that, by the symmetry of the algorithm, ES,z[V(f(Si,z), z)] is the same for all i = 1, . . . , n. The proof is concluded noting that ES[I[fS]] = ES,z[V(fS, z)].

A.3 DISCUSSION ON STABILITY AND GENERALIZATION

Below we discuss some further aspects of stability and its connection to other quantities in statistical learning theory.

Remark 16 (CVloo stability in expectation and in probability) In Mukherjee et al. (2006), CVloo stability is defined in probability, that is, there exist β^P_CV > 0 and 0 < δ^P_CV ≤ 1 such that

PS{|V(fSi, zi) − V(fS, zi)| ≥ β^P_CV} ≤ δ^P_CV.

Note that the absolute value is not needed for ERM since almost positivity holds Mukherjee et al. (2006), that is, V(fSi, zi) − V(fS, zi) > 0. Then CVloo stability in probability and in expectation are clearly related, and indeed equivalent for bounded loss functions. CVloo stability in expectation (3) is what we study in the following sections.

Remark 17 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists βu > 0 such that for all i = 1, . . . , n, sup_{z∈Z} |V(fSi, z) − V(fS, z)| ≤ βu. Clearly this is a strong notion implying most other definitions of stability. We note that there are a number of different notions of stability; we refer the interested reader to Kutin & Niyogi (2002) and Mukherjee et al. (2006).

Remark 18 (CVloo Stability & Learnability) A natural question is to what extent suitable notions of stability are not only sufficient but also necessary for controlling the excess risk of ERM.
Classically, the latter is characterized in terms of a uniform version of the law of large numbers, which itself can be characterized in terms of suitable complexity measures of the hypothesis class. Uniform stability is too strong to characterize consistency, while CVloo stability turns out to provide a suitably weak definition, as shown in Mukherjee et al. (2006); see also Kutin & Niyogi (2002). Indeed, a main result in Mukherjee et al. (2006) shows that CVloo stability is equivalent to consistency of ERM:

Theorem 19 Mukherjee et al. (2006) For ERM and bounded loss functions, CVloo stability in probability with β^P_CV converging to zero for n → ∞ is equivalent to consistency and generalization of ERM.

Remark 20 (CVloo stability & in-sample/out-of-sample error) Let (S, z) = {z1, . . . , zn, z} (z is a data point drawn according to the same distribution) with corresponding ERM solution f(S,z); then (4) can be equivalently written as

ES[I[fS] − inf_{f∈F} I[f]] ≤ ES,z[V(fS, z) − V(f(S,z), z)].

Thus CVloo stability measures how much the loss changes when we test on a point that is present in the training set versus absent from it. In this view, it can be seen as an average measure of the difference between the in-sample and out-of-sample error.

Remark 21 (CVloo stability and generalization) A common error measure is the (expected) generalization gap ES[I[fS] − IS[fS]]. For non-ERM algorithms, CVloo stability by itself is not sufficient to control this term, and further conditions are needed Mukherjee et al. (2006), since

ES[I[fS] − IS[fS]] = ES[I[fS] − IS[fSi]] + ES[IS[fSi] − IS[fS]].

The second term becomes, for all i = 1, . . . , n,

ES[IS[fSi] − IS[fS]] = (1/n) Σ_{i=1}^{n} ES[V(fSi, zi) − V(fS, zi)] = ES[V(fSi, zi) − V(fS, zi)],

and hence is controlled by CVloo stability. The first term is called the expected leave one out error in Mukherjee et al.
(2006) and is controlled in ERM as n → ∞; see Theorem 19 above.

B CVloo STABILITY OF LINEAR REGRESSION

We have a dataset S = {(xi, yi)}_{i=1}^{n} and we want to find a mapping w ∈ R^d that minimizes the empirical least squares risk. All interpolating solutions are of the form (ŵS)^⊤ = y^⊤X† + v^⊤(I − XX†). Similarly, all interpolating solutions on the leave one out dataset Si can be written as (ŵSi)^⊤ = y_i^⊤(Xi)† + v_i^⊤(I − Xi(Xi)†). Here X, Xi ∈ R^{d×n} are the data matrices for the original and leave one out datasets, respectively. We note that when v = 0 and vi = 0 we obtain the minimum norm interpolating solutions on the datasets S and Si.

In this section we want to estimate the CVloo stability of the minimum norm solution to the ERM problem in the linear regression case. This is the case outlined in Remark 9 of the main paper. In order to prove Remark 9, we only need to combine Lemma 10 with the linear regression analogue of Lemma 11. We state and prove that result in this section. This result predicts a double descent curve for the norm of the pseudoinverse, as found in practice; see Figure 3.

Lemma 22 Let ŵS, ŵSi be any interpolating least squares solutions on the full and leave one out datasets S, Si. Then ‖ŵS − ŵSi‖ ≤ B_CV(X†, y, v, vi), and B_CV is minimized when v = vi = 0, which corresponds to the minimum norm interpolating solutions w†_S, w†_Si. Also,

‖w†_S − w†_Si‖ ≤ 3 ‖X†‖_op × ‖y‖. (23)

As mentioned in Section 2.1 of the main paper, linear regression can be viewed as a case of the kernel regression problem where H = R^d and the feature map Φ is the identity map. The inner product and norms considered in this case are the usual Euclidean inner product and 2-norm for vectors in R^d. The notation ‖·‖ denotes the Euclidean norm for vectors both in R^d and R^n; the usage should be clear from the context.
Also, ‖A‖_op is the left operator norm for a matrix A ∈ R^{n×d}, that is, ‖A‖_op = sup_{y∈R^n, ‖y‖=1} ‖y^⊤A‖.

We have n samples in the training set for a linear regression problem, {(xi, yi)}_{i=1}^{n}. We collect all the samples into a single matrix/vector X = [x1 x2 . . . xn] ∈ R^{d×n} and y = [y1 y2 . . . yn]^⊤ ∈ R^n. Then any interpolating ERM solution wS satisfies the linear equation

w_S^⊤ X = y^⊤. (24)

Any interpolating solution can be written as

(ŵS)^⊤ = y^⊤X† + v^⊤(I − XX†). (25)

If we consider the leave one out training set Si, we can find the minimum norm ERM solution for Xi = [x1 . . . 0 . . . xn] and yi = [y1 . . . 0 . . . yn]^⊤ as

(ŵSi)^⊤ = y_i^⊤(Xi)† + v_i^⊤(I − Xi(Xi)†). (26)

We can write Xi as

Xi = X + ab^⊤, (27)

where a ∈ R^d is a column vector representing the additive change to the i-th column, i.e., a = −xi, and b ∈ R^n is the i-th element of the canonical basis in R^n (all the coefficients are zero but the i-th, which is one). Thus ab^⊤ is a d × n matrix composed of all zeros apart from the i-th column, which is equal to a.

We also have yi = y − yi b. Now, per Lemma 10, we are interested in bounding the quantity ‖ŵSi − ŵS‖ = ‖(ŵSi)^⊤ − (ŵS)^⊤‖. This simplifies to:

‖ŵSi − ŵS‖ = ‖y_i^⊤(Xi)† − y^⊤X† + v_i^⊤ − v^⊤ + v^⊤XX† − v_i^⊤Xi(Xi)†‖
= ‖(y^⊤ − yi b^⊤)(Xi)† − y^⊤X† + v_i^⊤ − v^⊤ + v^⊤XX† − v_i^⊤Xi(Xi)†‖
= ‖y^⊤((Xi)† − X†) + yi b^⊤(Xi)† + v_i^⊤ − v^⊤ + v^⊤XX† − v_i^⊤Xi(Xi)†‖
= ‖y^⊤((Xi)† − X†) + v_i^⊤ − v^⊤ + v^⊤XX† − v_i^⊤Xi(Xi)†‖
= ‖y^⊤((Xi)† − X†) + (v_i^⊤ − v^⊤)(I − XX†) − v_i^⊤(XX† − Xi(Xi)†)‖. (28)

In the above equation we made use of the fact that b^⊤(Xi)† = 0. To compute (Xi)† from X† we use a classical formula for the pseudoinverse of a perturbed matrix (Meyer, 1973; Baksalary et al., 2003). We see that a = −xi is a vector in the column space of X and b is in the range space of X^⊤ (provided X has full column rank), with β = 1 + b^⊤X†a = 1 − b^⊤X†xi = 0. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al.
(2003)) to obtain the expression for (Xi)†:

(Xi)† = X† − kk†X† − X†h†h + (k†X†h†)kh, (29)

where k = X†a, h = b^⊤X†, and u† = u^⊤/‖u‖² for any non-zero vector u. Then

(Xi)† − X† = (k†X†h†)kh − kk†X† − X†h†h
= (a^⊤(X†)^⊤X†(X†)^⊤b) kh / (‖k‖² ‖h‖²) − kk†X† − X†h†h,

which implies

‖(Xi)† − X†‖_op ≤ |a^⊤(X†)^⊤X†(X†)^⊤b| / (‖X†a‖ ‖b^⊤X†‖) + 2‖X†‖_op
≤ ‖X†‖_op ‖X†a‖ ‖b^⊤X†‖ / (‖X†a‖ ‖b^⊤X†‖) + 2‖X†‖_op
= 3‖X†‖_op. (30)

The above inequalities follow from the fact that the operator norm of a rank 1 matrix is given by ‖uv^⊤‖_op = ‖u‖ × ‖v‖.

Also, from List 2 of Baksalary et al. (2003) we have that Xi(Xi)† = XX† − h†h. Plugging these calculations into equation (28) we get:

‖ŵSi − ŵS‖ = ‖y^⊤((Xi)† − X†) + (v_i^⊤ − v^⊤)(I − XX†) − v_i^⊤(XX† − Xi(Xi)†)‖
≤ B0 + ‖I − XX†‖_op ‖v − vi‖ + ‖vi‖ × ‖h†h‖_op
≤ B0 + 2‖v − vi‖ + ‖vi‖. (31)

We see that the right hand side is minimized when v = vi = 0. We can also compute B0 = 3‖X†‖_op ‖y‖, which concludes the proof of Lemma 22." } ]
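The bound (23) of Lemma 22 can be sanity-checked numerically: build a random overparameterized problem, form the minimum-norm solutions on S and on Si via the pseudoinverse, and compare the difference against 3‖X†‖_op ‖y‖. This is an illustrative sketch (dimensions and seed are arbitrary), not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 50, 20  # overparameterized regime d > n, so interpolation is possible
X = rng.standard_normal((d, n))   # data matrix, columns are samples
y = rng.standard_normal(n)

# Minimum-norm interpolating solution on S: w_S^T = y^T X^+  (i.e. v = 0)
w_S = np.linalg.pinv(X).T @ y

# Leave-one-out dataset S^i: zero out column i of X and entry i of y
i = 0
Xi = X.copy(); Xi[:, i] = 0.0
yi = y.copy(); yi[i] = 0.0
w_Si = np.linalg.pinv(Xi).T @ yi

lhs = np.linalg.norm(w_S - w_Si)
rhs = 3.0 * np.linalg.norm(np.linalg.pinv(X), 2) * np.linalg.norm(y)
print(lhs <= rhs)  # the bound of Lemma 22 should hold
```

In practice the left-hand side is far below the bound, which only tracks the worst case through ‖X†‖_op.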
[ "This paper presents a novel way of making full use of compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. The authors give an insightful analysis of the influence of gradient diversity on the performance of continual learning, and propose a regularization that connects metric learning and continual learning. However, there are still some issues to be addressed, as detailed below." ]
"The use of episodic memories in continual learning has been shown to be effective in alleviating catastrophic forgetting. In recent studies, several gradient-based approaches have been developed to make more efficient use of compact episodic memories: they constrain the gradients resulting from new samples with those from memorized samples, aiming to reduce the diversity of gradients across different tasks. In this paper, we reveal the relation between the diversity of gradients and the discriminativeness of representations, demonstrating connections between Deep Metric Learning and continual learning. Based on these findings, we propose a simple yet highly efficient method – Discriminative Representation Loss (DRL) – for continual learning. In comparison with several state-of-the-art methods, DRL shows effectiveness with low computational cost on multiple benchmark experiments in the setting of online continual learning."
[ { "authors": [ "Rahaf Aljundi", "Min Lin", "Baptiste Goujaud", "Yoshua Bengio" ], "title": "Gradient based sample selection for online continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Puneet K Dokania", "Thalaiyasingam Ajanthan", "Philip HS Torr" ], "title": "Riemannian walk for incremental learning: Understanding forgetting and intransigence", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-GEM", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Marcus Rohrbach", "Mohamed Elhoseiny", "Thalaiyasingam Ajanthan", "Puneet K Dokania", "Philip HS Torr", "Marc’Aurelio Ranzato" ], "title": "On tiny episodic memories in continual learning", "venue": "arXiv preprint arXiv:1902.10486,", "year": 2019 }, { "authors": [ "Yu Chen", "Tom Diethe", "Neil Lawrence" ], "title": "Facilitating bayesian continual learning by natural gradients and stein gradients", "venue": "Continual Learning Workshop of 32nd Conference on Neural Information Processing Systems (NeurIPS", "year": 2018 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Niannan Xue", "Stefanos Zafeiriou" ], "title": "Arcface: Additive angular margin loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tom Diethe", "Tom Borchert", "Eno Thereska", "Borja de Balle Pigem", "Neil Lawrence" ], "title": "Continual learning in practice", "venue": "In Continual Learning Workshop of 32nd Converence on Neural Information Processing Systems (NeurIPS", "year": 2018 }, { "authors": [ "Mehrdad Farajtabar", "Navid Azizan", "Alex Mott", "Ang Li" ], "title": "Orthogonal gradient descent for 
continual learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Ching-Yi Hung", "Cheng-Hao Tu", "Cheng-En Wu", "Chien-Hung Chen", "Yi-Ming Chan", "Chu-Song Chen" ], "title": "Compacting, picking and growing for unforgetting continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mahmut Kaya", "Hasan Şakir Bilge" ], "title": "Deep metric learning: A survey. Symmetry", "venue": null, "year": 2019 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Ya Le", "Xuan Yang" ], "title": "Tiny imagenet visual recognition challenge", "venue": "CS 231N,", "year": 2015 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "Christopher JC Burges" ], "title": "MNIST handwritten digit database", "venue": "AT&T Labs [Online]. Available: http://yann. lecun. 
com/exdb/mnist,", "year": 2010 }, { "authors": [ "Timothée Lesort", "Vincenzo Lomonaco", "Andrei Stoian", "Davide Maltoni", "David Filliat", "Natalia Dı́az-Rodrı́guez" ], "title": "Continual learning for robotics", "venue": "arXiv preprint arXiv:1907.00182,", "year": 2019 }, { "authors": [ "Jinlong Liu", "Yunzhi Bai", "Guoqing Jiang", "Ting Chen", "Huayan Wang" ], "title": "Understanding why neural networks generalize well through GSNR of parameters", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "Sebastian Mika", "Gunnar Ratsch", "Jason Weston", "Bernhard Scholkopf", "Klaus-Robert" ], "title": "Mullers. Fisher discriminant analysis with kernels. In Neural networks for signal processing", "venue": "IX: Proceedings of the 1999 IEEE signal processing society workshop (cat. 
no", "year": 1999 }, { "authors": [ "Cuong V Nguyen", "Yingzhen Li", "Thang D Bui", "Richard E Turner" ], "title": "Variational continual learning", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matthew Riemer", "Ignacio Cases", "Robert Ajemian", "Miao Liu", "Irina Rish", "Yuhai Tu", "Gerald Tesauro" ], "title": "Learning to learn without forgetting by maximizing transfer and minimizing interference", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy Lillicrap", "Gregory Wayne" ], "title": "Experience replay for continual learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Karsten Roth", "Timo Milbich", "Samarth Sinha", "Prateek Gupta", "Bjoern Ommer", "Joseph Paul Cohen" ], "title": "Revisiting training strategies and generalization performance in deep metric learning", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "arXiv preprint arXiv:1805.06370,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Huangshi Tian", "Minchen Yu", "Wei Wang" ], "title": "Continuum: A platform for cost-aware, 
low-latency continual learning", "venue": "In Proceedings of the ACM Symposium on Cloud Computing,", "year": 2018 }, { "authors": [ "Jian Wang", "Feng Zhou", "Shilei Wen", "Xiao Liu", "Yuanqing Lin" ], "title": "Deep metric learning with angular loss", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Xun Wang", "Xintong Han", "Weilin Huang", "Dengke Dong", "Matthew R Scott" ], "title": "Multi-similarity loss with general pair weighting for deep metric learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kilian Q Weinberger", "John Blitzer", "Lawrence K Saul" ], "title": "Distance metric learning for large margin nearest neighbor classification", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Chao-Yuan Wu", "R Manmatha", "Alexander J Smola", "Philipp Krahenbuhl" ], "title": "Sampling matters in deep embedding learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In International Conference on Machine Learning,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the real world, we are often faced with situations where data distributions are changing over time, and we would like to update our models with new data in time, with bounded growth in system size. These situations fall under the umbrella of “continual learning”, which has many practical applications, such as recommender systems, retail supply chain optimization, and robotics (Lesort et al., 2019; Diethe et al., 2018; Tian et al., 2018). Comparisons have also been made with the way that humans are able to learn new tasks without forgetting previously learned ones, using common knowledge shared across different skills. The fundamental problem in continual learning is catastrophic forgetting (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017), i.e., (neural network) models have a tendency to forget previously learned tasks while learning new ones.

There are three main categories of methods for alleviating forgetting in continual learning: i) regularization-based methods, which aim at preserving knowledge of models of previous tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018); ii) architecture-based methods, which incrementally evolve the model by learning task-shared and task-specific components (Schwarz et al., 2018; Hung et al., 2019); iii) replay-based methods, which focus on preserving knowledge of the data distributions of previous tasks, including methods of experience replay by episodic memories or generative models (Shin et al., 2017; Rolnick et al., 2019), methods for generating compact episodic memories (Chen et al., 2018; Aljundi et al., 2019), and methods for more efficiently using episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Riemer et al., 2019; Farajtabar et al., 2020).

Gradient-based approaches using episodic memories, in particular, have been receiving increasing attention.
The essential idea is to use gradients produced by samples from episodic memories to constrain the gradients produced by new samples, e.g. by ensuring that the inner product of the pair of gradients is non-negative (Lopez-Paz & Ranzato, 2017):

⟨g_t, g_k⟩ = ⟨∂L(x_t, θ)/∂θ, ∂L(x_k, θ)/∂θ⟩ ≥ 0, ∀k < t, (1)

where t and k are time indices, x_t denotes a new sample from the current task, and x_k denotes a sample from the episodic memory. Thus, the updates of parameters are forced to preserve the performance on previous tasks as much as possible.

In Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017), g_t is projected to the direction that is closest to it in L2-norm whilst also satisfying Eq. (1): min_g̃ (1/2)‖g_t − g̃‖²₂, s.t. ⟨g̃, g_k⟩ ≥ 0, ∀k < t. Optimizing this objective requires solving a high-dimensional quadratic program and is thus computationally expensive. Averaged-GEM (A-GEM) (Chaudhry et al., 2019a) alleviates the computational burden of GEM by using the averaged gradient over a batch of samples instead of the individual gradients of samples in the episodic memory. This not only simplifies the computation, but also obtains performance comparable to GEM. Orthogonal Gradient Descent (OGD) (Farajtabar et al., 2020) projects g_t to the direction that is perpendicular to the surface formed by {g_k | k < t}. Moreover, Aljundi et al. (2019) propose Gradient-based Sample Selection (GSS), which selects into episodic memories the samples that produce the most diverse gradients with respect to other samples. Here diversity is measured by the cosine similarity between gradients. Since the cosine similarity is computed using the inner product of two normalized gradients, GSS embodies the same principle as other gradient-based approaches with episodic memories. Although GSS suggests the samples with the most diverse gradients are important for generalization across tasks, Chaudhry et al.
(2019b) show that the average gradient over a small set of random samples may be able to obtain good generalization as well.

In this paper, we answer the following questions: i) Which samples tend to produce diverse gradients that strongly conflict with other samples, and why are such samples able to help with generalization? ii) Why does a small set of randomly chosen samples also help with generalization? iii) Can we reduce the diversity of gradients in a more efficient way? Our answers reveal the relation between the diversity of gradients and the discriminativeness of representations, and further show connections between Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) and continual learning. Drawing on these findings, we propose a new approach, Discriminative Representation Loss (DRL), for classification tasks in continual learning. Our method shows improved performance with relatively low computational cost, in terms of time and RAM, when compared to several state-of-the-art (SOTA) methods across multiple benchmark tasks in the setting of online continual learning." }, { "heading": "2 A NEW PERSPECTIVE OF REDUCING DIVERSITY OF GRADIENTS", "text": "According to Eq. (1), negative cosine similarities between gradients produced by current and previous tasks result in worse performance in continual learning. This can be interpreted from the perspective of constrained optimization, as discussed by Aljundi et al. (2019). Moreover, the diversity of gradients relates to the Gradient Signal to Noise Ratio (GSNR) (Liu et al., 2020), which plays a crucial role in the model’s generalization ability. Intuitively, when more of the gradients point in diverse directions, the variance will be larger, leading to a smaller GSNR, which indicates that reducing the diversity of gradients can improve generalization.
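The constraint of Eq. (1) is typically enforced by projection. A minimal sketch of the A-GEM-style closed-form projection (Chaudhry et al., 2019a) on toy gradient vectors — the vectors here are made-up values, not gradients of a real model:

```python
import numpy as np

def agem_project(g, g_ref):
    """If the new-task gradient g conflicts with the reference (memory)
    gradient g_ref (i.e. <g, g_ref> < 0), remove the conflicting component
    so that Eq. (1) holds; otherwise leave g unchanged."""
    dot = g @ g_ref
    if dot >= 0:
        return g
    return g - (dot / (g_ref @ g_ref)) * g_ref

g = np.array([1.0, -2.0])        # gradient from a new sample
g_ref = np.array([0.0, 1.0])     # averaged gradient from the episodic memory
g_tilde = agem_project(g, g_ref)
print(g_tilde, g_tilde @ g_ref)  # inner product is now non-negative
```

GEM instead solves a quadratic program over all memory gradients; A-GEM's single averaged reference gradient reduces this to the one-line projection above.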
This finding leads to the conclusion that samples with the most diverse gradients contain the most critical information for generalization, which is consistent with Aljundi et al. (2019)." }, { "heading": "2.1 THE SOURCE OF GRADIENT DIVERSITY", "text": "We first conducted a simple experiment on classification tasks of 2-D Gaussian distributions, and tried to identify the samples with the most diverse gradients in the 2-D feature space. We trained a linear model on the first task to discriminate between two classes (blue and orange dots in Fig. 1a). We then applied the algorithm Gradient-based Sample Selection with Integer Quadratic Programming (GSS-IQP) (Aljundi et al., 2019) to select the 10% of the training samples that produce gradients with the lowest similarity (black dots in Fig. 1a), and denote this set of samples as M̂ = argmin_M Σ_{i,j∈M} ⟨g_i, g_j⟩/(‖g_i‖·‖g_j‖).

It is clear from Fig. 1a that the samples in M̂ are mostly around the decision boundary between the two classes. Increasing the size of M̂ results in the inclusion of samples that trace the outer edges of the data distributions of each class. Clearly the gradients can be strongly opposed when samples from different classes are very similar. Samples close to decision boundaries are most likely to exhibit this characteristic. Intuitively, storing the decision boundaries of previously learned classes should be an effective way to preserve classification performance on those classes. However, if the episodic memory only includes samples representing the learned boundaries, it may miss important information when the model is required to incrementally learn new classes. We show this by introducing a second task - training the model above on a third class (green dots). We display the decision boundaries (which split the feature space in a one vs. all manner) learned by the model after
all manner) learned by the model after\n4 2 0 2 4 6 x\n4\n2\n0\n2\n4\n6\n8\ny\nclass 0 class 1 M\n(a) Samples with most diverse gradients (M̂ ) after learning task 1, the green line is the decision boundary.\n4 2 0 2 4 6 x\n4\n2\n0\n2\n4\n6\n8\ny\nclass 0 class 1 class 2 memory\n(b) Learned decision boundaries (purple lines) after task 2. Here the episodic memory includes samples in M̂ .\n4 2 0 2 4 6 x\n4\n2\n0\n2\n4\n6\n8\ny\nclass 0 class 1 class 2 memory\n(c) Learned decision boundaries (purple lines) after task 2. Here the episodic memory consists of random samples.\n(a) Splitting samples into several subsets in a 3-class classification task. Dots in different colors are from different classes.\n(b) Estimated distributions of β when drawing negative pairs from different subsets of samples.\n(c) Estimated distributions of α− δ when drawing negative pairs from different subsets of samples.\nFigure 2: Illustration of how Pr(2β > α − δ) in Theorem 1 behaves in various cases by drawing negative pairs from different subsets of a 3-class feature space which are defined in Fig. 2a. The classifier is a linear model. y-axis in the right side of (b) & (c) is for the case of x ∈ S1 ∪ S2. We see that α− δ behaves in a similar way with β but in a smaller range which makes β the key in studying Pr(2β > α − δ). In the case of x ∈ S3 the distribution of β has more mass on larger values than other cases because the predicted probabilities are mostly on the two classes in a pair, and it causes all 〈gn,gm〉 having the opposite sign of 〈xn,xm〉 as shown in Tab. 1.\ntask 2 with M̂ (Fig. 1b) and a random set of samples (Fig. 1c) from task 1 as the episodic memory. The random episodic memory shows better performance than the one selected by GSS-IQP, since the new decision boundaries rely on samples not included in M̂ . It explains why randomly selected memories may generalize better in continual learning. 
Ideally, with M̂ large enough, the model can remember all edges of each class, and hence learn much more accurate decision boundaries sequentially. However, memory size is often limited in practice, especially for high-dimensional data. A more efficient way could be learning more informative representations. The experimental results indicate that: 1) more similar representations in different classes result in more diverse gradients; 2) more diverse representations in the same class help with learning new boundaries incrementally.\nNow we formalise the connection between the diversity of gradients and the discriminativeness of representations for the linear model (proofs are in Appx. A). Notation: a negative pair consists of two samples from different classes; a positive pair consists of two samples from the same class. Let L represent the softmax cross-entropy loss, W ∈ R^{D×K} the weight matrix of the linear model, xn ∈ R^D the input data, and yn ∈ R^K a one-hot vector denoting the label of xn, where D is the dimension of representations and K is the number of classes. Let pn = softmax(on), where on = W^T xn, and let the gradient gn = ∇_W L(xn, yn; W). xn, xm are two different samples when n ≠ m.\nLemma 1. Let εn = pn − yn. Then: 〈gn, gm〉 = 〈xn, xm〉〈εn, εm〉.\nTheorem 1. Suppose yn ≠ ym, and let cn denote the class index of xn (i.e. yn,cn = 1, yn,i = 0, ∀i ≠ cn). Let α ≜ ||pn||² + ||pm||², β ≜ pn,cm + pm,cn and δ ≜ ||pn − pm||₂². Then:\nPr(sign(〈gn, gm〉) = sign(−〈xn, xm〉)) = Pr(2β > α − δ).\nTheorem 2. Suppose yn = ym. When 〈gn, gm〉 ≠ 0, we have: sign(〈gn, gm〉) = sign(〈xn, xm〉).\nFor a better understanding of the theorems, we conduct an empirical study by partitioning the feature space of three classes into several subsets as shown in Fig. 2a, and examine four cases of pairwise samples defined by these subsets: 1). x ∈ S0, both samples in a pair are near the intersection of the three classes; 2).
x ∈ S0 ∪ S1, one sample is close to decision boundaries and the other is far away from the boundaries; 3). x ∈ S3, both samples are close to the decision boundary between their true classes but away from the third class; 4). x ∈ S1 ∪ S2, both samples are far away from the decision boundaries. Theorem 1 says that for samples from different classes, 〈gn, gm〉 takes the opposite sign of 〈xn, xm〉 with a probability that depends on the predictions pn and pm. This probability of flipping the sign especially depends on β, which reflects how likely each sample is to be misclassified as the other's class. We show the empirical distributions of β and (α − δ) obtained by a linear model in Figs. 2b and 2c, respectively. In general, (α − δ) behaves similarly to β in the four cases but in a smaller range, which makes 2β > (α − δ) tend to be true except when β is around zero. Basically, a subset including more samples close to decision boundaries leads to more probability mass on large values of β, and the case of x ∈ S3 results in the largest mass on large values of β because the predicted probabilities mostly concentrate on the two classes in a pair. As shown in Tab. 1, more mass on large values of β leads to larger probabilities of flipping the sign. These results demonstrate that samples with the most diverse gradients (whose gradients have largely negative similarities with other samples) are close to decision boundaries, because they tend to have large β and 〈xn, xm〉 tends to be positive. In the case of x ∈ S1 ∪ S2 the probability of flipping the sign is zero because β concentrates around zero. According to Lemma 1, 〈gn, gm〉 is very close to zero in this case because the predictions are close to the true labels; hence, such samples are not considered as having the most diverse gradients.\nTheorem 2 says 〈gn, gm〉 has the same sign as 〈xn, xm〉 when the two samples are from the same class. We can see that the results of positive pairs in Tab. 1 match Theorem 2.
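Lemma 1 and Theorem 2 can be verified numerically for the linear model: the per-sample gradient of the softmax cross-entropy w.r.t. W is the outer product xn(pn − yn)ᵀ, so the Frobenius inner product of two such gradients factorizes as in Lemma 1. A minimal NumPy sketch (all variable and function names are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 5, 3                        # feature dimension, number of classes
W = rng.normal(size=(D, K))

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def grad(x, y):
    """Gradient of softmax cross-entropy w.r.t. W for one sample: x (p - y)^T."""
    p = softmax(W.T @ x)
    return np.outer(x, p - y)

x_n, x_m = rng.normal(size=D), rng.normal(size=D)
y_n, y_m = np.eye(K)[0], np.eye(K)[1]      # a negative pair (different classes)

g_n, g_m = grad(x_n, y_n), grad(x_m, y_m)
eps_n = softmax(W.T @ x_n) - y_n
eps_m = softmax(W.T @ x_m) - y_m

lhs = np.sum(g_n * g_m)                    # Frobenius inner product <g_n, g_m>
rhs = (x_n @ x_m) * (eps_n @ eps_m)        # <x_n, x_m> <eps_n, eps_m>
assert abs(lhs - rhs) < 1e-9               # Lemma 1 holds

# Theorem 2: for a positive pair, sign(<g_n, g_m>) = sign(<x_n, x_m>)
g_m_pos = grad(x_m, y_n)                   # give x_m the same label as x_n
assert np.sign(np.sum(g_n * g_m_pos)) == np.sign(x_n @ x_m)
```

The positive-pair check works because 〈εn, εm〉 = Σ_{k≠c} pn,k pm,k + (pn,c − 1)(pm,c − 1) is strictly positive for softmax outputs, so the sign of 〈gn, gm〉 is fully determined by 〈xn, xm〉.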
In the case of S0 ∪ S1 the two probabilities do not add up to exactly 1 because the implementation of the cross-entropy loss in TensorFlow smooths the function by a small value to prevent numerical issues, which slightly changes the gradients. As 〈xn, xm〉 is mostly positive for positive pairs, 〈gn, gm〉 is hence also mostly positive, which explains why samples with the most diverse gradients are not sufficient to preserve information within classes in the experiments of Fig. 1. On the other hand, if 〈xn, xm〉 is negative then 〈gn, gm〉 will be negative, which indicates representations within a class should not be too diverse. Extending this theoretical analysis based on a linear model, we also provide an empirical study of non-linear models (Multi-Layer Perceptrons, MLPs). As demonstrated in Tab. 1, the probability of flipping the sign in MLPs is very similar to that of the linear model, since it only depends on the predictions and all models have learned reasonable decision boundaries. The probability of getting a negative 〈gn, gm〉 is also similar to the linear model, except in the case of S1 ∪ S2 for negative pairs, in which the MLP with ReLU gets far fewer negative 〈gn, gm〉. As the MLP with tanh activations is still consistent with the linear model in this case, we attribute the difference to the representations always being positive due to ReLU activations. These results demonstrate that non-linear models exhibit similar behaviors to linear models, mostly aligning with the theorems.\nSince only negative 〈gn, gm〉 may cause conflicts, reducing the diversity of gradients hence relies on reducing negative 〈gn, gm〉. We consider reducing negative 〈gn, gm〉 in two ways: 1). minimize the representation inner product of negative pairs, which pushes the inner product to be negative or zero (for positive representations); 2). optimize the predictions to decrease the probability of flipping the sign.
In this sense, decreasing the representation similarity of negative pairs might help in both ways. In addition, according to Fig. 2b, x ∈ S3 gets a larger prediction similarity than x ∈ S0 because the predictions put most probability mass on the two classes of a pair, which indicates that decreasing the similarity of predictions may decrease the probability of flipping the sign. Hence, we include the logits in the representations. We verify this idea by training two binary classifiers for two groups of MNIST classes ({0, 1} and {7, 9}). The classifiers have two hidden layers, each with 100 hidden units and ReLU activations. We randomly chose 100 test samples from each group to compute the pairwise cosine similarities. Representations are obtained by concatenating the outputs of all layers (including logits) of the neural network; gradients are computed over all parameters of the model. We display the similarities in Figs. 3a and 3b. The correlation coefficients between the gradient and representation similarities of negative pairs are -0.86 and -0.85, while those of positive pairs are 0.71 and 0.79. In all cases, the similarities of representations show strong correlations with the similarities of gradients. The classifier for classes 0 and 1 gets smaller representation similarities and far fewer negative gradient similarities for negative pairs (blue dots), and it also achieves a higher accuracy than the other classifier (99.95% vs. 96.25%), which illustrates the potential of reducing the gradient diversity by decreasing the representation similarity of negative pairs." }, { "heading": "2.2 CONNECTING DEEP METRIC LEARNING TO CONTINUAL LEARNING", "text": "Reducing the representation similarity between classes shares the same concept as learning larger margins, which has been an active research area for a few decades.
For example, Kernel Fisher Discriminant analysis (KFD) (Mika et al., 1999) and distance metric learning (Weinberger et al., 2006) aim to learn kernels that can obtain larger margins in an implicit representation space, whereas Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) leverages deep neural networks to learn embeddings that maximize margins in an explicit representation space. In this sense, DML has the potential to help with reducing the diversity of gradients in continual learning.\nHowever, the usual concepts in DML may not be entirely appropriate for continual learning, as they also aim at learning compact representations within classes (Schroff et al., 2015; Wang et al., 2017; Deng et al., 2019). In continual learning, information unused by the current task might be important for a future task; e.g., in the experiments of Fig. 1 the y-dimension is not useful for task 1 but is useful for task 2. This indicates that learning compact representations on the current task might omit dimensions of the representation space that are important for a future task. In this case, even if we store diverse samples in the memory, the learned representations may be difficult to generalize to future tasks, as the omitted dimensions can only be relearned from the limited samples in the memory. We demonstrate this by training a model with and without L1 regularization on the first two tasks of split-MNIST and split-Fashion-MNIST. The results are shown in Tab. 2. We see that with L1 regularization the model learns much more compact representations and gives a similar performance on task 1 but a much worse performance on task 2 compared to training without L1 regularization. The results suggest that continual learning shares the interest of maximizing margins with DML but prefers a less compact representation space to preserve necessary information for future tasks.
We suggest an opposite way regarding within-class compactness: minimizing the similarities within the same class to obtain a less compact representation space. Roth et al. (2020) proposed a ρ-spectrum metric to measure the information entropy contained in the representation space (details are provided in Appx. D) and introduced a ρ-regularization method to restrain over-compression of representations. The ρ-regularization method randomly replaces negative pairs by positive pairs with a pre-selected probability pρ. Nevertheless, switching pairs is inefficient and may be detrimental to the performance in an online setting, because some negative pairs may never be learned this way. Thus, we propose a different way to restrain the compression of representations, which is introduced in the following." }, { "heading": "3 DISCRIMINATIVE REPRESENTATION LOSS", "text": "Based on our findings in the above section, we propose an auxiliary objective, Discriminative Representation Loss (DRL), for classification tasks in continual learning, which is straightforward, robust, and efficient. Instead of explicitly re-projecting gradients during the training process, DRL helps decrease gradient diversity by optimizing the representations. As defined in Eq. (2), DRL consists of two parts: one minimizes the similarities of representations from different classes (Lbt), which reduces the diversity of gradients from different classes; the other minimizes the similarities of representations from the same class (Lwi), which helps preserve discriminative information for future tasks in continual learning.\nmin_Θ L_DRL = min_Θ (Lbt + α Lwi), α > 0,\nLbt = (1/Nbt) Σ_{i=1}^{B} Σ_{j≠i, yj≠yi} 〈hi, hj〉, Lwi = (1/Nwi) Σ_{i=1}^{B} Σ_{j≠i, yj=yi} 〈hi, hj〉, (2)\nwhere Θ denotes the parameters of the model, B is the training batch size, and Nbt, Nwi are the numbers of negative and positive pairs, respectively.
α is a hyperparameter controlling the strength of Lwi, hi is the representation of xi, and yi is the label of xi. The final loss function combines the commonly used softmax cross-entropy loss for classification tasks (L) with DRL (L_DRL), as shown in Eq. (3):\nL̂ = L + λ L_DRL, λ > 0, (3)\nwhere λ is a hyperparameter controlling the strength of L_DRL: larger values give increased resistance to forgetting, smaller values greater elasticity. We provide experimental results verifying the effects of DRL and an ablation study on Lbt and Lwi (Tab. 7) in Appx. E, according to which Lbt and Lwi are effective at improving forgetting and the ρ-spectrum, respectively. We show the correlation between the ρ-spectrum and model performance in Sec. 5.\nThe computational complexity of DRL is O(B²H), where B is the training batch size and H is the dimension of representations. B is small (10 or 20 in our experiments) and commonly H ≪ W, where W is the number of network parameters. In comparison, the computational complexities of A-GEM and GSS-greedy are O(BrW) and O(BBmW), respectively, where Br is the reference batch size in A-GEM and Bm is the memory batch size in GSS-greedy. The computational complexity discussed here is additional to the cost of common backpropagation. We compare the training time of all methods on MNIST tasks in Tab. 9 in Appx. H, which shows that the representation-based methods require a much lower computational cost than the gradient-based approaches." }, { "heading": "4 ONLINE MEMORY UPDATE AND BALANCED EXPERIENCE REPLAY", "text": "We follow the online setting of continual learning as used by other gradient-based approaches with episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Aljundi et al., 2019), in which the model is trained with only one epoch on the training data.\nWe update the episodic memories by the basic ring buffer strategy: keep the last nc samples of class c in the memory buffer, where nc is the memory size of a seen class c.
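For concreteness, the DRL objective of Eq. (2) reduces to averaging pairwise inner products of the batch representations. A minimal NumPy sketch (illustrative only; names are ours, not the authors' implementation, and in practice the loss would be computed in an autodiff framework and added to the classification loss as in Eq. (3)):

```python
import numpy as np

def drl_loss(H, y, alpha=1.0):
    """Sketch of Eq. (2): H is a (B, d) matrix of batch representations h_i,
    y the integer class labels. L_bt averages <h_i, h_j> over negative pairs,
    L_wi over positive pairs; returns L_bt + alpha * L_wi."""
    B = H.shape[0]
    G = H @ H.T                               # all pairwise inner products <h_i, h_j>
    same = y[:, None] == y[None, :]           # True where y_i == y_j
    off_diag = ~np.eye(B, dtype=bool)         # exclude i == j
    neg, pos = off_diag & ~same, off_diag & same
    L_bt = G[neg].mean() if neg.any() else 0.0
    L_wi = G[pos].mean() if pos.any() else 0.0
    return L_bt + alpha * L_wi
```

In training, the total loss would then be the hedged analogue of Eq. (3), e.g. `total = ce_loss + lam * drl_loss(H, y, alpha)`.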
We have deployed the episodic memories with a fixed size, implying a fixed budget for the memory cost. Further, we maintain a uniform distribution over all seen classes in the memory. The buffer may not be evenly allocated to each class before enough samples have been acquired for newly arriving classes. We show pseudo-code of the memory update strategy in Alg. 1 in Appx. B for a clearer explanation. For class-incremental learning, this strategy can work without knowing task boundaries. Since DRL and the methods of DML depend on the pairwise similarities of samples, we would prefer the training batch to include as wide a variety of classes as possible to obtain sufficient discriminative information. Hence, we adjust the Experience Replay (ER) strategy (Chaudhry et al., 2019b) to the needs of such methods. The idea is to sample uniformly over seen classes in the memory buffer to form a training batch, so that the batch contains as many seen classes as possible. Moreover, we ensure the training batch includes at least one positive pair of each selected class (minimum 2 samples per class) to enable the terms computed from positive pairs in the loss. In addition, we ensure the training batch includes at least one class from the current task. We call this Balanced Experience Replay (BER). The pseudo-code is in Alg. 2 of Appx. B. Note that we update the memory and form the training batch based on the task ID instead of the class ID for instance-incremental tasks (e.g. permuted MNIST tasks), as in this case each task always includes the same set of classes."
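The memory update and balanced batch construction described above (detailed in Algs. 1-3 of Appx. B) can be sketched as follows. This is a simplified illustration under our own naming: it omits the guarantee that the batch contains at least one class from the current task, and truncates rather than exactly balancing the batch:

```python
import random

def update_ring_buffer(memory, batch, capacity):
    """Sketch of Alg. 1: append new samples per class, then evict the oldest
    samples of the currently largest class until the total size fits capacity."""
    for x, c in batch:
        memory.setdefault(c, []).append(x)
    while sum(len(v) for v in memory.values()) > capacity:
        largest = max(memory, key=lambda c: len(memory[c]))
        memory[largest].pop(0)            # ring-buffer behaviour: drop the oldest
    return memory

def sample_balanced_batch(memory, batch_size):
    """Sketch of BER (Algs. 2-3): spread the batch uniformly over seen classes,
    with at least 2 samples per selected class so each contributes a positive pair."""
    classes = list(memory)
    per_class = max(2, batch_size // len(classes))
    batch = []
    for c in classes:
        k = min(per_class, len(memory[c]))
        batch += [(x, c) for x in random.sample(memory[c], k)]
    return batch[:batch_size]
```

With a fixed `capacity`, classes with many stored samples are evicted first, which keeps the class distribution in the buffer approximately uniform, matching the fixed-budget design above.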
}, { "heading": "5 EXPERIMENTS", "text": "In this section we evaluate our methods on multiple benchmark tasks by comparing with several baseline methods in the setting of online continual learning.\nBenchmark tasks: We have conducted experiments on the following benchmark tasks: Permuted MNIST (10 tasks, each task including the same 10 classes with a different permutation of features), Split MNIST, Split Fashion-MNIST, and Split CIFAR-10 (all three having 5 tasks with two classes in each task), Split CIFAR-100 (10 tasks with 10 classes in each task), and Split TinyImageNet (20 tasks with 10 classes in each task). All split tasks include disjoint classes. For the tasks on MNIST (LeCun et al., 2010) and Fashion-MNIST (Xiao et al., 2017) the training size is 1000 samples per task, for CIFAR-10 (Krizhevsky et al., 2009) the training size is 3000 per task, and for CIFAR-100 and TinyImageNet (Le & Yang, 2015) it is 5000 per task. N.B.: We use single-head (shared output) models in all of our experiments, meaning that we do not require a task identifier at testing time. Such settings are more difficult for continual learning but more practical in real applications.\nBaselines: We compare our methods with: two gradient-based approaches (A-GEM (Chaudhry et al., 2019a) and GSS-greedy (Aljundi et al., 2019)), two standalone experience replay methods (ER (Chaudhry et al., 2019b) and BER), and two SOTA methods of DML (Multisimilarity (Wang et al., 2019) and R-Margin (Roth et al., 2020)). We also trained a single task over all classes with one epoch for all benchmarks, whose performance can be viewed as an upper bound for each benchmark. N.B.: We deploy the losses of Multisimilarity and R-Margin as auxiliary objectives in the same way as DRL, because using such losses standalone causes convergence difficulties in our experimental settings. We provide the definitions of these two losses in Appx.
D.\nPerformance measures: We use the average accuracy, average forgetting, and average intransigence to evaluate the performance of all methods; the definitions of these measures are provided in Appx. C.\nExperimental settings: We use the vanilla SGD optimizer for all experiments without any scheduling. For tasks on MNIST and Fashion-MNIST, we use an MLP with two hidden layers and ReLU activations, each layer with 100 hidden units. For tasks on the CIFAR datasets and TinyImageNet, we use the same reduced ResNet18 as used in Chaudhry et al. (2019a). All networks are trained from scratch without any regularization scheme. For the MLP, representations are the concatenation of the outputs of all layers, including logits; for the reduced ResNet18, representations are the concatenation of the input of the final linear layer and the output logits. We concatenate the outputs of all layers as we consider them to behave like different levels of representation, and when higher layers (layers closer to the input) generate more discriminative representations, it becomes easier for lower layers to learn more discriminative representations as well. This method also improves the performance of MLPs. For the reduced ResNet18 we found that including the outputs of all hidden layers performs almost the same as including only the final representations, so we just use the final layer for a lower computational cost. We deploy BER as the replay strategy for DRL, Multisimilarity, and R-Margin. The memory size for tasks on MNIST and Fashion-MNIST is 300 samples. For tasks on CIFAR-10 and CIFAR-100 the memory size is 2000 and 5000 samples, respectively. For TinyImageNet it is also 5000 samples. The standard deviations shown in all results are evaluated over 10 runs with different random seeds. We use 10% of the training set as a validation set for choosing hyperparameters by cross-validation. More details of the experimental settings and hyperparameters are given in Appx. I.\nTabs.
3 to 5 give the average accuracy, forgetting, and intransigence of all methods on all benchmark tasks, respectively. As we can see, forgetting and intransigence often conflict with each other, which is a common phenomenon in continual learning. Our method DRL is able to achieve a better trade-off between them and thus outperforms the other methods on most benchmark tasks in terms of average accuracy. This could be because DRL facilitates a good intransigence and ρ-spectrum through Lwi and a good forgetting through Lbt. In DRL the two terms are complementary to each other, and combining them brings benefits on both sides (an ablation study on the two terms is provided in Appx. E). According to Tabs. 4 and 5, Multisimilarity obtains a better avg. intransigence and similar avg. forgetting on CIFAR-10 compared with DRL, which indicates that Multisimilarity learns better representations for generalizing to new classes in this case. Roth et al. (2020) also suggest that Multisimilarity is a very strong baseline in deep metric learning, outperforming their proposed R-Margin on several datasets. We use the hyperparameters of Multisimilarity recommended in Roth et al. (2020), which generally perform well on multiple complex datasets. TinyImageNet gets much worse performance than the other benchmarks because it has more classes (200), a longer task sequence (20 tasks), and a larger feature space (64 × 64 × 3); the accuracy of the single task on it is only about 17.8%. According to Tab. 3, a longer task sequence, more classes, and a larger feature space all increase the gap between the performance of the single task and continual learning.\nAs shown in Tab. 6, the ρ-spectrum shows a high correlation with average accuracy on most benchmarks, since it may help with learning new decision boundaries across tasks. Split MNIST shows a low correlation between the ρ-spectrum and avg. accuracy because the ρ-spectrum highly correlates with the avg. intransigence and consequently affects the avg.
forgetting in the opposite direction, which causes a cancellation of effects on avg. accuracy. In addition, we found that GSS often obtains a smaller ρ than other methods without achieving a better performance. In general, a smaller ρ-spectrum is better because it indicates the representations are more informative. However, it may be detrimental to the performance when ρ is too small, as the learned representations become too noisy. DRL is more robust to this issue because ρ stays relatively stable once α exceeds a certain value, as shown in Fig. 4c in Appx. E." }, { "heading": "6 CONCLUSION", "text": "The two fundamental problems of continual learning with small episodic memories are: (i) how to make the best use of episodic memories; and (ii) how to construct the most representative episodic memories. Gradient-based approaches have shown that the diversity of gradients computed on data from different tasks is a key to generalization over these tasks. In this paper we demonstrate that the most diverse gradients come from samples that are close to class boundaries. We formally connect the diversity of gradients to the discriminativeness of representations, which leads to an alternative way to reduce the diversity of gradients in continual learning. We subsequently exploit ideas from DML for learning more discriminative representations, and furthermore identify the shared and differing interests between continual learning and DML. In continual learning we prefer larger margins between classes, just as in DML. The difference is that continual learning requires less compact representations for better compatibility with future tasks. Based on these findings, we provide a simple yet efficient approach to solving the first problem listed above. Our findings also shed light on the second problem: it would be better for the memorized samples to preserve as much variance as possible.
In most of our experiments, randomly chosen samples outperform those selected by gradient diversity (GSS) due to the limit on memory size in practice. It could be helpful to select memorized samples by separately considering the representativeness of inter- and intra-class samples, i.e., those representing margins and edges. We will leave this for future work." }, { "heading": "A PROOF OF THEOREMS", "text": "Notation: Let L represent the softmax cross-entropy loss, W ∈ R^{D×K} the weight matrix of the linear model, xn ∈ R^D the input data, and yn ∈ R^K a one-hot vector denoting the label of xn, where D is the dimension of representations and K is the number of classes. Let pn = softmax(on), where on = W^T xn, and let the gradient gn = ∇_W L(xn, yn; W). xn, xm are two different samples when n ≠ m.\nLemma 1. Let εn = pn − yn. Then 〈gn, gm〉 = 〈xn, xm〉〈εn, εm〉.\nProof. Let ℓ′n = ∂L(xn, yn; W)/∂on. By the chain rule, we have:\n〈gn, gm〉 = 〈xn, xm〉〈ℓ′n, ℓ′m〉.\nBy the definition of L, we find:\nℓ′n = pn − yn. (4)\nTheorem 1. Suppose yn ≠ ym, and let cn denote the class index of xn (i.e. yn,cn = 1, yn,i = 0, ∀i ≠ cn). Let α ≜ ||pn||² + ||pm||², β ≜ pn,cm + pm,cn and δ ≜ ||pn − pm||₂². Then:\nPr(sign(〈gn, gm〉) = sign(−〈xn, xm〉)) = Pr(2β + δ > α).\nProof. According to Lemma 1 and yn ≠ ym, we have\n〈ℓ′n, ℓ′m〉 = 〈pn, pm〉 − pn,cm − pm,cn,\nand\n〈pn, pm〉 = ½(||pn||² + ||pm||² − ||pn − pm||²) = ½(α − δ),\nwhich gives 〈ℓ′n, ℓ′m〉 = ½(α − δ) − β. When 2β > α − δ, we must have 〈ℓ′n, ℓ′m〉 < 0. According to Lemma 1, this proves the theorem.\nTheorem 2. Suppose yn = ym. When 〈gn, gm〉 ≠ 0, we have:\nsign(〈gn, gm〉) = sign(〈xn, xm〉).\nProof. Because Σ_{k=1}^{K} pn,k = 1, pn,k ≥ 0 ∀k, and cn = cm = c,\n〈ℓ′n, ℓ′m〉 = Σ_{k≠c} pn,k pm,k + (pn,c − 1)(pm,c − 1) ≥ 0. (5)\nAccording to Lemma 1, this proves the theorem." }, { "heading": "B ALGORITHMS OF ONLINE MEMORY UPDATE", "text": "We provide the details of the online ring buffer update and Balanced Experience Replay (BER) in Algs. 1 to 3.
We directly load new data batches into the memory buffer without a separate buffer for the current task. The memory buffer works like a sliding window for each class in the data stream, and we draw training batches from the memory buffer instead of directly from the data stream. In this case, a sample may be seen more than once as long as it stays in the memory buffer. This strategy is a more efficient use of the memory when |B| < nc, where |B| is the loading batch size of the data stream (i.e. the number of new samples added into the memory buffer at each iteration); we set |B| to 1 in all experiments (see Appx. I for a discussion of this).\nAlgorithm 1 Ring Buffer Update with Fixed Buffer Size\nInput: Bt - current data batch of the data stream, Ct - the set of classes in Bt, M - memory buffer, C - the set of classes in M, K - memory buffer size.\nfor c in Ct do\n  Get Bt,c - samples of class c in Bt, Mc - samples of class c in M\n  if c in C then Mc = Mc ∪ Bt,c\n  else Mc = Bt,c, C = C ∪ {c}\n  end if\nend for\nR = |M| + |Bt| − K\nwhile R > 0 do\n  c′ = arg max_c |Mc|\n  remove the first sample in Mc′, R = R − 1\nend while\nreturn M\nAlgorithm 2 Balanced Experience Replay\nInput: M - memory buffer, C - the set of classes in M, B - training batch size, Θ - model parameters, LΘ - loss function, Bt - current data batch from the data stream, Ct - the set of classes in Bt, K - memory buffer size.\nM ← MemoryUpdate(Bt, Ct, M, C, K)\nnc, Cs, Cr ← ClassSelection(Ct, C, B)\nBtrain = ∅\nfor c in Cs do\n  if c in Cr then mc = nc + 1 else mc = nc end if\n  Get Mc - samples of class c in M\n  Bc ∼ Mc ▷ sample mc samples from Mc\n  Btrain = Btrain ∪ Bc\nend for\nΘ ← Optimizer(Btrain, Θ, LΘ)\nAlgorithm 3 Class Selection for BER\nInput: Ct - the set of classes in the current data batch Bt, C - the set of classes in M, B - training batch size, mp - minimum number of positive pairs of each selected class (mp ∈ {0, 1}).
Btrain = ∅, nc = ⌊B/|C|⌋, rc = B mod |C|\nif nc > 1 or mp == 0 then\n  Cr ∼ C ▷ sample rc classes from all seen classes without replacement\n  Cs = C\nelse\n  Cr = ∅, nc = 2, ns = ⌊B/2⌋ − |Ct| ▷ we ensure the training batch includes samples from the current task\n  Cs ∼ (C − Ct) ▷ sample ns classes from all seen classes except the classes in Ct\n  Cs = Cs ∪ Ct\n  if B mod 2 > 0 then\n    Cr ∼ Cs ▷ sample one class in Cs to have an extra sample\n  end if\nend if\nReturn: nc, Cs, Cr" }, { "heading": "C DEFINITION OF PERFORMANCE MEASURES", "text": "We use the following measures to evaluate the performance of all methods:\nAverage accuracy, evaluated after learning all tasks: ā_t = (1/t) Σ_{i=1}^{t} a_{t,i}, where t is the index of the latest task and a_{t,i} is the accuracy of task i after learning task t.\nAverage forgetting (Chaudhry et al., 2018), which measures the average accuracy drop over all tasks after learning the whole task sequence: f̄_t = (1/(t−1)) Σ_{i=1}^{t−1} max_{j∈{i,...,t−1}} (a_{j,i} − a_{t,i}).\nAverage intransigence (Chaudhry et al., 2018), which measures the inability of a model to learn new tasks: Ī_t = (1/t) Σ_{i=1}^{t} (a*_i − a_i), where a_i is the accuracy of task i at time i. We use the best accuracy among all compared models as a*_i instead of the accuracy obtained by an extra model trained solely on task i." }, { "heading": "D RELATED METHODS FROM DML", "text": "ρ-spectrum metric (Roth et al., 2020): ρ = KL(U || S_ΦX), proposed to measure the information entropy contained in the representation space. The ρ-spectrum computes the KL divergence between a discrete uniform distribution U and the spectrum of data representations S_ΦX, where S_ΦX is the normalized and sorted singular values of Φ(X), Φ denotes the representation extractor (e.g. a neural network), and X is the input data samples.
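The ρ-spectrum defined above can be sketched in a few lines. The small constant added to the spectrum is our numerical guard against zero singular values, not part of the definition:

```python
import numpy as np

def rho_spectrum(H, eps=1e-12):
    """KL(U || S): divergence between a discrete uniform distribution and the
    normalized singular-value spectrum of the representation matrix H (N, d).
    eps is an illustrative numerical guard, not part of the original metric."""
    s = np.linalg.svd(H, compute_uv=False) + eps   # svd returns sorted values
    p = s / s.sum()                                # normalized spectrum S
    u = np.full_like(p, 1.0 / p.size)              # discrete uniform U
    return float(np.sum(u * np.log(u / p)))
```

A near-isotropic representation matrix yields a flat spectrum and hence a small ρ, while a near-low-rank one concentrates its spectrum and yields a large ρ.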
Lower values of ρ indicate higher variance of the representations and hence more information entropy retained.\nMultisimilarity (Wang et al., 2019): we adopt the loss function of Multisimilarity as an auxiliary objective in classification tasks of continual learning; the batch mining process is omitted because we use labels for choosing positive and negative pairs. The loss function is L̂ = L + λL_multi, with:\nL_multi = (1/B) Σ_{i=1}^{B} [ (1/α) log(1 + Σ_{j≠i, yj=yi} exp(−α(sc(hi, hj) − γ))) + (1/β) log(1 + Σ_{yj≠yi} exp(β(sc(hi, hj) − γ))) ], (6)\nwhere sc(·, ·) is cosine similarity and α, β, γ are hyperparameters. In all of our experiments we set α = 2, β = 40, γ = 0.5, the same as in Roth et al. (2020).\nR-Margin (Roth et al., 2020): we similarly deploy R-Margin for continual learning, which uses the Margin loss (Wu et al., 2017) with the ρ regularization (Roth et al., 2020) introduced in Sec. 2.2. The loss function is L̂ = L + λL_margin, with:\nL_margin = Σ_{i=1}^{B} Σ_{j=1}^{B} [ γ + I_{j≠i, yj=yi}(d(hi, hj) − β) − I_{yj≠yi}(d(hi, hj) − β) ], (7)\nwhere d(·, ·) is Euclidean distance, β is a trainable variable, and γ is a hyperparameter. We follow the settings in Roth et al. (2020): γ = 0.2 and β is initialized to 0.6. We set pρ = 0.2 in the ρ regularization." }, { "heading": "E ABLATION STUDY ON DRL", "text": "We verify the effects of L_DRL by training a model with and without L_DRL on Split-MNIST tasks: Fig. 4a shows that L_DRL notably reduces the similarities of representations from different classes while making representations from the same class less similar; Fig. 4b shows the analogous effect on gradients from different classes and the same class. Fig. 4c demonstrates that increasing α can effectively decrease the ρ-spectrum to a low level, where lower values of ρ indicate higher variance of the representations and hence more information entropy retained.\nTab. 7 provides the results of an ablation study on the effects of the two terms in DRL.
In general, Lbt achieves better performance in terms of forgetting, Lwi achieves better performance in terms of intransigence and a lower ρ-spectrum, and both of them show improvements over BER (without any regularization terms). Overall, combining the two terms obtains a better performance on forgetting than standalone Lbt and keeps the advantage on intransigence brought by Lwi. This indicates that preventing over-compact representations while maximizing margins yields representations that generalize more easily over previous and new tasks. In addition, we found that with standalone Lbt we can only use a smaller λ, otherwise the gradients explode; using Lwi together stabilizes the gradients. We note that a lower ρ-spectrum does not necessarily lead to a higher accuracy, as its correlation coefficient with accuracy depends on the dataset and is usually larger than -1." }, { "heading": "F COMPARING DIFFERENT MEMORY SIZES", "text": "Fig. 5 compares the average accuracy of DRL+BER on MNIST tasks with different memory sizes. It appears the fixed memory size is more efficient than the incremental memory size. For example, the\n(a) Similarities of representations with and without L_DRL. (b) Similarities of gradients with and without L_DRL. (c) Relation between α and the ρ-spectrum.\nFigure 4: Effects of L_DRL on reducing the diversity of gradients and the ρ-spectrum. (a) and (b) display distributions of similarities of representations and gradients.
sDRh and sh denote similarities of representations with and without LDRL, respectively, sDRg and sg denote similarities of gradients with and withoutLDRL, respectively. (c) demonstrates increasing α inLDRL can reduce ρ effectively.\nfixed memory size (M = 300) getting very similar average accuracy with memory M = 50 per class in Disjoint MNIST while it takes less cost of the memory after task 3. Meanwhile, the fixed memory size (M = 300) gets much better performance than M = 50 per task in most tasks of Permuted MNIST and it takes less cost of the memory after task 6. Since the setting of fixed memory size takes larger memory buffer in early tasks, the results indicate better generalization of early tasks can benefit later tasks, especially for more homogeneous tasks such as Permuted MNIST. The results also align with findings about Reservoir sampling (which also has fixed buffer size) in Chaudhry et al. (2019b) and we also believe a hybrid memory strategy can obtain better performance as suggested in Chaudhry et al. (2019b)." }, { "heading": "G COMPARING DIFFERENT REPLAY STRATEGY", "text": "We compare DRL with different memory replay strategies in Tab. 8 to show DRL has general improvement based on the applied replay strategy." }, { "heading": "H COMPARING TRAINING TIME", "text": "Tab. 9 compares the training time of MNIST tasks. All representation-based methods are much faster than gradient-based methods and close to the replay-based methods." }, { "heading": "I HYPER-PARAMETERS IN EXPERIMENTS", "text": "To make a fair comparison of all methods, we use following settings: i) The configurations of GSS-greedy are as suggested in Aljundi et al. (2019), with batch size set to 10 and each batch receives 10 iterations. ii) For the other methods, we use the ring buffer memory as described in Alg. 1, the loading batch size is set to 1, following with one iteration, the training batch size is provided in Tab. 10. More hyperparameters are given in Tab. 
10 as well.\nIn the setting of limited training data in online continual learning, we either use a small batch size or iterate on one batch several times to obtain the necessary steps for gradient optimization. We chose a small batch size with one iteration instead of a larger batch size with multiple iterations because, with our memory update strategy (Alg. 1), it achieves similar performance with fewer hyperparameters. Since GSS-greedy has a different strategy for updating memories, we leave it at its default settings.\nRegarding the two terms in DRL, a larger weight on Lwi encourages less compact representations within classes, but an overly dispersed representation space may include too much noise. For datasets in which compact representations are harder to learn, we prefer a smaller weight on Lwi; we therefore set a smaller α for the CIFAR datasets in our experiments. A larger weight on Lbt is more resistant to forgetting but may be less capable of transferring to a new task; for datasets whose tasks are less compatible, a smaller weight on Lbt is preferred, so we set the largest λ on Permuted MNIST and the smallest λ on CIFAR-100 in our experiments." } ]
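The ring buffer memory referenced above (the paper's Alg. 1 is not reproduced in this excerpt) can be sketched as a per-class FIFO; the class and method names here are hypothetical:

```python
from collections import deque

class RingBufferMemory:
    """Hypothetical sketch of a per-class ring buffer replay memory:
    each class keeps a FIFO of at most `per_class` examples, and a new
    example evicts the oldest one once the buffer is full."""

    def __init__(self, per_class):
        self.per_class = per_class
        self.buffers = {}  # label -> deque of examples

    def add(self, example, label):
        # deque(maxlen=...) implements the ring: appending past capacity
        # silently drops the oldest element
        self.buffers.setdefault(label, deque(maxlen=self.per_class)).append(example)

    def sample_all(self):
        # flatten the per-class buffers into (example, label) pairs
        return [(x, y) for y, buf in self.buffers.items() for x in buf]
```

With a fixed `per_class` capacity, the total buffer grows only with the number of seen classes, matching the fixed-versus-incremental memory comparison discussed above.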
[ "This paper proposes a new framework that computes task-specific representations to modulate the model parameters during multi-task learning (MTL). The framework uses a single model with shared representations for learning multiple tasks together. Moreover, explicit task information may not always be available; the proposed framework is useful in such cases. The framework is evaluated on various datasets spanning multiple modalities, where the MTL model even achieves state-of-the-art results on some datasets. " ]
"Existing Multi-Task Learning (MTL) strategies like joint or meta-learning focus more on shared learning and have little to no scope for task-specific learning. This creates the need for a distinct shared pretraining phase and a task-specific finetuning phase. The finetuning phase creates separate models for each task, where improving the performance on a particular task necessitates forgetting some of the knowledge garnered in other tasks. Humans, on the other hand, perform task-specific learning in synergy with general domain-based learning. Inspired by these learning patterns in humans, we suggest a simple yet generic task aware framework to incorporate into existing MTL strategies. The proposed framework computes task-specific representations to modulate the model parameters during MTL. Hence, it performs both shared and task-specific learning in a single phase, resulting in a single model for all the tasks. The single model itself achieves significant performance gains over the existing MTL strategies. For example, we train a model on Speech Translation (ST), Automatic Speech Recognition (ASR), and Machine Translation (MT) tasks using the proposed task aware multitask learning approach. This single model achieves a BLEU score of 28.64 on ST MuST-C English-German, a WER of 11.61 on ASR TED-LIUM v3, and a BLEU score of 23.35 on MT WMT14 English-German. This sets a new state-of-the-art (SOTA) performance on the ST task while outperforming existing end-to-end ASR systems and achieving competitive performance on the MT task."
[ { "authors": [ "Rosana Ardila", "Megan Branson", "Kelly Davis", "Michael Henretty", "Michael Kohler", "Josh Meyer", "Reuben Morais", "Lindsay Saunders", "Francis M. Tyers", "Gregor Weber" ], "title": "Common voice: A massivelymultilingual speech", "venue": null, "year": 2020 }, { "authors": [ "Craig Atkinson", "Brendan McCane", "Lech Szymanski", "Anthony V. Robins" ], "title": "Pseudo-recursal: Solving the catastrophic forgetting problem in deep neural networks", "venue": "CoRR, abs/1802.03875,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Computer Science Mathematics CoRR,", "year": 2015 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Brian Cheung", "Alexander Terekhov", "Yubei Chen", "Pulkit Agrawal", "Bruno Olshausen" ], "title": "Superposition of many models into one", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "L. Deng", "G. Hinton", "B. Kingsbury" ], "title": "New types of deep neural network learning for speech recognition and related applications: an overview", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Mattia A. 
Di Gangi", "Roldano Cattoni", "Luisa Bentivogli", "Matteo Negri", "Marco Turchi" ], "title": "MuSTC: a Multilingual Speech Translation Corpus", "venue": null, "year": 2019 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier" ], "title": "Understanding back-translation at scale", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "R. Girshick" ], "title": "Fast r-cnn", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Jiatao Gu", "Hany Hassan", "Jacob Devlin", "Victor O.K. Li" ], "title": "Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Jiatao Gu", "Yong Wang", "Yun Chen", "Victor O.K. Li", "Kyunghyun Cho" ], "title": "Meta-learning for low-resource neural machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Kazuma Hashimoto", "Caiming Xiong", "Yoshimasa Tsuruoka", "Richard Socher" ], "title": "A joint many-task model: Growing a neural network for multiple NLP", "venue": "tasks. 
CoRR,", "year": 2016 }, { "authors": [ "Tianxing He", "Jun Liu", "Kyunghyun Cho", "Myle Ott", "Bing Liu", "James Glass", "Fuchun Peng" ], "title": "Analyzing the forgetting problem in the pretrain-finetuning of dialogue response", "venue": null, "year": 2020 }, { "authors": [ "François Hernandez", "Vincent Nguyen", "Sahar Ghannay", "Natalia Tomashenko", "Yannick Estève" ], "title": "Ted-lium 3: Twice as much data and corpus repartition for experiments on speaker adaptation", "venue": "Lecture Notes in Computer Science,", "year": 2018 }, { "authors": [ "S. Indurthi", "H. Han", "N.K. Lakumarapu", "B. Lee", "I. Chung", "S. Kim", "C. Kim" ], "title": "End-end speech-to-text translation with modality agnostic meta-learning", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Sathish Reddy Indurthi", "Insoo Chung", "Sangha Kim" ], "title": "Look harder: A neural machine translation model with hard attention", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Javier Iranzo-Sánchez", "Joan Albert Silvestre-Cerdà", "Javier Jorge", "Nahuel Roselló", "Adrià Giménez", "Albert Sanchis", "Jorge Civera", "Alfons Juan" ], "title": "Europarl-st: A multilingual corpus for speech translation of parliamentary debates", "venue": null, "year": 1911 }, { "authors": [ "Nikhil Kumar Lakumarapu", "Beomseok Lee", "Sathish Reddy Indurthi", "Hou Jeung Han", "Mohd Abbas Zaidi", "Sangha Kim" ], "title": "End-to-end offline speech translation system for IWSLT 2020 using modality agnostic meta-learning", "venue": "In Proceedings of the 17th International Conference on Spoken Language Translation,", "year": 2020 }, { "authors": [ "Z. Li", "D. 
Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2018 }, { "authors": [ "Pierre Lison", "Jörg Tiedemann", "Milen Kouylekov" ], "title": "Open subtitles 2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018", "venue": "Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA),", "year": 2019 }, { "authors": [ "Xiaodong Liu", "Jianfeng Gao", "Xiaodong He", "Li Deng", "Kevin Duh", "Ye-yi Wang" ], "title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval", "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2015 }, { "authors": [ "Xiaodong Liu", "Kevin Duh", "Liyuan Liu", "Jianfeng Gao" ], "title": "Very deep transformers for neural machine translation", "venue": "arXiv preprint arXiv:2008.07772,", "year": 2020 }, { "authors": [ "Yuchen Liu", "Hao Xiong", "Zhongjun He", "Jiajun Zhang", "Hua Wu", "Haifeng Wang", "Chengqing Zong" ], "title": "End-to-end speech translation with knowledge distillation", "venue": "CoRR, abs/1904.08075,", "year": 2019 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D. Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "V. Panayotov", "G. Chen", "D. Povey", "S. 
Khudanpur" ], "title": "Librispeech: An asr corpus based on public domain audio books", "venue": "In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2015 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm de Vries", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Ngoc-Quan Pham", "Thai-Son Nguyen", "Jan Niehues", "Markus Müller", "Alex Waibel" ], "title": "Very deep self-attention networks for end-to-end speech recognition", "venue": "CoRR, abs/1904.13377,", "year": 2019 }, { "authors": [ "Juan Pino", "Qiantong Xu", "Xutai Ma", "Mohammad Javad Dousti", "Yun Tang" ], "title": "Self-training for end-to-end speech translation", "venue": "arXiv preprint arXiv:2006.02490,", "year": 2020 }, { "authors": [ "Matt Post" ], "title": "A call for clarity in reporting BLEU scores", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Tomasz Potapczyk", "Pawel Przybysz", "Marcin Chochowski", "Artur Szumaczuk" ], "title": "Samsung’s system for the iwslt 2019 end-to-end speech translation task", "venue": "In 16th International Workshop on Spoken Language Translation (IWSLT). 
Zenodo,", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Bharath Ramsundar", "Steven Kearnes", "Patrick Riley", "Dale Webster", "David Konerding", "Vijay Pande" ], "title": "Massively multitask networks for drug", "venue": null, "year": 2015 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2016 }, { "authors": [ "Gjorgji Strezoski", "Nanne van Noord", "Marcel Worring" ], "title": "Many task learning with task", "venue": "routing. CoRR,", "year": 2019 }, { "authors": [ "Shubham Toshniwal", "Tara N Sainath", "Ron J Weiss", "Bo Li", "Pedro Moreno", "Eugene Weinstein", "Kanishka Rao" ], "title": "Multilingual speech recognition with a single end-to-end model", "venue": "In ICASSP,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Lijun Wu", "Yiren Wang", "Yingce Xia", "Fei Tian", "Fei Gao", "Tao Qin", "Jianhuang Lai", "Tie-Yan Liu" ], "title": "Depth growing for neural machine translation", "venue": null, "year": 1907 }, { "authors": [ "Zhanpeng Zhang", "Ping Luo", "Chen Change Loy", "Xiaoou Tang" ], "title": "Facial landmark detection by deep multi-task learning", "venue": "Computer Vision – ECCV", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "The process of Multi-Task Learning (MTL) on a set of related tasks is inspired by the patterns displayed by human learning. It involves a pretraining phase over all the tasks, followed by a finetuning phase. During pretraining, the model tries to grasp the shared knowledge of all the tasks involved, while in the finetuning phase, task-specific learning is performed to improve the performance. However, as a result of the finetuning phase, the model forgets the information about the other tasks that it learnt during pretraining. Humans, on the other hand, are less susceptible to forgetfulness and retain existing knowledge/skills while mastering a new task. For example, a polyglot who masters a new language learns to translate from this language without losing the ability to translate other languages. Moreover, the lack of task-based flexibility and the separate pretraining/finetuning phases cause gaps in the learning process for the following reasons:\nRole Mismatch: Consider an MTL system being trained to perform the Speech Translation (ST), Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks. The Encoder block has a very different role in the standalone ASR, MT and ST models, and hence we cannot expect a single encoder to perform well on all the tasks without any cues to identify/use task information. Moreover, there is a discrepancy between pretraining and finetuning, hampering the MTL objective.\nTask Awareness: At each step of MTL, the model tries to optimize over the task at hand. For tasks like ST and ASR with the same source language, it is impossible for the model to identify the task and alter its parameters accordingly, hence necessitating a finetuning phase. A few such examples have been provided in Table 1. 
Humans, on the other hand, grasp the task they have to perform by means of context or explicit cues.\nAlthough MTL strategies help the finetuned models to perform better than models directly trained on those tasks, their applicability is limited to finding a good initialization point for the finetuning phase. Moreover, having a separate model for each task increases the memory requirements, which is detrimental in low resource settings.\nIn order to achieve the goal of jointly learning all the tasks, similar to humans, we need to perform shared learning in synergy with task-specific learning. Previous approaches such as Raffel et al. (2019) trained a joint model for a set of related text-to-text tasks by providing the task information along with the inputs during the joint learning phase. However, providing explicit task information is not always desirable, e.g., consider the automatic multilingual speech translation task. In order to ensure a seamless user experience, the model is expected to extract the task information implicitly.\nThus, a holistic joint learning strategy requires a generic framework which learns task-specific information without any explicit supervision.\nIn this work, we propose a generic framework which can be easily integrated into existing MTL strategies and can extract task-based characteristics. The proposed approach helps align existing MTL approaches with human learning processes by incorporating task information into the learning process and eliminating the issues related to forgetfulness. We design a modulation network for learning the task characteristics and modulating the parameters of the model during MTL. As discussed above, the task information may or may not be explicitly available during training. Hence, we propose two different designs of the task modulation network to learn the task characteristics; one uses explicit task identities while the other uses examples from the task as input. 
The model, coupled with the modulation network, jointly learns all the tasks and, at the same time, performs task-specific learning. The proposed approach tackles issues related to forgetfulness by keeping a single model for all the tasks, hence avoiding the expensive finetuning phase. Having a single model for all the tasks also reduces memory requirements, improving suitability for low resource devices.\nTo evaluate the proposed framework, we conduct two sets of experiments. First, we include the task information during MTL on text-to-text tasks to show the effect of task information. Second, we train a model on tasks with different modalities and end goals, i.e., highly confounding tasks. Our proposed framework allows the model to learn the task characteristics without any explicit supervision, and hence to train a single model which performs well on all the tasks. The main contributions of this work are as follows:\n• We propose an approach to tackle the issue of forgetfulness which occurs during the finetuning phase of existing MTL strategies.\n• Our model, without any finetuning, achieves superior performance on all the tasks, which alleviates the need to keep separate task-specific models.\n• Our proposed framework is generic enough to be used with any MTL strategy involving tasks with multiple modalities." }, { "heading": "2 TASK-AWARE MULTITASK LEARNING", "text": "An overview of our proposed approach is shown in Figure 1." }, { "heading": "2.1 BASE MODEL", "text": "In general, a sequence-to-sequence architecture consists of two components: (1) an encoder, which computes a set of representations X = {x_1, · · · , x_m} ∈ R^{m×d} corresponding to the input sequence, and (2) a decoder coupled with an attention mechanism (Bahdanau et al., 2015), which dynamically reads the encoder’s output and predicts the target sequence Y = {y_1, · · · , y_n} ∈ R^{n×d}. It is trained on a dataset D to maximize p(Y | X; θ), where θ are the parameters of the model. We use the Transformer Vaswani et al. 
(2017) as our base model. Based on the task modality, we choose the preprocessing layer in the Transformer, i.e., a speech or text (text-embedding) preprocessing layer. The speech preprocessing layer consists of a stack of k CNN layers with stride 2 in both the time and frequency dimensions. This layer compresses the speech sequence and produces an output sequence such that the input sequences corresponding to all the tasks have the same dimension, d. The overview of the base sequence-to-sequence model is shown in the rightmost part of Figure 1." }, { "heading": "2.2 TASK MODULATION NETWORK", "text": "The task modulation network performs two operations. In the first step, it computes the task characteristics (te) using the task characteristics layer. In the second step, it modulates the model parameters θ using te." }, { "heading": "2.2.1 TASK CHARACTERISTICS NETWORK:", "text": "We propose two types of Task Characteristics Networks (TCN) to learn the task characteristics, where one uses explicit task identities while the other uses source-target sequences as input.\nExplicit Task Information: In this approach, the tasks involved are represented using different task identities and fed as input to this TCN as one-hot vectors. This network consists of a feed-forward layer which produces the task embedding used for modulating the model parameters:\nte = FFN(e), (1)\nwhere e ∈ R^{s} is a one-hot encoding of the s tasks used during joint learning.\nImplicit Task Information: The implicit TCN computes the task embeddings using example sequences from the tasks without any external supervision. It consists of four sub-layers: (1) Sequence Representation Layer, (2) Bi-directional Attention Layer, (3) Sequence Summary Layer, and (4) Task Embedding Layer.\nThe sequence representation sub-layer consists of uni-directional Transformer Encoder (TE) blocks (Vaswani et al., 2017). It takes the source and target sequences from the tasks as input and produces
It takes the source and target sequences from the tasks as input and produces\nself-attended source and target sequences.\nXsa = TE(X), Y sa = TE(Y ), (2)\nwhereXsa ∈ RM×d, Y sa ∈ RN×d. This sub-layer computes the contextual representation of the sequences.\nThe Bi-directional Attention (BiA) sub-layer takes the self-attended source and target sequences from the previous layer as input and computes the relation between them using Dot-Product Attention Luong et al. (2015). As a result, we get target aware source (Xat ∈ RM×d) and source aware target (Y asRN×d) representations as outputs.\nXat = BiA(Xsa,Y sa), Y as = BiA(Y sa,Xsa). (3)\nThe sequence summary sub-layer is similar to the sequence representation sub layer and summarizes the sequences. The sequence summaries are given by:\nXs = TEu(X at), Y s = TEu(Y as), (4)\nwhereXs ∈ RM×d, Y s ∈ RN×d. The Equation 4 summarizes the sequencesXat and Y as which contain the contextual and attention information. We take the last tokens from both the xs ∈ Rd and ys ∈ Rd, since the last token can see the whole sequence and acts as a summary of the sequence. The task embedding layer computes te by taking the outputs of the sequence summary sub-layer and applying a feed-forward network:\nte = FFN([x s : ys]). (5)" }, { "heading": "2.2.2 MODULATING MODEL PARAMETERS", "text": "We modulate the parameters (θ) of the network (Section 2.1) to account for the task-specific variation during MTL over a set of tasks. We achieve this by scaling (γ) and shifting (β) the outputs of each layer (e.g., transformer block) including any preprocessing layers in the model adopted based on the Feature-wise Linear Modulation (FiLM; Perez et al. (2018)). The γ and β parameters are obtained from the task embedding te either by using equation 1 or 5.\nγ = te[: d], β = te[d :], (6)\nwhere te ∈ R2d, and d is the hidden dimension of the model. 
Once we have γ and β, we apply feature-wise linear modulation (Perez et al., 2018) to compute the modulated output O^{l} for each block of the model:\nO^{l} = γ ∗ f_l(v_l; θ_l) + β, l = 1, · · · , L, (7)\nwhere L is the total number of blocks in the model and f_l represents the l-th block of the model with parameters θ_l ∈ θ and inputs v_l." }, { "heading": "2.3 TRAINING", "text": "MTL has been successfully applied across different applications of machine learning such as natural language processing (Hashimoto et al., 2016; Collobert & Weston, 2008), speech recognition (Liu et al., 2019; Deng et al., 2013), computer vision (Zhang et al., 2014; Liu et al., 2015; Girshick, 2015), and drug discovery (Ramsundar et al., 2015). It comes in many forms: joint learning, learning to learn, and learning with auxiliary tasks. We consider two MTL strategies, (1) joint learning and (2) learning to learn, to train on a set of S tasks, T = {τ1, · · · , τS}, with corresponding datasets D = {D1, · · · , DS}. As our first training strategy, we use Joint Learning (JL) (Caruana, 1997), the most commonly used training strategy for MTL. In JL, the model parameters, including the output layer, are shared across all the tasks involved in the training. For the second training strategy, under the learning-to-learn approach, we use a variant of meta-learning, Modality Agnostic Meta Learning (MAML) (Finn et al., 2017a). Even though MAML is mostly used in few-shot learning settings, we use it since it allows for task-specific learning during the meta-train step, and it has also been shown to provide improvements in the field of speech translation (Indurthi et al., 2020).\nWe resolve the source-target vocabulary mismatch across the different tasks in MTL by using a vocabulary of subwords (Sennrich et al., 2016) computed from all the tasks. We sample a batch of examples from Ds and use this as input to the TCN and the Transformer model. 
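The modulation of Equations 6–7 reduces to a feature-wise affine transform on each block's output; a minimal sketch (the function name is ours):

```python
import numpy as np

def film_modulate(block_out, te):
    """Eqs. 6-7: split the task embedding te in R^{2d} into a scale
    gamma = te[:d] and a shift beta = te[d:], then apply
    O = gamma * f(v) + beta feature-wise to a block's output."""
    d = block_out.shape[-1]
    gamma, beta = te[:d], te[d:]
    return gamma * block_out + beta

d = 4
out = np.ones((3, d))                                  # toy block output, 3 positions
te = np.concatenate([2 * np.ones(d), 3 * np.ones(d)])  # gamma = 2, beta = 3
mod = film_modulate(out, te)                           # every entry: 2*1 + 3 = 5
```

Because γ and β act per feature and are broadcast over sequence positions, the same 2d-dimensional task embedding conditions a block of any sequence length.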
To ensure that each training example uses a task embedding computed from another example, we randomly shuffle this batch while using it as input to the TCN. This random shuffling improves generalization performance by forcing the network to learn the task-specific characteristics (te) in Equation 1 or 5. We compute the task embedding in the meta-train step as well; however, the parameters of the TCN are updated only during the meta-test step. At inference time, we use precomputed task embeddings obtained from a batch of examples randomly sampled from the training set." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 TASKS AND DATASETS", "text": "We conduct two sets of experiments, one with tasks having the same input modality, i.e., text, and another over tasks having different input modalities, i.e., speech and text. The main motivation behind the text-based experiments is to establish the importance of providing task information in MTL. Our main experiments, containing different input modalities, involve highly confusing tasks. These experiments help us demonstrate the effectiveness of our approach in a generic setup. We incorporate the proposed task modulation framework into joint and meta-learning strategies and analyze its effects." }, { "heading": "3.1.1 SINGLE MODALITY EXPERIMENTS", "text": "We perform the small-scale text-to-text machine translation task over three language pairs, English-German/Romanian/Turkish (En-De/Ro/Tr). We keep English as the source language, which makes it crucial to use task information to produce different outputs from the same input. Since it is easy to provide task identity through one-hot vectors in text, we provide the task information by simply prepending the task identity to the source sequence of each task, e.g., ”translate from English to German”, ”translate from English to Turkish”, similar to Raffel et al. (2019). 
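The text-label conditioning described above amounts to simple string prepending; a toy sketch, where the `: ` separator and the dictionary keys are our assumptions (the label phrases mirror the examples in the text):

```python
def prepend_task_label(task, source):
    """Prepend a natural-language task label to the source sequence,
    as in the one-hot-via-text (OHV) baseline; keys and separator are
    illustrative."""
    labels = {
        "en-de": "translate from English to German",
        "en-ro": "translate from English to Romanian",
        "en-tr": "translate from English to Turkish",
    }
    return labels[task] + ": " + source

# prepend_task_label("en-de", "Hello world")
# -> "translate from English to German: Hello world"
```

This trick only works when the task identity can be expressed in the input modality itself, which is exactly what breaks down in the speech-input experiments of Section 3.1.2.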
We also train models using our proposed framework to learn the task information and shared knowledge jointly.\nFor En-De, we use 1.9M training examples from the Europarl v7 dataset. Europarl dev2006 and News Commentary nc-dev2007 are used as the dev sets, and Europarl devtest2006, Europarl test2006 and News Commentary nc-devtest2007 as the test sets. For En-Tr, we train using 200k training examples from the setimes2 dataset. We use newsdev2016 as the dev and newstest2017 as the test set. For En-Ro, we use 600k training examples from the Europarl v8 and setimes2 datasets. We use newsdev2016 as the dev and newstest2016 as the test set." }, { "heading": "3.1.2 MULTIPLE MODALITY EXPERIMENTS", "text": "To alleviate the data scarcity issue in Speech Translation (ST), several MTL strategies have been proposed to jointly train the ST task with Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks. These MTL approaches lead to significant performance gains on both ST and ASR tasks after the finetuning phase. We evaluate our proposed framework in this multimodal MTL setting, since passing the task information explicitly by prepending labels in the source sequence (as in the text-to-text case) is not possible. We use the following datasets for the ST English-German, ASR English, and MT English-German tasks:\nMT En-De: We use the Open Subtitles (Lison et al., 2019) and WMT 19 corpora. WMT 19 consists of the Common Crawl, Europarl v9, and News Commentary v14 datasets (22M training examples).\nASR English: We use five different datasets, namely LibriSpeech (Panayotov et al., 2015), MuST-C (Di Gangi et al., 2019), TED-LIUM (Hernandez et al., 2018), Common Voice (Ardila et al., 2020) and filtered IWSLT 19 (IWS, 2019), to train the English ASR task.\nST Task: We use the Europarl-ST (Iranzo-Sánchez et al., 2019), IWSLT 2019 (IWS, 2019) and MuST-C (Di Gangi et al., 2019) datasets. 
Since the ST task has fewer training examples, we use data augmentation techniques (Lakumarapu et al., 2020) to increase their number.\nPlease refer to the appendix for more details about the data statistics and the data augmentation techniques used. All the models reported in this work use the same data settings for training and evaluation." }, { "heading": "3.2 IMPLEMENTATION DETAILS AND METRICS", "text": "We implemented all the models using the TensorFlow 2.2 framework. For all our experiments, we use the Transformer (Vaswani et al., 2017) as our base model. The hyperparameter settings such as learning rate, scheduler, optimization algorithm, and dropout have been kept the same as for the Transformer, other than the ones explicitly stated to be different. The ASR performance is measured using Word Error Rate (WER), while ST and MT performance is measured using the detokenized cased BLEU score (Post, 2018). We generate a word-piece based universal vocabulary (Gu et al., 2018a) of size 32k using the source and target text sequences of all the tasks. For the task aware MTL strategies, we choose a single model to report the results rather than finding the best model for each task separately.\nWe train the text-to-text translation models using 6 Encoder and Decoder layers with a batch size of 2048 text tokens. The training is performed on an NVIDIA P40 GPU for 400k steps.\nIn the multi-modality experiments, the speech signals are represented using 80-dimensional log-Mel features, and we use 3 CNN layers in the preprocessing layer described in Section 2.1. We use 12 Encoder and Decoder layers and train for 600k steps using 8 NVIDIA V100 GPUs. For the systems without the TCN, we perform finetuning for 10k steps on each task." }, { "heading": "3.3 RESULTS", "text": "" }, { "heading": "3.3.1 SINGLE MODALITY EXPERIMENTS", "text": "The results for the text-to-text translation models trained with different MTL strategies are provided in Table 2. 
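WER, used above for ASR evaluation, is the word-level edit distance (substitutions, insertions, deletions) normalized by the reference length; a minimal reference implementation (not the paper's evaluation pipeline):

```python
def word_error_rate(reference, hypothesis):
    """Standard WER via word-level Levenshtein distance divided by the
    number of words in the reference."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[-1][-1] / len(r)
```

For example, `word_error_rate("a b c d", "a x c")` yields 0.5 (one substitution plus one deletion over four reference words); note that WER can exceed 1.0 when the hypothesis has many insertions, so lower is better but it is not a bounded score.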
The MTL models with prepended task labels (Raffel et al., 2019) are referred to as OHV (One Hot Vector). Unlike T5, we don’t initialize the models with text embeddings from a large pretrained language model (Devlin et al., 2018). Instead, we focus on establishing the importance of task information during MTL and on having a single model for all the tasks. As we can see from the results, providing the task information via text labels or implicitly using the proposed task-aware MTL leads to significant performance improvements compared to MTL without the task information. The models trained using OHV have better performance than those trained using the implicit TCN. However, providing OHV via text labels is not always possible for tasks involving non-text modalities such as speech and images." }, { "heading": "3.3.2 MULTI MODALITY EXPERIMENTS", "text": "We evaluate the two proposed TCNs and compare them with the vanilla MTL strategies. The performance of all the models is reported in Table 3. We also extended the T5 (Raffel et al., 2019) approach to the multi-modality experiments and compare it with our approach.\nEffect of Task Information: The models trained using task-aware MTL achieve significant performance gains over the models trained using the vanilla MTL approach. Our single model achieves superior performance compared to the vanilla MTL models even after their finetuning. This shows that the task information is not only essential to identify the task, but also helps to extract the shared knowledge better. Our JL and MAML models trained with task-aware MTL achieve improvements of (+2.65, +2.52) for ST, (-1.34, -1.18) for ASR, and (+0.72, +1.26) for the MT task. MAML has some scope for task-specific learning during its meta-train step, which explains why the improvements for MAML are slightly smaller than for JL on the ST and ASR tasks.\nWe also report results using the Direct Learning (DL) approach, where separate models are trained for each task, to compare with the MTL models.
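The OHV (prepended task label) conditioning mentioned above can be illustrated with a minimal sketch; the task tags and token sequences here are hypothetical placeholders, not the paper's exact preprocessing:

```python
# Minimal sketch of label-based task conditioning (OHV / T5-style):
# an explicit task tag is prepended to each tokenized source sequence
# so a single multitask model can identify the task from its input.
# Tag names and sequences are illustrative only.

def prepend_task_label(task, tokens):
    """Return the source sequence with an explicit task tag in front."""
    return [f"<{task}>"] + tokens

batch = [
    ("en-de", ["hello", "world"]),
    ("en-tr", ["good", "morning"]),
]
tagged = [prepend_task_label(task, toks) for task, toks in batch]
```

This is the input-based conditioning that, as noted above, breaks down when the input is not text (speech, images), motivating the implicit TCN.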
All the MTL models outperform the DL models on the ST and ASR tasks and have comparable performance on the MT task.\nExplicit v/s Implicit TCN: Our proposed implicit TCN learns the task characteristics directly from the examples of each task and achieves a performance comparable to the models trained using the explicit TCN. This indicates that it is better to learn the task information implicitly, specifically for tasks having overlapping characteristics. Figure 2 contains the t-SNE plots for task embeddings obtained from the implicit TCN for the single- and multi-modality experiments. We can observe that the implicit TCN is also able to separate all three tasks effectively without any external supervision.\nSingle model for all tasks: We select one single model for reporting the results of our approach, since having a single model for multiple tasks is favourable in low-resource settings. However, we also report the best models corresponding to each task (rows 8 and 11 of Table 3). We observe that choosing a single model over task-specific models did not result in any significant performance loss.\nFeature-wise v/s Input based modulation: We also implemented input based conditioning (Toshniwal et al., 2018; Raffel et al., 2019), where we prepend the TCN output, i.e., the task information, to the source and target sequences. This approach provides a performance comparable to ours on the ASR task. However, the ST performance is erratic and the output is mixed between the ST and ASR tasks. This shows that feature-wise modulation is a more efficient way to carry out task-based conditioning for easily confused tasks like ST and ASR.\nNumber of parameters added: The explicit TCN, which is a dense layer, adds roughly 1500 new parameters. The implicit TCN adds roughly 1 million new parameters. However, simply increasing the number of parameters is not sufficient to improve the performance.
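The feature-wise modulation discussed above can be sketched as follows; this is a FiLM-style scale-and-shift, with all shapes, projection matrices, and names being illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

# Hedged sketch of feature-wise task conditioning: a task embedding
# (e.g., a TCN output) is projected to a per-channel scale (gamma)
# and shift (beta) that modulate the encoder features at every time
# step. The projections below are random stand-ins for learned layers.

rng = np.random.default_rng(0)
time_steps, d_model, d_task = 5, 8, 16

features = rng.standard_normal((time_steps, d_model))   # encoder output
task_embedding = rng.standard_normal(d_task)            # task vector

W_gamma = rng.standard_normal((d_task, d_model))        # learned in practice
W_beta = rng.standard_normal((d_task, d_model))         # learned in practice
gamma = task_embedding @ W_gamma                        # per-channel scale
beta = task_embedding @ W_beta                          # per-channel shift

conditioned = gamma * features + beta                   # broadcast over time
```

Unlike prepending a task token, this conditions every feature channel at every time step, which is one plausible reason it separates acoustically identical inputs (ST vs. ASR speech) more reliably.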
For example, we trained several models by increasing the number of encoder and decoder layers up to 16. However, these models gave inferior performance compared to the reported models with 12 encoder and decoder layers.\nScaling with a large number of tasks: The t-SNE plots in Figure 2b are drawn using the three test datasets. However, we used multiple datasets for each of the ASR (LibriSpeech, Common Voice, TED-LIUM, MuST-C ASR), ST (MuST-C, IWSLT20, Europarl), and MT (WMT19, OpenSubtitles) tasks in the multi-modality experiments. We analyze whether or not our proposed approach is able to separate the data coming from these different distributions. Compared to separating data coming from different tasks, separating data coming from the same task (but generated from different distributions) is more difficult. Earlier, in Figure 2b, we observed that the output is clustered based on the tasks. Figure 2c shows that within these task-based clusters, there are sub-clusters based on the source dataset. Hence, the model is able to identify each sub-task based on the source dataset. The model also gives decent performance on all of them. For example, the single model achieves a WER of 7.5 on the LibriSpeech tst-clean, 10.35 on MuST-C, 11.65 on the TED-LIUM v3 and 20.36 on the Common Voice test set. For the ST task, the same model gives a BLEU score of 28.64 on the MuST-C test set, 27.61 on the IWSLT tst-2010, and 27.57 on the Europarl test set. This shows that our proposed approach scales well with the total number of tasks.\nComparison with existing works: The design of our system, i.e., the parameters and the related tasks, was fixed keeping the ST task in mind. We compare the results of our best systems (after checkpoint averaging) with recent works in Table 4. We set a new state-of-the-art (SOTA) on the ST En-De MuST-C task. For the ASR task, we outperform the very deep Transformer-based model of Pham et al. (2019).
We achieve a 19.2% improvement in WER compared to the model with the same number of Encoder and Decoder blocks. The best Transformer-based MT model achieves a BLEU score of 30.10; however, it uses 60 Encoder blocks. The performance drop on the MT task is attributed to simply training a bigger model without using any of the additional initialization techniques proposed in Liu et al. (2015); Wu et al. (2019). However, the MT task helps the other tasks and improves the overall performance of the system." }, { "heading": "4 RELATED WORK", "text": "Various MTL techniques have been widely used to improve the performance of end-to-end neural networks. These techniques are known to mitigate issues like overfitting and data scarcity. Joint learning (Caruana, 1997) improves generalization by leveraging the shared information contained in the training signals of related tasks. MAML (Finn et al., 2017b) was proposed for training a joint model on a variety of tasks such that it can quickly adapt to new tasks. Both learning approaches require a finetuning phase, resulting in different models for each task. Moreover, during the finetuning phase, the model substantially forgets the knowledge acquired during large-scale pretraining.\nOne of the original solutions to this problem is pseudo-rehearsal, which involves learning the new task while rehearsing generated items representative of the previous task. This has been investigated and addressed to a certain extent in Atkinson et al. (2018) and Li & Hoiem (2018). He et al. (2020) address this by using a mix-review finetuning strategy, where they include the pretraining objective during the finetuning phase. Raffel et al. (2019) take a different approach by providing the task information to the model and achieve performance improvements on different text-to-text tasks. Although this alleviates the need for finetuning, it cannot be extended to tasks involving complex modalities.
In our work, we propose a generic framework on top of MTL to provide task information to the model, which can be applied irrespective of the task modalities. It also removes the need for finetuning, tackling the issue of forgetfulness at its root cause.\nA few approaches have also tried to train multiple tasks with a single model. Cheung et al. (2019) project the input to orthogonal sub-spaces based on the task information. In the approach proposed by Li & Hoiem (2018), the model is trained on various image classification tasks having the same input modality. They preserve the output of the model on the training example such that the parameters don’t deviate much from the original tasks. This is useful when the tasks share the same goal, e.g., classification. However, we train on a much more varied set of tasks, which might also have the same inputs with different end goals. Strezoski et al. (2019) propose to apply a fixed mask based on the task identity. Our work can be seen as a generalization of this work. Compared to all these approaches, our model is capable of performing both task identification and the corresponding task learning simultaneously. It learns to control the interactions among various tasks based on the inter-task similarity without any explicit supervision.\nIn the domain of neural machine translation, several MTL approaches have been proposed (Gu et al., 2018a;b). Similarly, recent works have shown that jointly training the ST, ASR, and MT tasks improves the overall performance (Liu et al., 2019; Indurthi et al., 2020). However, all of these require a separate finetuning phase." }, { "heading": "5 CONCLUSION", "text": "This work proposes a task-aware framework which helps to improve the learning ability of existing multitask learning strategies. It addresses the issues faced during vanilla multitask learning, including forgetfulness during finetuning and the problems associated with having separate models for each task.
The proposed approach helps to better align existing multitask learning strategies with human learning. It achieves significant performance improvements with a single model on a variety of tasks, which is favourable in low-resource settings." }, { "heading": "6 APPENDIX", "text": "" }, { "heading": "6.1 DATASETS", "text": "" }, { "heading": "6.1.1 DATA AUGMENTATION FOR SPEECH TRANSLATION", "text": "Table 5 provides details about the datasets used for the multi-modality experiments. Since the En-De ST task has relatively few training examples compared to the ASR and MT tasks, we augment the ST dataset with synthetic training examples. We generate synthetic speech sequences and pair them with synthetic German text sequences obtained using the top two beam search results of the two trained English-to-German NMT models. For the speech sequences, we use the Sox library to generate the speech signal using different values of the speed, echo, and tempo parameters, similar to Potapczyk et al. (2019). The parameter values are uniformly sampled from these ranges: tempo ∈ (0.85, 1.3), speed ∈ (0.95, 1.05), echo delay ∈ (20, 200), and echo decay ∈ (0.05, 0.2). We also train two NMT models on the En-De language pair to generate the synthetic German sequences. The first model is based on Edunov et al. (2018) and the second model (Indurthi et al., 2019) is trained on the WMT’18 En-De and OpenSubtitles datasets. We increase the size of the IWSLT 19 (filtered) ST dataset to five times the original size by augmenting 4x data: four text sequences using the top two beam results from each En-De NMT model and four speech signals using the Sox parameter ranges. For Europarl-ST, we augment 2x examples to triple the size. The TED-LIUM 3 dataset does not originally contain speech-to-text translation examples; hence, we create 2x synthetic speech-to-text translations using the speech-to-text transcripts.
Finally, for the MuST-C dataset, we only create synthetic speech and pair it with the original translation to increase the dataset size to 4x. Overall, we created synthetic training data of size approximately equal to four times the original data for the ST task." }, { "heading": "6.1.2 TASK IDENTIFICATION WITHOUT TASK INFORMATION", "text": "Under the multi-modality setting, we conducted smaller-scale experiments using only one dataset for each of the ST, ASR, and MT tasks. The details of the datasets used are provided in Table 7. We trained on a single P40 GPU for 400k steps. The corresponding results are reported in Table 6. All the results have been obtained without any finetuning. Even though our task-aware MTL model achieves a significant performance improvement over the vanilla MTL models, we can observe that the vanilla MTL models are also able to give a decent performance on all tasks without any finetuning. An explanation for this is that we used the MuST-C dataset for the En-De ST task and TED-LIUM v3 for the ASR task, which means that the source speech comes from two different sources. However, if we use the same datasets for both tasks (after data augmentation), the MTL models get confused and the ST and ASR outputs are mixed. The MTL models might be able to learn the task identities simply based on the source speech sequences, since these sequences come from different datasets for each task type (MuST-C for ST and TED-LIUM v3 for ASR). However, this does not mean that vanilla MTL models perform joint learning effectively. A human who can perform multiple tasks from the same input is aware of the task they have to perform beforehand. Similarly, it is unreasonable to expect different outputs (translation, transcription) from a model for the same type of input (English speech) without any explicit task information."
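The Sox-based parameter sampling described in the data augmentation section above can be sketched as follows; the actual audio transformation through the Sox tool is omitted, and only the uniform sampling over the stated ranges is shown:

```python
import random

# Sketch of the augmentation parameter sampling described above: each
# synthetic speech signal gets tempo, speed, and echo values drawn
# uniformly from the stated ranges. Invoking the actual Sox binary on
# audio is outside the scope of this sketch.

SOX_RANGES = {
    "tempo": (0.85, 1.3),
    "speed": (0.95, 1.05),
    "echo_delay": (20.0, 200.0),
    "echo_decay": (0.05, 0.2),
}

def sample_sox_params(rng=random):
    """Draw one augmentation configuration, one value per parameter."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in SOX_RANGES.items()}

params = sample_sox_params()
```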
}, { "heading": "6.1.3 IMPLEMENTATION DETAILS", "text": "The detailed hyperparameter settings used for the single-modality and multi-modality experiments are provided in Table 8.\nS No. | MTL Strategy | MT BLEU (↑) Test | ASR WER (↓) Dev | ASR WER (↓) Test | ST BLEU (↑) Dev | ST BLEU (↑) Test\n1 | Joint Learning | 14.77 | 29.56 | 30.87 | 13.10 | 12.70\n2 | Meta Learning | 14.74 | 28.58 | 29.92 | 13.89 | 13.67\nThis Work" } ]
[ "This work proposes a deep reinforcement learning-based optimization strategy for the fuel optimization problem of hybrid electric vehicles. The problem has been formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A continuous-time representation of the problem is also used, in contrast to conventional techniques, which mostly use a discrete-time formulation. " ]
"This paper deals with the fuel optimization problem for hybrid electric vehicles in a reinforcement learning framework. First, considering the hybrid electric vehicle as a completely observable non-linear system with uncertain dynamics, we solve an open-loop deterministic optimization problem to determine a nominal optimal state. This is followed by the design of a deep reinforcement learning based optimal controller for the non-linear system using a concurrent learning based system identifier, such that the actual states and the control policy are able to track the optimal state and optimal policy autonomously, even in the presence of external disturbances, modeling errors, uncertainties and noise, while significantly reducing the computational complexity at the same time. This is in sharp contrast to conventional methods like PID and Model Predictive Control (MPC), as well as traditional RL approaches like ADP, DDP and DQN, which mostly depend on a set of pre-defined rules and provide sub-optimal solutions under similar conditions. The low value of the H-infinity (H∞) performance index of the proposed optimization algorithm addresses the robustness issue. The optimization technique thus proposed is compared with traditional fuel optimization strategies for hybrid electric vehicles to illustrate the efficacy of the proposed method."
[ { "authors": [ "R. Akrour", "A. Abdolmaleki", "H. Abdulsamad", "G. Neumann" ], "title": "Model Free Trajectory Optimization for Reinforcement Learning", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "A. Barto", "R. Sutton", "C. Anderson" ], "title": "Neuron-like adaptive elements that can solve difficult learning control problems", "venue": "IEEE Transactions on Systems, Man, and Cybernetics,", "year": 1983 }, { "authors": [ "R. Bellman" ], "title": "The theory of dynamic programming", "venue": "DTIC Document, Technical Representations", "year": 1954 }, { "authors": [ "D. Bertsekas" ], "title": "Dynamic Programming and Optimal Control", "venue": "Athena Scientific,", "year": 2007 }, { "authors": [ "S. Bhasin", "R. Kamalapurkar", "M. Johnson", "K. Vamvoudakis", "F.L. Lewis", "W. Dixon" ], "title": "A novel actor-critic-identifier architecture for approximate optimal control of uncertain nonlinear systems", "venue": null, "year": 2013 }, { "authors": [ "R.P. Bithmead", "V. Wertz", "M. Gerers" ], "title": "Adaptive Optimal Control: The Thinking Man’s G.P.C", "venue": "Prentice Hall Professional Technical Reference,", "year": 1991 }, { "authors": [ "A. Bryson", "Y.-C. Ho" ], "title": "Applied Optimal Control: Optimization, Estimation and Control. Washington: Hemisphere", "venue": "Publication Corporation,", "year": 1975 }, { "authors": [ "G.V. Chowdhary", "E.N. Johnson" ], "title": "Theory and flight-test validation of a concurrent-learning adaptive controller", "venue": "Journal of Guidance, Control, and Dynamics, 34,", "year": 2011 }, { "authors": [ "G. Chowdhary", "T. Yucelen", "M. Mühlegg", "E.N. Johnson" ], "title": "Concurrent learning adaptive control of linear systems with exponentially convergent bounds", "venue": "International Journal of Adaptive Control and Signal Processing,", "year": 2013 }, { "authors": [ "P. García", "J.P. Torreglosa", "L.M. Fernández", "F. 
Jurado" ], "title": "Viability study of a FC-batterySC tramway controlled by equivalent consumption minimization strategy", "venue": "International Journal of Hydrogen Energy,", "year": 2012 }, { "authors": [ "A. Gosavi" ], "title": "Simulation-based optimization: Parametric optimization techniques and reinforcement learning", "venue": null, "year": 2003 }, { "authors": [ "J. Han", "Y. Park" ], "title": "A novel updating method of equivalent factor in ECMS for prolonging the lifetime of battery in fuel cell hybrid electric vehicle", "venue": "In IFAC Proceedings,", "year": 2012 }, { "authors": [ "J. Han", "J.F. Charpentier", "T. Tang" ], "title": "An Energy Management System of a Fuel Cell/Battery", "venue": "Hybrid Boat. Energies,", "year": 2014 }, { "authors": [ "J. Han", "Y. Park", "D. Kum" ], "title": "Optimal adaptation of equivalent factor of equivalent consumption minimization strategy for fuel cell hybrid electric vehicles under active state inequality constraints", "venue": "Journal of Power Sources,", "year": 2014 }, { "authors": [ "R. Kamalapurkar", "L. Andrews", "P. Walters", "W.E. Dixon" ], "title": "Model-based reinforcement learning for infinite-horizon approximate optimal tracking", "venue": "In Proceedings of the IEEE Conference on Decision and Control (CDC),", "year": 2014 }, { "authors": [ "R. Kamalapurkar", "H. Dinh", "S. Bhasin", "W.E. Dixon" ], "title": "Approximate optimal trajectory tracking for continuous-time nonlinear systems", "venue": null, "year": 2015 }, { "authors": [ "S.G. Khan" ], "title": "Reinforcement learning and optimal adaptive control: An overview and implementation examples", "venue": "Annual Reviews in Control,", "year": 2012 }, { "authors": [ "M.J. Kim", "H. Peng" ], "title": "Power management and design optimization of fuel cell/battery hybrid vehicles", "venue": "Journal of Power Sources,", "year": 2007 }, { "authors": [ "D. 
Kirk" ], "title": "Optimal Control Theory: An Introduction", "venue": "Mineola, NY,", "year": 2004 }, { "authors": [ "V. Konda", "J. Tsitsiklis" ], "title": "On actor-critic algorithms", "venue": "SIAM Journal on Control and Optimization,", "year": 2004 }, { "authors": [ "S. Levine", "P. Abbeel" ], "title": "Learning Neural Network Policies with Guided Search under Unknown Dynamics", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "F.L. Lewis", "S. Jagannathan", "A. Yesildirak" ], "title": "Neural network control of robot manipulators and nonlinear systems", "venue": null, "year": 1998 }, { "authors": [ "F.L. Lewis", "D. Vrabie", "V.L. Syrmos" ], "title": "Optimal Control, 3rd edition", "venue": null, "year": 2012 }, { "authors": [ "H. Li", "A. Ravey", "A. N’Diaye", "A. Djerdir" ], "title": "A Review of Energy Management Strategy for Fuel Cell Hybrid Electric Vehicle", "venue": "In IEEE Vehicle Power and Propulsion Conference (VPPC),", "year": 2017 }, { "authors": [ "W.S. Lin", "C.H. Zheng" ], "title": "Energy management of a fuel cell/ultracapacitor hybrid power system using an adaptive optimal control method", "venue": "Journal of Power Sources,", "year": 2011 }, { "authors": [ "P. Mehta", "S. Meyn" ], "title": "Q-learning and pontryagin’s minimum principle", "venue": "In Proceedings of IEEE Conference on Decision and Control,", "year": 2009 }, { "authors": [ "D. Mitrovic", "S. Klanke", "S. Vijayakumar" ], "title": "Adaptive Optimal Feedback Control with Learned Internal Dynamics Models", "venue": null, "year": 2010 }, { "authors": [ "H. Modares", "F.L. Lewis" ], "title": "Optimal tracking control of nonlinear partially-unknown constrainedinput systems using integral reinforcement", "venue": "learning. Automatica,", "year": 2014 }, { "authors": [ "S.J. Moura", "D.S. Callaway", "H.K. Fathy", "J.L. 
Stein" ], "title": "Tradeoffs between battery energy capacity and stochastic optimal power management in plug-in hybrid electric vehicles", "venue": "Journal of Power Sources,", "year": 1959 }, { "authors": [ "S.N. Motapon", "L. Dessaint", "K. Al-Haddad" ], "title": "A Comparative Study of Energy Management Schemes for a Fuel-Cell Hybrid Emergency Power System of More-Electric Aircraft", "venue": "IEEE Transactions on Industrial Electronics,", "year": 2014 }, { "authors": [ "G. Paganelli", "S. Delprat", "T.M. Guerra", "J. Rimaux", "J.J. Santin" ], "title": "Equivalent consumption minimization strategy for parallel hybrid powertrains", "venue": "In IEEE 55th Vehicular Technology Conference, VTC Spring 2002 (Cat. No.02CH37367),", "year": 2002 }, { "authors": [ "F. Segura", "J.M. Andújar" ], "title": "Power management based on sliding control applied to fuel cell systems: A further step towards the hybrid control concept", "venue": "Applied Energy,", "year": 2012 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 1998 }, { "authors": [ "E. Theodorou", "Y. Tassa", "E. Todorov" ], "title": "Stochastic Differential Dynamic Programming", "venue": "In Proceedings of the American Control Conference,", "year": 2010 }, { "authors": [ "E. Todorov", "Y. Tassa" ], "title": "Iterative Local Dynamic Programming", "venue": "In Proceedings of the IEEE International Symposium on ADP and RL,", "year": 2009 }, { "authors": [ "J.P. Torreglosa", "P. García", "L.M. Fernández", "F. Jurado" ], "title": "Predictive Control for the Energy Management of a Fuel-Cell-Battery-Supercapacitor Tramway", "venue": "IEEE Transactions on Industrial Informatics,", "year": 2014 }, { "authors": [ "D. Vrabie" ], "title": "Online adaptive optimal control for continuous-time systems", "venue": "Ph.D. 
dissertation, University of Texas at Arlington,", "year": 2010 }, { "authors": [ "Dan Yu", "Mohammadhussein Rafieisakhaei", "Suman Chakravorty" ], "title": "Stochastic Feedback Control of Systems with Unknown Nonlinear Dynamics", "venue": "In IEEE Conference on Decision and Control,", "year": 2017 }, { "authors": [ "M.K. Zadeh" ], "title": "Stability Analysis Methods and Tools for Power Electronics-Based DC Distribution Systems: Applicable to On-Board Electric Power Systems and Smart Microgrids", "venue": null, "year": 2016 }, { "authors": [ "X. Zhang", "C.C. Mi", "A. Masrur", "D. Daniszewski" ], "title": "Wavelet transform-based power management of hybrid vehicles with multiple on-board energy sources including fuel cell, battery and ultracapacitor", "venue": "Journal of Power Sources,", "year": 2008 }, { "authors": [ "C. Zheng", "S.W. Cha", "Y. Park", "W.S. Lim", "G. Xu" ], "title": "PMP-based power management strategy of fuel cell hybrid vehicles considering multi-objective optimization", "venue": "International Journal of Precision Engineering and Manufacturing,", "year": 2013 }, { "authors": [ "C.H. Zheng", "G.Q. Xu", "Y.I. Park", "W.S. Lim", "S.W. Cha" ], "title": "Prolonging fuel cell stack lifetime based on Pontryagin’s Minimum Principle in fuel cell hybrid vehicles and its economic influence evaluation", "venue": "Journal of Power Sources,", "year": 2014 }, { "authors": [ "X. Zhong", "H. He", "H. Zhang", "Z. Wang" ], "title": "Optimal Control for Unknown Discrete-Time Nonlinear Markov Jump Systems Using Adaptive Dynamic Programming", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Hybrid electric vehicles powered by fuel cells and batteries have attracted great enthusiasm in recent years as they have the potential to eliminate emissions from the transport sector. However, both fuel cells and batteries have several operational challenges which make the separate use of each of them in automotive systems quite impractical. HEVs and PHEVs powered by conventional diesel engines and batteries merely reduce emissions but cannot eliminate them completely. Some of the drawbacks include carbon emissions causing environmental pollution and, for the batteries, long charging times, limited driving distance per charge, and the non-availability of charging stations along the driving route. Fuel Cell powered Hybrid Electric Vehicles (FCHEVs), powered by fuel cells and batteries, offer emission-free operation while overcoming the limitations of driving distance per charge and long charging times. FCHEVs have therefore gained significant attention in recent years. Among the existing research that has studied and developed several types of Fuel and Energy Management Systems (FEMS) for transport applications, Sulaiman et al. (2018) present a critical review of different energy and fuel management strategies for FCHEVs, and Li et al. (2017) present an extensive review of FMS objectives and strategies for FCHEVs. These strategies can be divided into two groups, i.e., model-based and model-free. The model-based methods mostly depend on the discretization of the state space and therefore suffer from the inherent curse of dimensionality. The computational complexity increases exponentially with the dimension of the state space.
This is quite evident in methods like state-based EMS (Jan et al., 2014; Zadeh et al., 2014; 2016), the rule-based fuzzy logic strategy (Motapon et al., 2014), classical PI and PID strategies (Segura et al., 2012), Pontryagin’s minimum principle (PMP) (Zheng et al., 2013; 2014), model predictive control (MPC) (Kim et al., 2007; Torreglosa et al., 2014) and differential dynamic programming (DDP) (Kim et al., 2007). Among all these methods, differential dynamic programming is considered computationally quite efficient; it relies on the linearization of the non-linear system equations about a nominal state trajectory, followed by a policy iteration to improve the policy. In this approach, the control policy for fuel optimization is used to compute the optimal trajectory, and the policy is updated until convergence is achieved.\nThe model-free methods mostly deal with Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014) and Reinforcement Learning (RL) based strategies (Mitrovic et al., 2010; Khan et al., 2012), including DDP (Mayne et al., 1970). These methods compute the control policy for fuel optimization through continuous engagement with the environment and measurement of the system response, thus arriving at a solution of the DP equation recursively in an online fashion. In deep reinforcement learning, multi-layer neural networks are used to represent the learning function in a non-linear parameterized approximation form.
Although a compact parameterized form does exist for the learning function, the inability to know it a priori makes the method suffer from the curse of dimensionality (O(d²), where d is the dimension of the state space), thus making it infeasible to apply to a high-dimensional fuel management system.\nThe problem of the computational complexity of traditional RL methods like policy iteration (PI) and value iteration (VI) (Bellman et al., 1954; 2003; Barto et al., 1983; Bertsekas, 2007) can be overcome by a simulation-based approach (Sutton et al., 1998) where the policy or the value function can be parameterized with sufficient accuracy using a small number of parameters. Thus, we are able to transform the optimal control problem into an approximation problem in the parameter space (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), sidestepping the need for model knowledge and excessive computations. However, the convergence requires sufficient exploration of the state-action space, and the optimality of the obtained policy depends primarily on the accuracy of the parameterization scheme.\nAs a result, a good approximation of the value function is of utmost importance to the stability of the closed-loop system, and it requires convergence of the unknown parameters to their optimal values. This sufficient exploration condition manifests itself as a persistence of excitation (PE) condition when RL is implemented online (Mehta et al., 2009; Bhasin et al., 2013; Vrabie, 2010), which is impossible to guarantee a priori.\nMost of the traditional approaches for fuel optimization are unable to address the robustness issue.
The methods described in the literature, including those of PID (Segura et al., 2012), Model Predictive Control (MPC) (Kim et al., 2007; Torreglosa et al., 2014) and Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014), as well as the simulation-based RL strategies (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), suffer from the drawback of providing a sub-optimal solution in the presence of external disturbances and noise. As a result, the application of these methods to fuel optimization for hybrid electric vehicles, which are plagued by various disturbances in the form of sudden charge and fuel depletion, changes in the environment, and changes in the values of parameters like the remaining useful life, internal resistance, voltage and temperature of the battery, is quite impractical.\nThe fuel optimization problem for the hybrid electric vehicle has therefore been formulated as a fully observed stochastic Markov Decision Process (MDP). Instead of using Trajectory-optimized LQG (T-LQG) or Model Predictive Control (MPC), which provide a sub-optimal solution in the presence of disturbances and noise, we propose a deep reinforcement learning-based optimization strategy using concurrent learning (CL) that uses the state-derivative-action-reward tuples to obtain a robust optimal solution. The convergence of the weight estimates of the policy and the value function to their optimal values justifies our claim. The two major contributions of the proposed approach can therefore be summarized as follows:\n1) The popular methods in the RL literature, including policy iteration and value iteration, suffer from the curse of dimensionality owing to the use of a simulation-based technique which requires sufficient exploration of the state space (PE condition). Therefore, the proposed model-based RL scheme aims to relax the PE condition by using a concurrent learning (CL)-based system identifier to reduce the computational complexity.
Generally, an estimate of the true controller designed using the CL-based method introduces an estimation error which makes the stability analysis of the system quite intractable. The proposed method, however, establishes the stability of the closed-loop system by introducing the estimation error and analyzing the augmented system trajectory obtained under the influence of the control signal.\n2) The proposed optimization algorithm implemented for fuel management in hybrid electric vehicles overcomes the limitations of the conventional fuel management approaches (PID, Model Predictive Control, ECMS, PMP) and traditional RL approaches (Adaptive Dynamic Programming, DDP, DQN), all of which suffer from sub-optimal behaviour in the presence of external disturbances, model uncertainties, frequent charging and discharging, changes of environment and other noise. The H-infinity (H∞) performance index, defined as the ratio of the disturbance to the control energy, has been established for the RL-based optimization technique and compared with the traditional strategies to address the robustness of the proposed design scheme.\nThe rest of the paper is organized as follows: Section 2 presents the problem formulation, including the open-loop optimization and the reinforcement learning-based optimal controller design, described in subsections 2.1 and 2.2 respectively. The parametric system identification and value function approximation are detailed in subsections 2.2.1 and 2.2.2. This is followed by the stability and robustness analysis (using the H∞ performance index) of the closed-loop system in subsection 2.2.4. Section 3 provides the simulation results and discussion, followed by the conclusion in Section 4."
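Following the performance index stated above (the ratio of disturbance energy to control energy over a finite horizon), a toy computation on synthetic signals might look like the sketch below; the disturbance and control signals here are placeholders, not simulation outputs from the paper:

```python
import numpy as np

# Toy computation of the performance index described above: the ratio
# of disturbance energy to control energy over a finite horizon. The
# disturbance w(t) and control u(t) below are synthetic placeholders.

def signal_energy(sig, dt):
    """Discrete approximation of the L2 energy, integral of |s(t)|^2 dt."""
    return float(np.sum(sig ** 2) * dt)

dt = 0.01
t = np.arange(0.0, 10.0, dt)
w = 0.1 * np.sin(2 * np.pi * 0.5 * t)      # disturbance signal w(t)
u = np.exp(-0.2 * t)                       # control signal u(t)

performance_index = signal_energy(w, dt) / signal_energy(u, dt)
```

A small value of this ratio-style index, as claimed in the text, would indicate that large disturbances are handled with relatively little control energy.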
}, { "heading": "2 PROBLEM FORMULATION", "text": "Consider the fuel management system of a hybrid electric vehicle as a continuous-time affine non-linear dynamical system:\nẋ = f(x,w) + g(x)u, y = h(x, v) (1)\nwhere x ∈ Rnx, y ∈ Rny and u ∈ Rnu are the state, output and control vectors respectively, f(.) denotes the drift dynamics and g(.) denotes the control effectiveness matrix. The functions f and h are assumed to be locally Lipschitz continuous with f(0) = 0, and ∇f(x) is continuous for every bounded x ∈ Rnx. The process noise w and measurement noise v are assumed to be zero-mean, uncorrelated Gaussian white noise with covariances W and V, respectively.\nAssumption 1: We consider the system to be fully observed:\ny = h(x, v) = x (2)\nRemark 1: This assumption is made to provide a tractable formulation of the fuel management problem and to sidestep the complex treatment required when a stochastic control problem is treated as a partially observed MDP (POMDP).\nOptimal Control Problem: For a continuous-time system with unknown nonlinear dynamics f(.), we need to find an optimal control policy πt over a finite time horizon [0, t], where πt is the control policy at time t such that πt = u(t), to minimize the cost function J = ∫_0^t (x^T Qx + u^T Ru) dt + x^T Fx, where Q, F ≥ 0 and R > 0." }, { "heading": "2.1 OPEN LOOP OPTIMIZATION", "text": "Consider a noise-free non-linear dynamical system with unknown dynamics:\nẋ = f(x, 0) + g(x)u, y = h(x, v) = x (3)\nwhere x0 ∈ Rnx, y ∈ Rny and u ∈ Rnu are the initial state, output and control vectors respectively, f(.) 
have their usual meanings, and the corresponding cost function is given by Jd(x0, ut) = ∫_0^t (x^T Qx + u^T Ru) dt + x^T Fx.\nRemark: We have used a piecewise convex function to approximate the non-convex fuel function globally, and this approximation has been used to formulate the cost function for the fuel optimization.\nThe open-loop optimization problem is to find the control sequence ut such that, for a given initial state x0,\nūt = arg min Jd(x0, ut),\nsubject to ẋ = f(x, 0) + g(x)u,\ny = h(x, v) = x. (4)\nThe problem is solved using the gradient descent approach (Bryson et al., 1962; Gosavi et al., 2003), as follows: starting from a random initial value of the control sequence U(0) = [ut(0)], the control policy is updated iteratively as U(n+1) = U(n) − α∇U Jd(x0, U(n)), (5) until convergence is achieved up to a certain degree of accuracy, where U(n) denotes the control value at the nth iteration and α is the step-size parameter. The gradient vector is given by:\n∇U Jd(x0, U(n)) = (∂Jd/∂u0, ∂Jd/∂u1, ∂Jd/∂u2, ..., ∂Jd/∂ut)|(x0, ut) (6)\nThe Gradient Descent Algorithm is detailed in Appendix A.1.\nRemark 2: The open-loop optimization problem is thus solved with the gradient descent approach by treating the underlying system dynamics as a black box and using a sequence of input-output tests, without perfect knowledge of the non-linearities in the model at design time. This is a very simple and useful strategy for complex dynamical systems with complicated cost-to-go functions, and it is suitable for parallelization." }, { "heading": "2.2 REINFORCEMENT LEARNING BASED OPTIMAL CONTROLLER DESIGN", "text": "Considering the affine non-linear dynamical system given by equation (1), our objective is to design a control law to track the optimal time-varying trajectory x̄(t) ∈ Rnx. 
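The open-loop procedure of Section 2.1 (perturb each control variable by h, form the finite-difference gradient, update with step size α) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scalar dynamics ẋ = −x + u, the weights Q, R, F, and all step sizes are hypothetical choices.

```python
import numpy as np

def rollout_cost(x0, U, Q=1.0, R=0.1, F=1.0, dt=0.1):
    """Simulate the (black-box) dynamics under the control sequence U and
    accumulate the running cost x'Qx + u'Ru plus the terminal cost x'Fx."""
    x, J = x0, 0.0
    for u in U:
        J += (Q * x**2 + R * u**2) * dt
        x = x + (-x + u) * dt          # hypothetical scalar dynamics: xdot = -x + u
    return J + F * x**2

def gradient_descent(x0, T=20, alpha=0.5, h=1e-5, tol=1e-6, max_iter=500):
    """Open-loop optimization: estimate dJ/du_i by perturbing each control
    variable by h, then update U <- U - alpha * grad until the gradient is small."""
    U = np.zeros(T)
    for _ in range(max_iter):
        J0 = rollout_cost(x0, U)
        grad = np.array([(rollout_cost(x0, U + h * np.eye(T)[i]) - J0) / h
                         for i in range(T)])
        if np.linalg.norm(grad) < tol:
            break
        U = U - alpha * grad
    return U

U_opt = gradient_descent(x0=1.0)
```

Because only input-output rollouts are used, the same loop applies unchanged when the dynamics are an opaque simulator, which is the black-box property Remark 2 relies on.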
A novel cost function is formulated in terms of the tracking error, defined by e = x(t) − x̄(t), and the control error, defined as the difference between the actual control signal and the desired optimal control signal. This formulation overcomes the challenge of the infinite cost incurred when the cost function is defined in terms of the tracking error e(t) and the actual control signal u(t) only (Zhang et al., 2011; Kamalapurkar et al., 2015). The following assumptions are made to determine the desired steady-state control.\nAssumption 2: (Kamalapurkar et al., 2015) The function g(x) in equation (1) is bounded, the matrix g(x) has full column rank for all x(t) ∈ Rnx, and the function g+ : Rn → Rm×n defined as g+ = (g^T g)^{-1} g^T is bounded and locally Lipschitz.\nAssumption 3: (Kamalapurkar et al., 2015) The optimal trajectory is bounded by a known positive constant b ∈ R such that ‖x̄‖ ≤ b, and there exists a locally Lipschitz function hd such that ˙x̄ = hd(x̄) and g(x̄)g+(x̄)(hd(x̄) − f(x̄)) = hd(x̄) − f(x̄).\nUsing Assumptions 2 and 3, the control signal ud required to track the desired trajectory x̄(t) is given by ud(x̄) = g+d(hd(x̄) − fd), where fd = f(x̄) and g+d = g+(x̄). The control error is given by µ = u(t) − ud(x̄). The system dynamics can now be expressed as\nζ̇ = F(ζ) + G(ζ)µ (7)\nwhere the merged state ζ(t) ∈ R2n is given by ζ(t) = [e^T, x̄^T]^T, and the functions F(ζ) and G(ζ) are defined as F(ζ) = [f^T(e + x̄) − h^T_d + u^T_d(x̄)g^T(e + x̄), h^T_d]^T and G(ζ) = [g^T(e + x̄), 0m×n]^T, where 0m×n denotes a matrix of zeros. The control error µ is treated hereafter as the design variable. 
The control objective is to solve a finite-horizon optimal tracking problem online, i.e., to design a control signal µ that minimizes the cost-to-go function J(ζ, µ) = ∫_0^t r(ζ(τ), µ(τ)) dτ while tracking the desired trajectory, where the local cost r : R2n × Rm → R is given by r(ζ, µ) = Q(e) + µ^T Rµ, R ∈ Rm×m is a positive definite symmetric matrix, and Q : Rn → R is a continuous positive definite function. Assuming an optimal policy exists, it can be characterized in terms of the value function V* : R2n → R defined as V*(ζ) = min over µ(τ), τ ∈ R≥0, of ∫_0^t r(φ^µ(τ; t, ζ), µ(τ)) dτ, where U ⊆ Rm is the action space and φ^µ(t; t0, ζ0) is the trajectory of the system in equation (7) under the control effort µ : R≥0 → Rm with initial condition ζ0 ∈ R2n and initial time t0 ∈ R≥0. Provided that V* is continuously differentiable everywhere, the closed-form solution (Kirk, 2004) is µ*(ζ) = −1/2 R^{-1} G^T(ζ)(∇ζV*(ζ))^T, where ∇ζ(.) = ∂(.)/∂ζ. This satisfies the Hamilton-Jacobi-Bellman (HJB) equation (Kirk, 2004)\n∇ζV*(ζ)(F(ζ) + G(ζ)µ*(ζ)) + Q̄(ζ) + µ*^T(ζ)Rµ*(ζ) = 0 (8)\nwith the initial condition V*(0) = 0, where the function Q̄ : R2n → R is defined as Q̄([e^T, x̄^T]^T) = Q(e). Since a closed-form solution of the HJB equation is generally infeasible to obtain, we seek an approximate solution. An actor-critic based method is therefore used to obtain parametric estimates of the optimal value function and the optimal policy, denoted V̂(ζ, Ŵc) and µ̂(ζ, Ŵa), where Ŵc ∈ RL and Ŵa ∈ RL are the parameter estimate vectors. The task of the actor and critic is to learn the corresponding parameters. 
Replacing the estimates V̂ and µ̂ for V* and µ* in the HJB equation, we obtain the residual error, known as the Bellman error (BE), δ(ζ, Ŵc, Ŵa) = Q̄(ζ) + µ̂^T(ζ, Ŵa)Rµ̂(ζ, Ŵa) + ∇ζV̂(ζ, Ŵc)(F(ζ) + G(ζ)µ̂(ζ, Ŵa)), where δ : R2n × RL × RL → R. The solution of the problem requires the actor and the critic to find parameters Ŵa and Ŵc such that δ(ζ, Ŵc, Ŵa) = 0 and µ̂(ζ, Ŵa) = −1/2 R^{-1} G^T(ζ)(∇ζV̂(ζ, Ŵc))^T for all ζ ∈ R2n. As the exact basis functions for the approximation are not known a priori, we instead seek a set of approximate parameters that minimizes the BE. However, a uniform approximation of the value function and the optimal control policy over the entire operating domain requires finding parameters that minimize the error Es : RL × RL → R defined as Es(Ŵc, Ŵa) = supζ |δ(ζ, Ŵc, Ŵa)|, which makes exact knowledge of the system model necessary. Two of the most popular methods used to render the control design robust to system uncertainties in this context are integral RL (Lewis et al., 2012; Modares et al., 2014) and state-derivative estimation (Bhasin et al., 2013; Kamalapurkar et al., 2014). Both of these methods rely on the persistence of excitation (PE) condition, which requires the state trajectory φ^µ̂(t; t0, ζ0) to cover the entire operating domain for the parameters to converge to their optimal values. 
We relax this condition by using the integral technique in combination with experience replay, where every evaluation of the BE is intuitively formalized as a gained experience, and these experiences are kept in a history stack so that they can be used iteratively by the learning algorithm to improve data efficiency.\nTherefore, to relax the PE condition, we have developed a CL-based system identifier which models a parametric estimate of the system drift dynamics and is used to simulate experience by extrapolating the Bellman error (BE) over unexplored regions of the operating domain, thereby prompting an exponential convergence of the parameters to their optimal values." }, { "heading": "2.2.1 PARAMETRIC SYSTEM IDENTIFICATION", "text": "On any compact set C ⊂ Rn, the function f can be represented using a neural network (NN) as f(x) = θ^T σf(Y^T x1) + εθ(x), where x1 = [1, x^T]^T ∈ Rn+1, θ ∈ R(p+1)×n and Y ∈ R(n+1)×p denote the constant unknown output-layer and hidden-layer NN weights, σf : Rp → Rp+1 denotes a bounded NN activation function, εθ : Rn → Rn is the function reconstruction error, and p ∈ N denotes the number of NN neurons. Using the universal function approximation property of single-layer NNs, given a constant matrix Y such that the rows of σf(Y^T x1) form a proper basis, there exist constant ideal weights θ and known constants θ̄, ε̄θ, ε̄'θ ∈ R such that ||θ|| ≤ θ̄ < ∞, supx∈C ||εθ(x)|| ≤ ε̄θ and supx∈C ||∇x εθ(x)|| ≤ ε̄'θ, where ||.|| denotes the Euclidean norm for vectors and the Frobenius norm for matrices (Lewis et al., 1998). Given an estimate θ̂ ∈ R(p+1)×n of the weight matrix θ, the function f can be approximated by f̂ : R2n × R(p+1)×n → Rn defined as f̂(ζ, θ̂) = θ̂^T σθ(ζ), where σθ : R2n → Rp+1 is defined as σθ(ζ) = σf(Y^T [1, e^T + x̄^T]^T). 
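The linear-in-parameters form f(x) = θ^T σf(·) above can be fitted from a recorded history stack with a concurrent-learning-style update, as sketched below. This is a hypothetical scalar example (the basis, gains and data are illustrative, not the paper's), meant only to show how replayed (x, u, ẋ) data drives θ̂ to θ without persistent excitation of the live trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical drift dynamics, linear in the unknown parameters:
# xdot = theta^T sigma_f(x) + u, with a known two-feature basis.
theta_true = np.array([-1.0, 0.5])
sigma_f = lambda x: np.array([x, np.tanh(x)])

# History stack of recorded (x_j, u_j, xdot_j) triplets, gathered a priori.
X = rng.uniform(-2.0, 2.0, size=20)
U = rng.uniform(-1.0, 1.0, size=20)
Xdot = np.array([theta_true @ sigma_f(x) + u for x, u in zip(X, U)])

Phi = np.stack([sigma_f(x) for x in X])   # regressor matrix over the stack
b = Xdot - U                              # recorded derivative minus the control term

# Concurrent-learning update: the estimate is driven by the summed prediction
# error over the stored data (lr lumps the gains k_theta * Gamma_theta * dt).
theta_hat = np.zeros(2)
lr = 0.005
for _ in range(20000):
    theta_hat = theta_hat + lr * Phi.T @ (b - Phi @ theta_hat)
```

The rank condition on the stored regressors (the stack must excite every basis direction) plays the role that persistent excitation of the live trajectory plays in the classical schemes.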
An estimator for online identification of the drift dynamics is developed as\n˙x̂ = θ̂^T σθ(ζ) + g(x)u + kx̃ (9)\nwhere x̃ = x − x̂ and k ∈ R is a positive constant learning gain.\nAssumption 4: A history stack containing recorded state-action pairs {xj, uj}, j = 1, ..., M, along with numerically computed state derivatives { ˙x̄j }, j = 1, ..., M, that satisfies λmin(Σ_{j=1}^M σfj σ^T_fj) = σ_θ > 0 and ‖ ˙x̄j − ẋj‖ < d̄ for all j, is available a priori, where σfj = σf(Y^T [1, x^T_j]^T), d̄ ∈ R is a known positive constant, ẋj = f(xj) + g(xj)uj, and λmin(·) denotes the minimum eigenvalue.\nThe weight estimates θ̂ are updated using the following CL-based update law:\n˙θ̂ = Γθ σf(Y^T x1) x̃^T + kθ Γθ Σ_{j=1}^M σfj ( ˙x̄j − gj uj − θ̂^T σfj )^T (10)\nwhere kθ ∈ R is a constant positive CL gain, and Γθ ∈ R(p+1)×(p+1) is a constant, diagonal, positive definite adaptation gain matrix. Using the identifier, the BE can be approximated as\nδ̂(ζ, θ̂, Ŵc, Ŵa) = Q̄(ζ) + µ̂^T(ζ, Ŵa) R µ̂(ζ, Ŵa) + ∇ζV̂(ζ, Ŵc)( Fθ(ζ, θ̂) + F1(ζ) + G(ζ)µ̂(ζ, Ŵa) ) (11)\nIn equation (11), Fθ(ζ, θ̂) = [ (θ̂^T σθ(ζ) − g(x) g+(xd) θ̂^T σθ([0^T_{n×1}, x^T_d]^T))^T, 0^T_{n×1} ]^T and F1(ζ) = [ (−hd + g(e + xd) g+(xd) hd)^T, h^T_d ]^T." }, { "heading": "2.2.2 VALUE FUNCTION APPROXIMATION", "text": "As V* and µ* are functions of the state ζ, the optimization problem defined in Section 2.2 is quite intractable, so the optimal value function is represented on a compact set C ⊂ R2n using a NN as V*(ζ) = W^T σ(ζ) + ε(ζ), where W ∈ RL denotes a vector of unknown NN weights, σ : R2n → RL indicates a bounded NN activation function, ε : R2n → R is the function reconstruction error, and L ∈ N denotes the number of NN neurons. 
Considering the universal function approximation property of single-layer NNs, for any compact set C ⊂ R2n there exist constant ideal weights W and known positive constants W̄, ε̄, and ε̄' ∈ R such that ‖W‖ ≤ W̄ < ∞, supζ∈C ‖ε(ζ)‖ ≤ ε̄, and supζ∈C ‖∇ζε(ζ)‖ ≤ ε̄' (Lewis et al., 1998). A NN representation of the optimal policy is obtained as\nµ*(ζ) = −1/2 R^{-1} G^T(ζ)( ∇ζσ^T(ζ)W + ∇ζε^T(ζ) ) (13)\nUsing the estimates Ŵc and Ŵa for the ideal weights W, the optimal value function and the optimal policy are approximated as V̂(ζ, Ŵc) = Ŵ^T_c σ(ζ) and µ̂(ζ, Ŵa) = −1/2 R^{-1} G^T(ζ) ∇ζσ^T(ζ) Ŵa. The optimal control problem is therefore recast as finding a set of weights Ŵc and Ŵa online to minimize the error Êθ̂(Ŵc, Ŵa) = supζ∈χ |δ̂(ζ, θ̂, Ŵc, Ŵa)| for a given θ̂, while simultaneously improving θ̂ using the CL-based update law and ensuring stability of the system using the control law\nu = µ̂(ζ, Ŵa) + ûd(ζ, θ̂) (14)\nwhere ûd(ζ, θ̂) = g+d( hd − θ̂^T σθd ) and σθd = σθ([0_{1×n}, x^T_d]^T). The error between ud and ûd is included in the stability analysis based on the fact that the error trajectories generated by the system ė = f(x) + g(x)u − ẋd under the controller in (14) are identical to the error trajectories generated by the system ζ̇ = F(ζ) + G(ζ)µ under the control law µ = µ̂(ζ, Ŵa) + g+d θ̃^T σθd + g+d εθd, where εθd = εθ(xd)." }, { "heading": "2.2.3 EXPERIENCE SIMULATION", "text": "The simulation of experience is implemented by minimizing a squared sum of BEs over finitely many points in the state-space domain, as the calculation of the supremum in Êθ̂ is not tractable. The details of the analysis that facilitates this approximation are given in Appendix A.2." 
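For concreteness, the parameterization V̂(ζ, Ŵc) = Ŵc^T σ(ζ) and µ̂(ζ, Ŵa) = −1/2 R^{-1} G^T(ζ) ∇ζσ^T(ζ) Ŵa can be evaluated as below. The quadratic basis, weights, and input map are illustrative choices (they mirror a standard two-state benchmark rather than the paper's vehicle model):

```python
import numpy as np

def sigma(z):
    """Illustrative quadratic basis on a 2-dimensional state."""
    return np.array([z[0]**2, z[0]*z[1], z[1]**2])

def grad_sigma(z):
    """Jacobian of sigma, shape (3, 2): row i is d(sigma_i)/dz."""
    return np.array([[2*z[0], 0.0],
                     [z[1],   z[0]],
                     [0.0,    2*z[1]]])

def V_hat(z, Wc):
    """Critic: value estimate V_hat = Wc^T sigma(z)."""
    return Wc @ sigma(z)

def mu_hat(z, Wa, G, Rinv):
    """Actor: mu_hat = -1/2 R^{-1} G(z)^T grad_sigma(z)^T Wa."""
    return -0.5 * Rinv @ (G(z).T @ (grad_sigma(z).T @ Wa))

G = lambda z: np.array([[0.0], [np.cos(2*z[0]) + 2.0]])   # hypothetical input map
Rinv = np.array([[1.0]])
Wc = Wa = np.array([0.5, 0.0, 1.0])

z = np.array([1.0, -1.0])
u = mu_hat(z, Wa, G, Rinv)
```

With these particular weights the actor reduces to µ̂ = −(cos(2z1) + 2) z2, i.e. exactly the feedback that the gradient of the quadratic critic prescribes.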
}, { "heading": "2.2.4 STABILITY AND ROBUSTNESS ANALYSIS", "text": "To perform the stability analysis, we use the non-autonomous form of the value function (Kamalapurkar et al., 2015), V*t : Rn × R → R, defined as V*t(e, t) = V*([e^T, x^T_d(t)]^T) for all e ∈ Rn and t ∈ R, which is positive definite and decrescent. Now, V*t(0, t) = 0 for all t ∈ R, and there exist class K functions v : R → R and v̄ : R → R such that v(‖e‖) ≤ V*t(e, t) ≤ v̄(‖e‖) for all e ∈ Rn and all t ∈ R. An augmented state Z ∈ R2n+2L+n(p+1) is defined as\nZ = [e^T, W̃^T_c, W̃^T_a, x̃^T, (vec(θ̃))^T]^T (15)\nand a candidate Lyapunov function is defined as\nVL(Z, t) = V*t(e, t) + 1/2 W̃^T_c Γ^{-1} W̃_c + 1/2 W̃^T_a W̃_a + 1/2 x̃^T x̃ + 1/2 tr(θ̃^T Γ^{-1}_θ θ̃) (16)\nwhere vec(·) denotes the vectorization operator. From the weight updates in Appendix A.2, there exist positive constants γ, γ̄ ∈ R such that γ ≤ ‖Γ^{-1}(t)‖ ≤ γ̄ for all t ∈ R. Using the bounds on Γ and V*t and the fact that tr(θ̃^T Γ^{-1}_θ θ̃) = (vec(θ̃))^T (Γ^{-1}_θ ⊗ I_{p+1}) (vec(θ̃)), the candidate Lyapunov function can be bounded as\nvl(‖Z‖) ≤ VL(Z, t) ≤ v̄l(‖Z‖) (17)\nfor all Z ∈ R2n+2L+n(p+1) and all t ∈ R, where vl : R → R and v̄l : R → R are class K functions. Now, using (1) and the fact that V̇*t(e(t), t) = V̇*(ζ(t)) for all t ∈ R, the time derivative of the candidate Lyapunov function is given by\nV̇L = ∇ζV*(F + Gµ*) − W̃^T_c Γ^{-1} ˙Ŵ_c − 1/2 W̃^T_c Γ^{-1} Γ̇ Γ^{-1} W̃_c − W̃^T_a ˙Ŵ_a + V̇_0 + ∇ζV*Gµ − ∇ζV*Gµ* (18)\nUnder sufficient gain conditions (Kamalapurkar et al., 2014), using (9)-(13) and the update laws for Ŵc, Γ and Ŵa, the time derivative of the candidate Lyapunov function can be bounded as\nV̇L ≤ −vl(‖Z‖), ∀‖Z‖ ≥ v^{-1}_l(ι), ∀Z ∈ χ (19)\nwhere ι is a positive constant and χ ⊂ R2n+2L+n(p+1) is a compact set. 
Considering (17) and (19), Theorem 4.18 in (Khalil, 2002) can be used to establish that every trajectory Z(t) satisfying ‖Z(t0)‖ ≤ v̄l^{-1}(vl(ρ)), where ρ is a positive constant, is bounded for all t ∈ R and satisfies lim sup_{t→∞} ‖Z(t)‖ ≤ v_l^{-1}(v̄l(v^{-1}_l(ι))). This analysis addresses the stability of the closed-loop system.\nThe robustness criterion requires the algorithm to satisfy the following inequality (Gao et al., 2014) in the presence of external disturbances, with a pre-specified performance index γ known as the H-infinity (H∞) performance index:\n∫_0^t ‖y(t)‖^2 dt < γ^2 ∫_0^t ‖w(t)‖^2 dt (20)\nwhere y(t) is the output of the system, w(t) accounts for the modeling errors, parameter uncertainties and external disturbances, and γ bounds the ratio of the output energy to the disturbance energy in the system.\nGao et al. (2014) have shown that if the Lyapunov derivative condition above is satisfied, then\n0 < VL(T) = ∫_0^T V̇L(t) dt ≤ −∫_0^T y^T(t)y(t) dt + γ^2 ∫_0^T w^T(t)w(t) dt (21)\nThus, the performance inequality constraint ∫_0^t ‖y(t)‖^2 dt < γ^2 ∫_0^t ‖w(t)‖^2 dt in terms of γ is satisfied." }, { "heading": "3 SIMULATION RESULTS AND DISCUSSION", "text": "Here, we present simulation results to demonstrate the performance of the proposed method on the fuel management system of a hybrid electric vehicle. The proposed concurrent learning-based RL optimization architecture is shown in Figure 1.\nIn this architecture, the simulated state-action-derivative triplets perform concurrent learning to approximate the value function weight estimates so as to minimize the Bellman error (BE). 
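The performance inequality in (20) can be checked numerically on recorded closed-loop signals. The sketch below computes the empirical L2-gain, i.e. the smallest γ consistent with the recorded data; the signals themselves are illustrative stand-ins, not simulation output from the paper:

```python
import numpy as np

def h_infinity_index(y, w, dt):
    """Empirical L2-gain: smallest gamma such that the recorded signals satisfy
    integral(|y|^2) dt <= gamma^2 * integral(|w|^2) dt."""
    output_energy = np.sum(y**2) * dt
    disturbance_energy = np.sum(w**2) * dt
    return np.sqrt(output_energy / disturbance_energy)

# Illustrative signals: a sinusoidal disturbance and an attenuated,
# phase-shifted output standing in for the closed-loop response.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
w = np.sin(t)
y = 0.3 * np.sin(t - 0.2)

gamma = h_infinity_index(y, w, dt)
```

A smaller γ means the controller passes less disturbance energy through to the output, which is the sense in which the indices 0.3 vs. 0.45 are compared in Section 3.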
The history stack is used to store the evaluations of the Bellman error, carried out by a dynamic system identifier, as gained experience, so that they can be used iteratively to reduce the computational burden.\nA simple two-dimensional model of the fuel management system is considered for the simulation in order to provide a generalized solution that can be extended to higher-dimensional systems.\nWe consider a two-dimensional non-linear model given by\nf = [ x1  x2  0  0; 0  0  x1  x2(1 − (cos(2x1) + 2)^2) ] [a, b, c, d]^T, g = [0, cos(2x1) + 2]^T, w(t) = sin(t) (23)\nwhere a, b, c, d ∈ R are unknown parameters whose values are selected as a = −1, b = 1, c = −0.5, d = −0.5, x1 and x2 are the two states of the hybrid electric vehicle, given by the charge present in the battery and the amount of fuel in the car respectively, and w(t) = sin(t) is a sinusoidal signal used to model the external disturbance. The control objective is to minimize the cost function J(ζ, µ) = ∫_0^t r(ζ(τ), µ(τ)) dτ, where the local cost r : R2n × Rm → R is given by r(ζ, µ) = Q(e) + µ^T Rµ, R ∈ Rm×m is a positive definite symmetric matrix, and Q : Rn → R is a continuous positive definite function, while following the desired trajectory x̄. We choose Q = I_{2×2} and R = 1. The optimal value function and optimal control for the system (23) are V*(x) = 1/2 x1^2 + x2^2 and u*(x) = −(cos(2x1) + 2)x2. The basis function σ : R2 → R3 for value function approximation is σ = [x1^2, x1x2, x2^2]. The ideal weights are W = [0.5, 0, 1]. The initial values of the policy and value function weight estimates are Ŵc = Ŵa = [1, 1, 1]^T, the least-squares gain is Γ(0) = 100 I_{3×3}, and the initial system state is x(0) = [−1, −1]^T. The state estimate x̂ and θ̂ are initialized to 0 and 1 respectively, while the history stack for the CL is updated online. 
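The two-state example can be rolled out directly. The sketch below Euler-integrates the closed loop under the known optimal policy, once without and once with the sinusoidal disturbance entering the input channel; the trigonometric factor is read as (cos(2x1) + 2), the standard form of this benchmark, and the integration step and horizon are illustrative choices:

```python
import numpy as np

a, b, c, d = -1.0, 1.0, -0.5, -0.5

def f(x):
    x1, x2 = x
    return np.array([a*x1 + b*x2,
                     c*x1 + d*x2*(1.0 - (np.cos(2*x1) + 2.0)**2)])

def g(x):
    return np.array([0.0, np.cos(2*x[0]) + 2.0])

def u_star(x):
    """Known optimal feedback for this benchmark: u* = -(cos(2x1)+2) x2."""
    return -(np.cos(2*x[0]) + 2.0) * x[1]

def rollout(disturbed, dt=0.001, T=20.0):
    """Euler-integrate the closed loop from x(0) = [-1, -1]."""
    x = np.array([-1.0, -1.0])
    for k in range(int(T / dt)):
        w = np.sin(k * dt) if disturbed else 0.0   # disturbance w(t) = sin(t)
        x = x + dt * (f(x) + g(x) * (u_star(x) + w))
    return x

x_clean = rollout(disturbed=False)   # converges toward the origin
x_dist = rollout(disturbed=True)     # stays bounded under the disturbance
```

Under the optimal policy the Lyapunov rate satisfies V̇ ≤ −(x1^2 + x2^2), so the undisturbed rollout decays to the origin while the disturbed one remains bounded, consistent with the analysis of subsection 2.2.4.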
Here, Figures 2 and 3 show the state trajectories obtained by the traditional RL methods and by the CL-based RL optimization technique respectively, in the presence of disturbances. The settling time of the trajectories obtained by the proposed method is significantly shorter (by almost 40 percent) than that of the conventional RL strategies, justifying the distinctiveness of the method and yielding a saving in fuel consumption of about 40-45 percent. Figure 4 shows the corresponding control inputs, whereas Figures 5 and 6 indicate the convergence of the NN weight estimates to their optimal values. The H∞ performance index in Figure 7 shows a value of 0.3 for the CL-based RL method in comparison to 0.45 for the traditional RL-based control design, which clearly establishes the robustness of the proposed design." }, { "heading": "4 CONCLUSION", "text": "In this paper, we have proposed a robust concurrent learning-based deep RL optimization strategy for hybrid electric vehicles. The uniqueness of this method lies in the use of a concurrent learning-based RL optimization strategy that reduces the computational complexity significantly in comparison to the traditional RL approaches used for fuel management systems in the literature. Also, the use of the H-infinity (H∞) performance index for RL optimization, for the first time, addresses the robustness problems that most fuel optimization methods suffer from. The simulation results validate the efficacy of the method over conventional PID and MPC as well as traditional RL-based optimization techniques. Future work will generalize the approach to large-scale partially observed uncertain systems and will also incorporate the movement of neighbouring RL agents." 
}, { "heading": "A APPENDIX", "text": "A.1 THE GRADIENT DESCENT ALGORITHM\nThe Gradient Descent Algorithm is as follows:\nAlgorithm: Gradient Descent\nInput: design parameters U(0) = u0t, α, h, ε ∈ R\nOutput: optimal control sequence {ūt}\n1. n ← 0, compute ∇U Jd(x0, U(0))\n2. while ‖∇U Jd(x0, U(n))‖ ≥ ε do\n3. Evaluate the cost function with control U(n)\n4. Perturb each control variable u(n)i by h, i = 0, ..., t, and calculate the gradient vector ∇U Jd(x0, U(n)) using (5) and (6)\n5. Update the control policy: U(n+1) ← U(n) − α∇U Jd(x0, U(n))\n6. n ← n + 1" }, { "heading": "7. end", "text": "8. {ūt} ← U(n)\nA.2 EXPERIENCE SIMULATION\nAssumption 5: (Kamalapurkar et al., 2014) There exists a finite set of points {ζi ∈ C | i = 1, ..., N} and a constant c ∈ R such that 0 < c = (1/N) inf_{t∈R≥t0} ( λmin { Σ_{i=1}^N ωi ω^T_i / ρi } ), where ρi = 1 + ν ω^T_i Γ ωi ∈ R and ωi = ∇ζσ(ζi)( Fθ(ζi, θ̂) + F1(ζi) + G(ζi)µ̂(ζi, Ŵa) ).\nUsing Assumption 5, the simulation of experience is implemented by the weight update laws\n˙Ŵc = −ηc1 Γ (ω/ρ) δ̂t − (ηc2/N) Γ Σ_{i=1}^N (ωi/ρi) δ̂ti (24)\nΓ̇ = ( βΓ − ηc1 Γ (ωω^T/ρ^2) Γ ) 1{‖Γ‖≤Γ̄}, ‖Γ(t0)‖ ≤ Γ̄, (25)\n˙Ŵa = −ηa1( Ŵa − Ŵc ) − ηa2 Ŵa + ( ηc1 G^T_σ Ŵa ω^T/(4ρ) + Σ_{i=1}^N ηc2 G^T_σi Ŵa ω^T_i/(4Nρi) ) Ŵc (26)\nwhere ω = ∇ζσ(ζ)( Fθ(ζ, θ̂) + F1(ζ) + G(ζ)µ̂(ζ, Ŵa) ), Γ ∈ RL×L is the least-squares gain matrix, Γ̄ ∈ R denotes a positive saturation constant, β ∈ R is a constant forgetting factor, ηc1, ηc2, ηa1, ηa2 ∈ R are constant positive adaptation gains, 1{·} denotes the indicator function of the set {·}, Gσ = ∇ζσ(ζ)G(ζ)R^{-1}G^T(ζ)∇ζσ^T(ζ), and ρ = 1 + ν ω^T Γ ω, where ν ∈ R is a positive normalization constant. In the above weight update laws, for any function ξ(ζ, ·), the notation ξi is defined as ξi = ξ(ζi, ·), and the instantaneous BEs δ̂t and δ̂ti are given as δ̂t = δ̂(ζ, Ŵc, Ŵa, θ̂) and δ̂ti = δ̂(ζi, Ŵc, Ŵa, θ̂)." } ]
["This paper proposes 3 deep generative models based on VAEs (with different encoding schemes for RN(...TRUNCATED)
"Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecu(...TRUNCATED)
[ { "affiliations": [], "name": "Zichao Yan" }, { "affiliations": [], "name": "William L. Hamilton" } ]
[{"authors":["Bronwen L Aken","Premanand Achuthan","Wasiu Akanni","M Ridwan Amode","Friederike Berns(...TRUNCATED)
[{"heading":"1 INTRODUCTION","text":"There is an increasing interest in developing deep generative m(...TRUNCATED)
["This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty l(...TRUNCATED)
"Despite increasing instances of machine translation (MT) systems including extrasentential context (...TRUNCATED)
[ { "affiliations": [], "name": "MARKS FOR" }, { "affiliations": [], "name": "DISCOURSE PHENOMENA" } ]
[{"authors":["Rachel Bawden","Rico Sennrich","Alexandra Birch","Barry Haddow"],"title":"Evaluating d(...TRUNCATED)
[{"heading":"1 INTRODUCTION AND RELATED WORK","text":"The advances in neural machine translation (NM(...TRUNCATED)
["The authors present a framework that uses a combination of VAE and GAN to recover private user i(...TRUNCATED)
"System side channels denote effects imposed on the underlying system and hardware when running a pr(...TRUNCATED)
[{"affiliations":[],"name":"Yuanyuan Yuan"},{"affiliations":[],"name":"Shuai Wang"},{"affiliations":(...TRUNCATED)
[{"authors":["Onur Aciicmez","Cetin Kaya Koc"],"title":"Trace-driven cache attacks on AES","venue":"(...TRUNCATED)
[{"heading":"1 INTRODUCTION","text":"Side channel analysis (SCA) recovers program secrets based on t(...TRUNCATED)
["This paper proposes a method of learning ensembles that adhere to an \"ensemble version\" of the i(...TRUNCATED)
"Deep ensembles perform better than a single network thanks to the diversity among their members. Re(...TRUNCATED)
[ { "affiliations": [], "name": "Alexandre Rame" } ]
[{"authors":["Arturo Hernández Aguirre","Carlos A Coello Coello"],"title":"Mutual information-based(...TRUNCATED)
[{"heading":null,"text":"Deep ensembles perform better than a single network thanks to the diversity(...TRUNCATED)
["The paper proposed a new training framework, namely GSL, for novel content synthesis. And GSL enab(...TRUNCATED)
"Visual cognition of primates is superior to that of artificial neural networks in its ability to (...TRUNCATED)
[{"affiliations":[],"name":"Yunhao Ge"},{"affiliations":[],"name":"Sami Abu-El-Haija"},{"affiliation(...TRUNCATED)
[{"authors":["Yuval Atzmon","Gal Chechik"],"title":"Probabilistic and-or attribute grouping for zero(...TRUNCATED)
[{"heading":"1 INTRODUCTION","text":"Primates perform well at generalization tasks. If presented wit(...TRUNCATED)
["This paper presents an approach to learn goal conditioned policies by relying on self-play which s(...TRUNCATED)
"We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, includin(...TRUNCATED)
[{"authors":["Marcin Andrychowicz","Filip Wolski","Alex Ray","Jonas Schneider","Rachel Fong","Peter (...TRUNCATED)
[{"heading":"1 INTRODUCTION","text":"We are motivated to train a single goal-conditioned policy (Kae(...TRUNCATED)

MuP - Multi Perspective Scientific Document Summarization

Generating summaries of scientific documents is known to be a challenging task. The majority of existing work in summarization assumes a single best gold summary for each given document. Having only one gold summary negatively impacts our ability to evaluate the quality of summarization systems, as writing summaries is a subjective activity. At the same time, annotating multiple gold summaries for scientific documents can be extremely expensive, as it requires domain experts to read and understand long scientific documents. This shared task enables exploring methods for generating multi-perspective summaries. We introduce a novel summarization corpus, leveraging data from scientific peer reviews to capture diverse perspectives from the reader's point of view.

For more information about the dataset please refer to:
